--- abstract: 'A polynomial counterpart of the Seiberg–Witten invariant associated with a negative definite plumbed $3$-manifold has been proposed by earlier work of the authors. It is provided by a special decomposition of the zeta-function defined by the combinatorics of the manifold. In this article we give an algorithm, based on multivariable Euclidean division of the zeta-function, for the explicit calculation of the polynomial, in particular for the Seiberg–Witten invariant.' address: - | BCAM - Basque Center for Applied Mathematics\ Mazarredo, 14 E48009 Bilbao, Basque Country – Spain\ - 'Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences, 1053 Budapest, Reáltanoda u. 13-15, Hungary.' author: - Tamás László - Zsolt Szilágyi title: 'Némethi’s division algorithm for zeta-functions of plumbed 3-manifolds' --- Introduction ============ The main motivation of the present article is to understand a multivariable division algorithm, proposed by A. Némethi (cf. [@Npers], [@BN]), for the calculation of the normalized Seiberg–Witten invariant of a negative definite plumbed 3-manifold. The input is a multivariable zeta-function associated with the manifold and the output is a (Laurent) polynomial, called the polynomial part of the zeta-function. In particular, this is a polynomial ‘categorification’ of the Seiberg–Witten invariant in the sense that the sum of its coefficients equals the normalized Seiberg–Witten invariant. The polynomial part was defined by the authors in [@LSz] as a possible solution for the multivariable ‘polynomial- and negative-degree part’ decomposition problem for the zeta-function (cf. [@BN; @LN; @LSz], see \[ss:polSW\]). The one-variable algorithm goes back to the work of Braun and Némethi [@BN]. In that case the polynomial part is simply given by a division principle.
However, in general, we show that in order to recover the multivariable polynomial part of [@LSz] one first constructs a polynomial by division, and then one has to take its terms with suitable multiplicities, determined by the corresponding exponents and the structure of the plumbing graph. In the sequel, we give some details about the algorithm and state further results of the present note. {#intro2} Let $M$ be a closed oriented plumbed 3-manifold associated with a connected negative definite plumbing graph $\Gamma$. Equivalently, $M$ is the link of a complex normal surface singularity, and $\Gamma$ is its dual resolution graph. Assume that $M$ is a rational homology sphere, i.e. $\Gamma$ is a tree and all the plumbed surfaces have genus zero. Let $\mathcal{V}$ be the set of vertices of $\Gamma$ and $\delta_v$ the valency of a vertex $v\in\mathcal{V}$; we distinguish the following subsets: the set of *nodes* $\mathcal{N}:=\{n\in \mathcal{V}:\delta_n \geq 3\}$ and the set of *ends* $\mathcal{E}=\{v\in \mathcal{V}:\delta_v= 1\}$. We consider the plumbed 4-manifold $\widetilde{X}$ associated with $\Gamma$. Its second homology $L:=H_2(\widetilde{X},\mathbb{Z})$ is a lattice, freely generated by the classes of 2-spheres $\{E_v\}_{v\in\mathcal{V}}$, endowed with the nondegenerate negative definite intersection form $(,)$. The second cohomology $L':=H^2(\widetilde{X},\mathbb{Z})$ is the dual lattice, freely generated by the (anti)dual classes $\{E_v^*\}_{v\in\mathcal{V}}$, where we set $(E^*_v,E_w)=-\delta_{vw}$, the negative of the Kronecker delta. The intersection form embeds $L$ into $L'$ and $H:=L'/L\simeq H_1(M,\mathbb{Z})$. Denote the class of $l'\in L'$ in $H$ by $[l']$. We denote by $\mathfrak{sw}^{norm}_h(M)$ the normalized Seiberg–Witten invariants of $M$ indexed by the group elements $h\in H$, see \[ss:sw\].
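The lattice data above can be computed directly from the intersection matrix $I$ of the graph: the coordinates of $E^*_v$ in the $E$-basis form the $v$-th column of $-I^{-1}$, and $|H|=\det(-I)$. A minimal Python sketch with exact rational arithmetic follows; the $A_3$ string of three $(-2)$-spheres and all function names are our own illustrative assumptions, not data from the paper.

```python
from fractions import Fraction

def det(M):
    # determinant by cofactor expansion (fine for small plumbing graphs)
    if len(M) == 0:
        return Fraction(1)
    if len(M) == 1:
        return Fraction(M[0][0])
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def dual_basis(I):
    # coordinates of E*_v in the E-basis: the v-th column of -I^{-1},
    # computed entrywise via the cofactor formula for the inverse
    n, d = len(I), det(I)
    minor = lambda r, c: [[I[i][j] for j in range(n) if j != c]
                          for i in range(n) if i != r]
    inv = [[(-1) ** (i + j) * det(minor(j, i)) / d for j in range(n)]
           for i in range(n)]
    return [[-inv[w][v] for w in range(n)] for v in range(n)]

# hypothetical example: a string of three (-2)-spheres (the A_3 graph)
I = [[-2, 1, 0], [1, -2, 1], [0, 1, -2]]
print(det([[-x for x in row] for row in I]))  # |H| = det(-I) = 4
print(dual_basis(I)[0])                       # E*_1 = (3/4)E_1 + (1/2)E_2 + (1/4)E_3
```

One can check against the determinant formula for subgraphs quoted later in the paper: here $-(E^*_1,E^*_3)=1/4=\mathrm{det}_{\emptyset}/\mathrm{det}_\Gamma$.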
The *multivariable zeta-function* associated with $M$ (or $\Gamma$) is defined by $$f(\mathbf{t}) = \prod_{v\in \mathcal{V}}(1-\mathbf{t}^{E^{*}_{v}})^{\delta_{v}-2},$$ where $\mathbf{t}^{l'}:=\prod_{v\in\mathcal{V}}t_v^{l_v}$ for any $l'=\sum_{v\in\mathcal{V}}l_vE_v\in L'$. One has a natural decomposition into its $h$-equivariant parts $f(\mathbf{t})=\sum_{h\in H} f_{h}(\mathbf{t})$, see \[ss:def\]. By a result of [@LN], for our purposes, one can reduce the variables of $f_h$ to the variables of the nodes of the graph. Therefore we restrict our discussion to the reduced zeta-functions defined by $f_h(\mathbf{t}_{\mathcal{N}}) = f_{h}(\mathbf{t})|_{t_v=1,v\notin \mathcal{N}}$. Here we introduce the notation $\mathbf{t}_{\mathcal{N}}^{l'} := \prod_{n\in\mathcal{N}}t_n^{l_n}$. {#ss:intro3} The multivariable polynomial part $P_h(\mathbf{t}_{\mathcal{N}})$ associated with $f_h(\mathbf{t}_{\mathcal{N}})$ ([@LSz]) is mainly a combination of the one- and two-variable cases studied by [@BN] and [@LN], assembled according to the structure of the *orbifold graph* $\Gamma^{orb}$. The vertices of $\Gamma^{orb}$ are the nodes of $\Gamma$, and two of them are connected by an edge if the corresponding nodes in $\Gamma$ are connected by a path which consists only of vertices with valency $\delta_v=2$. The main property reads as $P_h(1)=\mathfrak{sw}^{norm}_h(M)$, see \[ss:polSW\]. Multivariable division algorithm {#sec:alg} -------------------------------- On $L\otimes \mathbb{Q}$ we consider the partial order: for any $l_1,l_2$ one writes $l_1 > l_2$ if $l_1-l_2=\sum_{v\in \mathcal{V}} \ell_v E_v$ with all $\ell_v > 0$.
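The passage from $\Gamma$ to $\Gamma^{orb}$ is purely combinatorial and can be sketched as follows; this is a hypothetical Python sketch under our own encoding of the tree as an adjacency list (the vertex names and the two-node example are illustrative assumptions, not taken from the paper).

```python
def orbifold_edges(adj):
    """Vertices of Gamma^orb are the nodes (valency >= 3) of the tree `adj`;
    two nodes are joined if the path between them in Gamma passes only
    through valency-2 vertices."""
    deg = {v: len(ws) for v, ws in adj.items()}
    nodes = {v for v, d in deg.items() if d >= 3}
    edges = set()
    for n in nodes:
        for w in adj[n]:
            prev, cur = n, w
            while deg[cur] == 2:   # walk along the chain of valency-2 vertices
                prev, cur = cur, next(u for u in adj[cur] if u != prev)
            if cur in nodes:       # chain ends in another node: an orbifold edge
                edges.add(frozenset((n, cur)))
    return nodes, edges

# hypothetical tree: two 3-valent nodes n1, n2 joined through a valency-2 vertex c,
# each node carrying two ends
adj = {"n1": ["e1", "e2", "c"], "e1": ["n1"], "e2": ["n1"],
       "c": ["n1", "n2"], "n2": ["c", "e3", "e4"], "e3": ["n2"], "e4": ["n2"]}
print(orbifold_edges(adj))  # nodes {'n1', 'n2'}, one orbifold edge n1-n2
```

Chains ending in an end vertex ($\delta_v=1$) contribute no orbifold edge, matching the definition above.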
We introduce a multivariable division algorithm in \[ss:multidivision\], which provides a unique decomposition (Lemma \[lem:+dec\]) $$f_{h}(\mathbf{t}_{\mathcal{N}}) = P^{+}_{h}(\mathbf{t}_{\mathcal{N}}) + f^{neg}_{h}(\mathbf{t}_{\mathcal{N}}),$$ where $P^{+}_{h} (\mathbf{t}_{\mathcal{N}})= \sum_{\beta}p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}$ is a Laurent polynomial such that $\beta\not <0$ for every monomial, and $f_{h}^{neg}(\mathbf{t}_{\mathcal{N}})$ is a rational function with negative degree in $t_{n}$ for all $n\in \mathcal{N}$. In \[ss:multiplicity\] we define a multiplicity function $\mathfrak{s}$ involving the structure of $\Gamma^{orb}$, and we show in Theorem \[lm-0\] that the polynomial part $P_{h}(\mathbf{t}_{\mathcal{N}})$ can be computed from the quotient $P^{+}_{h}(\mathbf{t}_{\mathcal{N}})$ by taking its monomial terms with multiplicity $\mathfrak{s}$. More precisely, $$P_{h}(\mathbf{t}_{\mathcal{N}}) = \sum_{\beta}\mathfrak{s}(\beta) p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}.$$ Comparisons ----------- A consequence of the above algorithm (cf. Remark \[rk:poly-plus\](\[poly-plus-i\])) is that in general $P_h$ is ‘thicker’ than $P^{+}_{h}$, in the sense that $\mathfrak{s}(\beta)\geq 1$ for all the exponents $\beta$ of $P^{+}_{h}$. This motivates the comparison of the two polynomials on two different classes of graphs. In the first case we assume that $\Gamma^{orb}$ is a bamboo, that is, it has no vertices with valency greater than or equal to $3$. Notice that most of the examples considered in the aforementioned articles were taken from this class. We prove in Theorem \[thm:bamboo\] that for these graphs the two polynomials agree. Thus, the Seiberg–Witten invariant is computed by the division alone. The second class is defined by a topological criterion: these are the graphs of the 3-manifolds $S^3_{-p/q}(K)$ obtained by $(-p/q)$-surgery along the connected sum $K$ of some algebraic knots.
We provide a concrete example of this class for which one has $P_{h}\neq P^{+}_{h}$ for some $h$, see \[ss:ex\]. More precisely, Theorem \[thm:Q\] proves that if we restrict both polynomials to the monomials for which the exponent of the variable associated with the ‘central’ vertex of the graph (cf. \[ss:graph\]) is non-negative, then they agree. (See \[ss:str\] for the precise formulation.) In fact, by Proposition \[prop:can\], for the canonical class $h=0$ these are the only monomials, hence $P_{0}=P^{+}_{0}$. Acknowledgements {#acknowledgements .unnumbered} ---------------- TL is supported by ERCEA Consolidator Grant 615655 – NMST and also by the Basque Government through the BERC 2014-2017 program and by Spanish excellence accreditation SEV-2013-0323. Partial financial support to ZsSz was provided by the ‘Lendület’ program of the Hungarian Academy of Sciences. Preliminaries ============= Links of normal surface singularities {#ss:link} ------------------------------------- ### {#section-1} Let $\Gamma$ be a connected negative definite plumbing graph with vertices $\mathcal{V}=\mathcal{V}(\Gamma)$. By plumbing disk bundles along $\Gamma$, we obtain a smooth 4–manifold $\widetilde{X}$ whose boundary is an oriented plumbed 3–manifold $M$. $\Gamma$ can be realized as the dual graph of a good resolution $\pi:\widetilde{X}\to X$ of some complex normal surface singularity $(X,o)$, and $M$ is called the link of the singularity. In our study, we assume that $M$ is a [*rational homology sphere*]{}, or, equivalently, $\Gamma$ is a tree and all the genus decorations are zero. Recall that $L:=H_2 (\widetilde{X},\mathbb{Z} )\simeq \mathbb{Z}\langle E_v\rangle_{v\in\mathcal{V}}$ is a lattice, freely generated by the classes of the irreducible exceptional divisors $\{E_v\}_{v\in\mathcal{V}}$ (i.e. classes of $2$-spheres), with a nondegenerate negative definite intersection form $I:=[(E_v,E_w)]_{v,w\in \mathcal{V}}$.
$L':=H^2( \widetilde{X},\mathbb{Z})\simeq Hom(L,\mathbb{Z})$ is the dual lattice, freely generated by the (anti)duals $\{E_v^*\}_{v\in \mathcal{V}}$. $L$ is embedded in $L'$ by the intersection form (which extends to $L\otimes \mathbb{Q}\supset L'$) and their finite quotient is $H:=L'/L \simeq H^2(\partial \widetilde{X},\mathbb{Z})\simeq H_{1}(M, \mathbb{Z})$. ### {#ss:det} The [*determinant*]{} of a subgraph $\Gamma'\subseteq \Gamma$ is defined as the determinant of the negative of the submatrix of $I$ with rows and columns indexed by the vertices of $\Gamma'$, and it will be denoted by $\mathrm{det}_{\Gamma'}$. In particular, $\mathrm{det}_\Gamma:=\det(-I)=|H|$. We will also consider the following subgraphs: since $\Gamma$ is a tree, for any two vertices $v,w\in \mathcal{V}$ there is a unique minimal connected subgraph $[v,w]$ with vertices $\{v_{i}\}_{i=0}^{k}$ such that $v=v_{0}$ and $w=v_{k}$. Similarly, we also introduce the notations $[v,w)$, $(v,w]$ and $(v,w)$ for the complete subgraphs with vertices $\{v_{i}\}_{i=0}^{k-1}$, $\{v_{i}\}_{i=1}^{k}$ and $\{v_{i}\}_{i=1}^{k-1}$, respectively. The inverse of $I$ has entries $(I^{-1})_{vw}=(E_v^*,E^*_w)$, all of which are negative. Moreover, they can be computed using determinants of subgraphs as (cf. [@EN page 83]) $$\label{eq:DETsgr} - (E_v^*,E^*_w) = \frac{\det_{\Gamma\setminus [v,w]}}{\det_{\Gamma}}.$$ ### {#ss:order} We can consider the following partial order on $L\otimes \mathbb{Q}$: for any $l_1,l_2$ one writes $l_1\geq l_2$ if $l_1-l_2=\sum_{v\in \mathcal{V}} \ell_v E_v$ with all $\ell_v\geq 0$. The Lipman (anti-nef) cone $\mathcal{S}'$ is defined by $\{l'\in L'\,:\, (l',E_v)\leq 0 \ \mbox{for all $v$}\}$ and it is generated over $\mathbb{Z}_{\geq 0}$ by the elements $E_v^*$. We use the notation $\mathcal{S}'_{\mathbb{R}} := \mathcal{S}'\otimes \mathbb{R}$ for the real Lipman cone. ### {#section-2} Let $\widetilde{\sigma}_{can}$ be the [*canonical $spin^c$-structure*]{} on $\widetilde{X}$.
Its first Chern class $c_1( \widetilde{\sigma}_{can})=-K\in L'$, where $K$ is the canonical class in $L'$ defined by the adjunction formulas $(K+E_v,E_v)+2=0$ for all $v\in\mathcal{V}$. The set of $spin^c$-structures $\mathrm{Spin}^c(\widetilde{X})$ of $\widetilde{X}$ is an $L'$-torsor, i.e. if we denote the $L'$-action by $l'*\widetilde{\sigma}$, then $c_1(l'*\widetilde{\sigma})=c_1(\widetilde{\sigma})+2l'$. Furthermore, all the $spin^c$-structures of $M$ are obtained by restriction from $\widetilde{X}$. $\mathrm{Spin}^c(M)$ is an $H$-torsor, compatible with the restriction and the projection $L'\to H$. The [*canonical $spin^c$-structure*]{} $\sigma_{can}$ of $M$ is the restriction of the canonical $spin^c$-structure $\widetilde{\sigma}_{can}$ of $\widetilde{X}$. Hence, for any $\sigma\in \mathrm{Spin}^c(M)$ one has $\sigma=h*\sigma_{can}$ for some $h\in H$. Seiberg–Witten invariants of normal surface singularities {#ss:sw} --------------------------------------------------------- For any closed, oriented and connected 3-manifold $M$ we consider the [*Seiberg–Witten invariant*]{} $\mathfrak{sw}:\mathrm{Spin}^c(M)\rightarrow \mathbb{Q}$, $\sigma\mapsto \mathfrak{sw}_{\sigma}(M)$. In the case of rational homology spheres, it is the signed count of the solutions of the ‘3-dimensional’ Seiberg–Witten equations, modified by the Kreck–Stolz invariant (cf. [@Lim; @Nic04]). Since its calculation is difficult directly from the definition, several topological/combinatorial interpretations have been invented in the last decades. E.g., [@Nic04] showed that for rational homology spheres $\mathfrak{sw}(M)$ is equal to the Reidemeister–Turaev torsion normalized by the Casson–Walker invariant, which, in some plumbed cases, can be expressed in terms of the graph and Dedekind–Fourier sums ([@Lescop; @NN1]). Furthermore, there exist surgery formulas coming from homology exact sequences (e.g.
Heegaard–Floer homology, monopole Floer homology, lattice cohomology, etc.), where the involved homology theories appear as categorifications of the (normalized) Seiberg–Witten invariant. In the case when $M$ is a rational homology sphere link of a normal surface singularity $(X,o)$, different types of surgery ([@BN; @LNN]) and combinatorial ([@LN; @LSz]) formulas have been proved, expressing the strong connection between the Seiberg–Witten invariant and the zeta-function/Poincaré series associated with $M$ ([@NJEMS]). This connection will be explained in the next section. Moreover, we emphasize that the Seiberg–Witten invariant plays a crucial role in the intimate relationship between the topology and geometry of normal surface singularities, since it can be viewed as the topological ‘analogue’ of the geometric genus of $(X,o)$, cf. [@NN1]. For different purposes we may use different normalizations of the Seiberg–Witten invariant. The one we will consider in this article is the following: for any class $h\in H=L'/L$ we define the unique element $r_h\in L'$ characterized by $r_h\in \sum_{v}[0,1)E_v$ with $[r_h]=h$; then $$\label{swnorm} \mathfrak{sw}^{norm}_h(M):=-\frac{(K+2r_h)^2+|\mathcal{V}|}{8}-\mathfrak{sw}_{-h*\sigma_{can}}(M)$$ is called the [*normalized Seiberg–Witten invariant*]{} of $M$ associated with $h\in H$. Zeta-functions and Poincaré series {#s:ps} ---------------------------------- ### **Definitions and motivation** {#ss:def} We have already defined in section \[intro2\] the multivariable zeta-function $f(\mathbf{t})$ associated with the manifold $M$. Its multivariable Taylor expansion at the origin $Z(\mathbf{t})=\sum_{l'}p_{l'} \mathbf{t}^{l'} \in \mathbb{Z}[[L']]$ is called the [*topological Poincaré series*]{}, where $\mathbb{Z}[[L']]$ is the $\mathbb{Z}[L']$-submodule of $\mathbb{Z}[[t_{v}^{\pm 1/|H|}:v\in \mathcal{V}]]$ consisting of series $\sum_{l'\in L'}a_{l'}\mathbf{t}^{l'}$ with $a_{l'}\in \mathbb{Z}$ for all $l'\in L'$.
It decomposes naturally into $Z(\mathbf{t})=\sum_{h\in H} Z_{h}(\mathbf{t})$, where $Z_{h}({\mathbf{t}})=\sum_{[l']=h} p_{l'} \mathbf{t}^{l'}$. By (\[ss:order\]), $Z(\mathbf{t})$ is supported in $\mathcal{S}'$, hence $Z_{h}(\mathbf{t})$ is supported in $(l'+L)\cap \mathcal{S}'$, where $l'\in L'$ with $[l']=h$. This decomposition induces a decomposition $f(\mathbf{t})=\sum_{h\in H}f_h(\mathbf{t})$ on the zeta-function level as well, where an explicit formula for $f_h(\mathbf{t})$ is provided by [@LSznew]. The zeta-function and its series were introduced by the work of Némethi [@NPS], motivated by singularity theory. For a normal surface singularity $(X,o)$ with fixed resolution graph $\Gamma$ we may consider the equivariant divisorial Hilbert series $\mathcal{H}(\mathbf{t})$, which can be connected with the topology of the link $M$ by introducing the series $\mathcal{P}(\mathbf{t})= -\mathcal{H}( \mathbf{t}) \cdot \prod_{v\in \mathcal{V}}(1 - t_v^{-1})\in \mathbb{Z}[[L']]$. The point is that, for $h=0$, $Z_0(\mathbf{t})$ serves as the ‘topological candidate’ for $\mathcal{P}(\mathbf{t})$: they agree for several classes of singularities, e.g. for splice quotients (see [@NCL]), which contain all the rational, minimally elliptic or weighted homogeneous singularities. For more details regarding this theory we refer to [@CDGPs; @CDGEq; @NPS; @NCL]. ### **Counting functions, Seiberg–Witten invariants and reduction** {#s:sw} For any $h\in H$ we define the [*counting function*]{} of the coefficients of $Z_{h}(\mathbf{t})=\sum_{[l']=h}p_{l'} \mathbf{t}^{l'}$ by $x\mapsto Q_{h}(x):=\sum_{l'\not\geq x,\, [l']=h} \, p_{l'}.$ This sum is finite since $\{l'\in \mathcal{S}'\,:\, l'\ngeq x\}$ is finite by \[ss:order\].
Its relation with the Seiberg–Witten invariant is given by a powerful result of Némethi [@NJEMS], saying that if $x\in (-K+ \textnormal{int}(\mathcal{S}'))\cap L$ then $$\label{eq:countf} Q_{h}(x)=\chi_{K+2r_h}(x)+\mathfrak{sw}_h^{norm}(M),$$ where $\chi_{K+2r_h}(x):=-(K+2r_h+x,x)/2$. Thus, $Q_{h}(x)$ is a multivariable quadratic polynomial on $L$ with constant term $\mathfrak{sw}^{norm}_h(M)$. Furthermore, the idea of the general framework given by [@LN] is the following: there exists a conical chamber decomposition of the real cone $\mathcal{S}'_{\mathbb{R}}=\cup_{\tau}\mathcal{C}_{\tau}$, a sublattice $\widetilde L\subset L$ and $l'_* \in \mathcal{S}'$ such that $Q_h(l')$ is a polynomial on $\widetilde L\cap(l'_* +\mathcal{C_{\tau}})$, say $Q^{\mathcal{C}_{\tau}}_h(l')$. This allows one to define the [*multivariable periodic constant*]{} $\mathrm{pc}^{\mathcal{C}_{\tau}}(Z_h):= Q^{\mathcal{C}_{\tau}}_h(0)$ associated with $h\in H$ and $\mathcal{C}_{\tau}$. Moreover, $Z_h(\mathbf{t})$ is rather special in the sense that all the $Q^{\mathcal{C}_{\tau}}_h$ are equal for any $\mathcal{C}_{\tau}$. In particular, we say that there exists the periodic constant $\mathrm{pc}^{S'_{\mathbb{R}}}(Z_h):=\mathrm{pc}^{\mathcal{C}_{\tau}}(Z_h)$ associated with $S'_{\mathbb{R}}$, and in fact, it is equal to $\mathfrak{sw}^{norm}_h(M)$. We also notice that (\[eq:countf\]) has a geometric analogue which expresses the geometric genus of the complex normal surface singularity $(X,o)$ in terms of the series $\mathcal{P}(\mathbf{t})$ (cf. [@NCL]). [@LN] also showed that, from the point of view of the above relation, the number of variables of the zeta-function (or Poincaré series) can be reduced to the number of nodes $|\mathcal{N}|$.
Thus, if we define the *reduced zeta-function* and *reduced Poincaré series* by $$f_h(\mathbf{t}_{\mathcal{N}}) = f_{h}(\mathbf{t})\mid_{t_v=1,v\notin \mathcal{N}} \qquad\textnormal {and } \qquad Z_h(\mathbf{t}_{\mathcal{N}}):=Z_h(\mathbf{t})\mid_{t_v=1,v\notin \mathcal{N}},$$ then there exists the periodic constant of $Z_h(\mathbf{t}_{\mathcal{N}})$ associated with the projected real Lipman cone $\pi_{\mathcal{N}}(S'_{\mathbb{R}})$, where $\pi_{\mathcal{N}}:\mathbb{R}\langle E_v\rangle_{v\in\mathcal{V}}\to \mathbb{R}\langle E_v\rangle_{v\in\mathcal{N}}$ is the natural projection along the linear subspace $\mathbb{R}\langle E_v\rangle_{v\notin\mathcal{N}}$, and $$\mathrm{pc}^{\pi_{\mathcal{N}}(S'_{\mathbb{R}})}(Z_h(\mathbf{t}_{\mathcal{N}}))=\mathrm{pc}^{S'_{\mathbb{R}}}(Z_h(\mathbf{t}))= \mathfrak{sw}^{norm}_h(M).$$ We set the notation $\mathbf{t}_{\mathcal{N}}^{x} := \mathbf{t}^{\pi_{\mathcal{N}}(x)}$ for any $x\in L'$. The above identity allows us to consider only the reduced versions in our study, which has several advantages: the number of reduced variables is drastically smaller, which reduces the complexity of the calculations; it reflects the complexity of the manifold $M$ (e.g. the one-variable case is realized for Seifert 3-manifolds); and for special classes of singularities the reduced series can be compared with certain geometric series (or invariants), cf. [@NPS]. ‘Polynomial-negative degree part’ decomposition {#ss:polSW} ----------------------------------------------- ### **One-variable case** {#ss:onenode} Let $s(t)$ be a one-variable rational function of the form $B(t)/A(t)$ with $A(t)=\prod_{i=1}^d(1-t^{a_i})$ and $a_i>0$. Then by [@BN 7.0.2] one has a unique decomposition $s(t)=P(t)+s^{neg}(t)$, where $P(t)$ is a polynomial and $s^{neg}(t)=R(t)/A(t)$ has negative degree with vanishing periodic constant. Hence, the periodic constant $\mathrm{pc}(s)$ (associated with the Taylor expansion of $s$ and the cone $\mathbb{R}_{\geq0}$) equals $P(1)$.
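This one-variable decomposition can be sketched in Python by applying the division step $t^{b}/\prod_i(1-t^{a_i})=-t^{b-a_{0}}/\prod_{i\neq 0}(1-t^{a_i}) + t^{b-a_{0}}/\prod_{i}(1-t^{a_i})$ recursively to a single fraction; the encoding (dicts of exponent/coefficient pairs) and the function name are our own assumptions, and we use that a fraction of negative degree has polynomial part zero.

```python
def polynomial_part(b, A):
    """Polynomial part P(t) of t^b / prod_i (1 - t^{a_i}) with all a_i > 0.

    One division step t^b/prod = -t^{b-a0}/prod_{i != 0} + t^{b-a0}/prod,
    applied recursively; returns a dict {exponent: coefficient}."""
    if not A:
        return {b: 1}              # no denominator: t^b is already a polynomial
    if b < sum(A):
        return {}                  # the fraction has negative degree, so P = 0
    a0, P = A[0], {}
    for mono, c in polynomial_part(b - a0, A[1:]).items():
        P[mono] = P.get(mono, 0) - c     # the -t^{b-a0}/prod_{i != 0} term
    for mono, c in polynomial_part(b - a0, A).items():
        P[mono] = P.get(mono, 0) + c     # the t^{b-a0}/prod term
    return {m: c for m, c in P.items() if c != 0}

# t^2/(1-t) = (-t - 1) + 1/(1-t), so P(t) = -t - 1 and pc = P(1) = -2
print(polynomial_part(2, [1]))  # {1: -1, 0: -1}
```

The recursion terminates since each step either shortens the denominator list or strictly lowers $b$.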
$P(t)$ is called the *polynomial part*, while the rational function $s^{neg}(t)$ is called the *negative degree part* of the decomposition. The decomposition can be deduced easily by applying the following division to the individual rational fractions: $$\label{eq:div} \frac{t^{b}}{\prod_i(1-t^{a_i})}=-\frac{t^{b-a_{i_0}}}{\prod_{i\neq i_0}(1-t^{a_i})} + \frac{t^{b-a_{i_0}}}{\prod_{i}(1-t^{a_i})}=\sum_{\substack{x_i\geq 1\\ \sum_i x_ia_i\leq b } } p_{(x_i)}\cdot t^{ b-\sum_i x_ia_i} + \substack{ \textnormal{negative degree} \\ \\\textnormal{rational function} },$$ for some coefficients $p_{(x_i)}\in \mathbb{Z}$. ### **Multivariable case** {#ss:twonode} The idea behind the multivariable generalization goes back to the theory developed in [@LN], saying that the counting functions associated with zeta-functions are Ehrhart-type quasipolynomials inside the chambers of an induced chamber decomposition of $L\otimes\mathbb{R}$. Moreover, the previous one-variable division can be generalized to two-variable functions of the form $s(\mathbf{t})=B(\mathbf{t})/(1-\mathbf{t}^{a_1})^{d_1}(1-\mathbf{t}^{a_2})^{d_2}$ with $a_i>0$. In particular, this applies to $f_h(\mathbf{t}_{\mathcal{N}})$ viewed as a function in the variables $t_n$ and $t_{n'}$, where $n,n'\in\mathcal{N}$ and there is an edge $\overline{nn'}$ connecting them in $\Gamma^{orb}$ (see [@LN Section 4.5] and [@LSz Lemma 19]). For more variables, the direct generalization using a division principle for the individual rational terms seems to be hopeless, because the (Ehrhart) quasipolynomials associated with the counting functions cannot be controlled inside the difficult chamber decomposition of $\mathcal{S}'_{\mathbb{R}}$.
Nevertheless, the authors in [@LSz] have proposed a decomposition $$f_h(\mathbf{t}_{\mathcal{N}})=P_h(\mathbf{t}_{\mathcal{N}})+f^{-}_h(\mathbf{t}_{\mathcal{N}})$$ which defines the polynomial part as $$\label{eq:polpartdef} P_{h}(\mathbf{t}_{\mathcal{N}}) = \sum_{\overline{nn'} \ edge \ of \ \Gamma^{orb}} P^{n,n'}_{h}(\mathbf{t}_{\mathcal{N}}) - \sum_{n\in \mathcal{N}} (\delta_{n,\mathcal{N}}-1)P_{h}^{n}(\mathbf{t}_{\mathcal{N}}),$$ where $P^n_h(\mathbf{t}_{\mathcal{N}})$ for any $n\in \mathcal{N}$ are the polynomial parts given by the decompositions of $f_h(\mathbf{t}_{\mathcal{N}})$ as a one-variable function in $t_n$, while $P^{n,n'}_{h}(\mathbf{t}_{\mathcal{N}})$ are the polynomial parts obtained by viewing $f_h(\mathbf{t}_{\mathcal{N}})$ as a two-variable function in $t_n$ and $t_{n'}$ for any $n,n'\in\mathcal{N}$ that are connected by an edge in $\Gamma^{orb}$. Then [@LSz Theorem 16] implies the main property of the decomposition $$\label{eq:polpsw} P_h(1)=\mathfrak{sw}^{norm}_h(M).$$ Decomposition by multivariable division and proof of the algorithm ================================================================== In this section we prove the algorithm which expresses the general multivariable polynomial part of [@LSz] in terms of a multivariable Euclidean division and a multiplicity function. Multivariable Euclidean division {#ss:multidivision} -------------------------------- We consider two Laurent polynomials $A(\mathbf{t}_{\mathcal{N}})$ and $B(\mathbf{t}_{\mathcal{N}})$ supported on the lattice $\pi_{\mathcal{N}}(L')$. The partial order $l_{1}>l_{2}$ if $l_{1}-l_{2}=\sum_{v\in \mathcal{V}}\ell_{v}E_{v}$ with $\ell_{v}>0$ for all $v\in \mathcal{V}$ on $L\otimes \mathbb{Q}$ induces a partial order on monomial terms, and we assume that $A(\mathbf{t}_{\mathcal{N}})$ has a unique maximal monomial term with respect to this partial order, denoted by $A_{a} \mathbf{t}_{\mathcal{N}}^{a}$, such that $a>0$. We introduce the following multivariable Euclidean division algorithm.
We start with quotient $C=0$ and remainder $R=0$. For a monomial term $B_{b}\mathbf{t}_{\mathcal{N}}^{b}$ of $B(\mathbf{t}_{\mathcal{N}})$, if $b\not<a$ then we subtract $(B_{b}\mathbf{t}_{\mathcal{N}}^{b}/A_{a}\mathbf{t}_{\mathcal{N}}^{a})\cdot A(\mathbf{t}_{\mathcal{N}})$ from $B(\mathbf{t}_{\mathcal{N}})$ and we add $B_{b}\mathbf{t}_{\mathcal{N}}^{b}/A_{a}\mathbf{t}_{\mathcal{N}}^{a}$ to the quotient $C(\mathbf{t}_{\mathcal{N}})$; otherwise we pass $B_{b}\mathbf{t}_{\mathcal{N}}^{b}$ from $B(\mathbf{t}_{\mathcal{N}})$ to the remainder $R(\mathbf{t}_{\mathcal{N}})$. By the assumption on $A(\mathbf{t}_{\mathcal{N}})$ the algorithm terminates in finitely many steps and gives a unique decomposition $$B(\mathbf{t}_{\mathcal{N}}) = C(\mathbf{t}_{\mathcal{N}}) \cdot A(\mathbf{t}_{\mathcal{N}}) + R(\mathbf{t}_{\mathcal{N}})$$ such that $C(\mathbf{t}_{\mathcal{N}})$ is supported on $\{l'\in \pi_{\mathcal{N}}(L'):l'\not<0\}$ and $R(\mathbf{t}_{\mathcal{N}})$ is supported on $\{l'\in \pi_{\mathcal{N}}(L'):l'<a\}$. The following decomposition generalizes the one- and two-variable cases. \[lem:+dec\] For any $h\in H$ there exists a unique decomposition $$\label{eq:decomp} f_{h}(\mathbf{t}_{\mathcal{N}}) = P^{+}_{h}(\mathbf{t}_{\mathcal{N}}) + f^{neg}_{h}(\mathbf{t}_{\mathcal{N}}),$$ where $P^{+}_{h} (\mathbf{t}_{\mathcal{N}})= \sum_{\beta\in\mathcal{B}_h}p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}$ is a Laurent polynomial such that $\beta\not<0$ and $f_{h}^{neg}(\mathbf{t}_{\mathcal{N}})$ is a rational function with negative degree in $t_{n}$ for all $n\in \mathcal{N}$.
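The division just described can be sketched as follows, encoding monomials as integer exponent tuples; this is a hypothetical Python sketch under our own naming. Termination relies on the fact that, for a denominator $A=\prod_{n}(1-\mathbf{t}_{\mathcal{N}}^{a_{n}})$ with all $a_{n}>0$, every non-maximal exponent of $A$ is coordinatewise strictly smaller than the leading exponent $a$, so each division step strictly lowers the exponents it creates.

```python
def lt(b, a):
    """The partial order on exponents: b < a iff b is coordinatewise strictly smaller."""
    return all(bi < ai for bi, ai in zip(b, a))

def euclidean_division(B, A, a):
    """Divide B by A, where A has a unique maximal term with exponent a > 0.

    B, A: dicts {exponent tuple: coefficient}. Returns (C, R) with
    B = C*A + R, C supported on {b not< 0}, R supported on {b < a}."""
    B = dict(B)
    C, R = {}, {}
    while B:
        b, Bb = next(iter(B.items()))
        if lt(b, a):                         # b < a: move the term to the remainder
            R[b] = R.get(b, 0) + Bb
            del B[b]
            continue
        q = Bb / A[a]                        # b not< a: divide by the leading term of A
        qe = tuple(bi - ai for bi, ai in zip(b, a))
        C[qe] = C.get(qe, 0) + q
        for e, Ae in A.items():              # subtract q * t^qe * A from B
            m = tuple(qi + ei for qi, ei in zip(qe, e))
            B[m] = B.get(m, 0) - q * Ae
            if B[m] == 0:
                del B[m]
    return C, R

# hypothetical 2-variable example: divide x^2 y^2 by A = 1 - xy, leading exponent a = (1,1):
# x^2 y^2 = (-xy - 1)(1 - xy) + 1, so C = -xy - 1 and R = 1
C, R = euclidean_division({(2, 2): 1}, {(0, 0): 1, (1, 1): -1}, (1, 1))
print(C, R)
```

In the example the single remainder exponent $(0,0)$ satisfies $(0,0)<(1,1)$, while both quotient exponents are $\not<0$, as the lemma below requires.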
First of all we use the fact that for any $h\in H$ one can write $f_{h}(\mathbf{t}_{\mathcal{N}}) = \mathbf{t}_{\mathcal{N}}^{r_h}\cdot\sum_{\ell}b_{\ell} \mathbf{t}_{\mathcal{N}}^{\ell}/\prod_{n\in \mathcal{N}}(1 - \mathbf{t}_{\mathcal{N}}^{a_{n}})$, where $\ell, a_{n}\in \mathbb{Z}\langle E_{n}\rangle_{n\in \mathcal{N}}$ so that $a_{n} = \lambda_{n}\pi_{\mathcal{N}}(E^{*}_{n})$ for some $\lambda_{n}>0$, $\ell\in \mathbb{R}_{\geq 0}\langle a_{n}\rangle_{n\in\mathcal{N}}$ and $b_{\ell}\in\mathbb{Z}$ (for a more precise formulation see [@LSznew]). Note that $A(\mathbf{t}_{\mathcal{N}})=\prod_{n\in \mathcal{N}}(1-\mathbf{t}_{\mathcal{N}}^{a_{n}})$ has a unique maximal term $(-1)^{|\mathcal{N}|}\mathbf{t}_{\mathcal{N}}^{\sum_{n\in \mathcal{N}}a_{n}}$ with $\sum_{n\in \mathcal{N}}a_{n}>0$. Thus, by the above multivariable Euclidean division we can write $$\label{eq:uniqueness} \mathbf{t}_{\mathcal{N}}^{r_{h}} \sum_{\ell} b_{\ell} \mathbf{t}_{\mathcal{N}}^{\ell} = P^{+}_{h}(\mathbf{t}_{\mathcal{N}}) \cdot \prod_{n\in \mathcal{N}}(1-\mathbf{t}_{\mathcal{N}}^{a_{n}})+R_{h}(\mathbf{t}_{\mathcal{N}})$$ and we set $f^{neg}_{h}(\mathbf{t}_{\mathcal{N}}) := \frac{R_{h}(\mathbf{t}_{\mathcal{N}})}{\prod_{n\in \mathcal{N}}(1-\mathbf{t}_{\mathcal{N}}^{a_{n}})}$. The uniqueness follows from the assumptions on $P_{h}^{+}$ and $f_{h}^{neg}$, since (\[eq:uniqueness\]) can be viewed as a one-variable relation considering the other variables as coefficients. Multiplicity and relation to the polynomial part {#ss:multiplicity} ------------------------------------------------ We will show that the polynomial part can be computed from the multivariable quotient $P^{+}_{h}$ by taking its monomial terms with a suitable multiplicity. We start by defining the following type of partial orders $\{\mathcal{N},>\}$. Choose a node $n_{0}\in \mathcal{N}$ and orient the edges of $\Gamma^{orb}$ (cf. \[ss:intro3\]) towards $n_{0}$.
This induces a partial order on the set of nodes: $n>n'$ if there is an edge in $\Gamma^{orb}$ connecting them, oriented from $n$ to $n'$. Note that $n_{0}$ is the unique minimal node with respect to this partial order. Associated with the above partial order and a monomial $\mathbf{t}_{\mathcal{N}}^{\beta} = \prod_{n\in \mathcal{N}}t_{n}^{\beta_{n}}$, we define the following two sign-functions: $\mathfrak{s}_{n}(\beta)=1$ if $\beta_{n}\geq 0$ and $0$ otherwise; and, assuming $n>n'$ for some $n,n'\in \mathcal{N}$, we set $\mathfrak{s}_{n>n'}(\beta)=1$ if $\beta_{n}\geq0$ and $\beta_{n'}<0$, and $0$ otherwise. Finally, we define the *multiplicity function* $\mathfrak{s}(\beta) = \mathfrak{s}_{n_{0}}(\beta) + \sum_{n>n'}\mathfrak{s}_{n>n'}(\beta)$. The function $\mathfrak{s}$ does not depend on the choice of the partial order. This can be checked easily for two partial orders whose unique minimal nodes are connected by an edge in $\Gamma^{orb}$. \[lm-0\] Consider the multivariable quotient $P^{+}_{h}(\mathbf{t}_{\mathcal{N}})=\sum_{\beta\in \mathcal{B}_h}p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}$ of $f_{h}$. Then the polynomial part defined in (\[eq:polpartdef\]) has the following form: $$P_{h}(\mathbf{t}_{\mathcal{N}}) = \sum_{\beta\in \mathcal{B}_h}\mathfrak{s}(\beta) p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}.$$ Recall that the polynomial part $P_{h}(\mathbf{t}_{\mathcal{N}})$ is defined by (\[eq:polpartdef\]) using the polynomials $P_{h}^{n'}(\mathbf{t}_{\mathcal{N}})$ and $P_{h}^{n,n'}(\mathbf{t}_{\mathcal{N}})$ for any $n,n'\in\mathcal{N}$ for which there exists an edge connecting them in $\Gamma^{orb}$. Moreover, $P^{n'}_{h}$ and $P^{n,n'}_{h}$ are results of one- and two-variable divisions in the variables $t_{n'}$ and $t_{n},t_{n'}$, while considering the other variables as coefficients. These divisions can be deduced from the above algorithm if we replace the partial order on $L\otimes \mathbb{Q}$ by the corresponding projections ‘$<_{n'}$’ and ‘$<_{n,n'}$’.
That is, $a<_{n'}b$ if $a_{n'}<b_{n'}$, and $a<_{n,n'}b$ if $a_{n}<b_{n}$ and $a_{n'}<b_{n'}$. Since $a\not<_{n'}b$ and $a\not<_{n,n'}b$ both imply $a\not<b$, the monomial terms of $P^{n'}_{h}$ and $P^{n,n'}_{h}$ can be found among the monomial terms of $P^{+}_{h}$; more precisely, $$\begin{gathered} P^{n'}_{h}(\mathbf{t}_{\mathcal{N}}) = \sum_{\substack{\beta\in \mathcal{B}_{h} \\ \beta_{n'}\geq0}} p_{\beta} \mathbf{t}_{\mathcal{N}}^{\beta} = \sum_{\beta \in \mathcal{B}_{h}}\mathfrak{s}_{n'}(\beta) p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}, \\ P^{n,n'}_{h}(\mathbf{t}_{\mathcal{N}}) = \sum_{\substack{\beta\in \mathcal{B}_{h} \\ \beta_{n}\geq 0\textnormal{ or } \beta_{n'}\geq0}} p_{\beta} \mathbf{t}_{\mathcal{N}}^{\beta} = \sum_{\beta \in \mathcal{B}_{h}}(\mathfrak{s}_{n'}(\beta) + \mathfrak{s}_{n>n'}(\beta)) p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}\end{gathered}$$ (assuming that $n>n'$). Thus, $$\begin{aligned} P_{h}(\mathbf{t}_{\mathcal{N}}) =& \sum_{n>n'} P^{n,n'}_{h}(\mathbf{t}_{\mathcal{N}}) - \sum_{n'\in \mathcal{N}} (\delta_{n',\mathcal{N}}-1)P_{h}^{n'}(\mathbf{t}_{\mathcal{N}}) \\ =& \sum_{n>n'} \sum_{\beta\in \mathcal{B}_{h}}(\mathfrak{s}_{n'}(\beta)+\mathfrak{s}_{n>n'}(\beta))p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta} - \sum_{n'\in \mathcal{N}}(\delta_{n',\mathcal{N}}-1)\sum_{\beta\in \mathcal{B}_{h}}\mathfrak{s}_{n'}(\beta)p_{\beta} \mathbf{t}_{\mathcal{N}}^{\beta}\\ =& \sum_{\beta\in \mathcal{B}_{h}} \Big[ \sum_{n>n'} \big(\mathfrak{s}_{n'}(\beta)+\mathfrak{s}_{n>n'}(\beta)\big) - \sum_{n'\in \mathcal{N}}(\delta_{n',\mathcal{N}}-1)\mathfrak{s}_{n'}(\beta) \Big] p_{\beta} \mathbf{t}_{\mathcal{N}}^{\beta} \\ =& \sum_{\beta \in \mathcal{B}_{h}} \Big(\mathfrak{s}_{n_{0}}(\beta) + \sum_{n>n'}\mathfrak{s}_{n>n'}(\beta)\Big)p_{\beta} \mathbf{t}_{\mathcal{N}}^{\beta} =\sum_{\beta \in \mathcal{B}_{h}} \mathfrak{s}(\beta) p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta},\end{aligned}$$ since $\#\{n \,|\, n>n' \}=\delta_{n',\mathcal{N}}-1$ for $n'\neq n_{0}$ and $\#\{n \,|\, n>n_{0}
\}=\delta_{n_{0},\mathcal{N}}$, where $n_{0}$ is the unique minimal node with respect to the partial order. \[rk:poly-plus\] (i) \[poly-plus-i\] For $\beta < 0$ we have $\mathfrak{s}(\beta)=0$, while for $\beta \not< 0$ we have $\mathfrak{s}(\beta)\geq1$. Hence, the multiplicity $\mathfrak{s}(\beta)$ is non-zero for every $\beta\in \mathcal{B}_h$, thus every monomial of $P^{+}_{h}$ appears in $P_{h}$. (ii) \[poly-plus-ii\] The reduced Poincaré series $Z_{h}(\mathbf{t}_{\mathcal{N}})$ is the Taylor expansion of $f_{h}(\mathbf{t}_{\mathcal{N}})$ considering $\mathbf{t}_{\mathcal{N}}$ small. One can think of the ‘endless’ multivariable Euclidean division as an expansion of $f_{h}(\mathbf{t}_{\mathcal{N}})$ considering $\mathbf{t}_{\mathcal{N}}$ large. If we take each term of this latter expansion with multiplicity $\mathfrak{s}$ then we recover $P_{h}$, since terms with negative degree in each $t_{n}$ have zero multiplicity. Comparisons, examples and $P^+$ =============================== The aim of this section is to compare the two polynomials $P_{h}(\mathbf{t}_{\mathcal{N}})$ and $P^{+}_{h}(\mathbf{t}_{\mathcal{N}})$, given by the two different decompositions, on two crucial classes of negative definite plumbing graphs. In the case of the first class, when the orbifold graph is a bamboo, we will prove that the two polynomials agree. The second class is also motivated by singularity theory and contains the graphs of the manifolds $S^3_{-p/q}(K)$, where $K\subset S^3$ is the connected sum of algebraic knots. Although this class gives examples in which the two polynomials do not agree, their structure can be understood using special features of these manifolds. The orbifold graph is a bamboo {#ss:orb} ------------------------------ Let $\Gamma$ be a negative definite plumbing graph with set of nodes $\mathcal{N}=\{n_{1},\ldots,n_{k}\}$. In this section we will assume that its orbifold graph $\Gamma^{orb}$ is a *bamboo*, i.e.
$\Gamma^{orb}$ has no nodes.\ [Figure: the bamboo $\Gamma^{orb}$, a chain with consecutive vertices $n_{1}, n_{2}, \ldots, n_{k-1}, n_{k}$.] Then we have the following result: \[thm:bamboo\] If the orbifold graph $\Gamma^{orb}$ is a bamboo then $P_{h}(\mathbf{t}_{\mathcal{N}}) = P^{+}_{h}(\mathbf{t}_{\mathcal{N}})$ for any $h\in H$, ie. every monomial term of $P^{+}_{h}(\mathbf{t}_{\mathcal{N}})$ appears in $P_{h}(\mathbf{t}_{\mathcal{N}})$ with multiplicity $1$. Denote by $\mathfrak{v}_{i}:=\pi_{\mathcal{N}}(E^{*}_{n_{i}})$ the projected vectors for all $i=1,\ldots,k$. When $\Gamma^{orb}$ is a bamboo we can write $f_{h}(\mathbf{t}_{\mathcal{N}})$ as a linear combination of fractions of the form $\displaystyle \frac{\mathbf{t}_{\mathcal{N}}^{\alpha}}{(1-\mathbf{t}_{\mathcal{N}}^{\lambda_{1}\mathfrak{v}_{1}})(1-\mathbf{t}_{\mathcal{N}}^{\lambda_{k}\mathfrak{v}_{k}})}$ for some $ \alpha \in \mathbb{R}_{\geq0}\langle \mathfrak{v}_{i}\rangle_{i=\overline{1,k}} \cap \mathbb{Z}\langle \pi_{\mathcal{N}}(E^{*}_{v})\rangle_{v\in \mathcal{V}}$ and $\lambda_{1},\lambda_{k}>0$ (cf. [@LSznew]). By the uniqueness of the decomposition (\[eq:decomp\]) and Theorem \[lm-0\] it is enough to prove the following proposition. \[prop-1\] Let $\alpha\in \mathbb{R}_{\geq0}\langle \mathfrak{v}_{i}\rangle_{i=\overline{1,k}} \cap \mathbb{Z}\langle \pi_{\mathcal{N}}(E^{*}_{v})\rangle_{v\in \mathcal{V}}$ and consider the following fraction $\displaystyle\varphi(\mathbf{t}_{\mathcal{N}})=\frac{\mathbf{t}_{\mathcal{N}}^{\alpha}}{(1-\mathbf{t}_{\mathcal{N}}^{\lambda_{1}\mathfrak{v}_{1}})(1-\mathbf{t}_{\mathcal{N}}^{\lambda_{k}\mathfrak{v}_{k}})}$, $\lambda_{1},\lambda_{k}>0$.
Then for any monomial $\mathbf{t}_{\mathcal{N}}^{\beta}$ of the quotient $\varphi^{+}$ given by the decomposition $\varphi=\varphi^{+}+\varphi^{neg}$ of Lemma \[lem:+dec\] one has $\mathfrak{s}(\beta)=1$. The main tool in the proof of the proposition will be the following lemma. \[lm-1\] For any $\beta = \sum_{\ell=1}^{k}\beta_{\ell}E_{n_{\ell}} \in \alpha - \mathbb{R}_{\geq0}\langle \mathfrak{v}_{1}, \mathfrak{v}_{k}\rangle$ with not all $\beta_{\ell}$ negative we have $$\beta_{1},\ldots,\beta_{i-1}<0, \qquad \beta_{i},\ldots,\beta_{j} \geq 0, \qquad \beta_{j+1},\ldots,\beta_{k}<0$$ for some $i,j\in\{1,\ldots,k\}$. We denote by $\mathcal{E}_{i}=\mathcal{E}_{i}(\alpha)$ the intersection $\{\beta=\sum_{\ell=1}^{k}\beta_{\ell}E_{n_{\ell}} \,|\, \beta_{i}=0\}\cap (\alpha - \mathbb{R}_{\geq0}\langle \mathfrak{v}_{1},\mathfrak{v}_{k}\rangle)$ and we consider the parametric line $\beta(t) = t\beta + (1-t)\alpha$, $t\in \mathbb{R}$ connecting $\alpha$ to $\beta$. Whenever $\beta(t)$ crosses $\mathcal{E}_{i}$ as $t$ goes from $0$ to $1$ the sign of $\beta_{i}(t)$ changes from positive to negative. Thus, the order in which $\beta(t)$ crosses $\mathcal{E}_{i}$ determines the order in which the $\beta_{i}(t)$’s change sign, consequently determines the sign configuration of $\beta_{i}=\beta_{i}(1)$, $i=1,\ldots,k$. \[lm-2\] Let $\sigma_{i}=\sigma_{i}(\alpha)$ and $\tau_{i}=\tau_{i}(\alpha)$ be such that $\alpha-\sigma_{i}\mathfrak{v}_{1} = (\alpha - \mathbb{R}_{\geq0}\mathfrak{v}_{1}) \cap \mathcal{E}_{i}$ and $\alpha-\tau_{i}\mathfrak{v}_{k} = (\alpha - \mathbb{R}_{\geq0}\mathfrak{v}_{k}) \cap \mathcal{E}_{i}$ for any $i=1,\ldots,k$.
If $\alpha=a_{\ell}\mathfrak{v}_{\ell}$, $a_{\ell}\geq0$ for some $\ell \in\{1,\dots,k\}$ then we have $$\sigma_{1}(\alpha) < \ldots < \sigma_{\ell}(\alpha) = \ldots =\sigma_{k}(\alpha) \qquad \textnormal{and}\qquad \tau_{1}(\alpha)=\ldots=\tau_{\ell}(\alpha) > \ldots > \tau_{k}(\alpha).$$ Moreover, for general $\alpha \in \mathbb{R}_{\geq0} \langle \mathfrak{v}_{i}\rangle_{i=\overline{1,k}}$ one has $\sigma_{1}(\alpha)\leq \ldots \leq \sigma_{k}(\alpha)$ and $\tau_{1}(\alpha) \geq \ldots \geq \tau_{k}(\alpha)$. Note that we have additivity $\tau_{i}(\alpha'+\alpha'') = \tau_{i}(\alpha')+\tau_{i}(\alpha'')$ and $\sigma_{i}(\alpha'+\alpha'') = \sigma_{i}(\alpha')+\sigma_{i}(\alpha'')$, hence we may assume that $\alpha=a_{\ell}\mathfrak{v}_{\ell}$. Moreover, we will only prove the lemma for $\sigma_{i}$’s. The intersection point $\alpha-\sigma_{i}\mathfrak{v}_{1}$ is characterized by $(\alpha-\sigma_{i}\mathfrak{v}_{1},E^{*}_{n_{i}})=0$, whence $$\sigma_{i} = \sigma_{i}(a_{\ell}\mathfrak{v}_{\ell}) = \frac{(a_{\ell}\mathfrak{v}_{\ell}, E^{*}_{n_{i}})}{(\mathfrak{v}_{1},E^{*}_{n_{i}})} = a_{\ell}\frac{(E^{*}_{n_{\ell}}, E^{*}_{n_{i}})}{(E^{*}_{n_{1}},E^{*}_{n_{i}})}.$$ Therefore, it is enough to show that $$\label{eq-a1} \frac{(E^{*}_{n_{\ell}}, E^{*}_{n_{i}})}{(E^{*}_{n_{1}},E^{*}_{n_{i}})} < \frac{(E^{*}_{n_{\ell}}, E^{*}_{n_{i+1}})}{(E^{*}_{n_{1}},E^{*}_{n_{i+1}})}, \quad \forall\ i < \ell \quad \textnormal{ and } \quad \frac{(E^{*}_{n_{\ell}}, E^{*}_{n_{i}})}{(E^{*}_{n_{1}},E^{*}_{n_{i}})} = \frac{(E^{*}_{n_{\ell}}, E^{*}_{n_{i+1}})}{(E^{*}_{n_{1}},E^{*}_{n_{i+1}})}, \quad \forall\ i \geq \ell.$$ Recall that by (\[eq:DETsgr\]) $\displaystyle (E^{*}_{v}, E^{*}_{w})=-\frac{\det_{\Gamma\setminus[v,w]}}{\det_{\Gamma}}$ for any vertices $v,w$, hence (\[eq-a1\]) is equivalent to the following determinantal relations $$\label{eq-a2} \det\nolimits_{\Gamma\setminus [n_{1},n_{i}]}\det\nolimits_{\Gamma\setminus [n_{i+1},n_{\ell}]}-\det\nolimits_{\Gamma\setminus [n_{1},n_{i+1}]} \cdot
\det\nolimits_{\Gamma\setminus [n_{i},n_{\ell}]} >0, \quad \forall\ i<\ell,$$ and equality for $ i\geq \ell$. We use the technique of N. Duchon (cf. [@EN Section 21]) to reduce (\[eq-a2\]) to the case when $\Gamma$ is a bamboo. To do so, we can remove peripheral edges of a graph in order to simplify graph determinant computations. Removal of such an edge is compensated by adjusting the decorations of the graph. Let $v$ be a vertex with decoration $b_{v}$ which is connected by an edge only to a vertex $w$ with decoration $b_{w}$. If we remove this edge and replace the decoration of the vertex $w$ by $ b_{w}-b_{v}^{-1}$ then the resulting non-connected graph will also be negative definite and its determinant does not change. Using this technique we consecutively remove every edge on the legs of $\Gamma$, and denote the resulting decorated graph by $\Gamma'$, which consists of a bamboo – connecting the nodes $n_{1}$ and $n_{k}$ – and isolated vertices. Note that $\det\nolimits_{\Gamma\setminus [n_{i},n_{j}]} = \det_{\Gamma'\setminus [n_{i},n_{j}]}$ for all $i,j=1,\ldots,k$. Moreover, (\[eq-a2\]) is equivalent to $$\label{eq-a3} \det\nolimits_{\Gamma'\setminus [n_{1},n_{i}]}\det\nolimits_{\Gamma'\setminus [n_{i+1},n_{\ell}]} - \det\nolimits_{\Gamma'\setminus [n_{1},n_{i+1}]} \cdot \det\nolimits_{\Gamma'\setminus [n_{i},n_{\ell}]} >0, \quad \forall\ i<\ell,$$ and equality for $i\geq \ell$, respectively. From the point of view of (\[eq-a3\]) we can forget about the isolated vertices of $\Gamma'$, ie. we may assume that $\Gamma'$ is a bamboo.
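As an aside (not part of the argument), the determinant-preserving edge removal is easy to check numerically. The sketch below uses an arbitrary test chain with decorations $-2,-3,-2$: removing the peripheral edge at the vertex decorated by $b_v=-2$ and replacing its neighbour's decoration $b_w=-3$ by $b_w-b_v^{-1}=-5/2$ leaves the determinant of the intersection matrix unchanged.

```python
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, d = len(A), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if A[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return d

# Chain v - w - u with decorations -2, -3, -2:
before = [[-2, 1, 0],
          [1, -3, 1],
          [0, 1, -2]]
# Remove the edge v - w, replace b_w by b_w - 1/b_v = -3 + 1/2:
after = [[-2, 0, 0],
         [0, Fraction(-5, 2), 1],
         [0, 1, -2]]
print(det(before), det(after))  # both equal -8
```

The same bookkeeping, applied leg by leg, produces the graph $\Gamma'$ used above.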
If we denote by $\det'_{[n_{i},n_{j}]}$ the determinant of the graph $[n_{i},n_{j}]$ as subgraph of (the bamboo) $\Gamma'$ then for $i<\ell$ we have $$\begin{gathered} \det\nolimits_{\Gamma'\setminus [n_{1},n_{i}]}\det\nolimits_{\Gamma'\setminus [n_{i+1},n_{\ell}]} - \det\nolimits_{\Gamma'\setminus [n_{1},n_{i+1}]} \cdot \det\nolimits_{\Gamma'\setminus [n_{i},n_{\ell}]} = \\ \det'\nolimits_{[n_{1},n_{i+1})} \cdot \det'\nolimits_{(n_{i},n_{k}]}\cdot \det'\nolimits_{(n_{\ell}, n_{k}]} - \det'\nolimits_{[n_{1},n_{i})} \cdot \det'\nolimits_{(n_{i+1},n_{k}]}\cdot \det'\nolimits_{(n_{\ell}, n_{k}]} \\ = \det'\nolimits_{[n_{1},n_{k}]} \cdot \det'\nolimits_{(n_{i},n_{i+1})}\cdot \det'\nolimits_{(n_{\ell}, n_{k}]},\end{gathered}$$ where the second equality uses the identity $$\det'\nolimits_{[n_{1},n_{i+1})}\cdot\det'\nolimits_{(n_{i},n_{k}]}=\det'\nolimits_{[n_{1},n_{k}]}\cdot\det'\nolimits_{(n_{i},n_{i+1})}+\det'\nolimits_{[n_{1},n_{i})}\cdot\det'\nolimits_{(n_{i+1},n_{k}]}$$ from [@LSznew Lemma 2.1.2]. $\Gamma'$ is also negative definite, hence $ \det'\nolimits_{[n_{1},n_{k}]} \cdot \det'\nolimits_{(n_{i},n_{i+1})}\cdot \det'\nolimits_{(n_{\ell}, n_{k}]} >0$ (note that $\det'\nolimits_{(n_{k},n_{k}] }=1$). If $i\geq \ell$ then it is easy to see $$\begin{gathered} \det\nolimits_{\Gamma'\setminus [n_{1},n_{i}]}\det\nolimits_{\Gamma'\setminus [n_{i+1},n_{\ell}]} - \det\nolimits_{\Gamma'\setminus [n_{1},n_{i+1}]} \cdot \det\nolimits_{\Gamma'\setminus [n_{i},n_{\ell}]} = \\ \det'\nolimits_{(n_{i},n_{k}]} \cdot \det'\nolimits_{[n_{1},n_{\ell})}\cdot \det'\nolimits_{(n_{i+1}, n_{k}]} - \det'\nolimits_{(n_{i+1},n_{k}]} \cdot \det'\nolimits_{[n_{1},n_{\ell})}\cdot \det'\nolimits_{(n_{i}, n_{k}]} = 0.\end{gathered}$$ We also introduce additional notations $\mathcal{E}_{0} = \mathcal{E}_{0}(\alpha)=\alpha- \mathbb{R}_{\geq0}\mathfrak{v}_{k}$ and $\mathcal{E}_{k+1}=\mathcal{E}_{k+1}(\alpha) =\alpha-\mathbb{R}_{\geq0}\mathfrak{v}_{1}$. 
Moreover, denote by $\varepsilon_{i,j}=\varepsilon_{i,j}(\alpha) = \mathcal{E}_{i}(\alpha)\cap \mathcal{E}_{j}(\alpha)$ the intersection points of segments $\mathcal{E}_{i}$ and $\mathcal{E}_{j}$. \[lm-21\] On $\mathcal{E}_{i}(\alpha)$ the intersection points are in the following order: $\varepsilon_{i,0}(\alpha) $, $\ldots$, $\varepsilon_{i,i-1}(\alpha)$, $\varepsilon_{i,i+1}(\alpha)$, $\ldots$, $\varepsilon_{i,k+1}(\alpha)$ for all $i=0,\ldots,k+1$ and for all $\alpha \in \mathbb{R}_{\geq0}\langle \mathfrak{v}_{i}\rangle_{i=\overline{1,k}}$. For $i=0$ and $i=k+1$ the statement is immediate from Lemma \[lm-2\]. Notice that we have defined $\sigma_{i}=\sigma_{i}(\alpha)$ and $\tau_{i}=\tau_{i}(\alpha)$ such that $\varepsilon_{i,0} = \alpha-\tau_{i}\mathfrak{v}_{k}$ and $\varepsilon_{i,k+1} = \alpha-\sigma_{i} \mathfrak{v}_{1}$. If $t_{i,j}=t_{i,j}(\alpha)\in [0,1]$ such that $\varepsilon_{i,j} = (1-t_{i,j})\varepsilon_{i,0} + t_{i,j}\varepsilon_{i,k+1}$, then we have to prove that $t_{i,j}(\alpha) \leq t_{i,j+1}(\alpha)$ for all $j$. Indeed, the case $\alpha=a_{\ell} \mathfrak{v}_{\ell}$, $a_{\ell}\geq0$ follows directly from the first part of Lemma \[lm-2\]. Generally, notice first the additivity $\varepsilon_{i,j}(\alpha'+\alpha'') = \varepsilon_{i,j}(\alpha')+\varepsilon_{i,j}(\alpha'')$ (as vectors), hence $$\label{eq-a4} t_{i,j}(\alpha'+\alpha'') = \frac{t_{i,j}(\alpha') {\sigma}_{i}(\alpha')+t_{i,j}(\alpha''){\sigma}_{i}(\alpha'')}{{\sigma}_{i}(\alpha')+{\sigma}_{i}(\alpha'')},$$ which gives the result using $t_{i,j}(a_{\ell}\mathfrak{v}_{\ell}) \leq t_{i,j+1}(a_{\ell}\mathfrak{v}_{\ell}) $ for all $j$ and $\ell$. \[lm-3\] The bounded region $(\alpha - \mathbb{R}_{\geq0}\langle\mathfrak{v}_{1},\mathfrak{v}_{k}\rangle) \setminus \mathbb{R}_{<0}\langle E_{n}\rangle _{n\in \mathcal{N}}$ is the union of quadrangles between segments $\mathcal{E}_{i}, \mathcal{E}_{i+1}, \mathcal{E}_{j}, \mathcal{E}_{j+1}$ or triangles (degenerated cases). 
These polygons may intersect each other only at the boundary. [Figure: the region $\alpha - \mathbb{R}_{\geq0}\langle \mathfrak{v}_{1},\mathfrak{v}_{k}\rangle$ with vertex $\varepsilon_{0,k+1}=\alpha$, bounded by the half-lines $\mathcal{E}_{0}$ and $\mathcal{E}_{k+1}$ and subdivided by the segments $\mathcal{E}_{1},\ldots,\mathcal{E}_{k}$ into quadrangles with vertices of type $\varepsilon_{i,j}$.] The segments $\mathcal{E}_{i}$
divide $(\alpha - \mathbb{R}_{\geq0} \langle \mathfrak{v}_{1}, \mathfrak{v}_{k}\rangle) \setminus \mathbb{R}_{<0}\langle E_{n}\rangle _{n\in \mathcal{N}} $ into convex polygons. By Lemma \[lm-21\], we can assume that $[\varepsilon_{i,j}, \varepsilon_{i,j+1}]$ and $[\varepsilon_{i+1,j},\varepsilon_{i,j}]$ are two faces at vertex $\varepsilon_{i,j}$ of such a polygon. Moreover, $\varepsilon_{i+1,j}$ and $\varepsilon_{i,j+1}$ must also be vertices of the polygon and another two faces must lie on the segments $\mathcal{E}_{i+1}$ and $\mathcal{E}_{j+1}$. Hence, the segments $\mathcal{E}_{i}$, $\mathcal{E}_{j}$, $\mathcal{E}_{i+1}$, $\mathcal{E}_{j+1}$ form a convex polygon with vertices $\varepsilon_{i,j}$, $\varepsilon_{i+1,j}$, $\varepsilon_{i,j+1}$ and $\varepsilon_{i+1,j+1}$. The polygon can degenerate into triangles with vertices $\varepsilon_{i,j}$, $\varepsilon_{i,j+1}$ and $\varepsilon_{i+1,j+1}$. Let $\beta \in (\alpha - \mathbb{R}_{\geq0} \langle \mathfrak{v}_{1}, \mathfrak{v}_{k}\rangle) \setminus \mathbb{R}_{<0}\langle E_{n}\rangle _{n\in \mathcal{N}}$ be fixed. Consider the parametric line $\beta(t) = t\beta+(1-t)\alpha$ connecting $\beta$ to the vertex $\alpha$ of the affine cone. The order in which $\beta(t)$ intersects the segments $\mathcal{E}_{i}$ as $t$ goes from $0$ to $1$ tells us the order in which the $\beta_{i}$’s change sign. At the beginning, every $\beta_{i}>0$ and $\beta(t)$ sits in the polygon with vertices $\alpha=\varepsilon_{0,k+1}$, $\varepsilon_{0,k}$, $\varepsilon_{1,k}$, $\varepsilon_{1,k+1}$, with sides lying on $\mathcal{E}_{0}$, $\mathcal{E}_{k+1}$, $\mathcal{E}_{1}$, $\mathcal{E}_{k}$. We also say that we have already intersected $\mathcal{E}_{0}$ and $\mathcal{E}_{k+1}$.
Then $\beta(t)$ either intersects $\mathcal{E}_{1}$, hence $\beta_{1}$ changes to $\beta_{1}<0$ and $\beta(t)$ arrives into the polygon $\varepsilon_{1,k+1}, \varepsilon_{1,k}, \varepsilon_{2,k}, \varepsilon_{2,k+1}$ with sides on $\mathcal{E}_{1}$, $\mathcal{E}_{2}$, $\mathcal{E}_{k}$, $\mathcal{E}_{k+1}$, or, it intersects $\mathcal{E}_{k}$ implying that $\beta_{k}$ becomes negative and $\beta(t)$ arrives into the polygon with sides on $\mathcal{E}_{0}$, $\mathcal{E}_{1}$, $\mathcal{E}_{k-1}$, $\mathcal{E}_{k}$. Therefore, we have crossed $\mathcal{E}_{0}, \mathcal{E}_{1}, \mathcal{E}_{k+1}$ in the first, while $\mathcal{E}_{0}, \mathcal{E}_{k}, \mathcal{E}_{k+1}$ in the second case. By induction, we assume that $\beta(t)$ lies in the polygon with sides $\mathcal{E}_{i}, \mathcal{E}_{i+1}, \mathcal{E}_{j}, \mathcal{E}_{j+1}$ for some $t$ and it has already crossed $\mathcal{E}_{0}, \ldots, \mathcal{E}_{i}, \mathcal{E}_{j+1},\ldots, \mathcal{E}_{k+1}$, that is $\beta_{1},\ldots,\beta_{i}, \beta_{j+1},\ldots,\beta_{k}<0$ and $\beta_{i+1},\ldots, \beta_{j}\geq 0$. Thus, $\beta(t)$ must intersect $\mathcal{E}_{i+1}$ or $\mathcal{E}_{j}$. Therefore, either $\beta_{i+1}$ changes sign to $\beta_{i+1}<0$ and $\beta(t)$ arrives into the polygon with sides $\mathcal{E}_{i+1}, \mathcal{E}_{i+2}, \mathcal{E}_{j}, \mathcal{E}_{j+1}$, or $\beta_{j}$ changes to $\beta_{j}<0$ and $\beta(t)$ arrives into the polygon with $\mathcal{E}_{i}, \mathcal{E}_{i+1}, \mathcal{E}_{j}, \mathcal{E}_{j-1}$. Hence, the induction stops after passing each $\mathcal{E}_{i}$ and proves the desired configuration of signs. If $p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}$ is a monomial term of $\varphi^{+}(\mathbf{t}_{\mathcal{N}})$ then $\beta \in \alpha - \mathbb{R}_{\geq0}\langle \mathfrak{v}_{1}, \mathfrak{v}_{k}\rangle$, moreover not all $\beta_{\ell}$ are negative and we have the sign configuration of Lemma \[lm-1\].
To compute the multiplicity $\mathfrak{s}(\beta)$ we choose the ordering of nodes $n_{\ell}>n_{\ell+1}$ for all $\ell=1,\ldots,k-1$. If $\beta_{k}\geq0$ then $\mathfrak{s}_{n_{k}}(\beta)=1$ and $\mathfrak{s}_{n_{\ell}>n_{\ell+1}}(\beta)=0$ for all $\ell=1,\ldots,k-1$, thus $\mathfrak{s}(\beta)=\mathfrak{s}_{n_{k}}(\beta) + \sum_{\ell=1}^{k-1}\mathfrak{s}_{n_{\ell}>n_{\ell+1}}(\beta)=1$. If $\beta_{k}<0$ then $\mathfrak{s}_{n_{k}}(\beta)=0$ and $\mathfrak{s}_{n_{\ell}>n_{\ell+1}}(\beta)=0$ for all $\ell$ except for $\ell=j$, for which $\beta_{j}\geq0$ and $\beta_{j+1}<0$, thus $\mathfrak{s}(\beta)=1$ in this case too. An example with higher multiplicities {#ss:ex} ------------------------------------- Consider the negative definite plumbing graph $\Gamma$ given by the left hand side of the following picture. [Figure: left, the graph $\Gamma$: a central vertex $E_+$ decorated by $-22$, connected to a vertex $E_{+1}$ decorated by $-2$ and to three nodes $E_1$, $E_2$, $E_3$ decorated by $-1$, each node carrying two legs decorated by $-3$ and $-2$; right, the orbifold graph: the central vertex (corresponding to $E_+$) is marked by $-$ and its three neighbours by $+$, $+$ and $-$, encoding the chosen orientation.] The associated plumbed 3-manifold is obtained by $(-7/2)$-surgery along the connected sum of three right handed trefoil knots in $S^3$. Its group $H\simeq \mathbb{Z}_7$ is cyclic of order $7$, generated by the class $[E_{+1}^*]$, where $E_{+1}^*$ is the dual base element in $L'$. For simplicity, we set $\bar{l}:=\pi_{\mathcal{N}}(l)$ for $l\in L\otimes\mathbb{Q}$ and use the short notation $(l_+,l_1,l_2,l_3)$ for $\overline{l}=l_+E_+ + \sum_{i=1}^3 l_iE_i$. Notice that every exponent $\beta=(\beta_+,\beta_1,\beta_2,\beta_3)$ appearing in $P^+(\mathbf{t}_{\mathcal{N}})$ can be written in the form $\beta=c_+\bar{E}^*_+ +\sum_{i=1}^3 c_i \bar{E}^*_i -\sum_{i=1}^3 \sum_{j=1}^2 x_{ij} \bar{E}^*_{ij}-x_{+1}\bar{E}^*_{+1}$ for some $0\leq c_+\leq 2$, $0\leq c_i\leq 1$ and $x_{ij},x_{+1}\geq 1$. Eg., for the choice $c_+=2$, $c_i=1$, $x_{+1}=x_{ij}=1$ for $i\in\{1,2\}$ and $x_{3j}=2$ we get $\beta_0=(-1/7,1/7,1/7,-34/7)$. Moreover, one can check that this is the only way to write $\beta_0$ in the above form. Therefore, the orientation given by the right hand side of the picture above implies that $\mathfrak{s}(\beta_0)=2$. In fact, $\beta_0$ belongs to $P_6^+(\mathbf{t}_{\mathcal{N}})$. Hence, by Theorem \[lm-0\] $$P_6(\mathbf{t}_{\mathcal{N}})\neq P_6^+(\mathbf{t}_{\mathcal{N}}).$$ We also emphasize that the exponents $$(-1/7,1/7,1/7,-34/7), (-1/7,1/7,-34/7,1/7), (-1/7,-34/7,1/7,1/7) \ \mbox{with coefficient}\ p_{\beta}=1 \ \mbox{and}$$ $$(-1/7,1/7,1/7,-27/7), (-1/7,1/7,-27/7,1/7), (-1/7,-27/7,1/7,1/7) \ \mbox{with coefficient}\ p_{\beta}=-1$$ (all of them from $P_6^+(\mathbf{t}_{\mathcal{N}})$) are the only exponents with $\mathfrak{s}(\beta)=2>1$.
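The value of $\beta_0$ can be reproduced by a direct computation with the intersection form; the sketch below (the vertex indexing is our own choice) builds the intersection matrix of $\Gamma$, recovers the dual classes $E^*_v$ from $(E^*_v,E_w)=-\delta_{vw}$, and projects the stated combination onto the nodes.

```python
from fractions import Fraction

# Vertices: 0: E_+ (-22), 1: E_{+1} (-2), 2,3,4: E_1,E_2,E_3 (-1),
# 5,6 / 7,8 / 9,10: the (-3,-2)-legs of E_1, E_2, E_3 respectively.
diag = [-22, -2, -1, -1, -1, -3, -2, -3, -2, -3, -2]
edges = [(0, 1), (0, 2), (0, 3), (0, 4),
         (2, 5), (2, 6), (3, 7), (3, 8), (4, 9), (4, 10)]
n = len(diag)
I = [[Fraction(0)] * n for _ in range(n)]
for v in range(n):
    I[v][v] = Fraction(diag[v])
for v, w in edges:
    I[v][w] = I[w][v] = Fraction(1)

def inverse(M):
    """Gauss-Jordan inversion over the rationals."""
    m = len(M)
    A = [row[:] + [Fraction(int(i == j)) for j in range(m)]
         for i, row in enumerate(M)]
    for c in range(m):
        p = next(r for r in range(c, m) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        piv = A[c][c]
        A[c] = [x / piv for x in A[c]]
        for r in range(m):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[m:] for row in A]

Iinv = inverse(I)

# (E^*_v, E_w) = -delta_{vw} forces E^*_v = -I^{-1} e_v in the E-basis.
def dual(v):
    return [-Iinv[w][v] for w in range(n)]

# beta_0 = 2E^*_+ + E^*_1 + E^*_2 + E^*_3 - E^*_{+1}
#          - (legs of E_1 and E_2 once) - 2*(legs of E_3):
coeff = {0: 2, 2: 1, 3: 1, 4: 1, 1: -1,
         5: -1, 6: -1, 7: -1, 8: -1, 9: -2, 10: -2}
beta = [sum(c * dual(v)[w] for v, c in coeff.items()) for w in range(n)]
beta0 = [beta[w] for w in (0, 2, 3, 4)]  # projection to the nodes
print(beta0)  # (-1/7, 1/7, 1/7, -34/7)
```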
Hence, although the two polynomials may be different, it still holds that $P_h(1)=P_h^+(1)=\mathfrak{sw}_h^{norm}$ for any $h\in \mathbb{Z}_7$. On the 3-manifold $S^3_{-p/q}(K)$ --------------------------------- ### **Algebraic knots** {#sec:algknots} Assume $K\subset S^3$ is an algebraic knot, ie. it is the link of an irreducible plane curve singularity defined by the function germ $\mathfrak{f}:(\mathbb{C}^2,0)\rightarrow (\mathbb{C},0)$. The [*Newton pairs*]{} of $K$ are the pairs of integers $\{(p_i,q_i)\}_{i=1}^r$, where $p_i\geq 2$, $q_i\geq 1$, $q_1>p_1$ and gcd$(p_i,q_i)=1$. They are the exponents appearing naturally in the normal form of $\mathfrak{f}$. From a topological point of view, it is more convenient to use the [*linking pairs*]{} $(p_i,a_i)_{i=1}^r$ (the decorations of the splice diagram, cf. [@EN]), which can be calculated recursively by $$\label{eq:linkp} a_1=q_1 \ \mbox{ and} \ a_{i+1}=q_{i+1}+a_i p_i p_{i+1} \ \mbox{ for }\ i\geq 1.$$ The set of intersection multiplicities of $\mathfrak{f}$ with all possible analytic germs is a [*numerical semigroup*]{} denoted by $\mathcal{M}_\mathfrak{f}$. Although its definition is analytic, $\mathcal{M}_\mathfrak{f}$ is described combinatorially by its Hilbert basis: $p_1p_2\cdots p_r$, $a_{i}p_{i+1}\cdots p_r$ for $1\leq i \leq r-1$, and $a_r$. In fact, $|\mathbb{Z}_{\geq 0}\setminus \mathcal{M}_\mathfrak{f}|=\mu_\mathfrak{f}/2$ (cf. [@Milnorbook]), where $\mu_\mathfrak{f}$ is the Milnor number of $\mathfrak{f}$. The Frobenius number of $\mathcal{M}_\mathfrak{f}$ is $\mu_\mathfrak{f}-1$, and for $\ell\leq \mu_\mathfrak{f}-1$ one has the symmetry: $$\label{eq:sym} \ell\in \mathcal{M}_\mathfrak{f} \ \ \mbox{if and only if} \ \ \mu_\mathfrak{f}-1-\ell\not\in \mathcal{M}_\mathfrak{f}.$$ We emphasize that the integer $\delta_\mathfrak{f}:=\mu_\mathfrak{f}/2$ is called the delta-invariant of $\mathfrak{f}$, which equals the minimal Seifert genus of the knot $K$.
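These recursions are easy to experiment with. The following sketch (the Newton pairs $(2,3),(3,1)$ are an arbitrary sample of our own) computes the linking pairs via (\[eq:linkp\]), the Hilbert basis of $\mathcal{M}_\mathfrak{f}$, and checks the gap count, the Frobenius number and the symmetry (\[eq:sym\]).

```python
def linking_pairs(newton):
    """(p_i, a_i) from Newton pairs (p_i, q_i):
    a_1 = q_1, a_{i+1} = q_{i+1} + a_i p_i p_{i+1}."""
    a = [newton[0][1]]
    for i in range(1, len(newton)):
        a.append(newton[i][1] + a[-1] * newton[i - 1][0] * newton[i][0])
    return [(p, ai) for (p, _), ai in zip(newton, a)]

def hilbert_basis(newton):
    """p_1...p_r together with a_i p_{i+1}...p_r for i = 1,...,r."""
    pairs = linking_pairs(newton)
    ps = [p for p, _ in pairs]
    prod = 1
    for p in ps:
        prod *= p
    gens = [prod]
    for i, (_, a) in enumerate(pairs):
        tail = 1
        for p in ps[i + 1:]:
            tail *= p
        gens.append(a * tail)
    return gens

def gaps(gens, bound):
    """Gaps of the numerical semigroup generated by gens, up to bound."""
    member = [False] * (bound + 1)
    member[0] = True
    for m in range(1, bound + 1):
        member[m] = any(m >= g and member[m - g] for g in gens)
    return [m for m in range(bound + 1) if not member[m]]

newton = [(2, 3), (3, 1)]
gens = hilbert_basis(newton)          # [6, 9, 19]
G = gaps(gens, gens[0] * gens[-1])    # crude bound, enough for this sample
mu = 2 * len(G)                       # mu_f = 2 * #gaps = 42
assert max(G) == mu - 1               # Frobenius number
assert all((l in G) != (mu - 1 - l in G) for l in range(mu))  # symmetry
```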
The [*Alexander polynomial*]{} $\Delta(t)$ of $K$ (normalized by $\Delta(1)=1$) can be calculated in terms of the linking pairs via the formula $$\label{eq:Alex} \Delta(t)=\frac{ (1-t^{a_1p_1p_2\cdots p_r})(1-t^{a_2p_2\cdots p_r})\cdots (1-t^{a_r p_r})(1-t)}{(1-t^{a_1p_2\cdots p_r})(1-t^{a_2p_3\cdots p_r})\cdots (1-t^{a_r})(1-t^{p_1\cdots p_r})}.$$ It has degree $\mu_\mathfrak{f}$. On the other hand, $\Delta(t)/(1-t)=\sum_{\ell\in \mathcal{M}_\mathfrak{f}} t^{\ell}$ is the *monodromy zeta-function* of $\mathfrak{f}$ (cf. [@CDG99]), whose [*polynomial part*]{} is calculated explicitly by the gaps of the semigroup: $P_\mathfrak{f}(t)=-\sum_{\ell\notin \mathcal{M}_\mathfrak{f}} t^{\ell}$ (cf. [@LSznew 7.1.2]). Hence, the degree of $P_{\mathfrak{f}}(t)$ equals $\mu_\mathfrak{f}-1$. The [*embedded minimal good resolution graph*]{} of $\mathfrak{f}$ (or the minimal negative-definite plumbing graph of $K$) has the shape of [Figure: $\Gamma_\mathfrak{f}$, a string of nodes $v_1,\ldots,v_r$ with legs ($v_1$ carrying two legs, every other node one), the node $v_r$ decorated by $-1$ and supporting an arrowhead $K$.] where the arrowhead, attached to the unique $(-1)$-vertex, represents the knot $K$. Its decorations can be calculated from the Newton pairs $\{(p_i,q_i)\}_{i}$ using eg. [@EN], see also [@Nded Section 4.I]. The graph has an additional multiplicity decoration: the multiplicity of a vertex is the coefficient of the pullback-divisor of $\mathfrak{f}$ along the corresponding exceptional divisor, while the arrowhead has the multiplicity decoration 1.
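Formula (\[eq:Alex\]) can also be evaluated directly. The sketch below (plain list-based polynomial arithmetic; the trefoil, ie. the single linking pair $(2,3)$, serves as sample input) recovers $\Delta(t)=t^2-t+1$ and the semigroup expansion of $\Delta(t)/(1-t)$, whose gap $\ell=1$ gives $P_\mathfrak{f}(t)=-t$.

```python
def pmul(a, b):
    """Product of polynomials given by ascending coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def pdiv(num, den):
    """Exact polynomial division (ascending coefficients)."""
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        c = num[i + len(den) - 1] // den[-1]
        q[i] = c
        for j, d in enumerate(den):
            num[i + j] -= c * d
    assert not any(num)
    return q

def cyc(k):  # 1 - t^k
    p = [0] * (k + 1)
    p[0], p[k] = 1, -1
    return p

def alexander(pairs):
    """Delta(t) via (eq:Alex) from linking pairs (p_i, a_i)."""
    ps = [p for p, _ in pairs]
    full = 1
    for p in ps:
        full *= p                # p_1 ... p_r
    num, den = cyc(1), [1]
    for i, (p, a) in enumerate(pairs):
        prod = 1
        for pp in ps[i:]:
            prod *= pp           # p_i ... p_r
        num = pmul(num, cyc(a * prod))
        den = pmul(den, cyc(a * (prod // p)))
    den = pmul(den, cyc(full))
    return pdiv(num, den)

delta = alexander([(2, 3)])                       # [1, -1, 1] = 1 - t + t^2
series = [sum(delta[:n + 1]) for n in range(7)]   # Delta(t)/(1-t), truncated
print(delta, series)  # coefficients 1,0,1,1,... match the semigroup <2,3>
```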
Eg., we set $m_\mathfrak{f}:=a_r p_r$ to be the multiplicity of the $(-1)$–vertex. Notice that the isotopy type of $K\subset S^3$ is completely characterized by any of the invariants highlighted above. For general references see [@BKcurves], [@EN] and also the presentation of [@Nded] and [@NR]. ### **The plumbing of $S^3_{-p/q}(K)$** {#ss:graph} Let $p/q>0$ ($p>0$, $\gcd(p,q)=1$) be a positive rational number and $\{K_j\}_{j=1}^\nu$ be a collection of algebraic knots. Then we consider the oriented 3-manifold $M= S^3_{-p/q}(K)$, obtained by $(-p/q)$-surgery along the connected sum $K=K_1\#\cdots \#K_\nu\subset S^3$ of the knots $K_j$. All the invariants associated with $K_j$, listed in the previous section, will be indexed by $j$. Eg., the linking pairs of $K_j$ will be denoted by $(p^{(j)}_i,a^{(j)}_i)_{i=1}^{r_j}$, the Alexander polynomial by $\Delta^{(j)}(t)$ and $m^{(j)}$ stands for the multiplicity of the $(-1)$-vertex in the minimal plumbing graph of $K_j$ as above. Set also $m:=\sum_{j=1}^{\nu}m^{(j)}$. The schematic picture of the plumbing graph $\Gamma$ of the oriented 3–manifold $M=S^3_{-p/q}(K)$ has the following form (cf.
[@BodNsw]): [Figure: the graph $\Gamma$: a central vertex $v_+$ decorated by $-k_0-m$, connected to a string of vertices $v_{+1},\ldots,v_{+s}$ decorated by $-k_1,\ldots,-k_s$, and to the $(-1)$-vertices $v^{(j)}_{r_j}$ of the subgraphs $\Gamma^{(1)},\ldots,\Gamma^{(\nu)}$; each $\Gamma^{(j)}$ consists of the nodes $v^{(j)}_1,\ldots,v^{(j)}_{r_j}$ together with their legs, the node $v^{(j)}_i$ being decorated by the splice data $a^{(j)}_i$, $p^{(j)}_i$, $D^{(j)}_i$, and the subgraph $\Gamma^{(j)}_i$ being spanned by the first $i$ nodes and their legs.] where the dash-lines represent strings of vertices.
The integers $k_0\geq 1$ and $k_i\geq 2$ $(1\leq i \leq s)$, in the decorations of the vertices $v_{+i}$, are determined by the Hirzebruch/negative continued fraction expansion $$p/q=[k_{0},\ldots, k_s]= k_{0}-1/(k_{1}-1/(\cdots -1/k_{s})\cdots ).$$ We write $E_+$, $E_{+i}$ and $E^{(j)}_{i}$ for the base elements corresponding to the vertices $v_+$, $v_{+i}$ and $v^{(j)}_i$, respectively. It is also known that $H=L'/L\simeq \mathbb{Z}_p$ is the cyclic group of order $p$, generated by $[E^*_{+s}]$ (for a complete proof see [@BodNsw Lemma 6]). In the above picture we have put at the node $v^{(j)}_{i}$ its splice diagram decorations $a^{(j)}_i$, $p^{(j)}_i$ and $D^{(j)}_i$ (cf. [@EN]). Eg., if we use the notation $$\label{eq:Gamma_ij} \Gamma^{(j)}_i \ \ \mbox{for the subgraph spanned by the nodes $\{v^{(j)}_{i'}\}_{i'=1}^{i}$ and their corresponding end-vertices,}$$ then $D^{(j)}_i=\det(\Gamma\setminus \Gamma^{(j)}_i)$. In particular, $\Gamma^{(j)}=:\Gamma^{(j)}_{r_j}$ and its self-intersection decorations are the same as those of the embedded minimal good resolution graph of $K_j$, which we omit from the picture for simplicity. In the next lemma we prove some useful formulas. \[lem:apD\] (i) \[apD-i\] $D^{(j)}_i = p + a^{(j)}_i p^{(j)}_i \left( p^{(j)}_{i+1}\cdots p^{(j)}_{r_{j}} \right)^{2} q,\ $ for $1\leq i\leq r_{j}$; (ii) \[apD-ii\] $a^{(j)}_{i+1}D^{(j)}_i = q^{(j)}_{i+1} p + a^{(j)}_i p^{(j)}_i p^{(j)}_{i+1} D^{(j)}_{i+1},\ $ for $1\leq i\leq r_{j}-1$. Let $K'_j$ be the knot with Newton pairs $(p^{(j)}_{i'},q^{(j)}_{i'})_{i'=i+1}^{r_{j}}$. The graph $\Gamma\setminus \Gamma^{(j)}_i$ is the plumbing graph of the manifold $S^3_{-p'/q}(K'_j\# \ \#_{j'\neq j} K_{j'})$ for some $p'$ which can be computed as follows. The new linking pairs $(p^{(j)}_{i'},\widetilde{a}^{(j)}_{i'})_{i'=i+1}^{r_{j}}$ can be calculated recursively using (\[eq:linkp\]) and $\widetilde{a}^{(j)}_{i+1}=q^{(j)}_{i+1}$.
Hence, we find the identity $$\widetilde{a}^{(j)}_{r_{j}}=a^{(j)}_{r_{j}} - a^{(j)}_i p^{(j)}_i \left( p^{(j)}_{i+1}\cdots p^{(j)}_{r_{j}-1} \right)^{2} p^{(j)}_{r_{j}},$$ which implies that the multiplicity $\widetilde{m}^{(j)}$ of the $(-1)$-vertex in the embedded graph of $K'_j$ equals $\widetilde{a}^{(j)}_{r_{j}} p^{(j)}_{r_{j}} = m^{(j)}-a^{(j)}_i p^{(j)}_i \left( p^{(j)}_{i+1}\cdots p^{(j)}_{r_{j}} \right)^2$. Since the decoration on $v_+$ remains unchanged, we must have for the Hirzebruch/negative continued fraction $$p'/q =\big[ k_0+a^{(j)}_i p^{(j)}_i \left( p^{(j)}_{i+1}\cdots p^{(j)}_{r_{j}} \right)^2,k_1,\dots,k_s \big] = p/q+a^{(j)}_ip^{(j)}_i \left( p^{(j)}_{i+1}\cdots p^{(j)}_{r_{j}} \right)^2.$$ Finally, note that $p'=D^{(j)}_{i}$ is the determinant of the graph $\Gamma\setminus \Gamma^{(j)}_{i}$ [@BodNsw Lemma 6]. This concludes the formula of (\[apD-i\]). The recursive identity of (\[apD-ii\]) can be easily verified using (\[apD-i\]). ### **Seiberg–Witten invariant via Alexander polynomials** We consider the product of the Alexander polynomials $\Delta(t):=\prod_{j}\Delta^{(j)}(t)$ with degree $\mu:=\sum_j \mu^{(j)}$. By the known facts $\Delta(1)=1$ and $\Delta'(1)=\mu/2$ we get a unique decomposition $$\Delta(t)=1+(\mu/2)(t-1)+(t-1)^2\cdot \mathcal{Q}(t)$$ for some polynomial with integral coefficients $\mathcal{Q}(t)=\sum _{i=0}^{\mu-2}\mathfrak{q}_i t^i$ of degree $\mu-2$. We remark that the coefficients of $\mathcal{Q}$ have many interesting arithmetical properties. Eg., notice that $\mathfrak{q}_0=\mu/2$, $\mathfrak{q}_{\mu-2}=1$ and $\mathfrak{q}_{\mu-2-i} = \mathfrak{q}_i+i+1-\mu/2$ for $0\leq i \leq \mu-2$, given by the symmetry of $\Delta$. The explicit calculation of a general coefficient is rather hard; one can expect it to be connected with some counting function in a semigroup/affine monoid structure associated with the manifold $M$ (cf. [@LSznew]).
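As a sanity check, $\mathcal{Q}(t)$ and the listed coefficient identities can be computed directly; the sketch below (the $(2,5)$ torus knot with $\Delta(t)=t^4-t^3+t^2-t+1$ is an arbitrary one-branch sample of our own) performs the two exact divisions by $t-1$.

```python
def div_t_minus_1(f):
    """Exact division of f (ascending coefficients) by (t - 1)."""
    g = [0] * (len(f) - 1)
    carry = 0
    for k in range(len(f) - 1, 0, -1):
        carry += f[k]
        g[k - 1] = carry
    assert f[0] == -g[0]     # zero remainder, i.e. f(1) = 0
    return g

def q_poly(delta):
    """Q(t) from Delta(t) = 1 + (mu/2)(t-1) + (t-1)^2 Q(t)."""
    mu = len(delta) - 1
    rem = delta[:]
    rem[0] -= 1 - mu // 2    # subtract 1 + (mu/2)(t-1)
    rem[1] -= mu // 2
    return div_t_minus_1(div_t_minus_1(rem))

delta = [1, -1, 1, -1, 1]    # Delta of the (2,5) torus knot, mu = 4
q = q_poly(delta)            # [2, 1, 1], i.e. Q(t) = 2 + t + t^2
mu = 4
assert q[0] == mu // 2 and q[-1] == 1
assert all(q[mu - 2 - i] == q[i] + i + 1 - mu // 2 for i in range(mu - 1))
```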
In particular, if $\nu=1$ one can check that $\mathfrak{q}_i=\#\{n \not\in\mathcal{M}\ :\ n>i\}$, where $\mathcal{M}$ is the semigroup of the unique algebraic knot $K$. More details and discussions about these coefficients can be found eg. in [@BodN]. We look at the decomposition $\mathcal{Q}(t)=\sum_{h\in\mathbb{Z}_p} \mathcal{Q}_h(t)$ where $\mathcal{Q}_h(t):=\sum_{i\geq 0} \mathfrak{q}_{[(ip+h)/q]} t^{[\frac{ip+h}{q}]}$ and consider the following (different) normalization of the Seiberg–Witten invariants: $$\label{eq:swnorm2} \widetilde{\mathfrak{sw}}^{norm}_{h}(M):=-\mathfrak{sw}_{-[hE^*_{+s}]\ast\sigma_{can}}(M)- ((K+2hE^*_{+s})^2+\mathcal{V})/8 \ \ \ \ \ \mbox{for} \ \ 0\leq h< p.$$ Then the following identity is known by [@BN; @Ngr; @NR]: $$\label{thm:Q} \mathcal{Q}_h(1)=\widetilde{\mathfrak{sw}}^{norm}_{h}(M).$$ ### **On the structure of the polynomial part** {#ss:str} For any $h\in\mathbb{Z}_p$ consider the decomposition $$f_{h}(\mathbf{t}_{\mathcal{N}}) = P^{+}_{h}(\mathbf{t}_{\mathcal{N}}) + f^{neg}_{h}(\mathbf{t}_{\mathcal{N}}),$$ given by Lemma \[lem:+dec\], ie. $f_{h}^{neg}(\mathbf{t}_{\mathcal{N}})$ has negative degree in each variable, and write $P^{+}_{h} (\mathbf{t}_{\mathcal{N}})= \sum_{\beta\in\mathcal{B}_h}p_{\beta}\mathbf{t}_{\mathcal{N}}^{\beta}$ where $\beta=(\beta_v)_{v\in\mathcal{N}}$ and $\beta\nless 0$. Let $\beta_+$ be the $E_+$-coefficient of $\beta$ and set $\mathcal{B}:=\bigcup_{h}\mathcal{B}_h$ too. For any polynomial $\mathcal{P}(\mathbf{t}_{\mathcal{N}})$ we consider the decomposition $\mathcal{P}_{\beta_+\geq0}(\mathbf{t}_{\mathcal{N}})+\mathcal{P}_{\beta_+<0}(\mathbf{t}_{\mathcal{N}})$ so that the first part consists of those monomial terms for which $\beta_+\geq 0$, and similarly, all the terms of the second part have $\beta_+<0$.
By definitions we have $P^{+}_{h,\beta_+\geq 0} (\mathbf{t}_{\mathcal{N}})=P^{v_+}_h(\mathbf{t}_{\mathcal{N}})$ and Theorem \[lm-0\] concludes that the monomial terms of $P_h(\mathbf{t}_{\mathcal{N}})$ are exactly those of $P^{+}_{h} (\mathbf{t}_{\mathcal{N}})$, taken with multiplicities. Therefore, in general, the difference polynomial $\mathcal{D}_h(\mathbf{t}_{\mathcal{N}}):=P_h(\mathbf{t}_{\mathcal{N}})-P^{v_+}_h(\mathbf{t}_{\mathcal{N}})$ consists of $P_{h,\beta_+<0}(\mathbf{t}_{\mathcal{N}})$ and the higher multiplicity terms ($\mathfrak{s}(\beta)\geq 2$) from $P_{h,\beta_+\geq0}(\mathbf{t}_{\mathcal{N}})$. However, in the next theorem we show that there are no monomial terms in $P_{h,\beta_+\geq0}(\mathbf{t}_{\mathcal{N}})$ with $\mathfrak{s}(\beta)\geq 2$, i.e. $\mathcal{D}_h(\mathbf{t}_{\mathcal{N}})=P_{h,\beta_+<0}(\mathbf{t}_{\mathcal{N}})$. \[thm2\] $$P_{h,\beta_+\geq0}(\mathbf{t}_{\mathcal{N}})=P^{+}_{h,\beta_+\geq 0} (\mathbf{t}_{\mathcal{N}})=P^{v_+}_h(\mathbf{t}_{\mathcal{N}}).$$ First of all, we may assume that $\nu\geq 2$, since otherwise we are in the situation of section \[ss:orb\]. We fix the orientation of $\Gamma^{orb}$ towards the node $v_+$ and consider its induced partial order on $\mathcal{N}$ (see \[sec:alg\](2)). For any $\beta\in \mathcal{B}$ for which $\beta_+\geq 0$ one has $\mathfrak{s}_{v_+}(\beta)=1$, thus by Theorem \[lm-0\] we have to prove that $\mathfrak{s}_{v^{(j)}_{i}>v^{(j)}_{i+1}}(\beta)=0$ for any $j\in\{1,\dots, \nu\}$ and $i\in\{1,\dots, r_j\}$. (We set $v^{(j)}_{r_j+1}:=v_+$.) In order to see this, we prove that the sign configuration on the subgraphs $\Gamma^{(j)}$ behaves exactly as in Lemma \[lm-1\].
Thus, assuming $\beta_+\geq 0$, for any $j$ we show that $$\label{eq:conf} \beta^{(j)}_1,\dots,\beta^{(j)}_{i-1}<0\leq \beta^{(j)}_i,\dots,\beta^{(j)}_{r_j},\beta_+ \ \ \mbox{for some} \ \ i\in\{1,\dots, r_j\}.$$ Therefore, it is enough to show that Proposition \[prop-1\] can be applied to the zeta-function $f(\mathbf{t}_{\mathcal{N}_j})$ reduced to the subset of nodes $\mathcal{N}_{j}$ consisting of the vertices $v_i^{(j)}$ and $v_+$ of $\Gamma$. Indeed, for a fixed $j$ we construct a new plumbing graph $\Gamma_{M_j}$ by deleting all the subgraphs $\Gamma^{(j')}$ and their adjacent edges in $\Gamma$ for any $j'\neq j$ and modifying the decoration of $v_+$ into $-k_0-m^{(j)}$. Then the new graph $\Gamma_{M_j}$ is the plumbing graph of the manifold $M_j:=S^3_{-p/q}(K_j)$. Or, if we look at $\Gamma$ as the minimal good resolution graph of a normal surface singularity, then one can obtain a new resolution graph by blowing down all the subgraphs $\Gamma^{(j')}$. In this resolution, the new exceptional divisor corresponding to the vertex $v_+$ is a rational curve with singular points and self-intersection $-k_0-m^{(j)}$. If we disregard the singularities of this divisor, then we obtain a normal surface singularity whose link is $M_j$ and whose minimal good resolution is $\Gamma_{M_j}$. We distinguish the invariants of the new graphs in the following way: $L_j$ denotes the lattice associated with $\Gamma_{M_j}$ with base elements $E_{v,j}$, and the dual lattice will be denoted by $L'_j$ with base elements $E^{*}_{v,j}$. We identify $\mathcal{N}_{j}$ of $\Gamma$ with the same set of vertices of $\Gamma_{M_j}$ (notice that $v_+$ is no longer a node in the latter graph). Then one can also identify the base elements of $\pi_{\mathcal{N}_{j}}(L)$ and $\pi_{\mathcal{N}_{j}}(L_j)$. In particular, one can show that $\pi_{\mathcal{N}_{j}}(E^*_+)=\pi_{\mathcal{N}_{j}}(E^{*}_{+,j})$.
Using the above identifications and formula (\[eq:Alex\]) for Alexander polynomials one can check the following identity $$f(\mathbf{t}_{\mathcal{N}_j})=f_{j}(\mathbf{t}_{\mathcal{N}_j})\prod_{j'\neq j} \Delta^{(j')}(\mathbf{t}_{\mathcal{N}_j}^{E^*_+}),$$ where $f_{j}(\mathbf{t}_{\mathcal{N}_j})$ is the zeta-function associated with $\Gamma_{M_j}$ restricted to $\mathcal{N}_j$. The only problem is that $\mathcal{N}_j$ contains $v_+$, which is no longer a node in $\Gamma_{M_j}$. Nevertheless, we can blow up the vertex $v_+$ and denote the new graph by $\Gamma'_{M_j}$. Then the newly created $(-1)$-vertex is connected to $v_+$ (if $q=1$ then we can create two such $(-1)$-vertices), hence $v_+$ becomes a node of $\Gamma'_{M_j}$. Using the natural identifications we have $\pi_{\mathcal{N}_j}(E^{*}_{+,j})=\pi_{\mathcal{N}_j}(E^{*}_{b,j})$ where $E^{*}_{b,j}$ denotes the newly created dual base element. Moreover, one has $f_{j}(\mathbf{t}_{\mathcal{N}_j})=f'_{j}(\mathbf{t}_{\mathcal{N}_j})$, where $f'_j$ is associated with $\Gamma'_{M_j}$. Finally, the rational function $f'_j(\mathbf{t}_{\mathcal{N}_j})\prod_{j'\neq j} \Delta^{(j')}(\mathbf{t}_{\mathcal{N}_j}^{E^*_+})$ is the sum of rational fractions as in Proposition \[prop-1\], which implies the sign configuration (\[eq:conf\]) by Lemma \[lm-1\]. We notice that for the difference polynomial $\mathcal{D}_h(\mathbf{t}_{\mathcal{N}}):=P_h(\mathbf{t}_{\mathcal{N}})-P^{v_+}_h(\mathbf{t}_{\mathcal{N}})$ one has $$\mathcal{D}_h(1)=\mathfrak{sw}^{norm}_{h}(M)-\widetilde{\mathfrak{sw}}^{norm}_{h}(M)=\chi(r_{[hE^*_{+s}]})-\chi(hE^*_{+s}),$$ where $\chi(l'):=-(K+l',l')/2$ for any $l'\in L'$. This follows from (\[eq:polpsw\]), (\[thm:Q\]) and the fact that $P^{v_+}_h(t)=\mathcal{Q}_{h}(t)$, which is proven in [@BN 8.1]. Thus, Theorem \[thm2\] implies that $P_{h,\beta_+<0}$ counts only the difference between the normalizations, while the Seiberg–Witten information is contained in $P^{+}_{h,\beta_+\geq 0}$.
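In the one-variable case the identity $P^{v_+}_h(t)=\mathcal{Q}_{h}(t)$ can be made completely explicit. The sketch below (plain Python; the polynomial $\mathcal{Q}(t)=2+t+t^2$ coming from the $(2,5)$ torus knot and the surgery parameters $p=3$, $q=1$ are assumed toy inputs) performs the splitting $\mathcal{Q}(t)=\sum_{h}\mathcal{Q}_h(t)$ introduced earlier and evaluates each class at $t=1$; by (\[thm:Q\]) these values would compute the normalized Seiberg–Witten invariants $\widetilde{\mathfrak{sw}}^{norm}_h$.

```python
# Assumed toy input: Q(t) = 2 + t + t^2 (from Delta of the (2,5) torus knot)
# and surgery parameters p = 3, q = 1, so Q_h collects the terms q_{ip+h} t^{ip+h}.
Q = [2, 1, 1]
p, q = 3, 1

Q_h = {h: {} for h in range(p)}
for i, coeff in enumerate(Q):
    # for q = 1 the exponent [(ip+h)/q] runs over all indices congruent to h mod p
    Q_h[i % p][i] = coeff

values = {h: sum(poly.values()) for h, poly in Q_h.items()}  # Q_h(1)
print(values)                           # {0: 2, 1: 1, 2: 1}
assert sum(values.values()) == sum(Q)   # the classes together recover Q(1)
```

Here $\mathcal{Q}_0(1)=2$ and $\mathcal{Q}_1(1)=\mathcal{Q}_2(1)=1$, and the three classes together recover $\mathcal{Q}(1)=4$.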
### **Canonical case $h=0$** From a geometric point of view, the main interest focuses on the case $h=0$, since $f_0(\mathbf{t}_{\mathcal{N}})$ is related to the analytic Poincaré series associated with a normal surface singularity whose link is $M$ (cf. Section \[s:ps\]; e.g., in the case $q=1$ the manifold $M=S^3_{-p}(K)$ may appear as the link of a superisolated singularity). In this case one has $\mathcal{D}_0(1)=P_{0,\beta_+<0}(1)=0$, although it may happen that there are some monomial terms appearing in $P_{0,\beta_+<0}$. This can indeed occur for $h\neq 0$ as shown by the example from Section \[ss:ex\]. However, in the sequel we prove that for $h=0$ this is not the case, i.e. $$P^+_{0,\beta_+<0}(\mathbf{t}_{\mathcal{N}})=P_{0,\beta_+<0}(\mathbf{t}_{\mathcal{N}})\equiv 0.$$ Thus, we have $P_0(\mathbf{t}_{\mathcal{N}})=P^+_0(\mathbf{t}_{\mathcal{N}})$, in particular $P^+_{0}(1)=\mathfrak{sw}^{norm}_0(M)$. \[lem:a0\] Let $\mathfrak{f}^{(j)}_i$ be the irreducible plane curve singularity with Newton pairs $(p^{(j)}_{i'},q^{(j)}_{i'})_{i'=1}^i$ for any $1\leq j\leq \nu$ and $1\leq i\leq r_j$, and denote its associated semigroup by $\mathcal{M}_{\mathfrak{f}^{(j)}_i}$. For any $\beta\in \mathcal{B}$, $1\leq j\leq \nu$ and $1\leq i\leq r_j$ we have the following relations (i) \[a0-i\] $$a^{(j)}_{i+1}\beta^{(j)}_{i}=a^{(j)}_{i}p^{(j)}_{i}\beta^{(j)}_{i+1}+q^{(j)}_{i+1}\ell_{\mathfrak{f}^{(j)}_i}^{\beta},$$ where $\ell_{\mathfrak{f}^{(j)}_i}^{\beta} \in \mathbb{Z}\setminus \mathcal{M}_{\mathfrak{f}^{(j)}_i}$ depends on $\beta$. In particular, for $i=r_j$ we set $a^{(j)}_{r_j+1}:=1$, $q^{(j)}_{r_j+1}:=1$ and $\beta^{(j)}_{r_j+1}:=\beta_+$, hence the identity becomes $\beta^{(j)}_{r_j}=m^{(j)}\beta_+ + \ell_{\mathfrak{f}^{(j)}}^{\beta}$.
(ii) \[a0-ii\] $$\beta^{(j)}_{i}<a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{r_j}(\beta_+ +1).$$ (\[a0-i\]) We can write $$\label{eq:sumbet} \beta=k_+E_+^*+\sum_j(\sum_i k^{(j)}_i E_i^{(j)*}-\sum_{v\in \mathcal{E}^{(j)}}x^{(j)}_v E_v^{(j)*})-x_+E^*_{+s}$$ for some integers $0\leq k_+\leq \nu-1$, $k^{(j)}_i\in \{0,1\}$ and $x^{(j)}_v,x_+ \geq 1$, where we use the notation $\mathcal{E}^{(j)}$ for the set of end-vertices of $\Gamma^{(j)}$. For any subgraph $\Gamma'$ of $\Gamma$ let us denote by $\beta^{\Gamma'}$ the partial sum considering only those terms from the right hand side of (\[eq:sumbet\]) which are associated with the nodes and end-vertices of $\Gamma'$. Recall that we have defined the subgraphs $\Gamma^{(j)}_i$ in (\[eq:Gamma\_ij\]). Then, we claim that $$\label{eq:longcalc} a^{(j)}_{i+1}\beta^{(j)}_{i}:=a^{(j)}_{i+1}\cdot(\beta,-E_i^{(j)*})=a^{(j)}_{i}p^{(j)}_{i}\beta^{(j)}_{i+1}+\Big(\frac{a^{(j)}_{i+1}D^{(j)}_{i}}{p^{(j)}_{i+1}D^{(j)}_{i+1}}-a^{(j)}_{i}p^{(j)}_{i}\Big)\cdot\beta^{\Gamma^{(j)}_i}_{i+1}.$$ For the above expression we have used the following identities: $a^{(j)}_{i+1}(E_v^*,E_i^{(j)*})$ equals either $\frac{a^{(j)}_{i+1}D^{(j)}_{i}}{p^{(j)}_{i+1}D^{(j)}_{i+1}}(E_v^*,E_{i+1}^{(j)*})$, in the case when $v$ is a node or an end-vertex of $\Gamma^{(j)}_i$, or $a^{(j)}_{i}p^{(j)}_{i}(E_v^*,E_{i+1}^{(j)*})$ otherwise.
Moreover, one can check from (\[eq:sumbet\]) that $$\label{eq:beta-explained} \beta^{\Gamma^{(j)}_i}_{i+1}=\frac{p^{(j)}_{i+1}D^{(j)}_{i+1}}{p}\Big(\sum_{i'=1}^{i}\big(k_{i'}^{(j)}\cdot a^{(j)}_{i'} p^{(j)}_{i'}\dots p^{(j)}_{i}-x^{(j)}_{v_{i'}}\cdot a^{(j)}_{i'}p^{(j)}_{i'+1}\dots p^{(j)}_{i}\big) - x^{(j)}_{v_{0}}\cdot p^{(j)}_{1}\dots p^{(j)}_{i}\Big).$$ Now, the idea is that by (\[eq:beta-explained\]) and (\[eq:Alex\]) the quantity $\ell_{\mathfrak{f}^{(j)}_i}^{\beta}:=p/(p^{(j)}_{i+1}D^{(j)}_{i+1})\beta^{\Gamma^{(j)}_i}_{i+1}$ can be viewed as an exponent coming from the division of the monodromy zeta-function of $\mathfrak{f}^{(j)}_i$. Hence, it is either negative or it is an exponent of the polynomial part of the monodromy zeta-function which implies $\ell_{\mathfrak{f}^{(j)}_i}^{\beta}\notin \mathcal{M}_{\mathfrak{f}^{(j)}_i}$ by [@LSznew]. Therefore (\[eq:longcalc\]) transforms into $$a^{(j)}_{i+1}\beta^{(j)}_{i}=a^{(j)}_{i}p^{(j)}_{i}\beta^{(j)}_{i+1}+\frac{a^{(j)}_{i+1}D^{(j)}_{i}-a^{(j)}_{i}p^{(j)}_{i}p^{(j)}_{i+1}D^{(j)}_{i+1}}{p} \ell_{\mathfrak{f}^{(j)}_i}^{\beta}=a^{(j)}_{i}p^{(j)}_{i}\beta^{(j)}_{i+1}+q^{(j)}_{i+1}\ell_{\mathfrak{f}^{(j)}_i}^{\beta},$$ where the second equality uses Lemma \[lem:apD\](\[apD-ii\]). (\[a0-ii\]) According to the proof of part (\[a0-i\]) and section \[sec:algknots\] we can write $\ell_{\mathfrak{f}^{(j)}_i}^{\beta}=\mu_{\mathfrak{f}^{(j)}_i}-1-s_{\mathfrak{f}^{(j)}_i}$ for some $s_{\mathfrak{f}^{(j)}_i}\in \mathcal{M}_{\mathfrak{f}^{(j)}_i}$. 
Therefore (\[a0-i\]) implies $\beta^{(j)}_{i}\leq (a^{(j)}_{i}p^{(j)}_{i}/a^{(j)}_{i+1})\beta^{(j)}_{i+1}+(q^{(j)}_{i+1}/a^{(j)}_{i+1})(\mu_{\mathfrak{f}^{(j)}_i}-1)$, which induces the following inequality $$\label{eq:ineq} \beta^{(j)}_{i} \leq a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{r_j}\beta_{+} + \frac{q^{(j)}_{i+1}}{a^{(j)}_{i+1}}(\mu_{\mathfrak{f}^{(j)}_i}-1) + \sum_{i'=i+1}^{r_j}\frac{a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'-1}q^{(j)}_{i'+1}}{a^{(j)}_{i'}a^{(j)}_{i'+1}}(\mu_{\mathfrak{f}^{(j)}_{i'}}-1).$$ We apply (\[eq:linkp\]) to $q_{i'+1}^{(j)}$ to get $$\label{eq:ineq2} \beta^{(j)}_{i} \leq a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{r_j}\beta_{+} + (\mu_{\mathfrak{f}^{(j)}_i}-1) + \sum_{i'=i+2}^{r_j}\frac{a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'-1}}{a^{(j)}_{i'}}(\mu_{\mathfrak{f}^{(j)}_{i'}}-1) - \frac{a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'}}{a^{(j)}_{i'}}(\mu_{\mathfrak{f}^{(j)}_{i'-1}}-1).$$ We use the well-known recursive formula $\mu_{\mathfrak{f}^{(j)}_{i'}}=(a^{(j)}_{i'}-1)(p^{(j)}_{i'}-1)+p^{(j)}_{i'}\mu_{\mathfrak{f}^{(j)}_{i'-1}}$ (see, e.g.,
[@Nded (4.13)]) for the Milnor numbers of $\mathfrak{f}^{(j)}_{i'}$, which can be rewritten for our purpose in the form $$\frac{a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'-1}}{a^{(j)}_{i'}}(\mu_{\mathfrak{f}^{(j)}_{i'}}-1) - \frac{a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'}}{a^{(j)}_{i'}}(\mu_{\mathfrak{f}^{(j)}_{i'-1}}-1) = a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'} - a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'-1}.$$ Finally, this recursion can be applied repeatedly to (\[eq:ineq\]) in order to deduce $$\beta^{(j)}_{i} \leq a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{r_j}\beta_+ + \sum_{i'={i+1}}^{r_j}(a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'} - a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{i'-1}) +\mu_{\mathfrak{f}^{(j)}_{i}}-1<a^{(j)}_{i}p^{(j)}_{i}\cdots p^{(j)}_{r_j}(\beta_+ +1),$$ where the second (strict) inequality uses [@Nded Theorem 4.12(a)], saying that $m_{\mathfrak{f}^{(j)}_{i}}-\mu_{\mathfrak{f}^{(j)}_{i}}+1\geq 2|\mathcal{V}(\Gamma_{i}^{(j)})|-1>0$. \[prop:can\] For $h=0$ one has $P_0(\mathbf{t}_{\mathcal{N}})=P^+_0(\mathbf{t}_{\mathcal{N}})=P^{v_+}_0(\mathbf{t}_{\mathcal{N}})$. By Theorems \[lm-0\] and \[thm2\] we have to show that $P^+_{0,\beta_+<0}(\mathbf{t}_{\mathcal{N}})\equiv 0$. Moreover, using the configuration of signs from the proof of Theorem \[thm2\], it suffices to prove that $\beta_+<0$ implies $\beta^{(j)}_{i}<0$ for any $1\leq j\leq \nu$ and $1\leq i\leq r_j$. This is implied by part (\[a0-ii\]) of Lemma \[lem:a0\], since $\beta_+,\beta^{(j)}_{i}\in \mathbb{Z}$ in the case $h=0$. Question about $P^+$ -------------------- We have shown an example in Section \[ss:ex\] in which $P_{h}(\mathbf{t}_{\mathcal{N}})\neq P^{+}_{h}(\mathbf{t}_{\mathcal{N}})$ for some $h\in H$. Hence, by Theorem \[lm-0\] the polynomial part $P_{h}$ in general can be ‘thicker’ than $P^{+}_{h}$. Nevertheless, they have the same set of exponents for the monomials, which presumably plays an important role in geometric applications.
On the other hand, the calculation of $P^{+}_{h}$ is much more effective; therefore, it is natural to pose the question whether it can replace $P_{h}$ as a polynomial part. More precisely, we ask the following: *Is it true in general that $P^{+}_{h}(1)=\mathfrak{sw}_h^{norm}$?* [30]{} Braun, G. and Némethi, A.: Surgery formula for Seiberg–Witten invariants of negative definite plumbed 3–manifolds, [ *J. für die reine und ang. Math.*]{} [**638**]{} (2010), 189–208. Brieskorn, E. and Knörrer, H.: Plane Algebraic Curves, Birkhäuser, Boston, 1986. Bodnár, J. and Némethi, A.: Lattice cohomology and rational cuspidal curves, [*Math. Research Letters*]{} [**23**]{} (2) (2016), 339–375. Bodnár, J. and Némethi, A.: Seiberg–Witten invariant of the universal abelian cover of $S^3_{-p/q}(K)$, [*Singularities and Computer Algebra: Festschrift for Gert-Martin Greuel on the Occasion of his 70th Birthday*]{}, Springer 2017, 173–197. Campillo, A., Delgado, F. and Gusein-Zade, S. M.: On the monodromy of a plane curve singularity and Poincaré series of the ring of functions on the curve, [*Func. Anal. and its Appl.*]{} [**33**]{} (1999), no. 1, 56–57. Campillo, A., Delgado, F. and Gusein-Zade, S. M.: Poincaré series of a rational surface singularity, [*Invent. Math.*]{} [**155**]{} (2004), no. 1, 41–53. Campillo, A., Delgado, F. and Gusein-Zade, S. M.: Universal abelian covers of rational surface singularities and multi-index filtrations, [*Funk. Anal. i Prilozhen.*]{} [**42**]{} (2008), no. 2, 3–10. Eisenbud, D. and Neumann, W.: Three–dimensional link theory and invariants of plane curve singularities, [ Princeton Univ. Press]{}, 1985. László, T. and Némethi, A.: Ehrhart theory of polytopes and Seiberg-Witten invariants of plumbed 3–manifolds, [*Geometry and Topology*]{} [**18**]{} (2014), no. 2, 717–778. László, T., Nagy, J. and Némethi, A.: Surgery formulae for the Seiberg–Witten invariant of plumbed 3-manifold, [*arXiv:1702.06692 \[math.GT\]*]{} (2017). László, T.
and Szilágyi, Zs.: On Poincaré series associated with links of normal surface singularities, [*arXiv:1503.09012v2 \[math.GT\]*]{} (2015). László, T. and Szilágyi, Zs.: Non-normal affine monoids, modules and Poincaré series of plumbed 3-manifolds, [*Acta Math. Hungar.*]{} (2017), doi:10.1007/s10474-017-0726-2. Lim, Y.: Seiberg–Witten invariants for 3–manifolds in the case $b_1=0$ or $1$, [*Pacific J. of Math.*]{} [**195**]{} (2000), no. 1, 179–204. Lescop, C.: Global surgery formula for the Casson–Walker invariant, [*Ann. of Math. Studies*]{} [**140**]{}, Princeton Univ. Press, 1996. Milnor, J.: Singular points of complex hypersurfaces, [*Ann. of Math. Studies*]{}, [**61**]{}, Princeton Univ. Press, 1968. Némethi, A.: Dedekind sums and the signature of $f(x,y)+z^N$,II., [*Selecta Mathematica, New Series*]{}, [**5**]{} 1999, 161–179. Némethi, A.: Graded roots and singularities, [*Proceedings of ‘Advanced School and Workshop on Singularities in Geometry and Topology’; ICTP (Trieste, Italy)*]{}, World Sci. Publ., Hackensack, NJ, 2007, 394–463. Némethi, A.: Poincaré series associated with surface singularities, in Singularities I, 271–297, [*Contemp. Math.*]{} [**474**]{}, Amer. Math. Soc., Providence RI, 2008. Némethi, A.: The Seiberg–Witten invariants of negative definite plumbed 3–manifolds, [*J. Eur. Math. Soc.*]{} [**13**]{} (2011), 959–974. Némethi, A.: The cohomology of line bundles of splice–quotient singularities, [*Advances in Math.*]{} [**229**]{} 4 (2012), 2503–2524. Némethi, A.: Personal communications. Némethi, A. and Román, F.: The lattice cohomology of $S^3_{-d}(K)$, Proceedings of [*Zeta Functions in Algebra and Geometry*]{}, Contemporary Mathematics, [**566**]{} (2012), 261–292. Némethi, A. and Nicolaescu, L.I.: Seiberg–Witten invariants and surface singularities, [*Geometry and Topology*]{} [**6**]{} (2002), 269–328. Nicolaescu, L.: Seiberg–Witten invariants of rational homology $3$–spheres, [*Comm. in Cont. Math.*]{} [**6**]{} no. 6 (2004), 833–866.
--- abstract: 'In this paper we survey results and recent progress on the equivariant bordism classification of 2-torus manifolds and unitary toric manifolds.' address: 'School of Mathematical Sciences, Fudan University, Shanghai, 200433, P.R. China.' author: - Zhi Lü title: '**Equivariant bordism of 2-torus manifolds and unitary toric manifolds–a survey**' --- Introduction {#int} ============ An $n$-dimensional [*2-torus manifold*]{} is a smooth closed $n$-dimensional (not necessarily oriented) manifold equipped with an effective smooth $({\Bbb Z}_2)^n$-action, so its fixed point set is empty or consists of a set of isolated points (see [@l1; @lm]). A $2n$-dimensional [*unitary toric manifold*]{} $M^{2n}$, introduced by Masuda in [@ma], is a unitary $2n$-dimensional manifold equipped with an effective $T^n$-action fixing a nonempty fixed point set and preserving the tangential stably complex structure of $M^{2n}$, where $T^n$ is the torus group of rank $n$ and a unitary manifold is an oriented closed smooth manifold whose tangent bundle admits a stably complex structure. The seminal work of Davis and Januszkiewicz in [@dj] studied two kinds of equivariant manifolds: [*small covers*]{} and [*quasitoric manifolds*]{}, which are the real and complex topological versions of toric varieties in algebraic geometry, respectively, where a small cover of dimension $n$ (resp. a quasitoric manifold of dimension $2n$) is a smooth closed $n$-dimensional manifold (resp. $2n$-dimensional manifold) admitting a locally standard $({\Bbb Z}_2)^n$-action (resp. $T^n$-action) such that the orbit space is a simple convex polytope. As shown in [@dj], small covers and quasitoric manifolds have a very beautiful algebraic topology and provide a strong link between equivariant topology, polytope theory and combinatorics. Obviously, each small cover is a special 2-torus manifold.
Buchstaber and Ray showed in [@br] that each quasitoric manifold with an omniorientation always admits a compatible tangential stably complex structure. Thus, small covers and omnioriented quasitoric manifolds provide abundant examples of 2-torus manifolds and unitary toric manifolds, respectively. In the nonequivariant case, Buchstaber and Ray showed in [@br] that each class of $\mathfrak{N}_n$ (resp. $\Omega_{2n}^U$) contains an $n$-dimensional small cover (resp. a $2n$-dimensional quasitoric manifold) as its representative, where $\mathfrak{N}_*=\sum_{m\geq 0} \mathfrak{N}_m$ (resp. $\Omega_*^U=\sum_{m\geq 0}\Omega_{2m}^U$) is the ring formed by the unoriented bordism classes of all smooth closed manifolds (resp. the unitary bordism classes of all unitary manifolds). The work of Buchstaber and Ray motivates the study of the equivariant bordism classification of 2-torus manifolds and unitary toric manifolds, so that the following question arises naturally. 1. [*Can preferred representatives in the classes of $\mathcal{Z}_n(({\Bbb Z}_2)^n)\ ($resp. $\mathcal{Z}_{2n}^{U}(T^n))$ be chosen from small covers (resp. omnioriented quasitoric manifolds)?*]{} where $\mathcal{Z}_n(({\Bbb Z}_2)^n)\ ($resp. $\mathcal{Z}_{2n}^{U}(T^n))$ denotes the group produced by the $({\Bbb Z}_2)^n$-equivariant unoriented bordism classes of all $n$-dimensional 2-torus manifolds (resp. the $T^n$-equivariant unitary bordism classes of all $2n$-dimensional unitary toric manifolds). Note that the cartesian products of actions also define the graded rings $\mathfrak{M}_*=\bigoplus_{n\geq 0}\mathcal{Z}_n(({\Bbb Z}_2)^n)$ and $\Xi_*=\bigoplus_{n\geq 0}\mathcal{Z}_{2n}^{U}(T^n)$, which, significantly, turn out to be non-commutative. With respect to this question, the author of this paper first dealt with the case of 2-torus manifolds and proposed the following concrete conjecture in [@l1].
[**Conjecture $(\ast)$:**]{} [*Each class of $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ contains a small cover as its representative.*]{} It was shown in [@l1] that Conjecture $(\ast)$ is true for $n\leq 3$, and that there is also an essential link among the equivariant bordism of 3-dimensional 2-torus manifolds, 3-dimensional colored polytopes and mod 2 GKM graphs of valence 3. Moreover, the following question also arises naturally, although 2-torus manifolds (resp. unitary toric manifolds) form a much wider class than small covers (resp. omnioriented quasitoric manifolds). 1. [*Is there still a strong link among the equivariant bordism of 2-torus manifolds and unitary toric manifolds, colored polytopes and (mod 2) GKM graphs in the general case?*]{} The purpose of this paper is to investigate results and recent developments on the equivariant bordism classification of 2-torus manifolds and unitary toric manifolds with respect to the above questions. In the setting of 2-torus manifolds, a significant amount of technical machinery has been developed in [@lt] by defining a differential operator on the “dual” algebra of the unoriented $({\Bbb Z}_2)^n$-representation algebra introduced by Conner and Floyd, so that satisfactory solutions to the questions (Q1) and (Q2) can be obtained, and in particular, Conjecture $(\ast)$ can be answered affirmatively. In addition, this technical machinery can be combined very well with the mod 2 GKM theory and the Davis–Januszkiewicz theory of small covers, so that one can determine how the graded noncommutative ring $\mathfrak{M}_*$ is generated, and find some essential relationships among 2-torus manifolds, coloring polynomials, colored simple convex polytopes and colored graphs. It should be pointed out that the classical $({\Bbb Z}_2)^n$-equivariant bordism theory and results (e.g., tom Dieck’s existence theorem) also play important roles in the study of the equivariant bordism classification of 2-torus manifolds.
In Section \[2-torus\], we shall systematically introduce the equivariant unoriented bordism theory developed in the setting of 2-torus manifolds. In the setting of unitary toric manifolds, whether or not each class of $\mathcal{Z}_{2n}^{U}(T^n)$ is represented by an omnioriented quasitoric manifold is still open in the general case. However, some techniques and ideas developed in the setting of 2-torus manifolds can still be carried out very well in this case. Indeed, Darby in [@dar] adapted those techniques and ideas to the case of unitary toric manifolds with some new viewpoints, so that significant advances have been obtained in many aspects, such as the graded noncommutative ring $\Xi_*$, some essential links to polytope theory and torus graphs, and so on. The key reason why the case of unitary toric manifolds has seen less progress than that of 2-torus manifolds is the existence of an infinite number of irreducible complex $T^n$-representations. This makes the case of unitary toric manifolds much more difficult and complicated. We shall give an introduction to the study of $\mathcal{Z}_{2n}^{U}(T^n)$ in Section \[unitary\]. In addition, in Section \[unitary\] we shall also mention the work on the equivariant Chern numbers of unitary toric manifolds in [@lt1], which is related to the Kosniowski conjecture, and some problems and conjectures will also be proposed therein. Finally, we survey a result on the relation between $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ and $\mathcal{Z}_{2n}^{U}(T^n)$ in Section \[rela\]. Equivariant unoriented bordism of 2-torus manifolds {#2-torus} =================================================== In the early 1960s, Conner and Floyd [@cf] (also see [@co]) began the study of geometric equivariant unoriented and oriented bordism theories for smooth closed manifolds with periodic diffeomorphisms, and the subject has continued to develop and flourish since then through extensions of their ideas to other equivariant bordisms.
For example, the homotopy theoretic analogue was described by tom Dieck [@tom3]. A main aspect of Conner–Floyd’s work is the study of $({\Bbb Z}_2)^n$-equivariant unoriented bordism. Conner and Floyd studied the localization of $({\Bbb Z}_2)^n$-equivariant unoriented bordism. The following localization theorem is due to Conner and Floyd [@cf] for $n=1$ and Stong [@s] for the general case. \[ring\] The ring homomorphism $\phi_*: \mathfrak{N}_*^{({\Bbb Z}_2)^n}=\sum_{i\geq 0}\mathfrak{N}_i^{({\Bbb Z}_2)^n}\longrightarrow \mathfrak{N}_*(BO),$ defined by mapping the equivariant unoriented class of a smooth closed $({\Bbb Z}_2)^n$-manifold to the unoriented bordism class of the normal bundle of its fixed point set, is injective, where $\mathfrak{N}_*^{({\Bbb Z}_2)^n}$ is the ring formed by the equivariant unoriented bordism classes of all smooth closed $({\Bbb Z}_2)^n$-manifolds. However, generally speaking, the ring structure of $\mathfrak{N}_*^{({\Bbb Z}_2)^n}$ is still far from settled except for the case $n=1$ (see [@a; @sinh]). In [@cf], Conner and Floyd discussed the case in which the fixed point set of the action consists of isolated points, and introduced and studied a graded commutative algebra over ${\Bbb Z}_2$ with unit, $\mathcal{Z}_*(({\Bbb Z}_2)^n)=\sum_{m\geq 0}\mathcal{Z}_m(({\Bbb Z}_2)^n)$, where $\mathcal{Z}_m(({\Bbb Z}_2)^n)$ consists of $({\Bbb Z}_2)^n$-equivariant unoriented bordism classes of all smooth closed $m$-manifolds with effective $({\Bbb Z}_2)^n$-actions fixing a finite set. Clearly, $\mathcal{Z}_*(({\Bbb Z}_2)^n)$ is a subring of $\mathfrak{N}_*^{({\Bbb Z}_2)^n}$, and when $m=n$, $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ is exactly formed by the classes of all 2-torus $n$-manifolds.
Also, the restriction of $\phi_*$ to $\mathcal{Z}_*(({\Bbb Z}_2)^n)$ gives the following monomorphism (still denoted by $\phi_*$) $$\phi_*: \mathcal{Z}_*(({\Bbb Z}_2)^n)\longrightarrow \mathcal{R}_*(({\Bbb Z}_2)^n)$$ defined by $\{M\}\longmapsto \sum_{p\in M^{({\Bbb Z}_2)^n}}[\tau_pM]$ where $\tau_pM$ denotes the real $({\Bbb Z}_2)^n$-representation on the tangent space at $p\in M^{({\Bbb Z}_2)^n}$, and $\mathcal{R}_*(({\Bbb Z}_2)^n)=\sum_{m\geq 0}\mathcal{R}_m(({\Bbb Z}_2)^n)$ is the graded polynomial algebra over ${\Bbb Z}_2$ generated by the isomorphism classes of one-dimensional irreducible real $({\Bbb Z}_2)^n$-representations, which was introduced by Conner and Floyd and is called the [*Conner–Floyd unoriented $({\Bbb Z}_2)^n$-representation algebra*]{} here. Conner and Floyd showed in [@cf] that, when $n=1$, $\mathcal{Z}_*(\Z_2)\cong\Z_2$, and when $n=2$, $\mathcal{Z}_*((\Z_2)^2)\cong\Z_2[u]$ where $u$ denotes the class of ${\Bbb R}P^2$ with the standard $(\Z_2)^2$-action. Since then, no new progress on $\mathcal{Z}_*(({\Bbb Z}_2)^n)$ had been made until the work in [@l1] appeared in 2009. When $n=3$, the group structure of $\mathcal{Z}_3((\Z_2)^3)$ was determined in [@l1] (see also [@ly]), and it was also shown therein that $\dim_{\Z_2}\mathcal{Z}_3((\Z_2)^3)=13$. The objective of this section is to survey the recent progress on the study of $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ (i.e., the equivariant unoriented bordism of 2-torus manifolds) in the general case. The reformulation of the existence theorem of tom Dieck and a differential operator ----------------------------------------------------------------------------------- In [@tom1 Theorem 6], tom Dieck showed an existence theorem, saying that the existence of an $m$-dimensional smooth closed $({\Bbb Z}_2)^n$-manifold $M^m$ fixing a finite set can be characterized by an integrality property of its fixed point data.
In [@lt], Lü and Tan gave a simple proof showing that the existence theorem of tom Dieck can be reformulated as the following result in terms of Kosniowski and Stong’s localization formula ([@ks]). \[[[@lt Theorem 2.2]]{}\] \[dks\] Let $\{\tau_1, ..., \tau_l\}$ be a collection of $m$-dimensional faithful $({\Bbb Z}_2)^n$-representations in $\mathcal{R}_m(({\Bbb Z}_2)^n)$. Then a necessary and sufficient condition that $\tau_1+\cdots+\tau_l\in \text{\rm Im}\phi_m$ $($or $\{\tau_1, ..., \tau_l\}$ is the fixed point data of a $({\Bbb Z}_2)^n$-manifold $M^m)$ is that for all symmetric polynomial functions $f(x_1,...,x_m)$ over ${\Bbb Z}_2$, $$\label{formula-tks1}\sum_{i=1}^l{{f(\tau_i)}\over{\chi^{({\Bbb Z}_2)^n}(\tau_i)}}\in H^*(B({\Bbb Z}_2)^n;{\Bbb Z}_2)$$ where $\chi^{({\Bbb Z}_2)^n}(\tau_i)$ denotes the equivariant Euler class of $\tau_i$, which is a product of $m$ nonzero elements of $H^1(B({\Bbb Z}_2)^n;{\Bbb Z}_2)$, and $f(\tau_i)$ means that the variables $x_1,...,x_m$ in the function $f(x_1,...,x_m)$ are replaced by those $m$ degree-one factors in $\chi^{({\Bbb Z}_2)^n}(\tau_i)$. Although all elements of $\text{\rm Im} \phi_*$ can be characterized by the formula (\[formula-tks1\]), it is still quite difficult to determine the algebra structure of $\text{\rm Im} \phi_*\cong \mathcal{Z}_*(({\Bbb Z}_2)^n)$. It is well-known that all irreducible real $({\Bbb Z}_2)^n$-representations bijectively correspond to all elements in $\text{\rm Hom}(({\Bbb Z}_2)^n,{\Bbb Z}_2)$, where every irreducible real representation of $({\Bbb Z}_2)^n$ has the form $\lambda_\rho: ({\Bbb Z}_2)^n\times{\Bbb R}\longrightarrow{\Bbb R}$ with $\lambda_\rho(g,x)=(-1)^{\rho(g)}x$ for $\rho\in\text{\rm Hom}(({\Bbb Z}_2)^n,{\Bbb Z}_2)$, and $\lambda_\rho$ is trivial if $\rho(g)=0$ for all $g\in ({\Bbb Z}_2)^n$. Write $J_n^{\Bbb R}=\text{\rm Hom}(({\Bbb Z}_2)^n,{\Bbb Z}_2)$ and regard it as the set of all irreducible real $({\Bbb Z}_2)^n$-representations.
Then the free polynomial algebra on $J_n^{\Bbb R}$ over ${\Bbb Z}_2$, denoted by ${\Bbb Z}_2[J_n^{\Bbb R}]$, can be identified with $\mathcal{R}_*(({\Bbb Z}_2)^n)$. Similarly, one has also another free polynomial algebra ${\Bbb Z}_2[J_n^{*{\Bbb R}}]$ on $J_n^{*{\Bbb R}}$ over ${\Bbb Z}_2$, which is called the [*dual algebra*]{} of ${\Bbb Z}_2[J_n^{\Bbb R}]=\mathcal{R}_*(({\Bbb Z}_2)^n)$, where $J_n^{*{\Bbb R}}=\text{\rm Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^n)$ is the dual of $J_n^{\Bbb R}$ as ${\Bbb Z}_2$-linear spaces. A square-free homogeneous polynomial $g =\sum_it_{i,1}\cdots t_{i,n}$ of degree $n$ in ${\Bbb Z}_2[J_n^{\Bbb R}]$ is called a [*faithful $({\Bbb Z}_2)^n$-polynomial*]{} if each monomial $t_{i,1}\cdots t_{i,n}$ is faithful in ${\Bbb Z}_2[J_n^{\Bbb R}]$ (i.e., $\{t_{i,1},\dots, t_{i,n}\}$ forms a basis of $J_n^{\Bbb R}$). Both $J_n^{\Bbb R}$ and $J_n^{*{\Bbb R}}$ are isomorphic to $({\Bbb Z}_2)^n$ and are dual to each other by the following pairing: $$\label{pairing} \langle\cdot, \cdot\rangle: J_n^{*{\Bbb R}}\times J_n^{\Bbb R}\longrightarrow \text{\rm Hom}({\Bbb Z}_2, {\Bbb Z}_2)$$ defined by $\langle \xi, \rho\rangle=\rho\circ \xi$, composition of homomorphisms. Thus, each faithful $({\Bbb Z}_2)^n$-polynomial $g =\sum_it_{i,1}\cdots t_{i,n}$ of degree $n$ in ${\Bbb Z}_2[J_n^{\Bbb R}]$ determines a unique homogeneous polynomial $g^*=\sum_i s_{i,1}\cdots s_{i,n}$ in ${\Bbb Z}_2[J_n^{*{\Bbb R}}]$, which is called the [*dual $({\Bbb Z}_2)^n$-polynomial*]{} of $g$, where $\{s_{i,1},..., s_{i,n}\}$ is the dual basis of $\{t_{i,1}, ..., t_{i,n}\}$, determined by the pairing (\[pairing\]). In [@lt], Lü and Tan defined a [*differential operator*]{} $d$ on ${\Bbb Z}_2[J_n^{*{\Bbb R}}]$ as follows: for each monomial $s_1\cdots s_i$ of degree $i\geq 1$ $$d_i(s_1\cdots s_i)=\begin{cases} \sum_{j=1}^is_1\cdots s_{j-1}\widehat{s}_js_{j+1}\cdots s_i &\text{ if } i>1\\ 1 &\text{ if } i=1.
\end{cases}$$ and $d_0(1)=0$, where the symbol $\widehat{s}_j$ means that $s_j$ is deleted. Obviously, $d^2=0$, so $({\Bbb Z}_2[J_n^{*{\Bbb R}}], d)$ forms a chain complex. Then, Lü and Tan proved the following theorem, which gives a simple criterion for a faithful $({\Bbb Z}_2)^n$-polynomial $g$ to lie in $\text{\rm Im}\phi_n$ in terms of the vanishing of the differential $d$ on the dual of $g$. \[main result\] Let $g=\sum_i t_{i,1}\cdots t_{i, n}$ be a faithful $({\Bbb Z}_2)^n$-polynomial in ${\Bbb Z}_2[J_n^{\Bbb R}]$. Then $g\in \text{\rm Im}\phi_n$ if and only if $d(g^*)=0$. By $\mathcal{S}(J_n^{\Bbb R})$ (resp. $\mathcal{S}(J_n^{*{\Bbb R}})$) one denotes the infinite symmetric tensor algebra on $J_n^{\Bbb R}$ (resp. $J_n^{*{\Bbb R}})$ over ${\Bbb Z}_2$. It turns out that $\mathcal{S}(J_n^{\Bbb R})$ (resp. $\mathcal{S}(J_n^{*{\Bbb R}})$) is in effect the same as the graded polynomial algebra over ${\Bbb Z}_2$ in indeterminates that are basis elements of $J_n^{\Bbb R}$ (resp. $J_n^{*{\Bbb R}})$. Then one has that $$\mathcal{S}(J_n^{\Bbb R})\cong \mathcal{S}(J_n^{*{\Bbb R}}) \cong H^*(B({\Bbb Z}_2)^n;{\Bbb Z}_2)$$ as algebras, since both $J_n^{\Bbb R}$ and $J_n^{*{\Bbb R}}$ are isomorphic to $H^1(B({\Bbb Z}_2)^n;{\Bbb Z}_2)$ as ${\Bbb Z}_2$-linear spaces. Theorems \[dks\] and \[main result\] give the following interesting algebraic corollary, which indicates a unification of “differential” and “integral” in some sense. \[diff-formula\] Let $g=\sum_i t_{i,1}\cdots t_{i, n}$ be a faithful $({\Bbb Z}_2)^n$-polynomial in ${\Bbb Z}_2[J_n^{\Bbb R}]$. Then $d(g^*)=0$ if and only if for all symmetric polynomial functions $f(x_1,...,x_n)$ over ${\Bbb Z}_2$, $$\label{integral} \sum_{i}{{f(t_{i,1}, ..., t_{i, n})}\over{t_{i,1}\cdots t_{i, n}}}\in\mathcal{S}(J_n^{\Bbb R})$$ when $t_{i,1}\cdots t_{i, n}$ and $f(t_{i,1}, ..., t_{i, n})$ are regarded as polynomials in $\mathcal{S}(J_n^{\Bbb R})$.
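The differential $d$ and the passage $g\mapsto g^*$ are elementary to compute. The following Python sketch (our own illustration, not code from [@lt]) encodes indeterminates of $J_n^{\Bbb R}$ and $J_n^{*{\Bbb R}}$ as $0/1$ vectors in $({\Bbb Z}_2)^n$, computes the dual polynomial monomial by monomial via dual bases, and checks the criterion $d(g^*)=0$ on a degree-2 example:

```python
from itertools import product

def dot(u, v):
    """Mod 2 pairing of two 0/1 vectors."""
    return sum(a * b for a, b in zip(u, v)) % 2

def dual_basis(basis):
    """Dual basis of a basis of GF(2)^n under the pairing above
    (brute force over GF(2)^n, adequate for small n)."""
    n = len(basis)
    return [next(c for c in product((0, 1), repeat=n)
                 if all(dot(c, t) == (1 if j == i else 0)
                        for j, t in enumerate(basis)))
            for i in range(n)]

def dual_polynomial(g):
    """g: a faithful polynomial as a set of monomials, each monomial a
    frozenset of n vectors forming a basis of GF(2)^n.  Returns the dual
    polynomial g* as a set of monomials in the dual indeterminates."""
    gstar = set()
    for mono in g:
        gstar ^= {frozenset(dual_basis(sorted(mono)))}
    return gstar

def d(poly):
    """Differential on square-free polynomials over Z_2; a monomial is a
    frozenset of indeterminates, frozenset() standing for the constant 1."""
    out = set()
    for mono in poly:
        if not mono:
            continue                                   # d_0(1) = 0
        terms = [frozenset()] if len(mono) == 1 else [mono - {s} for s in mono]
        for t in terms:
            out ^= {t}                                 # coefficients are mod 2
    return out

# g = t_a t_b + t_b t_c + t_a t_c, where a, b, c are the three nonzero
# elements of (Z_2)^2; each monomial is faithful, i.e. a basis:
g = {frozenset({(1, 0), (0, 1)}),
     frozenset({(0, 1), (1, 1)}),
     frozenset({(1, 0), (1, 1)})}
assert d(dual_polynomial(g)) == set()   # the criterion d(g*) = 0 holds
```

The addition of monomials via symmetric difference implements mod 2 coefficients, so $d^2=0$ holds automatically on the nose in this encoding.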
The proof of Theorem \[main result\] is based upon another characterization of $g\in \text{\rm Im} \phi_n$ in terms of $({\Bbb Z}_2)^n$-colored graphs (or mod 2 GKM graphs), which will be introduced in the next subsection. $({\Bbb Z}_2)^n$-colored graphs and small covers ------------------------------------------------ In [@gkm], Goresky, Kottwitz and MacPherson established the GKM theory, indicating that there is an essential link between the topology and geometry of torus actions and the combinatorics of colored graphs (see also [@gz]). Such a link has already been extended to the case of mod 2 torus actions (see, e.g., [@bl; @bgh; @l2; @l3]). Specifically, assume that $M^m$ is a smooth closed $m$-manifold with an effective smooth $({\Bbb Z}_2)^n$-action fixing a nonempty finite set $M^{({\Bbb Z}_2)^n}$, which implies $m\geq n$ (see [@ap]). Then we know from [@l2; @l3] that the $({\Bbb Z}_2)^n$-action on $M^m$ defines a regular graph $\Gamma_M$ of valence $m$ with vertex set $M^{({\Bbb Z}_2)^n}$ and a $({\Bbb Z}_2)^n$-coloring $\alpha$. In the extreme case where $m=n$ (i.e., $M^n$ is a 2-torus manifold), one knows from [@bl; @l2; @l3] that such a $({\Bbb Z}_2)^n$-colored graph $(\Gamma_M, \alpha)$ is uniquely determined by the $({\Bbb Z}_2)^n$-action, where $\alpha$ is defined as a map from the set $E_{\Gamma_M}$ of all edges of $\Gamma_M$ to the non-trivial elements of $J_n^{\Bbb R}$, and it satisfies the following properties (P1) and (P2): 1. for each vertex $v$ of $\Gamma_M$, $\prod_{x\in E_v}\alpha(x)$ is faithful in ${\Bbb Z}_2[J_n^{\Bbb R}]$, where $E_v$ denotes the set of all edges adjacent to $v$; 2. for each edge $e$ of $\Gamma_M$, $\alpha(E_u)\equiv \alpha(E_v) \mod \alpha(e)$ in $J_n^{\Bbb R}$, where $u$ and $v$ are the two endpoints of $e$. The pair $(\Gamma_M, \alpha)$ is called the [*$({\Bbb Z}_2)^n$-colored graph*]{} of the 2-torus manifold $M^n$.
Guillemin and Zara [@gz] formulated the results of GKM theory in terms of colored graphs and developed the GKM theory combinatorially. They defined and studied abstract GKM graphs. This idea may still be carried out in the mod 2 case. Following [@l2], let $\Gamma$ be a finite regular graph of valence $n$ without loops. If there is a map $\alpha$ from the set $E_\Gamma$ of all edges of $\Gamma$ to the nontrivial elements of $J_n^{\Bbb R}$ satisfying the properties (P1) and (P2) as above, then the pair $(\Gamma, \alpha)$ is called an [*abstract $({\Bbb Z}_2)^n$-colored graph*]{}, and $\alpha$ is called a [*$({\Bbb Z}_2)^n$-coloring*]{} on $\Gamma$. Let $(\Gamma, \alpha)$ be an abstract $({\Bbb Z}_2)^n$-colored graph. Set $$g_{(\Gamma, \alpha)} = \sum_{v\in V_\Gamma} \prod_{x\in E_v}\alpha(x)$$ which is called the [*$({\Bbb Z}_2)^n$-coloring polynomial*]{} of $(\Gamma, \alpha)$. Obviously, $g_{(\Gamma, \alpha)}$ is a faithful $({\Bbb Z}_2)^n$-polynomial in ${\Bbb Z}_2[J_n^{\Bbb R}]$. It was shown in [@l2 Proposition 2.2] that for an abstract $({\Bbb Z}_2)^n$-colored graph $(\Gamma, \alpha)$, the collection $\{\alpha(E_v), v\in V_{\Gamma}\}$ is always realizable as the fixed point data of some 2-torus manifold $M^n$, which implies that the $({\Bbb Z}_2)^n$-coloring polynomial $g_{(\Gamma, \alpha)}$ of $(\Gamma, \alpha)$ must belong to the image $\text{\rm Im}\phi_n$. However, this result does not tell us whether $(\Gamma, \alpha)$ is the $({\Bbb Z}_2)^n$-colored graph $(\Gamma_M, \alpha)$ of some $M^n$ or not, which is related to the following geometric realization problem: [*under what condition can $(\Gamma, \alpha)$ become a $({\Bbb Z}_2)^n$-colored graph of some 2-torus manifold?*]{} The geometric realization problem has been studied in detail in [@bl].
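For instance, conditions (P1) and (P2) and the coloring polynomial can be verified mechanically. The sketch below (an illustration of ours; colors are encoded as $0/1$ vectors) checks both conditions for the triangle graph colored by the three nonzero elements of $J_2^{\Bbb R}\cong({\Bbb Z}_2)^2$ and returns its coloring polynomial:

```python
from itertools import product

def spans_gf2(vectors):
    """True if the given vectors form a basis of GF(2)^n (n = their length);
    brute force over all mod-2 combinations, adequate for small n."""
    vecs, n = list(vectors), len(next(iter(vectors)))
    if len(vecs) != n:
        return False
    span = {tuple(sum(c * v[k] for c, v in zip(coeffs, vecs)) % 2
                  for k in range(n))
            for coeffs in product((0, 1), repeat=n)}
    return len(span) == 2 ** n

def coloring_polynomial(edges):
    """edges: dict {frozenset({u, v}): color}.  Checks (P1) and (P2) and
    returns g_{(Gamma, alpha)} as a set of monomials (frozensets of colors)."""
    vertices = set().union(*edges)
    incident = {v: {c for e, c in edges.items() if v in e} for v in vertices}
    for v in vertices:                                   # (P1): faithfulness
        assert spans_gf2(incident[v])
    for e, c in edges.items():                           # (P2): congruence mod alpha(e)
        u, v = e
        coset = lambda a: min(a, tuple((x + y) % 2 for x, y in zip(a, c)))
        assert {coset(a) for a in incident[u]} == {coset(a) for a in incident[v]}
    g = set()
    for v in vertices:
        g ^= {frozenset(incident[v])}                    # sum of monomials mod 2
    return g

# Triangle (3-cycle) with the three nonzero elements of (Z_2)^2 as edge colors:
edges = {frozenset({0, 1}): (1, 0),
         frozenset({1, 2}): (0, 1),
         frozenset({0, 2}): (1, 1)}
g = coloring_polynomial(edges)   # three faithful monomials
```

This particular colored graph is an abstract $({\Bbb Z}_2)^2$-colored graph, so by the result of [@l2 Proposition 2.2] quoted above its coloring polynomial lies in $\text{\rm Im}\phi_2$.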
On the other hand, one knows from [@bl] or [@l3 Section 2] that each 2-torus manifold $M^n$ determines a $({\Bbb Z}_2)^n$-colored graph $(\Gamma_M, \alpha)$, and the corresponding $({\Bbb Z}_2)^n$-coloring polynomial $g_{(\Gamma_M, \alpha)}$ is exactly $\phi_n(\{M^n\})$. This gives another characterization of a faithful $({\Bbb Z}_2)^n$-polynomial $g\in \text{\rm Im}\phi_n$ in terms of $({\Bbb Z}_2)^n$-colored graphs. \[[[@lt Theorem 4.2]]{}\]\[color poly\] A faithful $({\Bbb Z}_2)^n$-polynomial $g$ in ${\Bbb Z}_2[J_n^{\Bbb R}]$ belongs to $\text{\rm Im}\phi_n$ if and only if it is the $({\Bbb Z}_2)^n$-coloring polynomial of an abstract $({\Bbb Z}_2)^n$-colored graph $(\Gamma, \alpha)$. It was shown in [@lt Propositions 5.2–5.3] that for a faithful $({\Bbb Z}_2)^n$-polynomial $g\in {\Bbb Z}_2[J_n^{\Bbb R}]$, if $g\in \text{\rm Im}\phi_n$ then $d(g^*)=0$, and if $d(g^*)=0$, then $g$ is the $({\Bbb Z}_2)^n$-coloring polynomial of an abstract $({\Bbb Z}_2)^n$-colored graph. Then Theorem \[main result\] follows from Theorem \[color poly\]. Now by $\mathcal{G}(({\Bbb Z}_2)^n)$ we denote the set of all abstract $({\Bbb Z}_2)^n$-colored graphs $(\Gamma, \alpha)$. Two abstract $({\Bbb Z}_2)^n$-colored graphs $(\Gamma_1, \alpha_1)$ and $(\Gamma_2, \alpha_2)$ in $\mathcal{G}(({\Bbb Z}_2)^n)$ are said to be [*equivalent*]{}, denoted by $(\Gamma_1, \alpha_1)\sim(\Gamma_2, \alpha_2)$, if $g_{(\Gamma_1, \alpha_1)}=g_{(\Gamma_2, \alpha_2)}$. On the quotient set $\mathcal{G}(({\Bbb Z}_2)^n)/\sim$, define the addition $+$ as follows: $$\{(\Gamma_1, \alpha_1)\}+\{(\Gamma_2, \alpha_2)\}:=\{(\Gamma_1, \alpha_1)\sqcup(\Gamma_2, \alpha_2)\}$$ where $\sqcup$ means the disjoint union. \[graph\] $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ is isomorphic to $\mathcal{G}(({\Bbb Z}_2)^n)/\sim$. Davis–Januszkiewicz theory gives another link between equivariant topology and the combinatorics of simple convex polytopes.
An $n$-dimensional [*small cover*]{} $\pi: M^n\longrightarrow P^n$ is a smooth closed $n$-manifold $M^n$ with a locally standard $({\Bbb Z}_2)^n$-action such that its orbit space is a simple convex $n$-polytope $P^n$, where a locally standard $({\Bbb Z}_2)^n$-action on $M^n$ means that the $({\Bbb Z}_2)^n$-action on $M^n$ is locally isomorphic to a faithful representation of $({\Bbb Z}_2)^n$ on ${\Bbb R}^n$. Each small cover $\pi: M^n \longrightarrow P^n$ determines a characteristic function $\lambda$ (here we call it a [*$({\Bbb Z}_2)^n$-coloring*]{}) on $P^n$, defined by mapping all facets (i.e., $(n-1)$-dimensional faces) of $P^n$ to nontrivial elements of $J_n^{*{\Bbb R}}$ such that the $n$ facets meeting at each vertex are mapped to $n$ linearly independent elements. A fascinating feature of $\pi: M^n\longrightarrow P^n$ is that $M^n$ can be recovered from the pair $(P^n, \lambda)$, so that the algebraic topology of $M^n$ is essentially consistent with the algebraic combinatorics of $(P^n, \lambda)$. Now suppose that $\pi: M^n\longrightarrow P^n$ is a small cover, and $\lambda: \mathcal{F}(P^n)\longrightarrow J_n^{*{\Bbb R}}$ is its characteristic function, where $\mathcal{F}(P^n)$ consists of all facets of $P^n$. Given a vertex $v$ of $P^n$, since $P^n$ is simple, there are $n$ facets $F_1, ..., F_n$ in $\mathcal{F}(P^n)$ such that $v=F_1\cap\cdots\cap F_n$. Then the vertex $v$ determines a monomial $\prod_{i=1}^n\lambda(F_i)$ of degree $n$ in ${\Bbb Z}_2[J_n^{*{\Bbb R}}]$, whose dual by the pairing (\[pairing\]) is faithful in ${\Bbb Z}_2[J_n^{\Bbb R}]$. Here $\prod_{i=1}^n\lambda(F_i)$ is called the [*$({\Bbb Z}_2)^n$-coloring monomial at $v$*]{}, denoted by $\lambda_v$.
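As a concrete example, take $P^2=\Delta^2$ with its three facets colored by the three nonzero elements of $J_2^{*{\Bbb R}}\cong({\Bbb Z}_2)^2$; this is the characteristic pair of the standard small cover ${\Bbb R}P^2\to\Delta^2$. The short script below (our own illustration, with colors as $0/1$ vectors) lists the coloring monomial at each vertex and checks the required independence:

```python
from itertools import product

def independent_gf2(vectors):
    """Linear independence over GF(2): no nontrivial mod-2 combination vanishes."""
    vecs = list(vectors)
    n = len(vecs[0])
    for coeffs in product((0, 1), repeat=len(vecs)):
        if any(coeffs) and all(
                sum(c * v[k] for c, v in zip(coeffs, vecs)) % 2 == 0
                for k in range(n)):
            return False
    return True

# Characteristic function of the triangle Delta^2 (orbit polytope of RP^2):
colors = {'F1': (1, 0), 'F2': (0, 1), 'F3': (1, 1)}
vertex_facets = {'v12': ('F1', 'F2'), 'v23': ('F2', 'F3'), 'v13': ('F1', 'F3')}

monomials = {}
for v, facets in vertex_facets.items():
    lam_v = [colors[F] for F in facets]   # the coloring monomial at v
    assert independent_gf2(lam_v)         # facet colors at v are independent
    monomials[v] = frozenset(lam_v)
```

Each of the three coloring monomials here is distinct, and the independence check at every vertex is exactly the defining condition on the characteristic function $\lambda$.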
Moreover, all vertices in the vertex set $V_{P^n}$ of $P^n$ give, via $\lambda$, a polynomial $\sum_{v\in V_{P^n}}\lambda_v$ of degree $n$ in ${\Bbb Z}_2[J_n^{*{\Bbb R}}]$, which is denoted by $g_{(P^n, \lambda)}$ and called the [*$({\Bbb Z}_2)^n$-coloring polynomial*]{} of $(P^n, \lambda)$. On the other hand, let $(\Gamma_M, \alpha)$ be the $({\Bbb Z}_2)^n$-colored graph of $\pi: M^n\longrightarrow P^n$, and let $g_{(\Gamma_M, \alpha)}$ be the $({\Bbb Z}_2)^n$-coloring polynomial of $(\Gamma_M, \alpha)$. One knows from [@l2 Proposition 4.1; Remark 4] that $\Gamma_M$ is exactly the 1-skeleton of $P^n$, and $\lambda$ and $\alpha$ determine each other. This gives \[small\] $g_{(P^n, \lambda)}$ is the dual polynomial of $g_{(\Gamma_M, \alpha)}$. \[p-formula\] The following formulae for coloring polynomials of colored polytopes were obtained. 1. Product formula ([@lt Proposition 4.10]). $g_{(P_1\times P_2, \lambda)}=g_{(P_1, \lambda_1)}g_{(P_2, \lambda_2)};$ 2. Connected sum formula ([@lt Proposition 4.12]). $g_{(P_1\sharp_{v_1, v_2} P_2, \lambda)}=g_{(P_1, \lambda_1)}+g_{(P_2, \lambda_2)}$ where $\lambda_i$ is the restriction of $\lambda$ to $P_i$, and $v_i$ is a vertex of $P_i$. Proposition \[small\] provides much insight into the study of $\mathcal{Z}_n(({\Bbb Z}_2)^n)$. A further question is [*whether the dual $g^*$ of $g\in \text{\rm Im} \phi_n$ can be characterized in terms of $({\Bbb Z}_2)^n$-colored $n$-polytopes*]{}. A positive solution to this question would mean that Conjecture $(*)$ holds. Structure of $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ -------------------------------------------- Now let us look at the structure of $\mathcal{Z}_n(({\Bbb Z}_2)^n)$. One has by Theorem \[main result\] that, as linear spaces over ${\Bbb Z}_2$, $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ is isomorphic to the linear space $\mathcal{V}_n$ formed by all faithful $({\Bbb Z}_2)^n$-polynomials $g\in {\Bbb Z}_2[J_n^{\Bbb R}]$ with $d(g^*)=0$.
Then the problem can be further reduced to studying the linear space $\mathcal{V}^*_n$ formed by the dual polynomials of the polynomials in $\mathcal{V}_n$. In [@lt Proposition 6.7], Lü and Tan first showed that $\mathcal{V}^*_n$ is generated by the $({\Bbb Z}_2)^n$-polynomials of products of simplices with $({\Bbb Z}_2)^n$-colorings, and they then showed in the proof of [@lt Theorem 2.5] that each polynomial of $\mathcal{V}^*_n$ is exactly the $({\Bbb Z}_2)^n$-coloring polynomial of a $({\Bbb Z}_2)^n$-colored simple convex $n$-polytope. This gives \[relation1\] A faithful $({\Bbb Z}_2)^n$-polynomial $g\in {\Bbb Z}_2[J_n^{\Bbb R}]$ belongs to $\text{\rm Im}\phi_n$ if and only if its dual polynomial $g^*$ is the $({\Bbb Z}_2)^n$-coloring polynomial of a $({\Bbb Z}_2)^n$-colored simple convex polytope $(P^n, \lambda)$. As a consequence, one has that Conjecture $(*)$ holds. The proof of Theorem \[relation1\] in [@lt] also tells us the basic structure of the graded noncommutative ring $\mathfrak{M}_*=\sum_{n\geq 1}\mathcal{Z}_n(({\Bbb Z}_2)^n)$, which is stated as follows. \[[[@lt Theorem 2.6]]{}\]\[compute\] $\mathfrak{M}_*$ is generated by the equivariant unoriented bordism classes of all generalized real Bott manifolds. Generalized real Bott manifolds belong to a class of nicely behaved small covers, which were introduced and studied in [@cms]. A [*generalized real Bott tower*]{} of height $n$ is a sequence of ${\Bbb R}P^{n_i}$-bundles with $n_i\geq 1$: $$\begin{CD} B^{\Bbb R}_n@ >{\pi_n}>> B^{\Bbb R}_{n-1}@>{\pi_{n-1}}>> \cdots@>{\pi_2}>>B^{\Bbb R}_{1}@ >{\pi_1}>>B^{\Bbb R}_{0}=\{\text{a point}\} \end{CD}$$ where each $\pi_i: B^{\Bbb R}_i\longrightarrow B^{\Bbb R}_{i-1}$ for $i=1, ..., n$ is the projectivization of a Whitney sum of $n_i+1$ real line bundles over $B^{\Bbb R}_{i-1}$, and $B^{\Bbb R}_i$ is called an [*$i$-stage generalized real Bott manifold*]{} or a [*generalized real Bott manifold of height $i$*]{}.
It is well-known that $B^{\Bbb R}_i$ is a small cover over $\Delta^{n_1}\times\cdots\times \Delta^{n_i}$, where $\Delta^{n_j}$ is an $n_j$-dimensional simplex. Theorem \[compute\] gives a strong connection between the computation of the equivariant bordism groups $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ and the Davis–Januszkiewicz theory of small covers. As a computational application, one has the following: for $n=3$, $\dim_{{\Bbb Z}_2}\mathcal{Z}_3(({\Bbb Z}_2)^3)=13$, and for $n=4$, $\dim_{{\Bbb Z}_2}\mathcal{Z}_4(({\Bbb Z}_2)^4)=510$. A summary and further problems {#app-2} ------------------------------ Combining Theorems \[dks\], \[main result\], \[color poly\] and \[relation1\], it follows that there are essential relationships among 2-torus manifolds, coloring polynomials, colored simple convex polytopes and colored graphs, which are stated as follows: \[summary\] Let $g=\sum_i t_{i,1}\cdots t_{i, n}$ be a faithful $({\Bbb Z}_2)^n$-polynomial in ${\Bbb Z}_2[J_n^{\Bbb R}]$, and $g^*$ be the dual polynomial of $g$. Then the following statements are all equivalent. 1. $g\in \text{\rm Im} \phi_n$ $($i.e., there is an $n$-dimensional 2-torus manifold $M^n$ such that $g=\sum_{p\in M^{({\Bbb Z}_2)^n}}[\tau_pM])$; 2. $g$ is the $({\Bbb Z}_2)^n$-coloring polynomial of a $({\Bbb Z}_2)^n$-colored graph $(\Gamma, \alpha)$; 3. $g=\sum_i t_{i,1}\cdots t_{i, n}$ possesses the property that for any symmetric polynomial function $f(x_1, ..., x_n)$ over ${\Bbb Z}_2$, $$\sum_{i}{{f(t_{i,1}, ..., t_{i, n})}\over{t_{i,1}\cdots t_{i, n}}}\in\mathcal{S}(J_n^{\Bbb R});$$ 4. $d(g^*)=0$; 5. $g^*$ is the $({\Bbb Z}_2)^n$-coloring polynomial of a $({\Bbb Z}_2)^n$-colored simple convex polytope $(P^n, \lambda)$. Based upon the above equivalent results, it seems interesting to discuss the properties of regular graphs and simple convex polytopes. In Theorem \[summary\](2), $\Gamma$ can actually be chosen as the 1-skeleton of a polytope.
However, for a $({\Bbb Z}_2)^n$-colored graph $(\Gamma, \alpha)$, we do not know when $\Gamma$ will be the 1-skeleton of a polytope. Indeed, given a graph, to determine whether or not it is the 1-skeleton of a polytope is a quite difficult problem, apart from the classical Steinitz theorem (see [@g]). In addition, the product formula in Remark \[p-formula\] tells us that a simple convex polytope with a coloring is indecomposable if its coloring polynomial is indecomposable. These observations lead us to pose the following problems: 1. [*For a $({\Bbb Z}_2)^n$-colored graph $(\Gamma, \alpha)$, under what condition will $\Gamma$ be the 1-skeleton of a polytope?*]{} 2. [*Given a $({\Bbb Z}_2)^n$-colored simple convex polytope $(P^n, \lambda)$, can we give a necessary and sufficient condition that $P^n$ is indecomposable?*]{} Equivariant unitary bordism of unitary toric manifolds {#unitary} ====================================================== So far, there have been many different approaches to the study of the equivariant unitary bordism ring $\Omega_*^{U, G}$ (see, e.g., [@bpr; @cf1; @tom3; @lo; @n; @n1]), where $G$ is a compact Lie group, and $\Omega_*^{U, G}$ is generated by the equivariant unitary bordism classes of all unitary $G$-manifolds. However, the explicit ring structure of $\Omega_*^{U, G}$ is still difficult to calculate and only partial results are known. Some explicit computations of various equivariant unitary bordism rings were made by Kosniowski [@k], Landweber [@la], and Stong [@s2] for $G={\Bbb Z}_p$, and by Kosniowski–Yahia [@ky] and Sinha [@sin] for $G=S^1$.
More recently, when $G=T^n$, Hanke showed in [@han Theorem 1] the existence of a pullback square $$\begin{CD} \Omega_*^{U, T^n} @>>> MU_*[e_V^{-1}, Y_{V,d}]\\ @VVV @VVV\\ MU_*^{T^n} @>>> MU_*[e_V, e_V^{-1}, Y_{V,d}] \end{CD}$$ with all maps being injective, so that the Pontrjagin–Thom map induces a ring isomorphism $$\Omega_*^{U, T^n}\cong MU_*^{T^n}\cap MU_*[e_V^{-1}, Y_{V,d}]$$ where $MU_*^{T^n}$ is the homotopy theoretic equivariant unitary bordism ring (which was defined by tom Dieck in [@tom3]), $MU_*$ is the ordinary homotopy theoretic unitary bordism ring, the $e_V$ are the Euler classes of nontrivial irreducible complex $T^n$-representations $V$, and the $Y_{V, d}$ are the classes of degree $2d$ $(2\leq d\leq \infty)$ represented by the $T^n$-bundle $E\otimes V\longrightarrow {\Bbb C}P^{d-1}$ with $E\longrightarrow {\Bbb C}P^{d-1}$ being the hyperplane line bundle (see [@han] for more details). In his Ph.D. thesis [@dar Proposition 4.24], Darby showed a refinement of this result to the case in which the fixed point set is isolated, saying that the commutative diagram $$\begin{CD} \mathcal{Z}_*^{U}(T^n) @>>> \Omega_*^{U, T^n}\\ @VVV @VVV\\ {\Bbb Z}[e_V^{-1}] @>>> MU_*[e_V^{-1}, Y_{V,d}] \end{CD}$$ has all maps injective and is a pullback square, where $\mathcal{Z}_*^{U}(T^n)=\sum_{m\geq 0}\mathcal{Z}_{2m}^{U}(T^n)$ is the subring of $\Omega_*^{U, T^n}$ that consists of all classes that can be represented by unitary $T^n$-manifolds with finite fixed point set.
Furthermore, Darby obtained in [@dar Corollary 6.20] the following monomorphism from his refinement result: $$\varphi_n: \mathcal{Z}_{2n}^{U}(T^n)\longrightarrow \Lambda_{\Bbb Z}^n(J^{\Bbb C}_n)$$ where $J^{\Bbb C}_n$ denotes the set of irreducible $T^n$-representations, so that $J^{\Bbb C}_n$ can be identified with $\text{\rm Hom}(T^n, S^1)$, and $\Lambda_{\Bbb Z}(J^{\Bbb C}_n)$ is the free exterior algebra on $J^{\Bbb C}_n$ over ${\Bbb Z}$ with the graded structure $\Lambda_{\Bbb Z}(J^{\Bbb C}_n)=\bigoplus_m\Lambda^m_{\Bbb Z}(J^{\Bbb C}_n)$ having the property $\Lambda^k_{\Bbb Z}(J^{\Bbb C}_n)\wedge\Lambda^l_{\Bbb Z}(J^{\Bbb C}_n)\subset \Lambda^{k+l}_{\Bbb Z}(J^{\Bbb C}_n)$. The monomorphism $\varphi_n: \mathcal{Z}_{2n}^{U}(T^n)\longrightarrow \Lambda^n_{\Bbb Z}(J^{\Bbb C}_n)$ is an analogue of $\phi_*: \mathcal{Z}_*(({\Bbb Z}_2)^n)\longrightarrow {\Bbb Z}_2[J_n^{\Bbb R}]$ as stated in Section \[2-torus\]. Moreover, Darby carried out his study of $\mathcal{Z}_{2n}^{U}(T^n)$ (i.e., the equivariant unitary bordism of all unitary toric $2n$-manifolds) by capturing the ideas developed in the setting of 2-torus manifolds, which will be introduced in the next subsection. Structure of $\Xi_*$, faithful exterior polynomials and torus graphs -------------------------------------------------------------------- An exterior polynomial $g$ in $\Lambda_{\Bbb Z}^n(J^{\Bbb C}_n)$ is said to be [*faithful*]{} if the indeterminates from each monomial of $g$ form a basis of $J^{\Bbb C}_n$. Similarly to the definition of $\Lambda_{\Bbb Z}(J^{\Bbb C}_n)$, define $\Lambda_{\Bbb Z}(J_n^{*{\Bbb C}})$ to be the free exterior algebra on $J_n^{*{\Bbb C}}$ over ${\Bbb Z}$, where $J_n^{*{\Bbb C}}=\text{\rm Hom}(S^1, T^n)$.
Since both $J_n^{{\Bbb C}}$ and $J_n^{*{\Bbb C}}$ are isomorphic to ${\Bbb Z}^n$ and are dual by the following pairing $$\label{pairing1} \langle\cdot, \cdot\rangle: J_n^{*{\Bbb C}}\times J_n^{{\Bbb C}}\longrightarrow \text{\rm Hom}(S^1, S^1)\cong {\Bbb Z}$$ defined by $\langle \xi, \rho\rangle=\rho\circ \xi$, for each faithful exterior polynomial $g\in \Lambda_{\Bbb Z}^n(J^{\Bbb C}_n)$ one may obtain a [*dual polynomial*]{} $g^*\in \Lambda_{\Bbb Z}^n(J_n^{*{\Bbb C}})$ by considering the dual basis in $J_n^{*{\Bbb C}}$ of the basis in $J^{\Bbb C}_n$ produced by each monomial of $g$. Clearly, the monomorphism $\varphi_n$ maps each nonzero class $\beta$ of $\mathcal{Z}_{2n}^{U}(T^n)$ to a faithful exterior polynomial $\varphi_n(\beta)$ in $\Lambda_{\Bbb Z}^n(J^{\Bbb C}_n)$. Similarly, a differential operator $d$ on $\Lambda_{\Bbb Z}(J_n^{*{\Bbb C}})$ may be defined as follows: for each monomial $s_1\wedge\cdots\wedge s_k\in \Lambda^k_{\Bbb Z}(J_n^{*{\Bbb C}})$ with $k\geq 1$ $$d_k(s_1\wedge\cdots\wedge s_k)=\begin{cases} \sum_{i=1}^k(-1)^{i+1}s_1\wedge\cdots\wedge s_{i-1}\wedge \widehat{s}_i\wedge s_{i+1}\wedge\cdots\wedge s_k & \text{\rm if } k>1\\ 1 & \text{\rm if } k=1 \end{cases}$$ and $d_0(1)=0$. Recall that $\Xi_*=\bigoplus_{n\geq 0}\mathcal{Z}_{2n}^{U}(T^n)$ is the graded noncommutative ring generated by the equivariant unitary bordism classes of all unitary toric manifolds. Let $K_n$ denote the abelian group of all faithful exterior polynomials $g\in \Lambda_{\Bbb Z}^n(J^{\Bbb C}_n)$ such that $d(g^*)=0$. Then $K_*=\bigoplus_{n\geq 0}K_n$ forms a graded noncommutative ring. Darby showed that the rings $\Xi_*$ and $K_*$ are isomorphic. Furthermore, a faithful exterior polynomial $g$ in $\Lambda_{\Bbb Z}^n(J^{\Bbb C}_n)$ belongs to $\text{\rm Im}\varphi_n$ if and only if $d(g^*)=0$.
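A quick sketch of this signed differential (ours; the dictionary encoding of exterior monomials is an ad hoc convention) makes the sign pattern and the identity $d^2=0$ explicit:

```python
def d_ext(poly):
    """Differential on the exterior algebra over Z.  poly is a dict
    {monomial: coefficient}; a monomial is an ordered tuple of
    indeterminates, with () standing for the constant 1."""
    out = {}
    for mono, coef in poly.items():
        k = len(mono)
        if k == 0:
            continue                                   # d_0(1) = 0
        terms = ([((), coef)] if k == 1 else           # d_1(s) = 1
                 [(mono[:i] + mono[i + 1:], coef * (-1) ** i)
                  for i in range(k)])                   # (-1)^{i+1} in 1-based indexing
        for m, c in terms:
            out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c != 0}

p = {('s1', 's2', 's3'): 1}
assert d_ext(p) == {('s2', 's3'): 1, ('s1', 's3'): -1, ('s1', 's2'): 1}
assert d_ext(d_ext(p)) == {}                           # d^2 = 0
```

Unlike the mod 2 case treated earlier, the coefficients here live in ${\Bbb Z}$, so the alternating signs are essential for $d^2=0$ to hold.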
A [*torus graph*]{} is a pair $(\Gamma, \alpha)$ consisting of an $n$-valent regular graph $\Gamma$ with a torus axial function $\alpha: E_\Gamma\longrightarrow J^{\Bbb C}_n$ subject to the following properties: 1. $\alpha(\overline{e})=\pm\alpha(e)$; 2. for each vertex $v$, $\alpha(E_v)$ forms a basis of $J^{\Bbb C}_n$; 3. for each edge $e$, $\alpha(E_{i(e)})\equiv \alpha(E_{t(e)})\mod \alpha(e)$ where $E_\Gamma$ denotes the set of oriented edges of $\Gamma$, that is, each edge appears twice in $E_\Gamma$ with opposite orientations, $i(e)$ and $t(e)$ denote the initial and terminal vertices of an edge $e\in E_\Gamma$, respectively, $\overline{e}$ denotes the edge $e$ with its opposite orientation, and for each vertex $v$, $E_v=\{e\in E_\Gamma| i(e)=v\}$. Note that a torus axial function is different from the axial function for GKM graphs as defined in [@gz1], which requires that $\alpha(\overline{e})=-\alpha(e)$ as well as that the elements of $\alpha(E_v)$ be pairwise linearly independent. As shown in [@mmp], a torus graph is not a GKM graph in general, each torus manifold $M$ determines a torus graph $(\Gamma_M, \alpha)$, and all torus graphs are orientable, where an [*orientation*]{} of a torus graph $(\Gamma, \alpha)$ is an assignment $\sigma: V_\Gamma\longrightarrow\{\pm1\}$ satisfying $\sigma(i(e))\alpha(e)=-\sigma(i(\overline{e}))\alpha(\overline{e})$ for every $e\in E_\Gamma$, and a [*torus manifold*]{} of dimension $2n$ is a smooth closed $2n$-manifold with an effective $T^n$-action with a nonempty fixed point set (so a unitary toric manifold is a special torus manifold). Darby showed in [@dar Proposition 6.11] that the torus graph of a unitary toric manifold is orientable, and he gave the definition of the torus polynomial $g_{(\Gamma, \alpha, \sigma)}$ of an oriented torus graph $(\Gamma, \alpha, \sigma)$ (see [@dar Definition 6.17]).
Furthermore, Darby characterized the torus polynomials of oriented torus graphs in terms of the vanishing of the differential $d$ on the dual polynomials. Let $g\in \Lambda_{\Bbb Z}^n(J^{\Bbb C}_n)$ be a faithful polynomial. Then $g$ is the torus polynomial of an oriented torus graph if and only if $d(g^*)=0$. A [*quasitoric manifold*]{} is an even-dimensional smooth closed manifold $M^{2n}$ equipped with a locally standard smooth $T^n$-action such that the orbit space is a simple $n$-polytope $P$. Like small covers, each quasitoric manifold $\pi: M^{2n}\longrightarrow P^n$ determines a characteristic map $\lambda$ (also called a ${\Bbb Z}^n$-coloring here) on $P^n$, which sends each facet of $P^n$ to a nontrivial element of $J^{*{\Bbb C}}_n$, unique up to sign, such that the $n$ facets of $P^n$ meeting at a vertex are mapped to a basis of $J^{*{\Bbb C}}_n$, and $M^{2n}$ can be recovered from the combinatorial data $(P^n, \lambda)$. As shown in [@br], each quasitoric manifold $\pi: M^{2n}\longrightarrow P^n$ with an omniorientation is a unitary toric manifold, where an [*omniorientation*]{} consists of a choice of orientation for $M^{2n}$ and for every submanifold $\pi^{-1}(F)$, $F\in \mathcal{F}(P^n)$ (the set of all facets of $P^n$). Each omnioriented quasitoric manifold still determines a pair $(P^n, \lambda)$, which is called the [*quasitoric pair*]{}. Then the following result means that the study of omnioriented quasitoric manifolds can be reduced to the study of quasitoric pairs. There is a bijection between the set of quasitoric pairs and the set of omnioriented quasitoric manifolds. To consider whether each class of $\mathcal{Z}_{2n}^{U}(T^n)$ is represented by an omnioriented quasitoric manifold, Darby in [@dar] introduced a graded noncommutative ring $\mathcal{Q}_*$, which is generated by all quasitoric pairs with the addition and the multiplication given by the disjoint union and the cartesian product, respectively.
Then he studied the homomorphism of noncommutative graded rings $$\mathcal{M}: \mathcal{Q}_*\longrightarrow \Xi_*$$ defined by constructing the omnioriented quasitoric manifold associated to a quasitoric pair. This homomorphism is not a monomorphism, but if it is surjective, then one can conclude that each class of $\mathcal{Z}_{2n}^{U}(T^n)$ is represented by an omnioriented quasitoric manifold. Darby made significant progress by showing that $\mathcal{M}$ is surjective when $n=1,2$ (see [@dar Corollaries 8.8 and 8.10]). Darby also extended the connected sum construction of quasitoric pairs, which allows for a more general notion of the equivariant connected sum of omnioriented quasitoric manifolds ([@dar §7.3]), and obtained the connected sum formula and the product formula for quasitoric pairs ([@dar Lemmas 7.5 and 7.11]). In addition, for an omnioriented quasitoric manifold $\pi: M^{2n}\longrightarrow P^n$, he also showed that the polynomial of the quasitoric pair $(P^n, \lambda)$ is the dual of the torus polynomial of the associated oriented torus graph $(\Gamma_M, \alpha)$ ([@dar Formula (7.5)]). Equivariant Chern numbers and the number of fixed points for unitary torus manifolds ------------------------------------------------------------------------------------ In [@ggk], Guillemin, Ginzburg and Karshon showed that the equivariant unitary bordism class of a unitary $T^n$-manifold with isolated fixed points is completely determined by its equivariant Chern numbers. In [@lt1], Lü and Tan gave a refinement of their result in the setting of unitary toric manifolds. \[bounds\] Let $\beta=\{M\}$ be a class in $\mathcal{Z}_{2n}^U(T^n)$. Then $\beta=0$ if and only if the equivariant Chern numbers $\langle (c_1^{T^n})^i(c_2^{T^n})^j, [M]\rangle=0$ for all $i, j\in {\Bbb N}$, where $[M]$ is the fundamental class of $M$ with respect to the given orientation.
In [@ck], Kosniowski studied unitary $S^1$-manifolds with isolated fixed points and proposed the following conjecture. [**Conjecture**]{} (Kosniowski). [*Suppose that $M^{2n}$ is a unitary $S^1$-manifold with isolated fixed points. If $M^{2n}$ does not bound equivariantly, then the number of fixed points is greater than $f(n)$, where $f(n)$ is some linear function.*]{} As was noted by Kosniowski in [@ck], the most likely function is $f(n)={n\over 2}$, so that the number of fixed points of $M^{2n}$ is at least $[{n\over 2}]+1$. With respect to this conjecture, some related work has recently been done (see [@ckp; @ll; @pt]). For example, Li and Liu showed in [@ll] that if $M^{2mn}$ is an almost complex manifold and there exists a partition $\lambda=(\lambda_1,\dots,\lambda_r)$ of weight $m$ such that the corresponding Chern number $\langle(c_{\lambda_1}\dots c_{\lambda_r})^n, [M]\rangle$ is nonzero, then any $S^1$-action on $M$ must have at least $n+1$ fixed points. In the case of unitary toric manifolds, one can apply Theorem \[bounds\] to obtain the following result, which provides further supporting evidence for the Kosniowski conjecture. \[number\] Suppose that $M^{2n}$ is a $(2n)$-dimensional unitary toric manifold. If $M^{2n}$ does not bound equivariantly, then the number of fixed points is at least $\lceil{n\over2}\rceil+1$, where $\lceil{n\over2}\rceil$ denotes the least integer not less than ${n\over 2}$. Problems and conjectures ------------------------ We would like to conclude this section with the following problems and conjectures: Let $g=\sum_i t_{i,1}\wedge\cdots\wedge t_{i, n}$ be a faithful exterior polynomial in $\Lambda_{\Bbb Z}^n(J_n^{\Bbb C})$. Is the following fact true?
[**Fact.**]{} $d(g^*)=0$ if and only if for all symmetric polynomial functions $f(x_1,...,x_n)$ over ${\Bbb Z}$, $$\label{integralC} \sum_{i}{{f(t_{i,1}, ..., t_{i, n})}\over{t_{i,1}\cdots t_{i, n}}}\in\mathcal{S}(J_n^{\Bbb C})$$ when $t_{i,1}\cdots t_{i, n}$ and $f(t_{i,1}, ..., t_{i, n})$ are regarded as polynomials in $\mathcal{S}(J_n^{\Bbb C})$, where $\mathcal{S}(J_n^{\Bbb C})$ is the symmetric algebra on $J_n^{\Bbb C}$ over ${\Bbb Z}$. Each class of $\mathcal{Z}_{2n}^{U}(T^n)$ is represented by an omnioriented quasitoric manifold. Equivalently, the homomorphism $\mathcal{M}: \mathcal{Q}_*\longrightarrow \Xi_*$ is surjective. The number $\lceil{n\over2}\rceil+1$ is the best possible lower bound for the number of fixed points of nonbounding unitary toric manifolds of dimension $2n$. Relation between $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ and $\mathcal{Z}_{2n}^{U}(T^n)$ {#rela} ================================================================================ Milnor’s work [@m] (see also [@s1]) tells us that there is a homomorphism $ F_*: \Omega^U_*\longrightarrow \mathfrak{N}^2_* $ where $\mathfrak{N}^2_*=\{\alpha^2| \alpha\in \mathfrak{N}_*\}$. This actually implies that there is a surjective homomorphism $ H_n: \Omega^U_{2n}\longrightarrow \mathfrak{N}_{n} $ induced by $\theta_n\circ F_n$, where $\theta_n:\mathfrak{N}^2_{n}\longrightarrow\mathfrak{N}_n$ is defined by mapping $\alpha^2\longmapsto \alpha$. This observation leads to the following natural question: [*Is there a homomorphism $\widetilde{H}_n: \mathcal{Z}_{2n}^{U}(T^n)\longrightarrow \mathcal{Z}_n(({\Bbb Z}_2)^n)$ such that $\widetilde{H}_n$ is onto?*]{} In [@lt2], Lü and Tan discussed this question. The homomorphism $\widetilde{H}_n$ is defined as follows: first, a class $\{M^{2n}\}$ in $\mathcal{Z}_{2n}^{U}(T^n)$ gives an oriented torus graph $(\Gamma_M, \alpha)$ of $M^{2n}$.
Next one may obtain a $({\Bbb Z}_2)^n$-colored graph $(\Gamma, \overline{\alpha})$ from $(\Gamma_M, \alpha)$ such that $\Gamma=\Gamma_M$ and $\overline{\alpha}$ is the mod 2 reduction of $\alpha$, and then the coloring polynomial of $(\Gamma, \overline{\alpha})$ determines a class of $\mathcal{Z}_n(({\Bbb Z}_2)^n)$ as desired. In particular, if $M^{2n}$ is an omnioriented quasitoric manifold over a simple convex polytope $P$, then $\widetilde{H}_n$ exactly maps $\{M^{2n}\}$ to the class of the fixed point set (as a small cover over $P$) of the natural conjugation involution on $M^{2n}$ (see [@dj Corollary 1.9]). Furthermore, it was shown by using the Atiyah–Bott–Berline–Vergne localization theorem and a classical result of Stong that $\widetilde{H}_n$ is well-defined. Using a rather technical argument, Lü and Tan showed that \[onto\] The homomorphism $\widetilde{H}_n: \mathcal{Z}_{2n}^{U}(T^n)\longrightarrow \mathcal{Z}_n(({\Bbb Z}_2)^n)$ is surjective. Theorem \[onto\] gives an answer to the lifting problem from small covers to quasitoric manifolds in the sense of equivariant bordism, where the lifting problem is explained as follows: given a $({\Bbb Z}_2)^n$-coloring $\lambda: \mathcal{F}(P^n)\longrightarrow J_n^{*{\Bbb R}}\cong ({\Bbb Z}_2)^n$ on a simple convex $n$-polytope $P^n$, [*does there exist a ${\Bbb Z}^n$-coloring $\widetilde{\lambda}: \mathcal{F}(P^n)\longrightarrow J_n^{*{\Bbb C}}\cong {\Bbb Z}^n$ such that the following diagram commutes?*]{} $$\xymatrix{ &J_n^{*{\Bbb C}} \ar[d]^{\mod 2}\\ \mathcal{F}(P^n) \ar[ur]^{\widetilde{\lambda}}\ar[r]_{\lambda} & J_n^{*{\Bbb R}} }$$ where $\mathcal{F}(P^n)$ denotes the set of all facets of $P^n$. This problem was posed by the author of this paper at the conference on toric topology held in Osaka in November 2011. The problem is still open except for the cases of $n\leq 3$ and $m-n\leq 3$, where $m$ is the number of all facets of $P^n$ (see [@cp]). [99]{} Alexander, J. C.
The bordism ring of manifolds with involution. Proc. Amer. Math. Soc. [**31**]{} (1972), 536–542. C. Allday and V. Puppe, [*Cohomological Methods in Transformation Groups*]{}, Cambridge Studies in Advanced Mathematics, [**32**]{}, Cambridge University Press, 1993. Z. Q. Bao and Z. Lü, [*Manifolds associated with $(\Z_2)^n$-colored regular graphs*]{}, Forum Math. [**24**]{} (2012), 121–149. D. Biss, V. Guillemin and T. S. Holm, [*The mod 2 cohomology of fixed point sets of anti-symplectic involutions*]{}, Adv. Math. [**185**]{} (2004), 370–399. V. M. Buchstaber and T.E. Panov, [*Torus actions and their applications in topology and combinatorics*]{}, University Lecture Series, 24. American Mathematical Society, Providence, RI, 2002. V. M. Buchstaber, T.E. Panov and N. Ray, [*Toric Genera*]{}, Internat. Math. Res. Notices [**2010**]{}, No. 16, 3207–3262. V. M. Buchstaber and N. Ray, [*Toric manifolds and complex cobordisms*]{}, Uspekhi Mat. Nauk [**53**]{} (1998), 139–140. In Russian; translated in Russ. Math. Surv. [**53**]{} (1998), 371–373. H. W. Cho, J. H. Kim and H. C. Park, [*On the conjecture of Kosniowski*]{}, Asian J. Math. [**16**]{} (2012), no. 2, 271–278. S. Y. Choi, M. Masuda and D. Y. Suh, [*Topological classification of generalized Bott manifolds*]{}, Trans. Amer. Math. Soc. [**362**]{} (2010), 1097–1112. S. Y. Choi and H. C. Park, [*Wedge operations and torus symmetries*]{}, arXiv:1305.0136. P.E. Conner, [*Differentiable Periodic Maps*]{}, 2nd Edition, [**738**]{}, Springer–Verlag, 1979. P.E. Conner and E.E. Floyd, [*Differentiable periodic maps*]{}, Ergebnisse Math. Grenzgebiete, N. F., Bd. [**33**]{}, Springer-Verlag, Berlin, 1964. P.E. Conner and E.E. Floyd, [*Periodic maps which preserve a complex structure*]{}, Bull. Amer. Math. Soc. [**70**]{} (1964), 574–579. A. Darby, [*Quasitoric manifolds in equivariant complex bordism*]{}, Ph.D. thesis, The University of Manchester, 2013. M. Davis and T.
Januszkiewicz, [*Convex polytopes, Coxeter orbifolds and torus actions*]{}, Duke Math. J. [**61**]{} (1991), 417-451. T. tom Dieck, [*Bordism of $G$-manifolds and integrality theorems*]{}, Topology [**9**]{} (1970), 345–358. T. tom Dieck, [*Characteristic numbers of G-manifolds I*]{}, Invent. Math. [**13**]{} (1971), 213–224. V. Guillemin, V. Ginzburg and Y. Karshon, [*Moment maps, cobordisms, and Hamiltonian group actions*]{}, in ‘Mathematical Surveys and Monographs’, [**98**]{}, American Mathematical Society, Providence, RI, 2002. M. Goresky, R. Kottwitz, R. MacPherson, *Equivariant cohomology, Koszul duality, and the localization theorem*, Invent. Math. **131** (1998), 25–83. B. Grünbaum, [*Convex Polytopes*]{}, Second Edition, Graduate Texts in Mathematics [**221**]{}, Springer–Verlag, 2003. V. Guillemin and C. Zara, [*1-Skeleta, Betti numbers, and equivariant cohomology*]{}, Duke Math. J. [**107**]{} (2001), 283–349. V. Guillemin and C. Zara, [*Equivariant de Rham Theory and Graphs*]{}, Asian J. Math. [**3**]{} (1999), 49–76. B. Hanke, [*Geometric versus homotopy theoretic equivariant bordism*]{}, Math. Ann. [**332**]{} (2005), 677–696. C. Kosniowski, [*Generators of the ${\Bbb Z}/p$ bordism ring: serendipity*]{}, Math. Z. [**149**]{} (1976), 121–130. C. Kosniowski, [*Some formulae and conjectures associated with circle actions*]{}, Topology Symposium, Siegen 1979 (Proc. Sympos., Univ. Siegen, Siegen, 1979), pp331–339, Lecture Notes in Math., [**788**]{}, Springer, Berlin, 1980. C. Kosniowski and R.E. Stong, [*$({\Bbb Z}_2)^k$-actions and characteristic numbers*]{}, Indiana Univ. Math. J. [**28**]{} (1979), 723–743. C. Kosniowski and M. Yahia, [*Unitary bordism of circle actions*]{}, Proceedings of the Edinburgh Mathematical Society [**26**]{} (1983), 97–105. P.S. Landweber, [*Equivariant bordism and cyclic groups*]{}, Proc. Amer. Math. Soc. [**31**]{} (1972), 564–570. P. Li and K. F. 
Liu, [*Some remarks on circle action on manifolds*]{}, Mathematical Research Letters [**18**]{} (2011), 437–446. P. Löffler, [*Characteristic Numbers of Unitary Torus-Manifolds*]{}, Bull. Amer. Math. Soc. [**79**]{} (1973), 1262–1263. P. Löffler, [*Bordismengruppen Unitärer Torusmannigfaltigkeiten*]{}, Manuscripta Mathematica [**12**]{} (1974), 307–327. Z. Lü, [*Graphs of 2-torus actions*]{}, Toric topology, 261–272, Contemp. Math., [**460**]{}, Amer. Math. Soc., Providence, RI, 2008. Z. Lü, [*2-torus manifolds, cobordism and small covers*]{}, Pacific J. Math. [**241**]{} (2009), 285–308. Z. Lü, [*Graphs and $({\Bbb Z}_2)^k$-actions*]{}, arXiv: math.AT/0508643. Z. Lü and M. Masuda, [*Equivariant classification of 2-torus manifolds*]{}, Colloq. Math. [**115**]{} (2009), 171–188. Z. Lü and Q. B. Tan, [*Equivariant Chern numbers and the number of fixed points for unitary torus manifolds*]{}, Math. Res. Lett. [**18**]{} (2011), no. 6, 1319–1325. Z. Lü and Q. B. Tan, [*Small covers and the equivariant bordism classification of 2-torus manifolds*]{}, Int. Math. Res. Notices (First published online: September 3, 2013), doi: 10.1093/imrn/rnt183. Z. Lü and Q. B. Tan, [*The relation between equivariant bordism groups of 2-torus manifolds and unitary toric manifolds*]{}, preprint. Z. Lü and L. Yu, [*Topological types of 3-dimensional small covers*]{}, Forum Math. [**23**]{} (2011), 245–284. M. Masuda, [*Unitary toric manifolds, multi-fans and equivariant index*]{}, Tohoku Math. J. [**51**]{} (1999), 237–265. H. Maeda, M. Masuda and T. Panov, [*Torus Graphs and Simplicial Posets*]{}, Adv. in Math. [**212**]{} (2007), 458–483. S. P. Novikov, [*Methods of algebraic topology from the point of view of cobordism theory*]{}, Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya [**32**]{} (1968), 1245–1263 (Russian). English translation in Mathematics of the USSR–Izvestiya [**2**]{} (1968), 1193–1211. S. P. 
Novikov, [*Adams operations and fixed points*]{}, Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya [**31**]{} (1967), 855–951 (Russian). English translation in Mathematics of the USSR–Izvestiya [**1**]{} (1967), 827–913. A. Pelayo and S. Tolman, [*Fixed points of symplectic periodic flows*]{}, Ergodic Theory and Dynamical Systems [**31**]{} (2011), 1237–1247. J.W. Milnor, [*On the Stiefel–Whitney numbers of complex manifolds and of spin manifolds*]{}, Topology, [**3**]{} (1965), 223–230. Dev P. Sinha, [*Real equivariant bordism and stable transversality obstructions for ${\Bbb Z}/2$*]{}, Proc. Amer. Math. Soc. [**130**]{} (2002), no. 1, 271–281. Dev P. Sinha, [*Computations of complex equivariant bordism rings*]{}, Amer. J. Math. [**123**]{} (2001), no. 4, 577–605. R.E. Stong, [*Notes on cobordism theory*]{}, Princeton University Press, 1968. R.E. Stong, [*Equivariant bordism and $({\Bbb Z}_2)^k$-actions*]{}, Duke Math. J. [**37**]{} (1970), 779–785. R.E. Stong, [*Complex and Oriented Equivariant Bordism*]{}, In Topology of Manifolds (Proceedings of The University of Georgia Institute, Athens, GA, 1969), 291–316. Chicago, IL: Markham, 1970.
--- abstract: | Observational evidence for strong magnetic fields throughout the envelopes of evolved stars is increasing. Many of the instruments coming on line in the near future will be able to make further contributions to this field. Specifically, maser polarization observations and dust/line polarization in the sub-mm regime have the potential to finally provide a definite picture of the magnetic field strength and configuration from the Asymptotic Giant Branch (AGB) all the way to the Planetary Nebula phase. While current observations are limited in sample size, strong magnetic fields appear ubiquitous at all stages of (post-)AGB evolution. Recent observations also strongly support a field structure that is maintained from close to the star out to several thousands of AU distance. While its origin is still unclear, the magnetic field is thus a strong candidate for shaping the stellar outflows on the path to the planetary nebula phase and might even play a role in determining the stellar mass-loss.\ [**Keywords.**]{} Planetary Nebulae – Stars: AGB and post-AGB – Magnetic fields author: - 'W. H. T. Vlemmings' title: 'Magnetic fields around (post-)AGB stars and (Pre-)Planetary Nebulae' --- Introduction ============ Strongly asymmetric planetary nebulae (PNe) have been shown to be common. The research into their shaping processes has become a fundamental part of our attempts to further the understanding of the return of processed material into the ISM by low- and intermediate-mass stars at the end of their evolution. Whereas the standard interacting winds scenario [@Kwok78] can explain a number of the PN properties, an important discovery has been that the collimated outflows of the pre-PNe (P-PNe), where such outflows are common, have a momentum that exceeds that which can be supplied by radiation pressure alone [@Bujarrabal01].
The source of this momentum excess has been heavily debated during the past several years, with the most commonly invoked causes being magnetic fields, binary or disk interaction, or a combination of these [e.g. @Balick02]. Due to a number of similarities with the jets and outflows produced by young stellar objects, the study of P-PNe outflows provides further research opportunities into a potentially universal mechanism of jet launching. Here I will review the observational evidence for strong magnetic fields in PNe as well as around their AGB and post-AGB progenitors. I will give an overview of the methods that can be used to study magnetic fields, especially in light of the plethora of new instruments that will be available shortly. Finally, I will discuss a number of questions related to this topic that we can expect to be answered with the new instruments in the next few years. Observational Techniques - Polarization ======================================= With the exception of observations where the magnetic field strength is estimated assuming forms of energy equilibrium, such as synchrotron observations, the magnetic field strength and structure are typically determined from polarization observations. Circular Polarization --------------------- Circular polarization, generated through Zeeman splitting, can be used to measure the magnetic field strength. It measures the total field strength when the splitting is large and the line-of-sight component of the field when the splitting is small. The predominant source of magnetic field strength information during the late stages of stellar evolution comes from maser circular polarization observations, and particularly the common SiO, H$_2$O and OH masers. These can show circular polarization fractions ranging from $\sim0.1$% (H$_2$O) up to $\sim100$% (OH) and are, because of their compactness and strength, excellent sources to be observed with high angular resolution.
Unfortunately, the analysis of maser polarization is not straightforward [for a review, see @Vlemmings07], and it has taken a long time before maser observations were acknowledged to provide accurate magnetic field measurements. More recently, the first attempts have been made to detect the Zeeman splitting of non-maser molecular lines in circumstellar envelopes, such as CN [@Herpin09]. As many of these occur at shorter wavelength in the (sub-)mm regime, the advent of the Atacama Large (sub-)Millimeter Array will further enhance these types of studies. Linear Polarization ------------------- Linear polarization, probing the structure of the plane-of-the-sky component of the magnetic field, can be observed both in the dust (through aligned grains) and in molecular lines (through radiation anisotropy - the Goldreich-Kylafis effect). Typical percentages of linear polarization range from up to a few percent (e.g. dust, CO, H$_2$O masers) to several tens of percent (OH and SiO masers). Again, the interpretation of maser polarization depends on a number of intrinsic maser properties, but in specific instances maser linear polarization can even be used to determine the full 3-dimensional field morphology. In addition to the geometry, the linear polarization, most notably of dust, can also be used to obtain a value for the strength of the plane-of-the-sky component of the magnetic field. This is done using the Chandrasekhar-Fermi method, which relates the turbulence-induced scatter of the polarization vectors to the magnetic field strength. Current Status - Evolved Star Magnetic Fields ============================================= AGB Stars --------- Most AGB magnetic field measurements come from maser polarization observations (SiO, H$_2$O and OH). These have revealed a strong magnetic field throughout the circumstellar envelope.
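As a rough illustration of the Chandrasekhar-Fermi method mentioned above, the plane-of-the-sky field strength can be estimated from the polarization-angle scatter as follows. This is a minimal sketch, not a data pipeline; the input values, the correction factor $Q$ and the assumed mean particle mass are all illustrative assumptions, not measurements:

```python
import math

def cf_field_gauss(n_h2_cm3, sigma_v_kms, sigma_theta_deg, q=0.5):
    """Plane-of-the-sky B (Gauss) from the Chandrasekhar-Fermi relation
    B = Q * sqrt(4 pi rho) * sigma_v / sigma_theta (cgs units)."""
    m_h2 = 2.0 * 1.6726e-24          # g, assumed mean particle mass ~ m(H2)
    rho = n_h2_cm3 * m_h2            # g cm^-3
    sigma_v = sigma_v_kms * 1.0e5    # cm s^-1
    sigma_theta = math.radians(sigma_theta_deg)
    return q * math.sqrt(4.0 * math.pi * rho) * sigma_v / sigma_theta

# Illustrative OH-shell-like numbers: n ~ 1e6 cm^-3, 1 km/s turbulence,
# 10 degrees of polarization-angle scatter.
print("B_pos ~ %.1e G" % cf_field_gauss(1e6, 1.0, 10.0))
```

For these assumed conditions the estimate comes out at the mG level, i.e. the same order as the circumstellar maser field strengths discussed in this review.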
In Figure \[bvsr\], I have indicated the magnetic field strength in the regions of the envelope traced by the maser measurements throughout AGB envelopes. While a clear trend with increasing distance from the star is seen, the lack of accurate information on the location of the masers with respect to the central star makes it difficult to constrain this relation beyond stating that it seems to vary between $B\propto R^{-2}$ (solar-type) and $B\propto R^{-1}$ (toroidal). Future observations of CO polarization might be able to provide further constraints. As the masers used for these studies are mostly found in oxygen-rich AGB stars, it has to be considered that the sample is biased. However, recent CN Zeeman splitting observations [@Herpin09] seem to indicate that fields of similar strength are found around carbon-rich stars. Beyond determining the magnetic field strength, the large scale structure of the magnetic field is more difficult to determine, predominantly because the maser observations often probe only limited lines-of-sight. Even though OH observations specifically seem to indicate a systematic field structure, it has often been suggested that there might not be a large scale component to the field that would be necessary to shape the outflow [@Soker02]. So far the only shape constraints throughout the envelope have been determined for the field around the supergiant star VX Sgr (Fig. \[vxsgr\]), where maser observations spanning 3 orders of magnitude in distance are all consistent with a large scale, possibly dipole shaped, magnetic field. Post-AGB Stars and P-PNe ------------------------ Similar to the AGB stars, masers are the major source of magnetic field information for post-AGB stars and P-PNe, with the majority of observations focused on OH masers. These have revealed magnetic field strengths similar to those of AGB stars (a few mG) and a clear large scale magnetic field structure [e.g. @Bains03].
The most promising results have come after the detection of the so-called ’water-fountain’ sources. These sources exhibit fast and highly collimated H$_2$O maser jets that often extend beyond even the regular OH maser shell. With a dynamical age of the jet of order 100 years, they are potentially the progenitors of the bipolar (P-)PNe. Although the masers are often too weak for a detection of the magnetic field, observations of the archetype of the water-fountains, W43A, have revealed a strong toroidal magnetic field that is collimating the jet [Fig. \[w43a\] and @Vlemmings06]. Planetary Nebulae ----------------- During the PN phase, masers are rare and weak, and until now only the PN K3-35 has had a magnetic field of a few mG measured in its OH masers [@Miranda01]. Fortunately, there are a few other methods of measuring PN magnetic fields. The field orientation in the dust of the nebula can be determined using dust continuum polarization observations, and current observations seem to indicate toroidal fields, with the dust alignment likely occurring close to the dust formation zone [@Sabin07]. Faraday rotation studies can potentially also probe the magnetic field in the interaction region between the interstellar medium and the stellar outflow. In contrast to AGB stars, the central stars of PNe also show atomic lines that can be used to directly probe the magnetic fields on the surface of these stars. While measurements are still rare, observations of, for example, the central star of NGC 1360 indicate a field of order several kG [@Jordan05]. Origin of the Magnetic Field ============================ Despite the strong observational evidence for evolved star magnetic fields, the origin of these fields is still unclear. In single stars, differential rotation between the AGB star core and the envelope could potentially result in a sufficiently strong magnetic field [@Blackman01].
However, as the energy loss due to the drag of a rotating magnetic field drains the rotation needed to maintain the field within several tens of years, an additional source of energy is needed [e.g. @Nordhaus06]. If AGB stars were able to sustain a sun-like convective dynamo, magnetically dominated explosions could indeed result from single stars. Alternatively, the energy could be provided by the interaction with a circumstellar disk, although the origin of the disk is then another puzzle. Another explanation for maintaining a magnetic field is the interaction with a binary companion or potentially a heavy planet, with common-envelope evolution providing paths to both magnetically as well as thermally driven outflows [@Nordhaus06]. A companion could be the cause of the precession seen in a number of water-fountain and (P-)PNe jets. However, to date, the majority of the stars with measured magnetic fields do not show any other indication of binarity.

|                                         | Photosphere           | SiO                  | H$_2$O               | OH                       |
|-----------------------------------------|-----------------------|----------------------|----------------------|--------------------------|
| $B$ \[G\]                               | $\sim50$?             | $\sim3.5$            | $\sim0.3$            | $\sim0.003$              |
| $R$ \[AU\]                              | -                     | $\sim3$ $[2-4]$      | $\sim25$ $[5-50]$    | $\sim500$ $[100-10.000]$ |
| $V_{\rm exp}$ \[km s$^{-1}$\]           | $\sim5$               | $\sim5$              | $\sim8$              | $\sim10$                 |
| $n_{\rm H_2}$ \[cm$^{-3}$\]             | $\sim10^{14}$         | $\sim10^{10}$        | $\sim10^{8}$         | $\sim10^{6}$             |
| $T$ \[K\]                               | $\sim2500$            | $\sim1300$           | $\sim500$            | $\sim300$                |
| $B^2/8\pi$ \[dyne cm$^{-2}$\]           | $\mathbf{10^{+2.0}}$? | $\mathbf{10^{+0.1}}$ | $\mathbf{10^{-2.4}}$ | $10^{-6.4}$              |
| $nKT$ \[dyne cm$^{-2}$\]                | $10^{+1.5}$           | $10^{-2.8}$          | $10^{-5.2}$          | $10^{-7.4}$              |
| $\rho V_{\rm exp}^2$ \[dyne cm$^{-2}$\] | $10^{+1.5}$           | $10^{-2.5}$          | $10^{-4.1}$          | $\mathbf{10^{-5.9}}$     |
| $V_A$ \[km s$^{-1}$\]                   | $\sim15$              | $\sim100$            | $\sim300$            | $\sim8$                  |

: Energy densities in AGB envelopes \[energy\]

Effect of the Magnetic Field ============================ Until a more complete sample of magnetically active AGB stars, post-AGB stars and (P-)PNe is known, it is hard to observationally determine the effect of the magnetic field on these late stages of evolution. Starting with the AGB phase, a number of theoretical works have described the potential of magnetic fields in (at least partly) driving the stellar mass-loss through Alfv[é]{}n waves [e.g. @Falceta02], or through the creation of cool spots on the surface above which dust can form more easily [@Soker98]. As current models of dust and radiation driven winds are still unable to explain especially the mass-loss of oxygen-rich stars, magnetic fields might provide the missing component of this problem, with tentative evidence already pointing to a relation between the magnetic field strength and mass-loss rate. Other theoretical works have focused on the magnetic shaping of the stellar winds [e.g. @Chevalier94; @Garcia05; @Frank04]. But to properly determine the possible effect of the magnetic fields, it is illustrative to study the approximate ratios of the magnetic, thermal and kinetic energies contained in the stellar wind. In Table \[energy\] I list these energy densities along with the Alfv[é]{}n velocities and typical temperature, density and velocity parameters in the envelope of AGB stars. While many values are quite uncertain, as the masers that are used to probe them can exist in a fairly large range of conditions, it seems that the magnetic energy dominates out to $\sim50-100$ AU in the circumstellar envelope.
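As a sanity check, the ordering of the energy densities in Table \[energy\] can be reproduced directly from the tabulated $B$, $n_{\rm H_2}$, $T$ and $V_{\rm exp}$ values. This is a rough cgs order-of-magnitude estimate; the assumed mean particle mass ($\approx m_{\rm H_2}$) is my own assumption:

```python
import math

K_B = 1.3807e-16     # erg/K, Boltzmann constant (cgs)
M_H2 = 3.3452e-24    # g, assumed mean particle mass ~ m(H2)

def energy_densities(b_gauss, n_cm3, t_k, v_kms):
    """Return (magnetic, thermal, kinetic) energy densities in dyne cm^-2."""
    u_mag = b_gauss**2 / (8.0 * math.pi)        # B^2 / 8 pi
    u_th  = n_cm3 * K_B * t_k                   # n K T
    u_kin = n_cm3 * M_H2 * (v_kms * 1.0e5)**2   # rho V_exp^2
    return u_mag, u_th, u_kin

# SiO maser zone (~3 AU) and OH maser zone (~500 AU), values from Table [energy]
sio = energy_densities(3.5, 1e10, 1300.0, 5.0)
oh  = energy_densities(0.003, 1e6, 300.0, 10.0)
print("SiO: B^2/8pi=%.1e  nKT=%.1e  rho v^2=%.1e" % sio)
print("OH : B^2/8pi=%.1e  nKT=%.1e  rho v^2=%.1e" % oh)
```

Within the order-of-magnitude accuracy of the table, the magnetic term dominates in the SiO zone while the kinetic term takes over in the OH zone, in line with the bold entries of Table \[energy\].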
This would correspond to the so-called ’launch’ region of magneto-hydrodynamic (MHD) outflows, which typically extend to no more than $\sim50R_i$, with $R_i$ the inner-most radial scale of the launch engine [e.g. @Blackman09]. A rough constraint on $R_i$ thus seems to be $\sim1-2$ AU, close to the surface of the star. Outlook ======= While progress in studying the magnetic fields of evolved stars has been significant, a number of crucial questions remain to be answered. Several of these can be addressed with the new and upgraded telescopes in the near future. For example, the upgraded EVLA and eMERLIN will uniquely be able to determine the location of the masers in the envelope with respect to the central star, giving us, together with polarization observations, crucial information on the shape and structure of the magnetic field throughout the envelopes. ALMA will be able to add further probes of magnetic fields with, for example, high-frequency masers and CO polarization observations, significantly expanding our sample of stars with magnetic field measurements. With the ALMA sensitivity, polarization will be easily detectable even in short observations and thus, even if not the primary goal, polarization calibration should be done. The new low-frequency arrays can potentially be used to determine magnetic fields in the interface between the ISM and PNe envelopes through Faraday rotation observations. With the advances in the search for binaries and the theories of common-envelope evolution and MHD outflow launching, the new observations will address for example: - [Under what conditions does the magnetic field dominate over e.g.
binary interaction when shaping outflows?]{} - [Are magnetic fields as widespread in evolved stars as they seem?]{} - [What is the origin of the AGB magnetic field - can we find the binaries/heavy planets that might be needed?]{} - [Is there a relation between AGB mass-loss and magnetic field strength?]{} WV acknowledges the support by the Deutsche Forschungsgemeinschaft (DFG) through the Emmy Noether Research grant VL 61/3-1, and the work by the various researchers that have been crucial in the development of the area of evolved star magnetic field research (including those that I neglected to reference in this review). natexlab\#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\]\[\][[\#2](#2)]{} , N., [Vlemmings]{}, W., & [van Langevelde]{}, H. J. 2010, , 509, A26+ , I., [Gledhill]{}, T. M., [Yates]{}, J. A., & [Richards]{}, A. M. S. 2003, , 338, 287 , B., & [Frank]{}, A. 2002, , 40, 439 , E. G. 2009, in IAU Symposium, vol. 259 of IAU Symposium, 35 , E. G., [Frank]{}, A., [Markiel]{}, J. A., [Thomas]{}, J. H., & [Van Horn]{}, H. M. 2001, , 409, 485 , V., [Castro-Carrizo]{}, A., [Alcolea]{}, J., & [S[á]{}nchez Contreras]{}, C. 2001, , 377, 868 , R. A., & [Luo]{}, D. 1994, , 421, 225 , D., & [Jatenco-Pereira]{}, V. 2002, , 576, 976 , A., & [Blackman]{}, E. G. 2004, , 614, 737 , G., [L[ó]{}pez]{}, J. A., & [Franco]{}, J. 2005, , 618, 919 , F., [Baudry]{}, A., [Thum]{}, C., [Morris]{}, D., & [Wiesemeyer]{}, H. 2006, , 450, 667 , F., [Baudy]{}, A., [Josselin]{}, E., [Thum]{}, C., & [Wiesemeyer]{}, H. 2009, in IAU Symposium, vol. 259 of IAU Symposium, 47 , S., [Werner]{}, K., & [O’Toole]{}, S. J. 2005, , 432, 273 , A. J., [Diamond]{}, P. J., [Gonidakis]{}, I., [Mitra]{}, M., [Yim]{}, K., [Pan]{}, K., & [Chiang]{}, H. 2009, , 698, 1721 , S., [Purton]{}, C. R., & [Fitzgerald]{}, P. M. 1978, , 219, L125 , L. F., [G[ó]{}mez]{}, Y., [Anglada]{}, G., & [Torrelles]{}, J. M. 2001, , 414, 284 , J., & [Blackman]{}, E. G. 2006, , 370, 2004 , G. M., [Pashchenko]{}, M. I., & [Colom]{}, P. 
2010, Astronomy Reports, 54, 400 , L., [Zijlstra]{}, A. A., & [Greaves]{}, J. S. 2007, , 376, 378 , N. 1998, , 299, 1242 — 2002, , 336, 826 , M., [Cohen]{}, R. J., & [Richards]{}, A. M. S. 2001, , 371, 1012 , W. H. T. 2007, in IAU Symposium, edited by [J. M. Chapman & W. A. Baan]{}, vol. 242 of IAU Symposium, 37 , W. H. T., [Diamond]{}, P. J., & [Imai]{}, H. 2006, , 440, 58 , W. H. T., [Diamond]{}, P. J., & [van Langevelde]{}, H. J. 2002, , 394, 589 , W. H. T., [Humphreys]{}. E. M. L., & [Franco-Hern[á]{}ndez]{}, R. 2010, , submitted , W. H. T., [van Langevelde]{}, H. J., & [Diamond]{}, P. J. 2005, , 434, 1029
--- abstract: 'A non-minimal photon-torsion axial coupling in the quantum electrodynamics (QED) framework is considered. The geometrical optics in Riemann-Cartan spacetime is considered, and a plane wave expansion of the electromagnetic vector potential leads to a set of equations for the ray congruence. Since we are interested mainly in the torsion effects, in this first report we just consider the Riemann-flat case, composed of Minkowski spacetime with torsion. It is also shown that in a torsionic de Sitter background the vacuum polarisation does alter the propagation of individual photons, an effect which is absent in Riemannian spaces.' --- **[Non-Riemannian geometrical optics in QED]{}** [By L.C. Garcia de Andrade[^1]]{} Introduction ============ A renewed interest in nonlinear electrodynamics has recently been put forward in the papers of Novello and his group [@1; @2], especially concerning the investigation of a Born-Infeld electrodynamics in the context of general relativity (GR). The interesting feature of these non-linear electrodynamics, and of some Chern-Simons electrodynamics, is the fact, shown previously by de Sabbata and Gasperini [@3; @4] in a perturbative approach to QED, that one ends up with a generalized Maxwell equation with totally skew torsion. The photon-torsion perturbation obtained by this perturbative calculation allows the production of virtual pairs, which is the vacuum polarization effect of QED. In this paper we consider the non-minimal extension of QED, given previously in Riemannian spacetime by Drummond and Hathrell [@5], to Riemann-Cartan geometry. We should like to stress here that the photon-torsion coupling considered in the paper comes from the coupling of the Riemann-Cartan curvature tensor to the electromagnetic field tensor in the Lagrangean action term of the type $R_{ijkl}F^{ij}F^{kl}$, where $i,j=0,1,2,3$.
Therefore here we do not have the usual problem of the non-interaction between photons and torsion which appears in the usual Maxwell electrodynamics [@6]. The plan of the paper is as follows: In section II we consider the formulation of the Riemann-Cartan (RC) nonlinear electrodynamics and show that in the de Sitter case the vacuum polarisation does alter the propagation of individual photons. In section III the Riemann-flat case is presented and geometrical optics in non-Riemannian spacetime along with the ray equations are given. Section IV presents the conclusions and discussion. De Sitter torsioned spacetime and nonlinear electrodynamics =========================================================== Since the torsion effects are in general too weak, as can be seen from recent evaluations with K mesons (kaons) [@7] which yield $10^{-32} GeV$, we consider throughout the paper that second-order effects in torsion can be dropped from the formulas of electrodynamics and curvature. In this section we consider a simple cosmological application concerning the nonlinear electrodynamics in a de Sitter spacetime background. The Lagrangean used in this paper is obtained from the work of Drummond et al [@5] $$W= \frac{1}{m^{2}}\int{d^{4}x (-g)^{\frac{1}{2}}(aRF^{ij}F_{ij}+bR_{ik}F^{il}{F^{k}}_{l}+cR_{ijkl}F^{ij}F^{kl}+dD_{i}F^{ij}D_{k}{F^{k}}_{j})} \label{1}$$ The constants $a,b,c,d$ may be obtained by means of the conventional Feynman diagram techniques [@5].
The field equations obtained are [@5] $$D_{i}F^{ik}+\frac{1}{{m_{e}}^{2}}D_{i}[4aRF^{ik}+2b({R^{i}}_{l}F^{lk}-{R^{k}}_{l}F^{li})+4c{R^{ik}}_{lr}F^{lr}]=0 \label{2}$$ $$D_{i}F^{jk}+D_{j}F^{ki}+D_{k}F^{ij}=0 \label{3}$$ where $D_{i}$ is the Riemannian covariant derivative, $F^{ij}={\partial}^{i}A^{j}-{\partial}^{j}A^{i}$ is the electromagnetic field tensor non-minimally coupled to gravity, $A^{i}$ is the electromagnetic vector potential, $R$ is the Riemannian Ricci scalar, $R_{ik}$ is the Ricci tensor and $R_{ijkl}$ is the Riemann tensor. Before we apply it to the de Sitter model, let us consider several simplifications. The first concerns the fact that the photon is treated as a test particle, and the second concerns simplifications of the torsion field. The Riemann-Cartan curvature tensor is given by $${{R^{*}}^{ij}}_{kl}= {R^{ij}}_{kl}+ D^{i}{K^{j}}_{kl}-D^{j}{K^{i}}_{kl}+{[K^{i},K^{j}]}_{kl} \label{4}$$ where the last term shall be dropped since we are considering only first-order terms in the contortion tensor. Quantities with an upper asterisk represent RC geometrical quantities. We also consider only the axial part of the contortion tensor $K_{ijk}$, in the form $$K^{i}= {\epsilon}^{ijkl}K_{jkl} \label{5}$$ To simplify equation (\[2\]) we consider the expression for the Ricci tensor $${{R^{*}}^{i}}_{k} = {{R}^{i}}_{k}-{{\epsilon}^{i}}_{klm}D^{[l}K^{m]} \label{6}$$ where ${{\epsilon}^{i}}_{klm}$ is the totally skew-symmetric Levi-Civita symbol. By considering the axial torsion as coming from a dilaton field ${\phi}$ one obtains $$K^{i}= D^{i}{\phi} \label{7}$$ Substitution of expression (\[7\]) into formula (\[6\]) yields $${\partial}^{{[l}}K^{m]}=0 \label{8}$$ Thus expression (\[6\]) reduces to $${{R^{*}}^{i}}_{k} = {{R}^{i}}_{k} \label{9}$$ Therefore note that in the Riemann-flat case we shall be considering in the next section, $R_{ijkl}=0$ and ${{R^{*}}^{i}}_{k}=0$, which strongly simplifies the Maxwell-type equation.
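Equation (\[8\]) is just the statement that the curl of a gradient vanishes: with $K_{i}={\partial}_{i}{\phi}$, the antisymmetrized derivative ${\partial}_{[l}K_{m]}$ is the difference of two mixed second partials of ${\phi}$, which commute for any smooth dilaton. A quick numerical illustration (the test dilaton chosen below is of course arbitrary):

```python
import math

def phi(x, y, z):
    # arbitrary smooth test dilaton field
    return x**2 * y + math.sin(z) * y

def partial(f, args, i, h=1e-5):
    # central finite difference of f with respect to its i-th argument
    a_plus, a_minus = list(args), list(args)
    a_plus[i] += h
    a_minus[i] -= h
    return (f(*a_plus) - f(*a_minus)) / (2.0 * h)

def K(args, i):
    # axial torsion vector K_i = d_i phi
    return partial(phi, args, i)

p = (0.7, -1.2, 0.4)
for l in range(3):
    for m in range(3):
        # antisymmetrized derivative d_l K_m - d_m K_l
        curl = partial(lambda *a: K(a, m), p, l) - partial(lambda *a: K(a, l), p, m)
        assert abs(curl) < 1e-4, (l, m, curl)
print("d_[l K_m] = 0 for K = grad(phi): verified numerically")
```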
In this section we take the de Sitter curvature $${R}_{ijkl}= K(g_{ik}g_{jl}- g_{il}g_{jk}) \label{10}$$ Contraction of this expression yields $$R= K \label{11}$$ and substitution of these contractions into the Maxwell-like equation yields $$(1+2{\xi}^{2}K)D_{i}{F^{i}}_{k}={\epsilon}_{klmn}D^{i}K^{n}D_{i}F^{lm} \label{12}$$ Here ${\xi}^{2}=\frac{\alpha}{90{\pi}{m_{e}}^{2}}$, where $m_{e}$ is the electron mass and ${\alpha}$ is the fine structure constant. This equation shows that the vacuum polarisation alters the photon propagation in de Sitter spacetime with torsion. This result may provide interesting applications in cosmology, such as in the study of optical activity in cosmologies with torsion, as in Kalb-Ramond cosmology [@8]. Riemann-flat nonlinear torsionic electrodynamics ================================================ In this section we shall be concerned with the application of nonlinear electrodynamics with torsion in the Riemann-flat case, where the Riemann curvature tensor vanishes. In particular we shall investigate the non-Riemannian geometrical optics associated with it. Earlier, L.L. Smalley [@9] investigated the extension of Riemannian to non-Riemannian RC geometrical optics in the usual electrodynamics; nevertheless, in his approach it was not clear whether torsion could really couple to the photon. Since the metric considered here is the Minkowski metric ${\eta}_{ij}$, the Riemannian Christoffel connection vanishes and the Riemannian derivative operator $D_{k}$ shall be replaced in this section by the partial derivative operator ${\partial}_{k}$.
With these simplifications the Maxwell-like equation (\[2\]) becomes $${\partial}_{i}F^{ij}+{\xi}^{2}{R^{ij}}_{kl}{\partial}_{i}F^{kl}=0 \label{13}$$ which reduces to $${\partial}_{i}F^{ik}+{\xi}^{2}[{{\epsilon}^{k}}_{jlm}{\partial}^{i}K^{m}-{{\epsilon}^{i}}_{jlm}{\partial}^{k}K^{m}]{\partial}_{i}F^{jl}=0 \label{14}$$ We may also note that when the contortion is parallel transported, as in the last section, the equations reduce to the usual Maxwell equation $$D_{i}F^{il}=0 \label{15}$$ Since we are considering the non-minimal coupling, the Lorentz condition on the vector potential is given by $${\partial}_{i}A^{i}=0 \label{16}$$ With this usual Lorentz condition substituted into the Maxwell-like equation one obtains the wave equation for the electromagnetic vector potential $${\Box}A^{i}+{\xi}^{2}[{{\epsilon}^{k}}_{jlm}{\partial}^{i}K^{m}-{{\epsilon}^{i}}_{jlm}{\partial}^{k}K^{m}]{\partial}_{k}{\partial}^{j}A^{l}=0 \label{17}$$ Now, to obtain the equations of Riemann-Cartan geometrical optics based on the nonlinear electrodynamics considered here, we consider the plane wave expansion $$A^{i}=Re[(a^{i}+{\epsilon}b^{i}+c^{i}{\epsilon}^{2}+...)e^{i\frac{\theta}{\epsilon}}] \label{18}$$ Substitution of this plane wave expansion into the Lorentz gauge condition yields, to lowest order, the usual orthogonality condition between the wave vector $k_{i}={\partial}_{i}{\theta}$ and the amplitude $a^{i}$, given by $$k^{i}a_{i}= 0 \label{19}$$ Note that by considering the complex polarisation given by $a^{i}=af_{i}$, expression (\[19\]) reduces to $$k_{i}f^{i}=0 \label{20}$$ At the next order one obtains $${\partial}_{i}k^{i}= \frac{{\xi}^{2}}{a^{2}}{\epsilon}^{ijkl}a_{i}b_{k}{k_{j}}^{,r}{\partial}_{r}K_{l} \label{21}$$ This equation describes the expansion or focusing of the ray congruence and the influence of contortion inhomogeneity on it.
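The way condition (\[19\]) emerges from the gauge condition can be seen in a toy numerical check: for $A^{i}=Re[a^{i}e^{i{\theta}/{\epsilon}}]$ with constant amplitude and linear phase, the divergence is proportional to $(k_{i}a^{i})/{\epsilon}$, so it blows up as ${\epsilon}\rightarrow0$ unless $k_{i}a^{i}=0$. The sketch below is a flat 2D Euclidean analogue (no Minkowski signs); the vectors chosen are arbitrary:

```python
import math

def divergence(a, k, x, eps, h=1e-6):
    """Central-difference divergence of A^i = Re[a^i exp(i theta/eps)], theta = k.x."""
    def A(i, pt):
        theta = sum(kj * xj for kj, xj in zip(k, pt))
        return a[i] * math.cos(theta / eps)
    div = 0.0
    for i in range(2):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        div += (A(i, xp) - A(i, xm)) / (2.0 * h)
    return div

x = [0.3, 0.8]
k = [1.0, 2.0]
a_perp = [2.0, -1.0]   # k.a = 0: divergence stays bounded as eps -> 0
a_bad  = [1.0, 1.0]    # k.a = 3: divergence grows like 1/eps
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, divergence(a_perp, k, x, eps), divergence(a_bad, k, x, eps))
```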
The last equation is $$k^{i}k_{i}=-\frac{{\xi}^{2}}{a^{2}}{\epsilon}^{ijkl}a_{i}[a_{j}b_{k}-a_{k}b_{j}]k^{r}{\partial}_{r}K_{l} \label{22}$$ Note, however, that the RHS of (\[22\]) vanishes identically, since the symmetric product of $a$-vectors is contracted with the skew-symmetric Levi-Civita symbol, and we are finally left with the null vector condition $k_{j}k^{j}=0$. Discussion and conclusions ========================== The geometrical optics discussed in the last section allows us to build models to test torsion effects on gravitational optical phenomena such as gravitational lensing and optical activity. Moreover, the geometrical optics investigated in the last section could be reproduced in the case of de Sitter cosmology. This approach may be considered in the near future. Acknowledgements {#acknowledgements .unnumbered} ================ I would like to express my gratitude to Prof. M. Novello for helpful discussions on the subject of this paper. Financial support from CNPq is gratefully acknowledged. [9]{} M. Novello and J. M. Salim, Phys. Rev. D (2000). M. Novello and J. M. Salim, Phys. Rev. D (1979). V. de Sabbata and M. Gasperini, Introduction to Gravitation (1980) World Scientific. V. de Sabbata and C. Sivaram, Spin and Torsion in Gravitation (1995) World Scientific. I. T. Drummond and S. J. Hathrell, Phys. Rev. D (1980) 22, 343. L. C. Garcia de Andrade, Gen. Rel. and Gravitation (1990) 622. U. Mahanta and P. Das, Torsion constraints from the recent measurement of the muon anomaly (2002) hep-th/0205278. S. Kar, P. Majumdar, S. SenGupta and S. Sur, Cosmic optical activity from an inhomogeneous Kalb-Ramond field, arXiv hep-th/0109135 v1. L. Smalley, Phys. Lett. 117 A (1986) 267. [^1]: Departamento de Física Teórica - IF - UERJ - Rua São Francisco Xavier 524, Rio de Janeiro, RJ, Maracanã, CEP:20550. e-mail: [email protected]
--- abstract: 'The [*schematic CERES method*]{} [@CERESS2] is a recently developed method of cut elimination for [*proof schemata*]{}, that is, sequences of proofs with a recursive construction. Proof schemata can be thought of as a way to circumvent adding an induction rule to the **LK**-calculus. In this work, we formalize a schematic version of the [*infinitary pigeonhole principle*]{}, which we call the Non-injectivity Assertion schema (NiA-schema), in the **LKS**-calculus [@CERESS2], and analyse the clause set schema extracted from the NiA-schema using some of the structure provided by the schematic CERES method. To the best of our knowledge, this is the first application of the constructs built for proof analysis of proof schemata to a mathematical argument since its publication. We discuss the role of [*Automated Theorem Proving*]{} (ATP) in schematic proof analysis, as well as the shortcomings of the schematic CERES method concerning the formalization of the NiA-schema, namely, the expressive power of the [*schematic resolution calculus*]{}. We conclude with a discussion concerning the usage of ATP in schematic proof analysis.' author: - David Cerna - Alexander Leitsch bibliography: - 'references.bib' subtitle: '\[Extended Paper\]' title: 'Analysis of Clause set Schema Aided by Automated Theorem Proving: A Case Study' --- Introduction {#sec:Introduction} ============ In Gentzen’s [*Hauptsatz*]{} [@Gentzen1935], a sequent calculus for first order logic was introduced, namely, the **LK**-calculus. He then went on to show that the [*cut*]{} inference rule is redundant and, in doing so, was able to show the consistency of the calculus. The method he developed for eliminating cuts from **LK**-derivations works by inductively reducing the cuts in a given **LK**-derivation to cuts which either have a reduced [*formula complexity*]{} and/or reduced [*rank*]{} [@prooftheory]. This method of cut elimination is known as [*reductive cut elimination*]{}. 
A useful result of cut elimination for the **LK**-calculus is that cut-free **LK**-derivations have the [*subformula property*]{}, i.e. every formula occurring in the derivation is a subformula of some formula in the end sequent. This property allows for the construction of [*Herbrand sequents*]{} and other objects which are essential in proof analysis. However, eliminating cuts from **LK**-derivations does have its disadvantages, mainly concerning the number of computation steps needed and the size of the final cut-free proof. As pointed out by George Boolos in “Don’t eliminate cut” [@Dontelimcut], sometimes the elimination of cut inference rules from an **LK**-proof can result in a non-elementary explosion in the size of the proof. On the other hand, using cut elimination it is also possible to gain mathematical knowledge concerning the connection between different proofs of the same theorem. For example, Jean-Yves Girard’s application of reductive cut elimination to a variation of Fürstenberg-Weiss’ proof of Van der Waerden’s theorem [@ProocomWaerdens1987] resulted in the [*analytic*]{} proof of Van der Waerden’s theorem as found by Van der Waerden himself. From the work of Girard, it is apparent that interesting results can be derived from eliminating cuts in “mathematical” proofs. A more recently developed method of cut elimination, the CERES method [@CERES], provides the theoretic framework to directly study the cut structure of **LK**-derivations, and in the process reduces the computational complexity of deriving a cut-free proof. The cut structure is transformed into a clause set allowing for clausal analysis of the resulting clause form. Methods of reducing clause set complexity, such as [*subsumption*]{} and [*tautology elimination*]{}, can be applied to the characteristic clause set. 
It was shown by Baaz & Leitsch in “Methods of cut Elimination” [@Baaz:2013:MC:2509679] that this method of cut elimination has a [*non-elementary speed-up*]{} over reductive cut elimination. In the same spirit as Girard’s work, Baaz et al. [@Baaz:2008:CAF:1401273.1401552] applied the CERES method to a formalized mathematical proof. At the time of applying the method to Fürstenberg’s proof of the infinitude of primes, the CERES method had been generalized to [*higher-order logic*]{} [@CERESHIGH] and an attempt was made to apply this generalized method to the formal version of Fürstenberg’s proof. However, the tremendous complexity of the higher-order clause set [^1] suggested the use of an alternative method. Instead of formalizing the proof as a single higher-order proof, it was formalized as a sequence of first-order proofs enumerated by a single numeric parameter, which indexes the number of primes assumed to exist. The resulting schema of clause sets was refuted by a resolution schema resulting in Euclid’s argument for prime construction. The resulting specification was produced on the mathematical meta-level. At that time no object-level construction of the refutation schema existed. A mathematical formalization of Fürstenberg’s proof requires induction. In the higher-order formalization, induction is easily formalized as part of the formula language. However, in first-order logic an induction rule needs to be added to the **LK**-calculus. As shown in [@CERESS2], reductive cut elimination does not work in the presence of an induction rule in the **LK**-calculus. Also, other systems [@Mcdowell97cut-eliminationfor] which provide cut elimination in the presence of an induction rule do so at the loss of some essential properties, for example the subformula property. 
In “Cut-Elimination and Proof Schemata” [@CERESS2], a version of the **LK**-calculus was introduced (the **LKS**-calculus) allowing for the formalization of sequences of proofs as a single object-level construction, i.e. the [*proof schema*]{}, as well as a framework for performing cut elimination on proof schemata. Cut elimination performed within the framework of [@CERESS2] results in cut-free proof schemata with the subformula property. Essentially, the concepts found in [@CERES] were generalized to handle recursively defined proofs. It was shown in [@CERESS2] that [*schematic*]{} characteristic clause sets are always unsatisfiable, but it is not known whether a given schematic characteristic clause set will have a refutation expressible within the language provided for the resolution refutation schema. This gap distinguishes the schematic version of the CERES method from the previously developed versions. In this work, we continue the tradition outlined above of providing a case study of an application of a “new” method of cut elimination to a mathematical proof. Though our example is less grand than the previously chosen proof, it provides an example of a particularly hard single-parameter induction. We chose the [*tape proof*]{}, found in [@TAPEPROOFNOEQ; @tapeproofpaper; @TapeproofEX2], and generalize it by considering a codomain of size $n$ rather than of size two. A well-known variation of our generalization has been heavily studied in the literature under the guise of the [*Pigeonhole Principle*]{} (PHP). Our generalization will be referred to as the [*Non-injectivity Assertion*]{} (NiA). Though such a proof seems straightforward to formalize within the **LKS**-calculus, without a change to the construction used in [@tapeproofpaper] there was a forced [*eigenvariable*]{} violation. After formalizing the NiA as a proof schema (the NiA-schema) we apply the schematic CERES method. 
In our attempt to construct an ACNF schema [@CERESS2] we heavily use Automated Theorem Provers (ATP), specifically SPASS [@SpassProver], to develop the understanding needed for the construction of such a schema. SPASS was used over other theorem provers mainly due to familiarity. How theorem provers were used in our attempt to construct an ACNF schema will be an emphasis of this work. As an end result, we were able to “mathematically” express an ACNF schema of the NiA-schema to a great enough extent to produce instances of the ACNF in the **LK**-calculus, in a similar way as in the Fürstenberg proof analysis [@Baaz:2008:CAF:1401273.1401552]. Though, in our case, we have a refutation for every instance (only the first few were found in [@Baaz:2008:CAF:1401273.1401552]). It remains an open problem whether a more expressive language is needed to express the ACNF of the NiA-schema in the framework of [@CERESS2]. We conjecture that ATP will play an important role in resolving this question as well as in future proof analysis using the schematic CERES method. The paper is structured as follows: In Sec. \[sec:SCERES\], we introduce the **LKS**-calculus and the essential concepts from [@CERESS2] concerning the schematic clause set analysis. In Sec. \[sec:MathNiA\] & \[sec:FormNiA\], we formalize the NiA-schema in the **LKS**-calculus. In Sec. \[sec:CCSSE\], we extract the characteristic clause set from the NiA-schema and perform [*normalization*]{} and tautology elimination. In Sec. \[sec:ATPANAL\], we analyse the extracted characteristic clause set with the aid of SPASS. In Sec. \[sec:refuteset\], we provide a (“mathematically defined”) ACNF schema of the extracted characteristic clause set. In Sec. \[sec:Conclusion\], we conclude the paper and discuss future work. 
The **LKS**-calculus and Clause set Schema {#sec:SCERES} ========================================== In this section we introduce the **LKS**-calculus, which will be used to formalize the NiA-schema, and the parts of the schematic CERES method concerned with characteristic clause set extraction. We refrain from introducing the resolution refutation calculus provided in [@CERESS2] because it does not particularly concern the work of this paper. Though we provide a resolution refutation of the characteristic clause set of the NiA-schema, there is good reason to believe the constructed resolution refutation is outside the expressive power of the current schematic resolution refutation calculus. More specifically, the provided resolution refutation grows as a function of the free parameter $n$ with respect to a constant change in depth, i.e. it grows wider faster than it grows deep. For more detail concerning the schematic CERES method, see [@CERESS2]. Schematic language, proofs, and the **LKS**-calculus ---------------------------------------------------- The **LKS**-calculus is based on the **LK**-calculus constructed by Gentzen [@Gentzen1935]. When one grounds the [*parameter*]{} indexing an **LKS**-derivation, the result is an **LK**-derivation [@CERESS2]. The term language used is extended to accommodate the schematic constructs of **LKS**-derivations. We work in a two-sorted setting containing a [*schematic sort*]{} $\omega$ and an [*individual sort*]{} $\iota$. The schematic sort only contains numerals constructed from the constant $0:\omega$, a monadic function $s(\cdot):\omega \rightarrow \omega$ and a single free variable, the free parameter indexing **LKS**-derivations, which we represent by $n$. The individual sort is constructed in a similar fashion to the standard first order language [@prooftheory] with the addition of schematic functions. 
Thus, $\iota$ contains countably many constant symbols, countably many [*constant function symbols*]{}, and [*defined function symbols*]{}. The constant function symbols are part of the standard first order language and the defined function symbols are used for schematic terms. Defined function symbols can also unroll to numerals and thus can be of type $\omega^n \to \omega$. The $\iota$ sort also has [*free*]{} and [*bound*]{} variables and an additional concept, [*extra variables*]{} [@CERESS2]. These are variables introduced during the unrolling of defined function ([*predicate*]{}) symbols. We do not use extra variables in the formalization of the NiA-schema, but they are essential for the refutation of the characteristic clause set. Also important are the [*schematic variable symbols*]{}, which are variables of type $\omega \rightarrow \iota$. These are essentially second-order variables, though when evaluated at a [*ground term*]{} of the $\omega$ sort we treat them as first-order variables. Our terms are built inductively using constants and variables as a base. Formulae are constructed inductively using countably many [*constant predicate symbols*]{} (atomic formulae), logical operators $\vee$,$\wedge$,$\rightarrow$,$\neg$,$\forall$, and $\exists$, as well as [*defined predicate symbols*]{} which are used to construct schematic formulae. 
In this work, [*iterated $\bigvee$*]{} is the only defined predicate symbol used; it has the following term algebra: $$\label{eq:one} \varepsilon_{\vee}= \bigvee_{i=0}^{s(y)} P(i) \equiv \left\lbrace \begin{array}{c} {\displaystyle \bigvee_{i=0}^{s(y)} P(i) \Rightarrow \bigvee_{i=0}^{y} P(i) \vee P(s(y)) }\\ {\displaystyle \bigvee_{i=0}^{0} P(i) \Rightarrow P(0)} \end{array}\right.$$ From the above described term and formula language we can provide the inference rules of the **LKE**-calculus, essentially the **LK**-calculus [@prooftheory] plus an equational theory $\varepsilon$ (in our case $\varepsilon_{\vee}$, Eq. \[eq:one\]). This theory, concerning our particular usage, is a primitive recursive term algebra describing the structure of the defined function (predicate) symbols. The **LKE**-calculus is the base calculus for the **LKS**-calculus, which also includes [*proof links*]{}, to be described shortly. In the $\varepsilon$ inference rule, the term $t$ in the sequent $S$ is replaced by a term $t'$ such that, given the equational theory $\varepsilon$, $\varepsilon \models t = t'$. To extend the **LKE**-calculus with proof links we need a countably infinite set of [*proof symbols*]{} denoted by $\varphi, \psi,\varphi_{i}, \psi_{j} \ldots$. Let $S(\bar{x})$ be a sequent with schematic variables $\bar{x}$; then by $S(\bar{t})$ we denote the sequent $S(\bar{x})$ where each variable in $\bar{x}$ is replaced by the corresponding term of the vector $\bar{t}$, assuming that the types match. Let $\varphi$ be a proof symbol and $S(\bar{x})$ a sequent; then the expression is called a [*proof link*]{}. For a variable $n:\omega$, proof links such that the only arithmetic variable is $n$ are called [*$n$-proof links*]{}. The sequent calculus $\mathbf{LKS}$ consists of the rules of $\mathbf{LKE}$, where proof links may appear at the leaves of a proof. 
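The unrolling of the defined symbol $\bigvee$ prescribed by $\varepsilon_{\vee}$ is purely mechanical; the following Python sketch (string terms, our own ad-hoc notation) illustrates it:

```python
def big_or(n):
    """Unroll the defined symbol  V_{i=0}^{n} P(i)  following eps_V:
       V_{i=0}^{s(y)} P(i) => V_{i=0}^{y} P(i) v P(s(y));  V_{i=0}^{0} P(i) => P(0)."""
    if n == 0:
        return "P(0)"
    # step case: peel off the topmost disjunct P(s(y))
    return f"({big_or(n - 1)} | P({n}))"

print(big_or(3))  # (((P(0) | P(1)) | P(2)) | P(3))
```

Note the left-nested association: each unrolling step appends the new disjunct on the right, exactly as in the step case of $\varepsilon_{\vee}$.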
\[def.proofschema\] Let $\psi$ be a proof symbol and $S(n,\bar{x})$ be a sequent such that $n:\omega$. Then a [*proof schema pair for $\psi$*]{} is a pair of $\mathbf{LKS}$-proofs $(\pi,\nu(k))$ with end-sequents $S(0,\bar{x})$ and $S(k+1,\bar{x})$ respectively, such that $\pi$ may not contain proof links and $\nu(k)$ may contain only proof links of the form and we say that it is a proof link to $\psi$. We call $S(n,\bar{x})$ the end sequent of $\psi$ and assume an identification between the formula occurrences in the end sequents of $\pi$ and $\nu(k)$ so that we can speak of occurrences in the end sequent of $\psi$. Finally, a proof schema $\Psi$ is a tuple of proof schema pairs for $\psi_1 , \cdots \psi_\alpha$ written as $\left\langle \psi_1 , \cdots \psi_\alpha \right\rangle$, such that the $\mathbf{LKS}$-proofs for $\psi_{\beta}$ may also contain $n$-proof links to $\psi_{\gamma}$ for $1\leq \beta < \gamma\leq \alpha$. We also say that the end sequent of $\psi_1$ is the end sequent of $\Psi$. We will not dive further into the structure of proof schemata and instead refer the reader to [@CERESS2]. We now introduce the [*characteristic clause set schema*]{}. Characteristic Clause set Schema -------------------------------- The construction of the characteristic clause set as described for the CERES method [@CERES] required inductively following the formula occurrences of cut-formula ancestors up the proof tree to the leaves. However, in the case of proof schemata, the concept of ancestors and formula occurrences is more complex. A formula occurrence might be an ancestor of a cut formula in one recursive call and in another it might not. Additional machinery is necessary to extract the characteristic clause term from proof schemata. A set $\Omega$ of formula occurrences from the end-sequent of an LKS-proof $\pi$ is called [*a configuration for $\pi$*]{}. A configuration $\Omega$ for $\pi$ is called relevant w.r.t. 
a proof schema $\Psi$ if $\pi$ is a proof in $\Psi$ and there is a $\gamma \in \mathbb{N}$ such that $\pi$ induces a subproof $\pi$ of $\Psi \downarrow \gamma$ such that the occurrences in $\Omega$ correspond to cut-ancestors below $\pi$ [@thesis2012Tsvetan]. Note that the set of relevant cut-configurations can be computed given a proof schema $\Psi$. To represent a proof symbol $\varphi$ and configuration $\Omega$ pairing in a clause set we assign them a [*clause set symbol*]{} $cl^{\varphi,\Omega}(a,\bar{x})$, where $a$ is an arithmetic term. \[def:charterm\] Let $\pi$ be an $\mathbf{LKS}$-proof and $\Omega$ a configuration. In the following, by $\Gamma_{\Omega}$ , $\Delta_{\Omega}$ and $\Gamma_{C}$ , $\Delta_{C}$ we will denote multisets of formulas of $\Omega$- and $cut$-ancestors respectively. Let $r$ be an inference in $\pi$. We define the clause-set term $\Theta_r^{\pi,\Omega}$ inductively: - if $r$ is an axiom of the form $\Gamma_{\Omega} ,\Gamma_C , \Gamma \vdash \Delta_{\Omega} ,\Delta_C , \Delta$, then\ $\Theta_{r}^{\pi,\Omega} = \left\lbrace \Gamma_{\Omega} ,\Gamma_C \vdash \Delta_{\Omega} ,\Delta_C \right\rbrace $ - if $r$ is a proof link of the form then define $\Omega'$ as the set of formula occurrences from $\Gamma_{\Omega} ,\Gamma_C \vdash \Delta_{\Omega} ,\Delta_C$ and $\Theta_{r}^{\pi,\Omega} = cl^{\psi,\Omega}(a,\bar{u})$ - if $r$ is a unary rule with immediate predecessor $r'$ , then $\Theta_{r}^{\pi,\Omega} = \Theta_{r'}^{\pi,\Omega}$ - if $r$ is a binary rule with immediate predecessors $r_1 $, $r_2 $, then - if the auxiliary formulas of $r$ are $\Omega$- or $cut$-ancestors, then $\Theta_{r}^{\pi,\Omega} = \Theta_{r_1}^{\pi,\Omega} \oplus \Theta_{r_2}^{\pi,\Omega}$ - otherwise, $\Theta_{r}^{\pi,\Omega} = \Theta_{r_1}^{\pi,\Omega} \otimes \Theta_{r_2}^{\pi,\Omega}$ Finally, define $\Theta^{\pi,\Omega} = \Theta_{r_0}^{\pi,\Omega}$ where $r_0$ is the last inference in $\pi$ and $\Theta^{\pi} = \Theta^{\pi,\emptyset}$. 
We call $\Theta^{\pi}$ the characteristic term of $\pi$. Clause terms evaluate to sets of clauses by $|\Theta| = \Theta$ for clause sets $\Theta$, $|\Theta_1 \oplus \Theta_2| = |\Theta_1| \cup |\Theta_2|$, $|\Theta_1 \otimes \Theta_2| = \{C \circ D \mid C \in |\Theta_1|, D \in |\Theta_2|\}$. The characteristic clause term is extracted for each proof symbol in a given proof schema $\Psi$, and together they make the characteristic clause set schema for $\Psi$, $CL(\Psi)$. “Mathematical” proof of the NiA Statement {#sec:MathNiA} ========================================= In this section we provide a mathematical proof of the NiA statement (Thm. \[thm:finalpart\]). The proof is very close in structure to the formal proof written in the **LKS**-calculus, which can be found in Sec. \[sec:FormNiA\]. We skip the basic steps of the proof and outline its structure, emphasising the cuts. We will refer to the interval $\left\lbrace 0, \cdots, n-1 \right\rbrace $ as $\mathbb{N}_{n}$. Let $rr_{f}(n)$ be the following sentence, for $n\geq 2$: There exists $p,q \in \mathbb{N}$ such that $p < q$ and $f(p) = f(q)$, or for all $x \in \mathbb{N}$ there exists a $y \in \mathbb{N}$ such that $x\leq y$ and $f(y)\in \mathbb{N}_{n-1}$. \[lem:Inducbase\] Let $f:\mathbb{N} \rightarrow \mathbb{N}_{n}$, where $n\in \mathbb{N}$, be total; then $rr_{f}(n)$ or there exists $p,q \in \mathbb{N}$ such that $p < q$ and $f(p) = f(q)$. We can split the codomain into $\mathbb{N}_{n-1}$ and $\left\lbrace n \right\rbrace$, or the codomain is $\left\lbrace 0 \right\rbrace$. \[lem:inDucstep\] Let $f$ be a function as defined in Lem. \[lem:Inducbase\] and $2< m\leq n$; then if $rr_{f}(m)$ holds so does $rr_{f}(m-1)$. Apply the steps of Lem. \[lem:Inducbase\] to the right side of the [*or*]{} in $rr_{f}(m)$. \[thm:finalpart\] Let $f$ be a function as defined in Lem. \[lem:Inducbase\], then there exists $i,j \in \mathbb{N}$ such that $i<j$ and $f(i) = f(j)$. Chain together the implications of Lem. 
\[lem:inDucstep\] and derive $rr_{f}(2)$; the rest is trivial by Lem. \[lem:Inducbase\]. This proof makes clear that the number of cuts needed to prove the statement is parametrized by the size of the codomain of the function $f$. The formal proof of the next section spells out more of the basic assumptions, as they are needed for constructing the characteristic clause set. NiA formalized in the **LKS**-calculus {#sec:FormNiA} ====================================== In this section we provide a formalization of the NiA-schema whose proof schema representation is $\left\langle (\omega(0),\omega(n+1)),(\psi(0),\psi(n+1)) \right\rangle$. Cut-ancestors will be marked with a $^*$ and $\Omega$-ancestors with $^{**}$. Numerals (terms of the $\omega$ sort) will be marked with $\overline{\cdot}$. We will make the following abbreviations: $ EQ_{f} \equiv \exists p \exists q( p < q \wedge f(p)= f(q))$, $I(\overline{n}) \equiv \forall x \exists y ( x\leq y \wedge \bigvee_{i=\overline{0}}^{\overline{n}} f(y) = \overline{i})$, $I_s(\overline{n}) \equiv \forall x \exists y ( x\leq y \wedge f(y) = \overline{n})$ and $AX_{eq}(\overline{n}) \equiv f(\beta) = \overline{n}^{*} ,f(\alpha) = \overline{n}^{*} \vdash f(\beta)= f(\alpha)$ (the parts of $AX_{eq}(\overline{n})$ marked as cut ancestors are always cut ancestors in the NiA-schema). Characteristic Clause set Schema Extraction {#sec:CCSSE} ============================================ The outline of the formal proof provided above highlights the inference rules which directly influence the characteristic clause set schema of the NiA-schema. Also to note are the configurations of the NiA-schema which are relevant, namely, the empty configuration $\emptyset$ and a schema of configurations $\Omega(\overline{n}) \equiv \forall x \exists y ( x\leq y \wedge \bigvee_{i=\overline{0}}^{\overline{n}} f(y) = \overline{i})$. 
Thus, we have the following: \[seq:charclaset\] $$CL_{NiA}(0)\equiv \Theta^{\omega,\emptyset}(0)\equiv \left\lbrace \left( cl^{\psi,\Omega(\overline{0})}(\overline{0})\oplus \vdash \alpha\leq \alpha \right)\oplus \vdash f(\alpha)=\overline{0} \right\rbrace$$ $$cl^{\psi,\Omega(\overline{0})}(\overline{0}) \equiv\Theta^{\psi,\Omega(\overline{0})}(0) \equiv\left\lbrace s(\beta)\leq \alpha \vdash \otimes f(\alpha)=\overline{0}, f(\beta)=\overline{0}\vdash \right\rbrace$$ $${\scriptstyle CL_{NiA}(\overline{n+1})\equiv \Theta^{\omega,\emptyset}(\overline{n+1})\equiv \left\lbrace \left( cl^{\psi,\Omega(\overline{n+1})}(\overline{n+1})\oplus \vdash \alpha\leq \alpha \right)\oplus \vdash \bigvee_{i=\overline{0}}^{\overline{n+1}} f(\alpha)=\overline{i} \right\rbrace }$$ $$\begin{array}{c} {\scriptstyle cl^{\psi,\Omega(\overline{n+1})}(\overline{n+1})\equiv \Theta^{\psi,\Omega(\overline{n+1})}(\overline{n+1})\equiv \left\lbrace \left( cl^{\psi,\Omega(\overline{n})}(\overline{n}) \oplus \left( s(\beta)\leq \alpha \vdash \otimes f(\alpha)= \overline{n+1}, f(\beta)=\overline{n+1}\vdash \right) \right) \right. } \\ {\scriptstyle \left. \oplus \left( max(\alpha,\beta)\leq \gamma \vdash \alpha \leq \gamma \right) \oplus \left(max(\alpha,\beta)\leq \gamma \vdash \beta \leq \gamma \right) \right\rbrace } \end{array}$$ In the characteristic clause set schema $CL_{NiA}(\overline{n+1})$ presented in Eq.\[seq:charclaset\] tautologies are already eliminated. 
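The operations $\oplus$ and $\otimes$ appearing in these clause-set terms evaluate to set union and pairwise clause merge, respectively (Sec. \[sec:SCERES\]). A small Python sketch, with clauses encoded as pairs of frozensets (our own encoding, purely illustrative):

```python
def oplus(S1, S2):
    """|T1 (+) T2| = |T1| union |T2|"""
    return S1 | S2

def otimes(S1, S2):
    """|T1 (x) T2| = { C o D : C in |T1|, D in |T2| },
       where C o D merges antecedents and succedents componentwise."""
    return {(c_ant | d_ant, c_suc | d_suc)
            for (c_ant, c_suc) in S1 for (d_ant, d_suc) in S2}

# clauses as (antecedent, succedent) pairs of frozensets of atoms
C = {(frozenset({"A"}), frozenset())}            # A |-
D = {(frozenset(), frozenset({"B"})),            # |- B
     (frozenset({"C"}), frozenset())}            # C |-
print(len(oplus(C, D)), len(otimes(C, D)))       # 3 2
```

Here `oplus` simply collects both clause sets, while `otimes` forms all merged clauses, e.g. `A |-` combined with `|- B` yields `A |- B`.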
[*Evaluation*]{} of $CL_{NiA}(\overline{n+1})$ yields the following clause set $C(n)$: $$\begin{array}{l} (C1)\ \vdash \alpha \leq \alpha,\ (C2)\ max(\alpha, \beta) \leq \gamma \vdash \alpha \leq \gamma,\ (C3)\ max(\alpha, \beta) \leq \gamma \vdash \beta \leq \gamma \\ (C4_{0})\ f(\beta) = \overline{0} , f(\alpha) = \overline{0} , s(\beta) \leq \alpha \vdash \\ \ldots \ldots\\ \ldots \ldots \\ (C4_{n})\ f(\beta) = \overline{n} , f(\alpha) = \overline{n} , s(\beta) \leq \alpha \vdash \\ (C5)\ \vdash f(\alpha) = \overline{0} , \cdots , f(\alpha) = \overline{n} \end{array}$$ Clausal Analysis Aided by ATP {#sec:ATPANAL} ============================= The result of characteristic clause set extraction for proof schemata is a sequence of clause sets representing the cut structure (see Sec. \[sec:CCSSE\]), rather than a single clause set. Thus, unlike applications of the first-order CERES method to formal proofs [@tapeproofpaper], where a theorem prover is used exclusively to find a refutation, we can only rely on theorem provers for suggestions. Essentially, we need the theorem provers to help with the construction of two elements of the schematic resolution refutation: the induction invariants and the term language. For this clause set analysis, we exclusively used SPASS [@SpassProver] in the “out of the box mode”. We did not see a point in tuning the configuration of SPASS, since for sufficiently small instances of $C(n)$ it found a refutation, and our goal was not to find an elegant proof using the theorem prover, but rather a refutation with the aid of the theorem prover; the “out of the box mode” was enough for this goal[^2]. As a side note, the refutations found by SPASS were not the smallest: the resolution refutation that SPASS gave as output for $C(4)$[^3] used $(C5)$ in the refutation tree 1806 times, while the resolution refutation we provide uses $(C5)$ only 65 times. 
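For experimenting with provers on instances of $C(n)$, the clause set can be generated mechanically. A Python sketch follows; the string encoding of the predicates (`le`, `eq`, `max`, `s`, `f`) mirrors the SPASS output quoted below but is our own ad-hoc notation, not SPASS input syntax:

```python
def clause_set(n):
    """Generate C(n) as a list of (antecedent, succedent) pairs of atom strings.
       Variables X, Y, Z are implicitly universally quantified per clause."""
    cls = [
        ([], ["le(X,X)"]),                                     # (C1)
        (["le(max(X,Y),Z)"], ["le(X,Z)"]),                     # (C2)
        (["le(max(X,Y),Z)"], ["le(Y,Z)"]),                     # (C3)
    ]
    for i in range(n + 1):                                     # (C4_0) .. (C4_n)
        cls.append(([f"eq(f(Y),{i})", f"eq(f(X),{i})", "le(s(Y),X)"], []))
    cls.append(([], [f"eq(f(X),{i})" for i in range(n + 1)]))  # (C5)
    return cls

print(len(clause_set(4)))  # 9 clauses: C1-C3, C4_0..C4_4, C5
```

Instances such as `clause_set(4)` correspond to the clause set $C(4)$ whose SPASS refutation is discussed below.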
Our final refutation is not wildly different; however, SPASS ended up deriving clauses from derived clauses which could easily have been obtained from the initial clause set. An essential feature we were looking for in the refutations found by SPASS was sequences of clauses which mimic the step-case construction of the induction axiom, i.e. $\forall x (\varphi(x) \rightarrow \varphi(x+1))$. An example of such a sequence from the refutation of $C(4)$, which will be the basis of Thm. \[thm:refofC\], is as follows: --------------------------------------------------------------------------------------------------- $1[0:Inp] \ \|\| \ \Rightarrow \ eq(f(U),3) \ , \ eq(f(U),2) \ , \ eq(f(U),1) \ , \ eq(f(U),0)*$ $2795[0:MRR:1.3,2764.0] \ \|\| \ \Rightarrow \ eq(f(U),3) \ , \ eq(f(U),2) \ , \ eq(f(U),1)$ $3015[0:MRR:2795.2,2984.0] \ \|\| \ \Rightarrow \ eq(f(U),3) \ , \ eq(f(U),2)$ $3096[0:MRR:3015.1,3065.0] \ \|\| \ \Rightarrow \ eq(f(U),3)$ --------------------------------------------------------------------------------------------------- Essentially, if we were to interpret the initial clause as defining a function (a function whose domain is the natural numbers and whose codomain is the set $\left[0,n \right]$), we see that at first we assume the function has a codomain of size $n$, and then we derive that it cannot have a codomain of size $n$, but rather of size $n-1$, and so on, until we derive that its codomain is empty, contradicting the original assumption that the codomain is non-empty (i.e. clause $(C5)$). This pattern can be found in other instances of the refutation of $C(n)$. This sequence seems to be an essential part, perhaps even the only part, needed to define a recursive refutation of $C(n)$, though only if $C(n)$ is refutable with a [*total induction*]{}; such a refutation has not been found and is unlikely to exist. 
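The step-case pattern above, which peels one codomain value off clause $(C5)$ per round, can be phrased as a toy loop. The sketch below assumes the unit clauses $f(\alpha)=\overline{i} \vdash$ have already been derived for each $i$ (which, as discussed next, is the hard part); it is an illustration in our own notation, not SPASS output:

```python
def eliminate(n):
    """Mimic the MRR sequence of the SPASS run: start from clause (C5) and,
       given a unit clause  eq(f(U),i) |-  for each i, resolve the codomain
       values away one at a time (smallest value first, as SPASS did)."""
    succedent = [f"eq(f(U),{i})" for i in range(n + 1)]   # clause (C5)
    steps = [list(succedent)]
    for i in range(n + 1):
        succedent.remove(f"eq(f(U),{i})")                 # resolve with eq(f(U),i) |-
        steps.append(list(succedent))
    return steps

print(eliminate(3)[-1])  # [] : the empty clause after n+1 rounds
```

Each intermediate step of `eliminate(3)` corresponds to one line of the displayed SPASS sequence for $C(4)$, ending in the empty clause.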
Something which is not completely apparent in the SPASS refutations for $C(n)$, $n<4$, is the gap (in numbering) between clause $1$ and clause $2795$ in Fig. \[fig:seqone\]. To derive clause $2795$ from clause $1$ in one step we need to first derive the following clause: $$2764[0:MRR:2714.1,2749.1] \ \|\| \ eq(f(U),0)* \ \Rightarrow$$ which is almost as difficult to derive as the sequence of Fig. \[fig:seqone\]. Essentially, to derive clause $2764$, the SPASS refutation alludes to the need for an inner recursion bounded by the outer recursion. We start from a clause of the following form:\ $\begin{array}{l} 2272[0:Res:955.3,159.1] \ \|\| \ eq(f(U),0)* \ , \ eq(f(V),1)* \ , \ eq(f(W),2)* \ , \ \\ eq(f(X),3)* \ \Rightarrow \end{array}$\ stating that the codomain is empty, and derive that this implies that some element $k$ is not in the codomain. Clause $2272$ is essential for Lem. \[lem:l13\] and is one of the clauses of Lem. \[lem:lbase\]. Up to this point we have an idea of the overall structure of the refutation, but so far we have not discussed the term structure and unifiers used by SPASS. Essentially, how was the recursive max term construction of Def. \[def:maxterm\] found? Looking at the following two derived clauses from $C(3)$ and $C(4)$, we see that the nesting of the $\max$ term grows with respect to the free parameter:\ $20[0:Res:15.0,4.0] \ \|\| \ \Rightarrow \ le(U,max(max(max(V,U),W),X))$\ $54[0:Res:19.0,4.0] \ \|\| \ \Rightarrow \ le(U,max(max(max(max(V,U),W),X),Y))$\ However, in clauses $20$ and $54$ the associativity is the opposite of Def. \[def:maxterm\]. We found that the refutation of Sec. \[sec:refuteset\] is easier when we switch the association of the max term construction. Also, neither clause $20$ nor $54$ contains successor function ($s(\cdot )$) encapsulation of the variables, while Def. \[def:maxterm\] does. The $s(\cdot )$ terms were added because of the clauses $C4_i$. 
The literal $s(\alpha) \leq \beta$ enforces the addition of an $s(\cdot )$ term anyway during the unification. This can be seen in Lem. \[lem:first\] and Cor. \[cor:first\], \[cor:second\], & \[cor:third\]. However, we have not been able to prove the necessity of these max function constructions, nor find a refutation without them. The result of all these observations was Lem. \[lem:lbase\]. After proving that the Lem. \[lem:lbase\] clause set is indeed derivable from $C(n)$ using resolution, we constructed it to see what the SPASS refutation looked like for $C(4)$. We abbreviate the term $max(max(max(s(x_{0}),s(x_1)),s(x_2)),s(x_3))$ by $m(\bar{x}_{4})$: $1: eq(f(m(\bar{x}_{4})),2) \vee eq(f(m(\bar{x}_{4})),1) \vee eq(f(m(\bar{x}_{4})),0)$\ $2:\neg eq(f(x_2),2) \vee eq(f(m(\bar{x}_{4})),1) \vee eq(f(m(\bar{x}_{4})),0)$\ $3:\neg eq(f(x_1),1) \vee eq(f(m(\bar{x}_{4})),2) \vee eq(f(m(\bar{x}_{4})),0)$\ $4:\neg eq(f(x_0),0) \vee eq(f(m(\bar{x}_{4})),2) \vee eq(f(m(\bar{x}_{4})),1)$\ $5:\neg eq(f(x_2),2) \vee \neg eq(f(x_1),1) \vee eq(f(m(\bar{x}_{4})),0)$\ $6: \neg eq(f(x_2),2) \vee \neg eq(f(x_0),0) \vee eq(f(m(\bar{x}_{4})),1)$\ $7:\neg eq(f(x_1),1) \vee \neg eq(f(x_0),0)\vee eq(f(m(\bar{x}_{4})),2)$\ $8:\neg eq(f(x_1),1) \vee \neg eq(f(x_0),0) \vee \neg eq(f(x_{2}),2)$ Feeding this derived clause set to SPASS for several instances aided the construction of the well ordering of Def. \[def:importantordering\] and the structure of the resolution refutation found in Lem. \[lem:l13\]. Refutation of the NiA-schema’s Characteristic Clause Set Schema {#sec:refuteset} =============================================================== In this section we provide a refutation of $C(n)$ for every value of $n$. We prove this result by first deriving a set of clauses which we will consider the least elements of a well ordering. Then we show how resolution can be applied to these least elements to derive clauses of the form $f(\alpha)= \overline{i} \vdash $ for $0\leq i \leq n$. 
The last step is simply to take the clause $(C5)$ from the clause set $C(n)$ and resolve it with each of the $f(\alpha) = \overline{i}\vdash $ clauses. \[def:maxterm\] We define the primitive recursive term $m(k,\overline{x},t)$, where $\overline{x}$ is a schematic variable and $t$ a term, as follows: $\left\lbrace m(k+1,\overline{x},t) \Rightarrow \right. $\ $ \left. m(k,\overline{x},max(s(x_{k+1}),t)) \ ; \ m(0,\overline{x},t) \Rightarrow t \right\rbrace$ \[def:resstep\] We define the resolution rule $res(\sigma,P)$ where $\sigma$ is a unifier and $P$ is a predicate as follows: The predicates $P^*$ and $P^{**}$ are defined such that $P^{**}\sigma = P^*\sigma = P$. Also, there are no occurrences of $P$ in $\Pi'\sigma$ nor in $\Delta\sigma$. This version of the resolution rule is not complete for unsatisfiable clause sets, but simplifies the outline of the refutation. \[lem:first\] Given $0\leq k$ and $0 \leq n$, the clause $\vdash t \leq m(k,\overline{x},t)$ is derivable by resolution from $C(n)$. Let us consider the case $k=0$: the clause whose derivability we would like to show is $\vdash t \leq m(0,\overline{x},t)$, which is equivalent to the clause $\vdash t \leq t$, an instance of (C1). Assuming the lemma holds for all $m<k+1$, we show that the lemma holds for $k+1$. By the induction hypothesis, the instance $\vdash max(s(x_{k+1}),t') \leq m(k,\overline{x},max(s(x_{k+1}),t'))$ is derivable. Thus, the following derivation proves that the clause $\vdash t' \leq m(k+1,\overline{x}_{k+1},t')$, where $t= max(s(x_{k+1}),t')$ for some term $t'$, is derivable: $$P = max(s(x_{k+1}),t) \leq m(k,\overline{x},max(s(x_{k+1}),t))$$ $$\sigma =\left\lbrace \beta \leftarrow s(x_{k+1}), \gamma \leftarrow m(k,\overline{x},max(s(x_{k+1}),t)) , \delta \leftarrow t \right\rbrace$$\ $\square$ See Sec. \[Appendix\] for proofs of the following three corollaries. \[cor:first\] Given $0\leq k,n$, the clause $ \vdash s(x_{k+1})\leq m(k,\overline{x},max(s(x_{k+1}),t))$ is derivable by resolution from $C(n)$.
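The recursion of Def. \[def:maxterm\] unfolds into a right-nested max term; a minimal sketch representing terms as strings (the helper name `maxterm` is ours, not part of the formal development):

```python
def maxterm(k, t):
    # m(k+1, xbar, t) => m(k, xbar, max(s(x_{k+1}), t));  m(0, xbar, t) => t
    if k == 0:
        return t
    return maxterm(k - 1, f"max(s(x_{k}),{t})")
```

For instance, `maxterm(3, "z")` unfolds to `max(s(x_1),max(s(x_2),max(s(x_3),z)))`, making the right-nested association explicit — the opposite of the left-nested abbreviation $m(\bar{x}_4)$ used above.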
\[cor:second\] Given $0\leq k$ and $0 \leq n$, the clause $ f(x_{k+1})= i, $\ $f(m(k,\overline{x},max(s(x_{k+1}),t))) = i \vdash$ for $0\leq i \leq n$ is derivable by resolution from $C(n)$. \[cor:third\] Given $0\leq k$ and $0 \leq n$, the clause $ f(x_{k+1})= i, f(m(k,\overline{x}_{k},s(x_{k+1}))) = i \vdash$ for $0\leq i \leq n$ is derivable by resolution from $C(n)$. Given $0 \leq n$, $-1\leq k \leq j \leq n$, a variable $z$, and a bijective function $b: \mathbb{N}_{n} \rightarrow \mathbb{N}_{n}$ we define the following formulae: $$c_{b}(k,j,z) \equiv \bigwedge_{i=0}^{k} f(x_{b(i)}) = b(i) \vdash \bigvee_{i=k+1}^{j} f(m(n,\overline{x},z)) = b(i).$$ For all values of $n$ we have $c_{b}(-1,-1,z) \equiv \ \vdash$ and $c_{b}(-1,n,z) \equiv \ \vdash \bigvee_{i=0}^{n} f(z) = i$. \[lem:lbase\] Given $0 \leq n$, $-1\leq k \leq n$, and a bijective function $b: \mathbb{N}_{n} \rightarrow \mathbb{N}_{n}$, the formula $c_{b}(k,n,z)$ is derivable by resolution from $C(n)$. See Sec. \[sec:lbaseproof\]. The formulae defined next will serve as greatest lower bounds with respect to the ordering of Def. \[def:importantordering\]. Given $0 \leq n$, $0\leq k \leq j \leq n$, and a bijective function $b: \mathbb{N}_{n} \rightarrow \mathbb{N}_{n}$ we define the following formulae: $$c'_{b}(k,j) \equiv \bigwedge_{i=0}^{k} f(x_{i+1}) = b(i) \vdash \bigvee_{i=k+1}^{j} f(m(k,\overline{x}_{k},s(x_{k+1}))) = b(i).$$ \[lem:lbase2\] Given $0 \leq n$, $0\leq k \leq n$, and a bijective function $b: \mathbb{N}_{n} \rightarrow \mathbb{N}_{n}$, the formula $c'_{b}(k,n)$ is derivable by resolution from $C(n)$. See Sec. \[sec:lbase2proof\]. \[def:importantordering\] Given $0\leq n$ we define the ordering relation $\lessdot_{n}$ over $A_{n} = \left\lbrace (i,j) | i\leq j \right. $ $\left. \wedge 0 \leq i,j \leq n \wedge i,j \in \mathbb{N} \right\rbrace$ s.t. for $(i,j),(l,k) \in A_n$, $(i,j) \lessdot_{n} (l,k)$ iff $i,k,l \leq n$, $j<n$, $l\leq i$, $k\leq j$, and $i = l \leftrightarrow j \not = k$ and $j = k \leftrightarrow i \not = l$.
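The relation $\lessdot_{n}$ of Def. \[def:importantordering\] can be checked mechanically on small instances; a minimal brute-force sketch (the helper names are ours):

```python
def A(n):
    # A_n = {(i, j) : i <= j, 0 <= i, j <= n}
    return [(i, j) for i in range(n + 1) for j in range(n + 1) if i <= j]

def lessdot(n, p, q):
    # (i, j) <._n (l, k) per Def. [def:importantordering]
    (i, j), (l, k) = p, q
    return (i <= n and k <= n and l <= n and j < n and l <= i and k <= j
            and ((i == l) == (j != k)) and ((j == k) == (i != l)))
```

Enumerating $A_n$ for small $n$ confirms, for example, that the relation is anti-reflexive and anti-symmetric.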
The ordering $\lessdot_{n}$ over $A_{n}$ for $0\leq n$ is a complete well ordering. Every chain has a greatest lower bound, namely one of the members $(i,n)$ of $A_{n}$ with $0\leq i \leq n$, and the relation is transitive, anti-reflexive, and anti-symmetric. The clauses proved derivable by Lem. \[lem:lbase2\] can be paired with members of $A_{n}$ as follows: $c'_{b}(k,n)$ is paired with $(k,n)$. Thus, each $c'_{b}(k,n) $ is essentially the greatest lower bound of some chain in the ordering $\lessdot_{n}$ over $A_{n}$. \[lem:l13\] Given $0\leq k \leq j\leq n$, for all bijective functions $b: \mathbb{N}_{n} \rightarrow \mathbb{N}_{n}$ the clause $c'_{b}(k,j)$ is derivable from $C(n)$. We will prove this lemma by induction over $A_{n}$. The base cases are the clauses $c'_{b}(k,n)$ from Lem. \[lem:lbase2\]. Now let us assume that the lemma holds for all clauses $c'_{b}(k,i)$ such that $0\leq k \leq j <i \leq n$ and for all clauses $c'_{b}(w,j)$ such that $0\leq k< w \leq j\leq n$; we then want to show that the lemma holds for the clause $c'_{b}(k,j)$. We have not made any restrictions on the bijections used; we will need two different bijections to prove the lemma. The following derivation provides the proof: $P_{b}(k+1) = f(m(k,\overline{x}_{k},s(x_{k+1}))) = b(k+1)$, $\Pi_{b}(k) \equiv \bigwedge_{i=0}^{k} f(x_{b(i)}) = b(i)$, $$\Delta_{b}(k,j) \equiv \bigvee_{i=k+1}^{j} f(m(k,\overline{x}_{k},s(x_{k+1}))) = b(i),$$ $$\sigma =\left\lbrace x_{b'(k+1)} \leftarrow m(k,\overline{x}_{k},s(x_{k+1}))\right\rbrace$$ We assume that $b'(k+1) = b(j+1)$ and that $b'(x)=b(x)$ for $0 \leq x \leq k$. \[thm:refofC\] Given $n \geq 0$, $C(n)$ derives $\vdash$. By Lem. \[lem:l13\], the clauses $f(x)= 0 \vdash $ , $\cdots$ , $f(x)= n \vdash $ are derivable. Thus, we can prove the statement by induction on the instantiation of the clause set. When $n=0$, the clause (C5) is $\vdash f(x)= 0$, which resolves with $f(x)= 0 \vdash $ to derive $\vdash$.
Assuming that for all $n'\leq n$ the theorem holds, we now show that it holds for $n+1$. The clause (C5) from the clause set $C(n+1)$ is the clause (C5) from the clause set $C(n)$ with the addition of a positive instance of $\vdash f(\alpha)= (n+1)$. Thus, by the induction hypothesis we can derive the clause $\vdash f(\alpha)= (n+1)$. By Lem. \[lem:l13\] we can derive $f(x)= (n+1) \vdash $, and thus, resolving the two derived clauses results in $\vdash$. Conclusion {#sec:Conclusion} ========== At the end of the introduction, we outlined some essential points to be addressed in future work, i.e., finding a refutation which fits the framework of  [@CERESS2], or showing that this is not possible and constructing a more expressive language. Concerning the compression (see Sec. \[growthrate\]), knowing the growth rate of the ACNF can help in the construction of a more expressive language for the refutations, and will be part of future investigation. However, there is an interesting point which was not addressed, namely the extraction of a [*Herbrand system*]{}. The extraction of a [*Herbrand system*]{} is the theoretical advantage this framework has over the previously investigated system [@Mcdowell97cut-eliminationfor][^4] for cut elimination in the presence of induction, but without a refutation within the expressive power of the resolution calculus, the method of [@CERESS2] cannot be used to extract a Herbrand system from our refutation. We plan to investigate the extraction of a Herbrand system for the NiA-schema given the current state of the proof analysis. Development of such a method can help find Herbrand systems in other cases when the ACNF-schema cannot be expressed in the calculus provided in [@CERESS2]. Appendix {#Appendix} ======== Proof of Cor.
\[cor:first\] --------------------------- $$P = max(s(x_{k+1}),t) \leq m(k,\overline{x},max(s(x_{k+1}),t))$$ $$\sigma =\left\lbrace \beta \leftarrow s(x_{k+1}), \gamma \leftarrow m(k,\overline{x},max(s(x_{k+1}),t)) , \delta \leftarrow t \right\rbrace$$\ $\square$ Proof of Cor. \[cor:second\] ---------------------------- $$P = s(x_{k+1}) \leq m(k,\overline{x}_{k},max(s(x_{k+1}),t))$$ $$\sigma =\left\lbrace \alpha \leftarrow x_{k+1}, \beta \leftarrow m(k,\overline{x}_{k},max(s(x_{k+1}),t)) \right\rbrace$$\ $\square$ Proof of Cor. \[cor:third\] --------------------------- $$P = s(x_{k+1}) \leq m(k,\overline{x}_{k},s(x_{k+1}))$$ $$\sigma =\left\lbrace \alpha \leftarrow x_{k+1}, \beta \leftarrow m(k,\overline{x}_{k},s(x_{k+1})) \right\rbrace$$\ $\square$ Proof of Lem. \[lem:lbase\] {#sec:lbaseproof} --------------------------- We prove this lemma by induction on $k$ and a case distinction on $n$. When $n=0$ there are two possible values for $k$, $k=0$ or $k=-1$. When $k=-1$ the clause is an instance of (C5). When $k=0$ we have the following derivation: $$P = f(max(s(x_{1}),z)) = b(0)$$ $$\sigma =\left\lbrace y \leftarrow max(s(x_{1}),z)\right\rbrace$$ By $(Cor. \ref{cor:second}[i\leftarrow b(0), k\leftarrow 0 ])$ we mean: take the clause that is proven derivable by Cor. \[cor:second\] and instantiate the free parameters of Cor. \[cor:second\], i.e. $i$ and $k$, with the given terms, i.e. $b(0)$ and $0$. Remember that $b(0)$ can be either $0$ or $1$. We will use this syntax throughout the dissertation. When $n>0$ and $k=-1$ we again trivially have (C5). When $n>0$ and $k=0$, the following derivation suffices: $$P = f(max(s(x_{1}),z)) = b(0)$$ $$\sigma =\left\lbrace y \leftarrow max(s(x_{1}),z)\right\rbrace$$ The main difference between the cases $n=0$ and $n>0$ is the possible instantiations of the bijection at $0$. In the case of $n>0$, $b(0) = 0 \ \vee \cdots \vee \ b(0) = n$.
Now we assume that the lemma holds for all $w< k+1 <n$ with $n>0$; we proceed to show that it holds for $k+1$. The following derivation will suffice: $$P = f(m(k,\overline{x}_{k},max(s(x_{k+1}),t))) = b(k+1)$$ $$\sigma =\left\lbrace y \leftarrow max(s(x_{k+1}),z)\right\rbrace$$\ $\square$ Proof of Lem. \[lem:lbase2\] {#sec:lbase2proof} ---------------------------- We prove this lemma by induction on $k$ and a case distinction on $n$. When $n=0$ it must be the case that $k=0$. When $k=0$ we have the following derivation: $$P = f(s(x_{1})) = 0$$ $$\sigma =\left\lbrace y \leftarrow s(x_{1})\right\rbrace$$ Remember that in this case $b(0)$ can only be $0$. When $n>0$ and $k=0$, the following derivation suffices: $$P = f(s(x_{1})) = b(0)$$ $$\sigma =\left\lbrace y \leftarrow s(x_{1})\right\rbrace$$ The main difference between the case for $n=0$ and $n>0$ is the possible instantiations of the bijection at $0$. In the case of $n>0$, $b(0) = 0 \ \vee \cdots \vee \ b(0) = n$. Now we assume that the lemma holds for all $w\leq k$; we proceed to show that it holds for $k+1$.
The following derivation will suffice: $$P = f(m(k,\overline{x}_{k},max(s(x_{k+1}),t))) = b(k+1)$$ $$\sigma =\left\lbrace y \leftarrow max(s(x_{k+1}),z)\right\rbrace$$\ $\square$ SPASS Refutation of $C(n)$: Instance Four {#sec:spassreffour} ----------------------------------------- The refutation provided in this section is almost identical to the output from SPASS except for a few minor changes to the syntax to aid reading.\ \ $1[0:Inp] \ \|\| \ \Rightarrow \ eq(f(U),3) \ , \ eq(f(U),2) \ , \ eq(f(U),1) \ , \ eq(f(U),0)*$\ \ $2[0:Inp] \ \|\| \ \Rightarrow \ le(U,U)*$\ \ $3[0:Inp] \ \|\| \ le(max(U,V),W)* \ \Rightarrow \ le(U,W)$\ \ $4[0:Inp] \ \|\| \ le(max(U,V),W)* \ \Rightarrow \ le(V,W)$\ \ $5[0:Inp] \ \|\| \ le(s(U),V)*+ \ , \ eq(f(U),0)* \ , \ eq(f(V),0)* \ \Rightarrow $\ \ $6[0:Inp] \ \|\| \ le(s(U),V)*+ \ , \ eq(f(U),1)* \ , \ eq(f(V),1)* \ \Rightarrow $\ \ $7[0:Inp] \ \|\| \ le(s(U),V)*+ \ , \ eq(f(U),2)* \ , \ eq(f(V),2)* \ \Rightarrow $\ \ $8[0:Inp] \ \|\| \ le(s(U),V)*+ \ , \ eq(f(U),3)* \ , \ eq(f(V),3)* \ \Rightarrow $\ \ $9[0:Res:2.0,4.0] \ \|\| \ \Rightarrow \ le(U,max(V,U))$\ \ $10[0:Res:9.0,4.0] \ \|\| \ \Rightarrow \ le(U,max(V,max(W,U)))$\ \ $12[0:Res:2.0,3.0] \ \|\| \ \Rightarrow \ le(U,max(U,V))$\ \ $13[0:Res:9.0,3.0] \ \|\| \ \Rightarrow \ le(U,max(V,max(U,W)))$\ \ $15[0:Res:12.0,3.0] \ \|\| \ \Rightarrow \ le(U,max(max(U,V),W))$\ \ $16[0:Res:12.0,4.0] \ \|\| \ \Rightarrow \ le(U,max(max(V,U),W))$\ \ $19[0:Res:15.0,3.0] \ \|\| \ \Rightarrow \ le(U,max(max(max(U,V),W),X))$\ \ $20[0:Res:15.0,4.0] \ \|\| \ \Rightarrow \ le(U,max(max(max(V,U),W),X))$\ \ $23[0:Res:2.0,8.0] \ \|\| \ eq(f(U),3) \ , \ eq(f(s(U)),3)* \ \Rightarrow$\ \ $25[0:Res:10.0,8.0] \ \|\| \ eq(f(U),3) \ , \ eq(f(max(V,max(W,s(U)))),3)* \ \Rightarrow$\ \ $27[0:Res:12.0,8.0] \ \|\| \ eq(f(U),3) \ , \ eq(f(max(s(U),V)),3)* \ \Rightarrow$\ \ $28[0:Res:15.0,8.0] \ \|\| \ eq(f(U),3) \ , \ eq(f(max(max(s(U),V),W)),3)* \ \Rightarrow$\ \ $42[0:Res:2.0,7.0] \ \|\| \ eq(f(U),2) \ , \ 
eq(f(s(U)),2)* \ \Rightarrow$\ \ $43[0:Res:9.0,7.0] \ \|\| \ eq(f(U),2) \ , \ eq(f(max(V,s(U))),2)* \ \Rightarrow$\ \ $44[0:Res:10.0,7.0] \ \|\| \ eq(f(U),2) \ , \ eq(f(max(V,max(W,s(U)))),2)* \ \Rightarrow$\ \ $50[0:Res:12.0,7.0] \ \|\| \ eq(f(U),2) \ , \ eq(f(max(s(U),V)),2)* \ \Rightarrow$\ \ $52[0:Res:16.0,7.0] \ \|\| \ eq(f(U),2) \ , \ eq(f(max(max(V,s(U)),W)),2)* \ \Rightarrow$\ \ $54[0:Res:19.0,4.0] \ \|\| \ \Rightarrow \ le(U,max(max(max(max(V,U),W),X),Y))$\ \ $59[0:Res:20.0,7.0] \ \|\| \ eq(f(U),2) \ , \ eq(f(max(max(max(V,s(U)),W),X)),2)* \ \Rightarrow$\ \ $69[0:Res:2.0,6.0] \ \|\| \ eq(f(U),1) \ , \ eq(f(s(U)),1)* \ \Rightarrow$\ \ $70[0:Res:9.0,6.0] \ \|\| \ eq(f(U),1) \ , \ eq(f(max(V,s(U))),1)* \ \Rightarrow$\ \ $74[0:Res:13.0,6.0] \ \|\| \ eq(f(U),1) \ , \ eq(f(max(V,max(s(U),W))),1)* \ \Rightarrow$\ \ $79[0:Res:16.0,6.0] \ \|\| \ eq(f(U),1) \ , \ eq(f(max(max(V,s(U)),W)),1)* \ \Rightarrow$\ \ $89[0:Res:2.0,5.0] \ \|\| \ eq(f(U),0) \ , \ eq(f(s(U)),0)* \ \Rightarrow$\ \ $90[0:Res:9.0,5.0] \ \|\| \ eq(f(U),0) \ , \ eq(f(max(V,s(U))),0)* \ \Rightarrow$\ \ $98[0:Res:12.0,5.0] \ \|\| \ eq(f(U),0) \ , \ eq(f(max(s(U),V)),0)* \ \Rightarrow$\ \ $123[0:Res:1.3,89.1] \ \|\| \ eq(f(U),0) \ \Rightarrow \ eq(f(s(U)),3) \ , \ eq(f(s(U)),2) \ , \ eq(f(s(U)),1)$\ \ $159[0:Res:54.0,8.0] \ \|\| \ eq(f(U),3) \ , \ eq(f(max(max(max(max(V,s(U)),W),X),Y)),3)* \ \Rightarrow$\ \ $196[0:Res:1.3,90.1] \ \|\| \ eq(f(U),0) \ \Rightarrow \ eq(f(max(V,s(U))),3) \\\\ eq(f(max(V,s(U))),2) \ , \ eq(f(max(V,s(U))),1)$\ \ $197[0:Res:1.3,98.1] \ \|\| \ eq(f(U),0) \ \Rightarrow \ eq(f(max(s(U),V)),3) \\\\ eq(f(max(s(U),V)),2) \ , \ eq(f(max(s(U),V)),1)$\ \ $423[0:Res:196.3,79.1] \ \|\| \ eq(f(U),0) \ , \ eq(f(V),1) \ \Rightarrow \\\\ eq(f(max(max(W,s(V)),s(U))),3) \ , \ eq(f(max(max(W,s(V)),s(U))),2)$\ \ $450[0:Res:197.3,74.1] \ \|\| \ eq(f(U),0) \ , \ eq(f(V),1) \ \Rightarrow \\\\ eq(f(max(s(U),max(s(V),W))),3) \ , \ eq(f(max(s(U),max(s(V),W))),2)$\ \ $955[0:Res:423.3,59.1] \ \|\| \ 
eq(f(U),0) \ , \ eq(f(V),1) \ , \ eq(f(W),2) \\\\ \Rightarrow \ eq(f(max(max(max(X,s(W)),s(V)),s(U))),3)$\ \ $1009[0:Res:450.3,44.1] \ \|\| \ eq(f(U),0) \ , \ eq(f(V),1) \ , \ eq(f(W),2) \\\\ \Rightarrow \ eq(f(max(s(U),max(s(V),s(W)))),3)$\ \ $2272[0:Res:955.3,159.1] \ \|\| \ eq(f(U),0)* \ , \ eq(f(V),1)* \ , \ eq(f(W),2)* \\\\ eq(f(X),3)* \ \Rightarrow$\ \ $2273[0:MRR:1009.3,2272.3] \ \|\| \ eq(f(U),0)*+ \ , \ eq(f(V),1)* \\\\ eq(f(W),2)* \ \Rightarrow$\ \ $2301[0:MRR:450.3,2273.2] \ \|\| \ eq(f(U),0) \ , \ eq(f(V),1) \\\\ \Rightarrow \ eq(f(max(s(U),max(s(V),W))),3)$\ \ $2450[0:Res:2301.2,25.1] \ \|\| \ eq(f(U),0)* \ , \ eq(f(V),1)* \\\\ eq(f(W),3)* \ \Rightarrow$\ \ $2459[0:MRR:2301.2,2450.2] \ \|\| \ eq(f(U),0)*+ \ , \ eq(f(V),1)* \ \Rightarrow$\ \ $2577[0:MRR:123.3,2459.1] \ \|\| \ eq(f(U),0) \ \Rightarrow \ eq(f(s(U)),3) \\\\ eq(f(s(U)),2)$\ \ $2578[0:MRR:196.3,2459.1] \ \|\| \ eq(f(U),0) \ \Rightarrow \\ eq(f(max(V,s(U))),3) \ , \ eq(f(max(V,s(U))),2)$\ \ $2613[0:Res:2578.2,50.1] \ \|\| \ eq(f(U),0) \ , \ eq(f(V),2) \ \Rightarrow \ eq(f(max(s(V),s(U))),3)$\ \ $2615[0:Res:2578.2,52.1] \ \|\| \ eq(f(U),0) \ , \ eq(f(V),2) \\ \Rightarrow \ eq(f(max(max(W,s(V)),s(U))),3)$\ \ $2676[0:Res:2615.2,28.1] \ \|\| \ eq(f(U),0)* \ , \ eq(f(V),2)* \ , \ eq(f(W),3)* \ \Rightarrow$\ \ $2684[0:MRR:2613.2,2676.2] \ \|\| \ eq(f(U),0)*+ \ , \ eq(f(V),2)* \ \Rightarrow$\ \ $2714[0:MRR:2577.2,2684.1] \ \|\| \ eq(f(U),0) \ \Rightarrow \ eq(f(s(U)),3)$\ \ $2715[0:MRR:2578.2,2684.1] \ \|\| \ eq(f(U),0) \ \Rightarrow \ eq(f(max(V,s(U))),3)$\ \ $2749[0:Res:2715.1,27.1] \ \|\| \ eq(f(U),0)* \ , \ eq(f(V),3)* \ \Rightarrow$\ \ $2764[0:MRR:2714.1,2749.1] \ \|\| \ eq(f(U),0)* \ \Rightarrow $\ \ $2795[0:MRR:1.3,2764.0] \ \|\| \ \Rightarrow \ eq(f(U),3) \ , \ eq(f(U),2) \ , \ eq(f(U),1)$\ \ $2796[0:Res:2795.2,69.1] \ \|\| \ eq(f(U),1) \ \Rightarrow \ eq(f(s(U)),3) \ , \ eq(f(s(U)),2)$\ \ $2797[0:Res:2795.2,70.1] \ \|\| \ eq(f(U),1) \ \Rightarrow \ eq(f(max(V,s(U))),3) \\ 
eq(f(max(V,s(U))),2)$\ \ $2831[0:Res:2797.2,50.1] \ \|\| \ eq(f(U),1) \ , \ eq(f(V),2) \\ \Rightarrow \ eq(f(max(s(V),s(U))),3)$\ \ $2833[0:Res:2797.2,52.1] \ \|\| \ eq(f(U),1) \ , \ eq(f(V),2) \\ \Rightarrow \ eq(f(max(max(W,s(V)),s(U))),3)$\ \ $2896[0:Res:2833.2,28.1] \ \|\| \ eq(f(U),1)* \ , \ eq(f(V),2)* \ , \ eq(f(W),3)* \ \Rightarrow$\ \ $2904[0:MRR:2831.2,2896.2] \ \|\| \ eq(f(U),1)*+ \ , \ eq(f(V),2)* \ \Rightarrow$\ \ $2934[0:MRR:2796.2,2904.1] \ \|\| \ eq(f(U),1) \ \Rightarrow \ eq(f(s(U)),3)$\ \ $2935[0:MRR:2797.2,2904.1] \ \|\| \ eq(f(U),1) \ \Rightarrow \ eq(f(max(V,s(U))),3)$\ \ $2969[0:Res:2935.1,27.1] \ \|\| \ eq(f(U),1)* \ , \ eq(f(V),3)* \ \Rightarrow$\ \ $2984[0:MRR:2934.1,2969.1] \ \|\| \ eq(f(U),1)* \ \Rightarrow$\ \ $3015[0:MRR:2795.2,2984.0] \ \|\| \ \Rightarrow \ eq(f(U),3) \ , \ eq(f(U),2)$\ \ $3016[0:Res:3015.1,42.1] \ \|\| \ eq(f(U),2) \ \Rightarrow \ eq(f(s(U)),3)$\ \ $3017[0:Res:3015.1,43.1] \ \|\| \ eq(f(U),2) \ \Rightarrow \ eq(f(max(V,s(U))),3)$\ \ $3050[0:Res:3017.1,27.1] \ \|\| \ eq(f(U),2)* \ , \ eq(f(V),3)* \ \Rightarrow$\ \ $3065[0:MRR:3016.1,3050.1] \ \|\| \ eq(f(U),2)* \ \Rightarrow$\ \ $3096[0:MRR:3015.1,3065.0] \ \|\| \ \Rightarrow \ eq(f(U),3)$\ \ $3098[0:MRR:23.1,23.0,3096.0] \ \|\| \ \Rightarrow$\ \ Growth Rate of Refutation {#growthrate} ------------------------- Let $Occ(x,r)$ be defined as the number of times the clause $x$ is used in the refutation $r$. Let $r$ be the resolution refutation of Thm. \[thm:refofC\] for the clause set $C(n)$; then $Occ(C5,r)$ is given by the recurrence relation $a(n+1) = (n+1)\cdot a(n) + 1$ with $a(0)=1$. Let us consider the case of the clause set $C(0)$. This is the case when we have only one symbol in the function’s range. If we compute the recurrence we get $a(1) = a(0) +1 = 2$. Now let us assume the claim holds for all $m\leq n$ and show it holds for $n+1$. In the proof of Lem.
\[lem:l13\], when deriving $c'_{b}(0,0)$ the literal $f(\alpha) = b(0)$ is in the antecedent of every clause higher in the resolution derivation and it is never used in a resolution step. If we remove this literal from the antecedent then we have a resolution refutation for the clause set $C(n)$, provided we rename the schematic sort terms accordingly. To refute $C(n+1)$ we need to derive $n+1$ distinct $c'_{b}(0,0)$ clauses and resolve them with a single instance of $(C5)$. Thus, we have the equation $Occ(C5^{n+1},r_{n+1}) = (n+1)\cdot Occ(C5^{n},r_{n}) +1 $, where $r_{n+1}$ is the resolution refutation of Thm. \[thm:refofC\] for the clause set $C(n+1)$ and $r_{n}$ is the resolution refutation of Thm. \[thm:refofC\] for the clause set $C(n)$. Thus, the theorem holds by induction.\ $\square$ The recurrence relation $a(n) = n\cdot a(n-1) + 1$ with $a(0)=1$ is equivalent to the closed form: $$a(n) =n!\cdot \sum_{i=0}^{n} \frac{1}{i!}$$ If we unroll the relation once we get, $$a(n) = n\cdot (n-1)\cdot a(n-2) + n + 1 = n\cdot (n-1)\cdot a(n-2) + \frac{n!}{(n-1)!} + \frac{n!}{n!}$$ Thus, unrolling the relation $k$ times results in the following: $$a(n) = \left( \prod^{n}_{i=n-k+1} i\right) \cdot a(n-k) + \sum^{n}_{i=n-k+1} \frac{n!}{i!}$$ Now when we set $k=n$ we get, $$a(n) = n! + \sum^{n}_{i=1} \frac{n!}{i!} = \frac{n!}{0!}+ \sum^{n}_{i=1} \frac{n!}{i!} = \sum^{n}_{i=0} \frac{n!}{i!}$$\ $\square$ [^1]: The individual clauses of the clause set were very large, some containing over 12 literals, and contained both higher order and first order free variables. Interactive theorem provers could not handle these clause sets, nor could a human adequately parse the clause set. [^2]: Also, using “out of the box mode” allows for ease of reproducibility of our results when using the same version of SPASS. [^3]: See Sec. \[sec:spassreffour\] [^4]: The schematic CERES method has the subformula property.
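The recurrence of Sec. \[growthrate\] and its closed form are easy to cross-check numerically; a minimal sketch:

```python
from math import factorial

def a(n):
    # a(n) = n * a(n-1) + 1 with a(0) = 1
    return 1 if n == 0 else n * a(n - 1) + 1

def closed_form(n):
    # a(n) = sum_{i=0}^{n} n!/i!  (equivalently n! * sum_{i=0}^{n} 1/i!)
    return sum(factorial(n) // factorial(i) for i in range(n + 1))
```

For example, $a(1)=2$, $a(2)=5$ and $a(3)=16$, matching the closed form.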
--- abstract: 'Matrix Product States (MPS) are used for the simulation of the real-time dynamics induced by an electric quench on the vacuum state of the massive Schwinger model. For small quenches it is found that the obtained oscillatory behavior of local observables can be explained from the single-particle excitations of the quenched Hamiltonian. For large quenches damped oscillations are found and comparison of the late time behavior with the appropriate Gibbs states seems to give some evidence for the onset of thermalization. Finally, the MPS real-time simulations are compared with results from real-time lattice gauge theory which are expected to agree in the limit of large quenches.' author: - Boye Buyens - Jutho Haegeman - Florian Hebenstreit - Frank Verstraete - Karel Van Acoleyen bibliography: - 'PaperRT.bib' title: 'Real-time simulation of the Schwinger effect with Matrix Product States' --- Introduction. ============= Gauge theories lie at the heart of high energy physics and hence play an essential role in our understanding of nature. Moreover, gauge theories also emerge as low energy effective theories in several condensed matter systems [@Benton2016]. Lattice gauge theories provide a non-perturbative regularization of such theories that can often be simulated very efficiently by using Quantum Monte Carlo (QMC) methods. However, several of the most pressing questions in this regard, e.g., the phase diagram of quantum chromodynamics (QCD) at finite chemical potential or the real-time dynamics of relativistic heavy ion collisions, have largely remained out of reach [@Bali1999]. Over the last decade, the Tensor Network States (TNS) approach has become a powerful alternative method to study strongly correlated quantum systems since it does not suffer from the sign problem [@Orus2004; @Verstraete2004; @Verstraete2008]. The most famous example of TNS are the Matrix Product States (MPS) [@Schollwoeck2011] in one spatial dimension.
Ever since the formulation of the Density Matrix Renormalization Group [@White1992] in terms of MPS, the number of algorithms for quantum many-body systems has increased rapidly. Recently, MPS have also been successfully applied to lattice gauge theories [@Banuls2013a; @Banuls2016a; @Banuls2016b; @Banuls2016c; @Banuls2016d; @Byrnes2003a; @Byrnes2003b; @Sugihara2005; @Rico2014; @Kuehn2014; @Kuehn2015; @Silvi2016; @Milsted2015]. In this publication we consider $(1+1)$-dimensional quantum electrodynamics (QED), the so-called massive Schwinger model [@Schwinger1962]. Despite being an Abelian gauge theory, it shares several important features with the theory of strong interactions (QCD), such as chiral symmetry breaking and confinement. Due to the reduced dimensionality this model has become an active playground for testing novel analytical and numerical methods [@Schwinger1962; @Coleman1975; @Coleman1976; @Hamer1982; @Iso1990; @Hosotani1996; @Adam1997; @Byrnes2003a; @Byrnes2003b; @Cichy2012; @Banuls2013a; @Hebenstreit2013; @Hebenstreit2014; @Kuehn2014; @Banuls2016a; @Buyens2013; @Buyens2014; @Buyens2015; @Buyens2015b; @Buyens2016; @Buyens2017] and for studying intriguing non-equilibrium questions that have been beyond the reach of conventional QCD simulations, e.g., jet energy loss and photon production in relativistic heavy ion collisions [@Kharzeev2013; @Kharzeev2014] or the dynamics of string breaking [@Hebenstreit2013a]. Recently, there have been promising proposals that might allow one to quantum simulate the Schwinger model in analog systems of ultracold ions or atoms in optical lattices [@Hauke2013; @Wiese2013; @Martinez2016; @Kasper2015; @Kasper2016]. An intriguing effect in the Schwinger model concerns the non-equilibrium dynamics after a quench that is induced by the application of a uniform electric field $E_0 = g\alpha$ onto the ground state $\ket{\Psi_0}$ at time $t=0$.
Physically, this process corresponds to the so-called Schwinger pair creation mechanism [@Schwinger1951] in which an external electric field separates virtual electron-positron dipoles to become real electrons and positrons. Recently, this process has attracted much interest since high-intensity laser facilities like the Extreme Light Infrastructure (ELI) will for the first time be powerful enough to probe this effect experimentally. So far, theoretical investigations have mainly been restricted to the regime in which the fermions are treated quantum mechanically whereas the gauge fields are described classically (quantum kinetic theory [@Schmidt1998; @Kluger1992] or phase-space methods [@Hebenstreit2011]), or classical-statistically (real-time lattice techniques [@Hebenstreit2013; @Kasper2014]). In this publication we apply the MPS framework to investigate the non-equilibrium dynamics at the full quantum level. We perform real-time simulations for small, intermediate and large quenches. Furthermore, we use MPS computations of ground states, single-particle excitations and Gibbs states to analyse and interpret our results. Finally, we explicitly compare the MPS simulations with those obtained using real-time lattice techniques. Setup ===== Kogut-Susskind Hamiltonian -------------------------- The massive Schwinger model describes $(1+1)$-dimensional QED with one fermion flavor that is described by the Lagrangian density $$\mathcal{L} = \bar{\psi}\left(\gamma^\mu(i\partial_\mu+g A_\mu) - m\right)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}.\label{Lagrangian}$$ Here, $\psi$ is a two-component fermion field, $A_\mu$ denotes the $U(1)$ gauge field and $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the corresponding field strength tensor. In the following, we employ a lattice regularization à la Kogut-Susskind [@Kogut1975]. To this end, the two-component fermions are decomposed into their particle and antiparticle components which reside on a staggered lattice.
These staggered fermions are converted to quantum spins $1/2$ by a Jordan-Wigner transformation with the local Hilbert space basis $\{\ket{s_n}_n: s_n \in \{-1,1\} \}$ of $\sigma_z(n)$ at site $n$. The charge $-g$ ‘electrons’ reside on the odd lattice sites, where spin down ($s=-1$) denotes an occupied site whereas spin up ($s=+1$) corresponds to an unoccupied site. Conversely, the even sites are related to charge $+g$ ‘positrons’ for which spin down/up corresponds to an unoccupied/occupied site, respectively. Moreover, we introduce the compact gauge field $\theta(n) = a g A_1(n)$, which lives on the link that connects neighboring lattice sites, and its conjugate momentum $E(n)$, which corresponds to the electric field. The commutation relation $[\theta(n),E(n')]=ig\delta_{n,n'}$ determines the spectrum of $E(n)$ up to a constant: $E(n)/g = L(n) + \alpha$. Here, $L(n)$ denotes the angular operator with integer spectrum and $\alpha \in \mathbb{R}$ corresponds to the background electric field. Accordingly, the Kogut-Susskind Hamiltonian reads [@Kogut1975; @Banks1976] $$\begin{aligned} \label{eq:Hamiltonian} \mathcal{H}_\alpha &=& \frac{g}{2\sqrt{x}}\Bigg(\sum_{n =1}^{2N} \big(L(n)+\alpha\big)^2 + \frac{\sqrt{x}\,m}{g} \sum_{n =1}^{2N}(-1)^n\sigma_z(n) \nonumber\\ &+& x \sum_{n=1}^{2N-1}\big(\sigma^+ (n)e^{i\theta(n)}\sigma^-(n + 1) + h.c.\big)\Bigg), \end{aligned}$$ where $\sigma^{\pm} = (1/2)(\sigma_x \pm i \sigma_y)$ are the ladder operators. Here, we have introduced the parameter $x$ as the inverse lattice spacing in units of $g$: $x \equiv 1/(g^2a^2)$. The continuum limit then corresponds to $x\rightarrow \infty$. We note that $\mathcal{H}_\alpha$ is only invariant under $\mathcal{T}^2$ (translations over two sites) due to the staggered mass term in the Hamiltonian.
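As an illustration of the structure of Eq. (\[eq:Hamiltonian\]), the following sketch builds the Hamiltonian matrix for the smallest chain ($2N=2$, a single link), with the electric flux truncated to $|L|\leq L_{max}$. This toy construction is ours (the paper itself works in the thermodynamic limit with MPS), and the conventions assumed are the standard Kogut–Susskind ones of [@Kogut1975; @Banks1976]:

```python
import numpy as np
from functools import reduce

def kogut_susskind_2site(m_over_g, x, alpha, Lmax=3):
    # H_alpha / g for 2N = 2 staggered sites and one link, with the link
    # charge truncated to |L| <= Lmax.  Factor ordering: site 1, link, site 2.
    d = 2 * Lmax + 1
    kron = lambda *ops: reduce(np.kron, ops)
    L = np.diag(np.arange(-Lmax, Lmax + 1).astype(float))
    raise_L = np.diag(np.ones(d - 1), -1)      # e^{i theta}: |L> -> |L+1>
    sz = np.diag([1.0, -1.0])
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])    # sigma^+
    sm = sp.T                                  # sigma^-
    I2, IL = np.eye(2), np.eye(d)
    Hel = kron(I2, (L + alpha * IL) @ (L + alpha * IL), I2)   # (L(1)+alpha)^2
    Hm = np.sqrt(x) * m_over_g * (kron(-sz, IL, I2) + kron(I2, IL, sz))
    hop = x * kron(sp, raise_L, sm)            # sigma^+(1) e^{i theta(1)} sigma^-(2)
    return (Hel + Hm + hop + hop.T) / (2.0 * np.sqrt(x))
```

The resulting matrix is Hermitian and commutes with the total charge $\sigma_z(1)+\sigma_z(2)$, as it should.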
The Hamiltonian is invariant under local gauge transformations that are generated by: $$\begin{aligned} G(n) = L(n)-L(n-1)-\frac{\sigma_z(n) + (-1)^n}{2}\,.\label{eq:Gauss} \end{aligned}$$ If we restrict ourselves to physical (i.e., gauge invariant) operators $O$ for which $[O,G(n)]=0$, the Hilbert space decomposes into dynamically disconnected superselection sectors, which are distinguished by the eigenvalues of $G(n)$. The sector with $G(n)=0$ at every site $n$ constitutes the physical sector of the Hilbert space. The condition $G(n)=0$ is referred to as the Gauss law constraint, as it is the discretized version of $\partial_z E = j^0$, where $j^0$ is the charge density of dynamical fermions. MPS for real-time evolution. ---------------------------- Similarly to [@Buyens2013; @Buyens2017], we block site $n$ and link $n$ into one effective site with local Hilbert space spanned by $\{\ket{\kappa_n} = \ket{s_n,p_n}_n: s_n = -1,1; p_n \in \mathbb{Z} \}$. In our approach we approximate the states of the lattice system eq. \[eq:Hamiltonian\] by Matrix Product States (MPS) $\ket{{\Psi_u[{A(1)A(2)}]}}$ that take the form $$\begin{gathered} \label{eq:MPS} \sum_{\bm{\kappa}}v_L^\dagger \left(\prod_{n = 1}^{N} A_{\kappa_{2n-1}}(1) A_{\kappa_{2n}}(2)\right)v_R \ket{\kappa_1,\ldots, \kappa_{2N}}.\end{gathered}$$ Here we have $A_\kappa(n) \in \mathbb{C}^{D \times D}$ and $v_L, v_R \in \mathbb{C}^{D \times 1}$. The MPS ansatz associates a matrix $A_{\kappa_n}(n)= A_{s_n,p_n}(n)$ with each site $n$ and every local basis state $\ket{\kappa_n}_n =\ket{s_n,p_n}_n$. The matrix indices $\alpha$ and $\beta$ are referred to as virtual indices, and $D$ is called the bond dimension. Note that this ansatz is $\mathcal{T}^2$ invariant. As such we can consider the ansatz directly in the thermodynamic limit ($N \rightarrow + \infty$), bypassing any possible finite size artifacts. In this limit the expectation values of all local observables are independent of the boundary vectors $v_L$ and $v_R$.
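Whether a given spin/flux configuration lies in the physical sector $G(n)=0$ can be checked directly; a minimal sketch (the helper is ours, assuming spins $s_n=\pm 1$ and integer link fluxes, with $L(0)$ the incoming flux):

```python
def gauss_ok(L, s):
    # G(n) = L(n) - L(n-1) - (s_n + (-1)^n)/2 must vanish at every site n;
    # sites are n = 1..len(s), L[n] is the integer flux on link n.
    return all(L[n] - L[n - 1] == (s[n - 1] + (-1) ** n) // 2
               for n in range(1, len(s) + 1))
```

For instance, the bare staggered vacuum $s=(+1,-1,+1,-1)$ with all fluxes zero is physical, and so is an electron-positron pair on the first two sites joined by one unit of electric flux; flipping a single spin without adjusting the neighboring flux violates the constraint.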
The Gauss law constraint imposes the following form on the matrices [@Buyens2013]: $$[A_{s,p}(n)]_{(q,\alpha_q),(r,\beta_r)} = {[a_{q,s}(n)]}_{\alpha_q,\beta_r}\,\delta_{q+(s+(-1)^n)/2,r}\,\delta_{r,p}\,, \label{eq:gaugeMPS}$$ where $\alpha_q = 1\ldots D_q$, $\beta_r = 1 \ldots D_r$. The variational freedom of the gauge invariant state $\ket{{\Psi_u[{A(1)A(2)}]}}$ thus lies within the matrices $a_{q,s}(n) \in \mathbb{C}^{D_q \times D_r}$ and the total bond dimension of the MPS equals $D = \sum_{q \in \mathbb{Z}}D_q$. In our simulations, we start from the ground state of the Hamiltonian $\mathcal{H}_{\alpha=0}$ without background field, for which we found a faithful gauge invariant MPS approximation $\ket{{\Psi_u[{A(1)A(2)}]}}$ by using the time-dependent variational principle (TDVP) [@Buyens2013; @Haegeman2011; @Haegeman2013; @Haegeman2014a; @Buyens2017] (see Appendix Sec. \[subsec:TDVPGS\] for a brief review). At time $t = 0$, we perform a quench and apply a uniform electric field, which is simulated by evolving the ground state with respect to the Hamiltonian $\mathcal{H}_{\alpha\neq0}$ with non-vanishing background field: $\ket{\Psi(t)} = e^{-i\mathcal{H}_{\alpha\neq0} t} \ket{{\Psi_u[{A(1)A(2)}]}}$. The evolution is performed using the infinite time-evolving block decimation (iTEBD) [@Vidal2007], which adapts the bond dimension of this MPS dynamically according to the Schmidt spectrum, see Appendix Sec. \[subsec:iTEBDRT\]. As explained there, the errors introduced by this method are well-controlled and argued to be only of order $10^{-3}$ or smaller. We refer also to [@Buyens2013; @Buyens2017] for a discussion on the systematics of the iTEBD. Observables and their discretization ------------------------------------ We focus on the real-time evolution of the following observables: $$\label{eq:EandN} E(t) = \frac{g}{2N}\sum_{n = 1}^{2N}\Braket{L(n)+\alpha}_t,\quad j^1(t) = \frac{g\sqrt{x}}{4N}\sum_{n = 1}^{2N-1}\Braket{i\,\sigma^+(n)e^{i\theta(n)}\sigma^-(n+1) + h.c.}_t,\quad \Sigma(t) = \frac{g\sqrt{x}}{2N}\sum_{n=1}^{2N}(-1)^n\Braket{\frac{\sigma_z(n)+1}{2}}_t,$$ with $\braket{\ldots}_t = \braket{\Psi(t)\vert \ldots \vert \Psi(t)}$.
Here, $E(t)$ is the expectation value of the total electric field, $j^1 = \Braket{\bar{\psi}\gamma^1\psi}_t$ the current, which can also be obtained from the electric field via Ampère’s law ($\dot{E} = -gj^1$), and $\Sigma(t)$ is the discrete version of the chiral condensate $\Braket{\bar\psi\psi}_t$. We will use $N(t)=\Sigma(t)-\Sigma(0)$ as a measure for the fermion particle number, but notice that only in the non-relativistic limit $m\gg g$ do we have a clear notion of electron and positron number. We will also show the real-time evolution of the half-chain von Neumann entropy, $S(t)=-\mathrm{Tr}\,\rho \log \rho$, where $\rho$ is the density matrix of the half-chain subsystem. Finally, as explained in Appendix Sec. \[appsec:continuum\], notice that $E(t)$, $j^1(t)$, $N(t)$ and $\Delta S(t) = S(t) - S(0)$ are UV finite and already close to the continuum limit $x\rightarrow \infty$ for $x = 100$.\ Fig. \[fig:CoherentStateApp\] ($m/g= 0.25$, $x = 100$) compares the iTEBD simulations (full line) with the approximation of Eq. (\[eq:coherenstateAppb\]) (dashed line): panel (a) shows $E(t)/g$ and panel (b) shows $N(t)$, both for $\alpha = 0.1$. Results ======= Weak-field regime {#subsec:weakfieldRegime} ----------------- In [@Buyens2013] we found that the quasi-period of the oscillations of the electric field in the linear response regime ($\alpha \leq 0.01$) could be traced back to the first single-particle excitation of $\mathcal{H}_0$. However, for $\alpha \gtrsim 0.1$ we observed that the quasi-period grows with $\alpha$ and, hence, cannot be explained by the mass of the same single-particle excitation for each $\alpha$.
It turns out that for $\alpha\lesssim 0.25$ the original vacuum $\ket{\Psi_0}$ is well described as a low-density coherent state of single-particle excitations of the quenched Hamiltonian $\mathcal{H}_\alpha$. This leads to the oscillatory behavior of Fig. \[fig:CoherentStateApp\]. Specifically, as we discuss below, this behavior can be explained quantitatively in terms of the matrix elements of $\mathcal{H}_0$ and of the considered observables $E$ and $N$ in the truncated Hilbert space consisting of the ground state and the two single-particle excitations of the quenched Hamiltonian $\mathcal{H}_\alpha$. As explained in more detail in Appendix Sec. \[appsec:cohstateapp\], for a given $\alpha$ we approximate every observable $\mathcal{O}$ as a series in the creation ${{\mathrm{a}}}_m^\dagger(k)$ and annihilation operators ${{\mathrm{a}}}_m(k)$ of the single-particle excitations $\ket{\mathcal{E}_m(k)}$ of $\mathcal{H}_\alpha$ with energy $\mathcal{E}_m$ and momentum $k$, up to first order in ${{\mathrm{a}}}_m$ and ${{\mathrm{a}}}_m^\dagger$ [@Delfino2016; @Delfino2016b]: $$\begin{gathered}
\label{eq:coherenstateAppb}
\mathcal{O} \approx \lambda_{\mathcal{O}} \mathbbm{1} + \int dk \int dk'\left(\sum_{m,n}o_{1,m,n}(k,k') {{\mathrm{a}}}_m^\dagger(k){{\mathrm{a}}}_n(k') \right) \\ + \int dk\;\left( \sum_m o_{2,m}(k) {{\mathrm{a}}}_m(k) + \bar{o}_{2,m}(k){{\mathrm{a}}}_m^\dagger(k)\right) .
\end{gathered}$$

  $\alpha$   $\rho_1\xi$            $\rho_2\xi$
  ---------- ---------------------- -----------------------
  $0.01$     $2.6 \times 10^{-5}$   $9.4 \times 10^{-9}$
  $0.1$      $2.7 \times 10^{-3}$   $8.9\times 10^{-5}$
  $0.2$      $1.2 \times 10^{-2}$   $1.4 \times 10^{-3}$
  $0.3$      $3.5 \times 10^{-2}$   $8.0 \times 10^{-3}$
  $0.4$      $8.9 \times 10^{-2}$   $2.9 \times 10^{-2}$

  : \[table:valuesdprimem\] $m/g = 0.25, x = 100$. Particle densities in units of the correlation length for the two single-particle excitations of $\mathcal{H}_\alpha$.
Here $m=1,2$ labels the two single-particle excitations of $\mathcal{H}_\alpha$ and the integral runs over the momenta $k \in [-\pi,\pi]$. Using the MPS approximations for the ground state and the two single-particle excitations obtained in [@Buyens2015; @Buyens2015b], we can extract the coefficients $o_{1,m,n}$ and $o_{2,m}$. For $\mathcal{O}=\mathcal{H}_{0}$ this leads to the approximation of the $\alpha=0$ ground state $\ket{\Psi_0}$ as a coherent state of $\mathcal{H}_{\alpha}$: ${{\mathrm{a}}}_m(k) \ket{\Psi_0} = d'_m \delta(k)\ket{\Psi_0}$ with $d'_m \in \mathbb{C}$. This corresponds to a state with particle densities $\rho_m=\frac{\sqrt{x}}{2\pi}|d'_m|^2$ of the two zero-momentum single-particle excitations on top of the ground state of $\mathcal{H}_\alpha$. In table \[table:valuesdprimem\] we display the obtained densities for different $\alpha$ in units of the correlation length $\xi=1/\mathcal{E}_1(0)$. One would expect our single-particle approximation to hold as long as $\xi\rho_1,\xi\rho_2\ll 1$, which is in line with our results. The approximation to the evolution of $E(t)$ and $N(t)$ is obtained by extracting the coefficients in Eq. (\[eq:coherenstateAppb\]) for the appropriate operators (Eq. (\[eq:EandN\])), and by considering the proper time evolution ${{\mathrm{a}}}_m(t)={{\mathrm{a}}}_m e^{-i\mathcal{E}_mt}$. As can be observed in Fig. \[fig:CoherentStateApp\], the approximation works very well for $\alpha=0.1$, which already lies well beyond the linear response regime. For $\alpha \gtrsim 0.2$ our approximation still predicts the right quasi-periods, but overestimates the amplitudes of the minima of $E(t)$ and the amplitudes of the maxima of $N(t)$ by approximately $20 \%$. These discrepancies become larger when $\alpha$ increases and, eventually, when $\alpha \gtrsim 0.4$ this approximation also fails to predict the right quasi-periods (see Appendix Sec. \[appsec:cohstateapp\], in particular Figs. \[fig:appLRTEF\] and \[fig:appLRTCC\]).
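The truncated evolution just described, with the coherent amplitudes $d'_m$ evolving as ${{\mathrm{a}}}_m(t)={{\mathrm{a}}}_m e^{-i\mathcal{E}_m t}$ inserted into the first-order expansion, can be sketched numerically as follows; all coefficient values are hypothetical placeholders for the quantities extracted from the MPS approximations:

```python
import numpy as np

def observable_evolution(t, lam, o1, o2, d, energies):
    """Truncated single-particle (coherent-state) approximation of <O>(t)
    for a state with a_m|psi> = d_m|psi> in the zero-momentum modes,
    using a_m(t) = a_m exp(-i E_m t).  lam, o1, o2 are the (hypothetical)
    expansion coefficients of the observable O."""
    t = np.asarray(t, dtype=float)
    phase = np.exp(-1j * np.outer(t, energies))        # e^{-i E_m t}, shape (T, M)
    a = d * phase                                      # <a_m>(t)
    quad = np.einsum('mn,tm,tn->t', o1, a.conj(), a)   # sum_mn o1[m,n] <a_m^+ a_n>
    lin = 2.0 * np.real(a @ o2)                        # 2 Re sum_m o2[m] <a_m>
    return lam + np.real(quad) + lin
```

Since the only time dependence is through the phases $e^{-i\mathcal{E}_m t}$, this truncation can only produce quasi-periodic oscillations, which is why it cannot capture the damping seen at larger $\alpha$.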
Finally, let us mention the holographic approach of [@daSilva2016] and the studies of other models with confinement [@Kormos2016; @Rakovszky2016] that also obtained an oscillatory behavior of local observables, bearing some resemblance to our results.

Strong-field regime
-------------------

Let us now consider larger quenches: $\alpha \geq 0.75$. In Fig. \[fig:semiclassical\] we compare the full quantum simulations (full line) with results obtained from real-time lattice gauge theory simulations [@Hebenstreit2013] (dashed line). The latter should give reliable results as long as the classicality conditions are fulfilled, i.e., anti-commutator expectation values for typical gauge field modes are much larger than the corresponding commutators. This regime is characterized by non-perturbatively large field amplitudes [@Kasper2014].

![\[fig:semiclassical\] Results for $m/g= 0.25, x = 100$. Comparison of full quantum simulations (full line) with real-time lattice simulations (dashed line). (a): electric field $E(t)/g$. (b): current $j^1(t)/g$. (c): particle number $N(t)$. (d): entropy excess $\Delta S(t)$.](EFsemiClassical "fig:"){width="24%"} ![](CurrentsemiClassical "fig:"){width="24%"} ![](PNsemiClassical "fig:"){width="24%"} ![](EntrsemiClassical "fig:"){width="24%"}

Focusing first on the electric field $E(t)$ and the current $j^1(t)$, one observes good agreement between the MPS and real-time lattice simulations. The agreement further improves with growing $\alpha$, which is a nice cross-check for these two different techniques. However, for the particle number $N(t)$ we find sizeable deviations. We attribute this discrepancy to differences in the initial states: the MPS simulation starts from the full ground state of the Hamiltonian $\mathcal{H}_{\alpha=0}$ and hence incorporates interactions of the fermions with the fluctuating gauge field. The real-time lattice simulations, on the other hand, are initialized in the bare Dirac vacuum, which does not account for these interactions. In a semi-classical picture the behavior of $E(t)$, $j^1(t)$ and $N(t)$ can be attributed to the nontrivial interplay between fermion and gauge field dynamics (backreaction) [@Kluger1992; @Hebenstreit2013]: the electric field creates electron-positron pairs out of the vacuum and then accelerates them almost to the speed of light. This process costs energy; due to energy conservation the electric field therefore decreases, so that particle creation terminates and the current saturates. After this initial creation of electron-positron pairs, which essentially occurs during the first oscillation of the electric field, we enter a regime of plasma oscillations, for which the onset at $tg \gtrsim 3$ can be observed in panels (a) and (b) of Fig. \[fig:semiclassical\].
Also the behavior of the entanglement entropy $\Delta S(t)$ fits nicely with the semi-classical picture [@Calabrese2005]: after the local production of entangled electron-positron pairs, the pairs separate, entangling the system over ever larger distances. From panels (c) and (d) of Fig. \[fig:semiclassical\] one can indeed observe that the entropy starts increasing linearly after the initial period of pair production. Even for large quenches we expect the classicality conditions that underlie the real-time lattice technique to be briefly violated during the times at which $E(t)$ crosses zero. We can indeed observe in Fig. \[fig:semiclassical\](a) that the full quantum MPS results start deviating from the real-time lattice results after the first transit through zero. In particular, the MPS simulations predict a stronger damping. We interpret this damping as the onset of equilibration. It is generally accepted that a state which is brought out of equilibrium relaxes and equilibrates locally at late times [@Linden2009]. In fact, it is believed that, under some generic conditions, the state thermalizes to a Gibbs state of the quenched Hamiltonian at a certain temperature [@Eisert2015; @Rigol2008; @Deutsch1991; @Srednicki1994; @Tasaki1998; @Rigol2012; @Rigol2009; @Steinigeweg2014; @Beugeling2014; @Polkovnikov2011; @Riera2012]. There are, however, some exceptions: the state as a whole may fail to be thermal even if some local quantities already indicate thermalization [@Berges2004; @Banuls2011; @Mueller2013]; an integrable system converges towards a so-called generalized Gibbs ensemble [@Caux2011; @Caux2012; @Caux2013; @Gogolin2011; @Cramer2008; @Cassidy2011; @Altland2012; @Vidmar2016]; and there are the phenomena of pre-thermalization [@Marcuzzi2013; @Essler2014; @Geiger2014; @Abanin2015] and many-body localization [@Lagendijk2009; @Gogolin2011; @Nandkishore2015; @Bauer2013; @Serbyn2015].

![\[fig:thermalization\] Results for $m/g= 0.25, x = 100$. Comparison of real-time simulations (full line) with the predicted asymptotic value in thermal equilibrium (dashed line). (a): $E(t)/g$. (b): $N(t)$.](EFcheckTherm "fig:"){width="48%"} ![](PNcheckTherm "fig:"){width="48%"}

Under the assumption that the state thermalizes, we can determine its inverse temperature $\beta_0$ from energy conservation and by using our results from finite temperature simulations [@Buyens2016] (see Appendix Sec. \[subsec:determineGibbsState\]). In Fig. \[fig:thermalization\] we compare $E(t)$ and $N(t)$ (full line) with their predicted thermal values $E_{\beta_0}$ and $N_{\beta_0}$ (dashed line). Note that our finite temperature simulations only enable us to determine $\beta_0$ numerically up to $\Delta\beta = 0.05$; we therefore show the intervals $E_{\beta_0\pm 0.05}$ and $N_{\beta_0 \pm 0.05}$. Although the electric field seems to oscillate around $E_{\beta_0}$, the amplitudes of the oscillations are still too large for a definite conclusion. On the other hand, one might be more tempted to say that $N(t)$ is close to its thermal value for $\alpha = 1.25$ and $\alpha = 1.5$, although one should be cautious here as well. To reach a definite conclusion, we would have to push the MPS simulations further in time. Unfortunately, the linear growth of entanglement, see Fig. \[fig:semiclassical\](d), requires the variational freedom of the MPS representation to grow exponentially in time (see Appendix Sec. \[subsec:iTEBDRT\], in particular Fig. \[fig:evolutionDmax\]). This precludes computations at large $t g$ and hence constrains the maximum time up to which we can reliably track the state.
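The matching of the inverse temperature $\beta_0$ by energy conservation amounts to a one-dimensional root search. A minimal sketch, with a monotonically decreasing and purely illustrative energy curve standing in for the finite-temperature MPS data of [@Buyens2016]:

```python
def match_temperature(E_target, thermal_energy, beta_lo=0.01, beta_hi=20.0, tol=0.05):
    """Bisect for the inverse temperature beta_0 at which the thermal
    energy equals the (conserved) energy injected by the quench.
    `thermal_energy(beta)` is a hypothetical stand-in for the
    finite-temperature data, assumed monotonically decreasing in beta;
    tol mirrors the resolution Delta beta = 0.05 quoted in the text."""
    while beta_hi - beta_lo > tol:
        mid = 0.5 * (beta_lo + beta_hi)
        if thermal_energy(mid) > E_target:
            beta_lo = mid   # too much energy: temperature too high, raise beta
        else:
            beta_hi = mid
    return 0.5 * (beta_lo + beta_hi)
```

The finite resolution of the bisection interval is the reason the comparison in Fig. \[fig:thermalization\] is shown as the band $\beta_0 \pm 0.05$ rather than a single curve.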
Conclusion
==========

We demonstrated the potential of MPS for real-time simulations of gauge field theories near the continuum limit, based on the paradigmatic example of an electric quench in the massive Schwinger model. For small quenches the real-time dynamics can be explained in terms of the single-particle excitations of the quenched Hamiltonian. For large quenches $\alpha=\mathcal{O}(1)$, which are related to the phenomenon of Schwinger pair production, we compared the MPS simulations with results from real-time lattice gauge theory simulations and found good agreement between the two methods. In this regime, we further investigated whether the state thermalizes at late times by using finite temperature simulations. While we found evidence that supports the onset of thermalization, the growth of entanglement has so far prevented us from reaching a decisive conclusion. The MPS method provides a unique means to benchmark quantum simulators of the massive Schwinger model or related models using ultracold ions or atoms in optical lattices [@Hauke2013; @Wiese2013; @Martinez2016; @Kasper2015; @Kasper2016]. On the other hand, it is a major goal to extend this type of real-time simulation technique to more than one spatial dimension using projected entangled pair states (PEPS) [@Verstraete2004]. The major progress on PEPS algorithms in the last decade [@Murg2007; @Corboz2009; @Jordan2008; @Corboz2010; @Kraus2010; @Corboz2014; @Vanderstraeten2015b; @Phien2015; @Corboz2016; @Vanderstraeten2016b], in combination with recent promising PEPS and TNS results for higher-dimensional gauge theories [@Tagliacozzo2014; @Haegeman2015; @Zohar2015; @Milsted2016; @Zohar2016], makes us confident that this will be realized in the foreseeable future.

Acknowledgments {#acknowledgments .unnumbered}
===============

We acknowledge interesting discussions with Mari-Carmen Bañuls, David Dudal and Esperanza Lopez.
This work is supported by an Odysseus grant from the FWO, a PhD grant from the FWO (B.B.), a post-doc grant from the FWO (J.H.), the FWF grants FoQuS and Vicom, the ERC grant QUERG, the EU grant SIQS and a grant from the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC under grant agreement 339220 (F.H.).

MPS for the Schwinger model {#app:MPSSchwingerModel}
===========================

In this section we explain the Matrix Product State (MPS) methods used for the Schwinger model. More specifically, we discuss:

1. How the time-dependent variational principle (TDVP) is used to find the optimal translation invariant MPS approximation for the ground state in the thermodynamic limit (see Sec. \[subsec:TDVPGS\]).

2. How we approximate the single-particle excitations using MPS (see Sec. \[subsec:RRforExc\]).

3. How we perform real-time evolution within the manifold of MPS using the infinite time-evolving block decimation algorithm (iTEBD) (see Sec. \[subsec:iTEBDRT\]).

4. How we use the MPS approximations for the ground state and the single-particle excitations to approximate the real-time evolution in the weak-field regime (see Sec. \[appsec:cohstateapp\]).

5. How we determine the temperature of the equilibrium state, assuming that the state brought out of equilibrium by the quench thermalizes (see Sec. \[subsec:determineGibbsState\]).

More details and results can also be found in our earlier papers [@Buyens2013; @Buyens2014; @Buyens2015; @Buyens2015b; @Buyens2016; @Buyens2017].

Ground-state ansatz {#subsec:TDVPGS}
-------------------

Consider the Kogut-Susskind Hamiltonian Eq.
(\[eq:Hamiltonian\]) of the Schwinger model: $$\begin{gathered}
\label{eq:Hamiltonianapp}
\mathcal{H}_{\alpha}= \frac{g}{2\sqrt{x}}\Biggl(\sum_{n=1}^{2N}[L(n) + \alpha]^2 + \frac{\sqrt{x}}{g} m \sum_{n =1}^{2N}(-1)^n\sigma_z(n) + \\ x \sum_{n=1}^{2N-1}(\sigma^+ (n)e^{i\theta(n)}\sigma^-(n + 1) + h.c.)\Biggr).\end{gathered}$$ We block site $n$ and link $n$ into one effective site with local Hilbert space spanned by $\{\ket{s_n,p_n}_n: s_n = -1,1; p_n \in \mathbb{Z} \}$. Writing $ \kappa_n = (s_n,p_n)$ and $$\bm{\kappa} = \bigl((s_1,p_1),(s_2,p_2),\ldots,(s_{2N},p_{2N})\bigl) = (\kappa_1,\ldots,\kappa_{2N}),$$ a general state of this system of $2N$ sites takes the form $$\ket{\Psi} = \sum_{\bm{\kappa}} C_{\kappa_1,\ldots,\kappa_{2N}} \ket{\bm{\kappa}}$$ with basis coefficients $C_{\kappa_1,\ldots,\kappa_{2N}}$. An MPS $\ket{{\Psi_u[{A(1)A(2)}]}}$ now assumes a special form for these coefficients: $$C_{\kappa_1,\ldots,\kappa_{2N}} = v_L^\dagger \left(\prod_{n = 1}^{N} A_{\kappa_{2n-1}}(1) A_{\kappa_{2n}}(2)\right)v_R,$$ i.e., $$\label{eq:MPSapp}
\ket{{\Psi_u[{A(1)A(2)}]}} = \sum_{\bm{\kappa}} v_L^\dagger\left(\prod_{n = 1}^{N} A_{\kappa_{2n-1}}(1) A_{\kappa_{2n}}(2)\right)v_R \ket{\bm{\kappa}}.$$ Here we have $A_\kappa(n) \in \mathbb{C}^{D(n) \times D(n+1)}$ and $v_L, v_R \in \mathbb{C}^{D(1) \times 1}$. The MPS ansatz associates with each site $n$ and every local basis state $\ket{\kappa_n}_n =\ket{s_n,p_n}_n$ a matrix $A_{\kappa_n}(n)= A_{s_n,p_n}(n)$. The indices $\alpha$ and $\beta$ are referred to as virtual indices, and $D(n)$ are called the bond dimensions. Note that here $A_\kappa(n)$ only depends on the parity of $n$, in accordance with the $\mathcal{T}^2$ symmetry of the Hamiltonian. As such we can consider the ansatz directly in the thermodynamic limit ($N \rightarrow + \infty$), bypassing any possible finite-size artifacts. In this limit the expectation values of all local observables are independent of the boundary vectors $v_L$ and $v_R$. As explained in [@Buyens2013], to parameterize gauge invariant MPS, i.e.
states that obey $G(n)\ket{\Psi(A)}=0$ for every $n$, with $$G(n) = L(n) - L(n-1) + \frac{\sigma_z(n) + (-1)^n}{2},$$ it is convenient to give the virtual indices a multiple index structure $\alpha\rightarrow (q,\alpha_q); \beta \rightarrow (r,\beta_r)$, where $q$ resp. $r$ labels the eigenvalues of $L(n-1)$ resp. $L(n)$. One can verify that the gauge constraint then imposes the following form on the matrices: $$\label{eq:gaugeMPSapp}
[A_{s,p}(n)]_{(q,\alpha_q),(r,\beta_r)} = [a_{q,s}(n)]_{\alpha_q,\beta_r}\,\delta_{q+(s+(-1)^n)/2,r}\,\delta_{r,p},$$ where $\alpha_q = 1\ldots D_q(n)$, $\beta_r = 1 \ldots D_r(n+1)$. The formal total bond dimensions of this MPS are $D(n) = \sum_q D_q(n)$, but notice that, as (\[eq:gaugeMPSapp\]) takes a very specific form, the true variational freedom lies within the matrices $a_{q,s}(n) \in \mathbb{C}^{D_q(n) \times D_r(n+1)}$. To find the optimal ground state of $\mathcal{H}_\alpha$ within the class of gauge invariant MPS Eq. (\[eq:MPSapp\]) we apply the time-dependent variational principle (TDVP) [@Haegeman2011; @Haegeman2013; @Haegeman2014a] to the Schrödinger equation $$\partial_\tau \ket{{\Psi_u[{A(1)A(2)}]}} = -\mathcal{H}_{\alpha} \ket{{\Psi_u[{A(1)A(2)}]}}$$ in imaginary time $d\tau = -idt$. When $\tau \rightarrow + \infty$ we indeed find the optimal approximation $ \ket{{\Psi_u[{A(1)A(2)}]}}$ for the ground state of $\mathcal{H}_{\alpha}$. The Schmidt decomposition of $\ket{{\Psi_u[{A(1)A(2)}]}}$ with respect to the bipartition of the lattice into the two regions $\mathcal{A}_1(n) = \mathbb{Z}[1, \ldots, n]$ and $\mathcal{A}_2(n) = \mathbb{Z}[n+1,\ldots, 2N]$ reads $$\label{eq:MPSschmidtGaugeapp}
\ket{{\Psi_u[{A(1)A(2)}]}} = \sum_{q} \sum_{\alpha_q=1}^{D_q} \sqrt{\sigma_{q,\alpha_q}}\,\ket{\psi^{q,\alpha_q}_{\mathcal{A}_1(n)}} \otimes \ket{\psi^{q,\alpha_q}_{\mathcal{A}_2(n)}},$$ where the Schmidt values $\sigma_{q,\alpha_q}$ are labeled by the charge sector $q$. It follows that to obtain a faithful approximation for the ground state one has to choose $D_q$ such that the discarded Schmidt values in each charge sector are *sufficiently* small. In particular we could take $D_q=0$ for $|q|>3$, which is explained by the first term in the Hamiltonian Eq. (\[eq:Hamiltonianapp\]).
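The block structure imposed by the Gauss law can be made concrete with a small sketch that assembles the full tensors $A_{s,p}(n)$ from the variational blocks $a_{q,s}(n)$; the charge-sector layout, bond dimensions and the cut $|q| \leq q_{max}$ are hypothetical illustrations:

```python
import numpy as np

def build_gauge_tensor(a_blocks, n, qmax=3):
    """Assemble the full MPS tensors A_{s,p}(n) from the variational
    blocks a_blocks[(q, s)] (a D_q x D_r matrix with r = q + (s+(-1)^n)/2).
    Charge sectors are kept for |q| <= qmax; everything else is truncated."""
    qs = range(-qmax, qmax + 1)
    Dq = {q: a_blocks[(q, 1)].shape[0] for q in qs}
    off, tot = {}, 0                       # offsets of each sector in the bond
    for q in qs:
        off[q] = tot
        tot += Dq[q]
    A = {}
    for s in (-1, 1):
        for q in qs:
            r = q + (s + (-1) ** n) // 2   # Gauss law: outgoing flux fixed
            if abs(r) > qmax:
                continue                   # truncated charge sector
            p = r                          # delta_{r,p}: flux label equals r
            blk = a_blocks[(q, s)]
            M = A.setdefault((s, p), np.zeros((tot, tot), dtype=complex))
            M[off[q]:off[q] + blk.shape[0], off[r]:off[r] + blk.shape[1]] = blk
    return A
```

Each $(s,p)$ tensor is block-sparse: only the single block connecting sector $q$ to sector $r=q+(s+(-1)^n)/2$ is non-zero, which is exactly why the variational freedom reduces to the matrices $a_{q,s}(n)$.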
A proper justification of truncating the charge sectors is provided in [@Buyens2017]. We refer to [@Buyens2015; @Buyens2017] for the details on the TDVP.

MPS approximation for single-particle excitations {#subsec:RRforExc}
-------------------------------------------------

Once we have an MPS approximation $\ket{{\Psi_u[{A(1)A(2)}]}}$ for the ground state of $\mathcal{H}_{\alpha}$, see Sec. \[subsec:TDVPGS\], we use the method of [@Haegeman2012; @Haegeman2013] to approximate the single-particle excitations. The ansatz for the single-particle excitations with momentum $k$ that we will use is: $$\begin{gathered}
\label{eq:excAnsatzt2}
\ket{{\Phi_{k}[B,A(1)A(2)]}} = \sum_{m = 1}^{N} e^{2ikm/\sqrt{x}} \sum_{\{\kappa_n\}} \\ v_L^\dagger \left(\prod_{n = 1}^{m-1} A_{\kappa_{2n-1}}(1)A_{\kappa_{2n}}(2)\right) B_{\kappa_{2m-1},\kappa_{2m}} \\ \left(\prod_{n = m+1}^N A_{\kappa_{2n-1}}(1)A_{\kappa_{2n}}(2)\right)v_R \ket{\bm{\kappa}},\end{gathered}$$ where $A(1)$ and $A(2)$ correspond to the ground state $\ket{{\Psi_u[{A(1)A(2)}]}}$ of $\mathcal{H}_{\alpha}$ and gauge invariance is imposed by $$\begin{gathered}
[B_{s_1,p_1,s_2,p_2}]_{(q,\alpha_q);(r,\beta_r)} \\ = [b_{q,s_1,s_2}]_{\alpha_q,\beta_r}\delta_{p_1,q + (s_1 -1)/2}\delta_{p_2,q + (s_1+s_2)/2}\delta_{r,p_2},\end{gathered}$$ with $\kappa_n = (s_{n},p_{n})$ and $b_{q,s_1,s_2} \in \mathbb{C}^{D_q \times D_r}$. The algorithm to find the optimal approximation $\ket{{\Phi_{k}[B,A(1)A(2)]}}$ for the excited states is discussed in [@Haegeman2013; @Buyens2013; @Buyens2017]: one has to find $b_{q,s_1,s_2}$ such that $$\frac{\braket{{\Phi_{k}[\overline{B},\overline{A(1)A(2)}]} \vert \mathcal{H}_{\alpha} \vert {\Phi_{k}[B,A(1)A(2)]}}}{\braket{{\Phi_{k}[\overline{B},\overline{A(1)A(2)}]}\vert {{\Phi_{k}[B,A(1)A(2)]}}}}$$ is minimized with respect to $\overline{b}_{q,s_1,s_2}$.
This boils down to a generalized eigenvalue problem for $b_{q,s_1,s_2}$, whose smallest eigenvalues correspond to the energies of the single-particle excitations. Only the excitations that are stable against variation of the bond dimensions $D_q$ are physical. We refer to [@Buyens2013; @Buyens2015b; @Buyens2017] for the details.

In [@Buyens2015; @Buyens2017] we found, for $m/g = 0.25$ and $\alpha \lesssim 0.47$, two single-particle excitations with masses $\mathcal{E}_1$ and $\mathcal{E}_2$. In the continuum limit the energies at non-zero momentum are determined by the Lorentz dispersion relation: $\mathcal{E}_m(k) = \sqrt{k^2 + \mathcal{E}_m^2}$. The corresponding MPS approximations at non-zero lattice spacing $a = 1/g\sqrt{x}$ are $\ket{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}$ with $$\begin{gathered}
[B^{(m,k)}_{s_1,p_1,s_2,p_2}]_{(q,\alpha_q);(r,\beta_r)} \\ = [b_{q,s_1,s_2}^{(m,k)}]_{\alpha_q,\beta_r}\delta_{p_1,q + (s_1 -1)/2}\delta_{p_2,q + (s_1+s_2)/2}\delta_{r,p_2},
\end{gathered}$$ and they are normalized such that [@Haegeman2013] $$\begin{gathered}
\label{eq:normSP}
\braket{{\Phi_{k'}[\overline{B^{(n,k')}},\overline{A(1)A(2)}]}\vert {{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}} \\ = 2\pi \delta_{n,m} \delta(k - k'),
\end{gathered}$$ with $\braket{{\Psi_u[{\overline{A(1)A(2)}}]}\vert {{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}} = 0$. The Dirac delta function originates from the infinite lattice length and has to be read as $$\label{eq:diracreg0}
\delta(k-k') = \lim_{N \rightarrow + \infty} \frac{2N}{2\pi}\,\delta_{k,k'},$$ where $2N$ ($N \rightarrow + \infty$) is the number of sites on the lattice.
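The variational step above reduces to a generalized eigenvalue problem. A minimal numerical sketch, with hypothetical effective matrices standing in for the MPS contractions, solves $H_{eff}\, b = E\, N_{eff}\, b$ via a Cholesky reduction:

```python
import numpy as np

def excitation_energies(Heff, Neff, n_levels=2):
    """Solve the generalized eigenvalue problem  Heff b = E Neff b
    by reducing it to an ordinary Hermitian eigenproblem through the
    Cholesky factorization Neff = L L^dagger.  Heff and Neff are
    hypothetical stand-ins for the effective Hamiltonian and norm
    matrices; Neff must be positive definite on the kept subspace."""
    L = np.linalg.cholesky(Neff)
    Linv = np.linalg.inv(L)
    A = Linv @ Heff @ Linv.conj().T        # same spectrum as the original problem
    return np.linalg.eigvalsh(A)[:n_levels]  # eigvalsh returns ascending order
```

In practice the physical levels are the ones whose energies stay stable when the bond dimensions $D_q$ are varied, as noted above.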
For a local observable $\mathcal{O} = \sum_{n = 1}^{2N-1} \mathcal{T}^{n-1} o \mathcal{T}^{-n+1}$, where $o$ is a Hermitian operator acting non-trivially only on sites $1$ and $2$, we first subtract the ground-state contribution such that $$\braket{{\Psi_u[{\overline{A(1)A(2)}}]} \vert \mathcal{O} \vert {\Psi_u[{A(1)A(2)}]}} = 0 .$$ With this renormalization we have that $$\begin{gathered}
\label{eq:overlapSP}
\bra{{\Phi_{k}[\overline{B},\overline{A(1)A(2)}]}} \mathcal{O} \ket{{\Phi_{k'}[B,A(1)A(2)]}} \\ = 2\pi\delta(k-k') O_{eff}^1[\overline{B},B]\end{gathered}$$ $$\begin{gathered}
\braket{{\Psi_u[{\overline{A(1)A(2)}}]} \vert \mathcal{O}\vert {\Phi_{k}[B,A(1)A(2)]}} \\ = 2\pi \delta(k) O_{eff}^2[\overline{A(1)A(2)},B]\end{gathered}$$ where $O_{eff}^1[\overline{B},B]$ and $O_{eff}^2[\overline{A(1)A(2)},B]$ are finite quantities that can be computed efficiently, see [@Haegeman2013]. The Dirac delta distributions have to be regularized according to Eq. (\[eq:diracreg0\]).

iTEBD for real-time evolution {#subsec:iTEBDRT}
-----------------------------

To evolve a state approximated by an MPS, Eq. (\[eq:MPSapp\]), at $t = 0$, i.e. to find $$\ket{\Psi(t)} = e^{-i\mathcal{H}_\alpha t} \ket{{\Psi_u[{A(1)A(2)}]}},$$ we use the infinite time-evolving block decimation (iTEBD) [@Vidal2007]. At the core of this method lies the Trotter decomposition [@Hatano2005], which decomposes $e^{- i dt \mathcal{H}}$ into a product of local operators, the so-called Trotter gates. Specifically, we perform a fourth-order Trotter decomposition of $e^{-i \mathcal{H}_\alpha dt}$ for small steps $dt$ and afterwards project $\ket{\Psi(t+dt)} = e^{-i \mathcal{H}_\alpha dt}\ket{{\Psi_u[{A(1)A(2)}]}}$ onto an MPS $\ket{{\Psi_u[{\tilde{A}(1)\tilde{A}(2)}]}}$ with smaller bond dimensions $D_q$. As for the ground state, $D_q$ is chosen by discarding the Schmidt values smaller than a preset tolerance $\epsilon^2$ in Eq. (\[eq:MPSschmidtGaugeapp\]).
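A single truncation step of the kind just described can be sketched as follows; `theta` stands for a (hypothetical) two-site wavefunction obtained after applying a Trotter gate, and the tolerance convention (a cut of $\epsilon^2$ on the Schmidt coefficients $s^2$) is the one used in the text:

```python
import numpy as np

def truncate_schmidt(theta, eps, dmax=2000):
    """Split a two-site wavefunction by SVD and discard the Schmidt
    coefficients s^2 below the tolerance eps^2, with a hard cap dmax on
    the bond dimension (the cap 2000 mirrors the one used in the text)."""
    U, s, Vh = np.linalg.svd(theta, full_matrices=False)
    s = s / np.linalg.norm(s)                  # normalize the state
    keep = min(int(np.sum(s ** 2 > eps ** 2)), dmax)
    keep = max(keep, 1)                        # always keep at least one value
    s_t = s[:keep] / np.linalg.norm(s[:keep])  # renormalize after truncation
    return U[:, :keep], s_t, Vh[:keep, :]
```

Repeating this step gate after gate is what adapts the bond dimensions dynamically during the evolution.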
In this way the virtual dimensions are adapted dynamically. We refer to [@Buyens2013] for the details of the implementation of the iTEBD.

![\[fig:evolutionDmax\] $m/g= 0.25$. Evolution of the maximum of the bond dimension over the charge sectors for fixed values of $\epsilon$: $\epsilon = 5 \times 10^{-5}$ (full line) and $\epsilon = 1 \times 10^{-4}$ (dashed line). (a): $\alpha = 0.125$. (b): $\alpha = 1.5$.](maxDdiffepsa125e3 "fig:"){width="48%"} ![](maxDdiffepsa15e2 "fig:"){width="48%"}

Taking a non-zero value for $\epsilon$ truncates the entanglement spectrum, which in turn determines the required bond dimensions $D_q$ for every charge sector. For instance, in Fig. \[fig:evolutionDmax\] we show how the maximum of the bond dimension over the charge sectors, $D_{max} = \max_q D_q$, varies with time for a given value of $\epsilon$. It is this growth of the required bond dimensions, which can be traced back to the growth of entanglement, that makes the computations more costly at later times. Note that to save computational resources we imposed $D_{max} \leq 2000$. As explained in [@Buyens2013], the simulation becomes exact as $\epsilon \rightarrow 0$; the convergence in $\epsilon$ can therefore be used to control the truncation error for a given observable. To get a rough idea of the error made by taking a non-zero $\epsilon$, we compare the results of the simulations for the two smallest values of $\epsilon$. We illustrate this in Fig.
\[fig:Qdiffeps\] for the electric field expectation value and the particle number $N(t)$, where we compare the simulations for $\epsilon = 5 \times 10^{-5}$ (full line) with those for $\epsilon = 1 \times 10^{-4}$ (dashed line). As can be observed from the insets, where we plot the differences in magnitude of the electric field, $$\Delta E(t) = \vert E_{\epsilon = 5 \times 10^{-5}}(t) - E_{\epsilon = 1 \times 10^{-4}}(t)\vert,$$ and of the particle number, $$\Delta N(t) = \vert N_{\epsilon = 5 \times 10^{-5}}(t) - N_{\epsilon = 1 \times 10^{-4}}(t) \vert,$$ the results agree with each other up to at most $8 \times 10^{-3}$. For other (smaller) values of $\alpha$ we found this error to be even smaller. Therefore we can trust that our results are reliable to within at least $1\%$.

![\[fig:Qdiffeps\] $m/g= 0.25$. Evolution of the electric field and particle number for different values of $\epsilon$: $\epsilon = 5 \times 10^{-5}$ (full line) and $\epsilon = 1 \times 10^{-4}$ (dashed line). Inset: difference in magnitude of the considered quantity between the simulations with $\epsilon = 5 \times 10^{-5}$ and $\epsilon = 1 \times 10^{-4}$. (a): $N(t)$ ($\alpha = 1.25$). (b): $N(t)$ ($\alpha = 1.5$). (c): $E(t)$ ($\alpha = 1.25$). (d): $E(t)$ ($\alpha = 1.5$).](Ndiffepsa125e3 "fig:"){width="24%"} ![](Ndiffepsa15e2 "fig:"){width="24%"} ![](EFdiffepsa125e3 "fig:"){width="24%"} ![](EFdiffepsa15e2 "fig:"){width="24%"}

Weak-field regime approximation {#appsec:cohstateapp}
-------------------------------

If $\mathcal{H}_{\alpha_0}$ is the Hamiltonian in an electric background field $\alpha_0$, $$\begin{gathered}
\mathcal{H}_{\alpha_0}= \frac{g}{2\sqrt{x}}\Biggl(\sum_{n=1}^{2N}[L(n) + \alpha_0]^2 \\ + \frac{\sqrt{x}}{g} m \sum_{n =1}^{2N}(-1)^n\bigl(\sigma_z(n) + (-1)^n\bigr) \\ + x \sum_{n=1}^{2N-1}(\sigma^+ (n)e^{i\theta(n)}\sigma^-(n + 1) + h.c.)\Biggr),\end{gathered}$$ and $\mathcal{H}_{\alpha}$ is the Hamiltonian in an electric background field $\alpha$, then we can write (up to an irrelevant constant) $$\mathcal{H}_{\alpha} = \mathcal{H}_{\alpha_0} + \epsilon \mathcal{V},$$ where $$\mathcal{V} = \frac{g}{\sqrt{x}}\sum_{n=1}^{2N}L(n)$$ and $\epsilon = \alpha - \alpha_0$.
Consider now the annihilation and creation operators ${{\mathrm{a}}}_m(k)$ and ${{\mathrm{a}}}_m^\dagger(k)$ of the single-particle excitations with energy $\mathcal{E}_m(k)$ and momentum $k$ of $\mathcal{H}_\alpha$. In principle they could obey either bosonic or fermionic canonical (anti)commutation relations, but, as we will see below, we need to impose boson statistics: $$\begin{gathered}
\label{eq:commRelA}
[{{\mathrm{a}}}_n(k'), {{\mathrm{a}}}_m^\dagger(k)] = \delta(k'-k)\delta_{m,n}, \\
[{{\mathrm{a}}}_m(k'),{{\mathrm{a}}}_n(k)] = 0, \quad [{{\mathrm{a}}}_n^\dagger(k'),{{\mathrm{a}}}_m^\dagger (k)] = 0.
\end{gathered}$$ Using the TDVP, see Sec. \[subsec:TDVPGS\], we have an MPS approximation $\ket{{\Psi_u[{A(1)A(2)}]}}$ for the ground state of $\mathcal{H}_\alpha$, and by using the method discussed in Sec. \[subsec:RRforExc\] we have an MPS approximation $\ket{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}$ for the $m$-th single-particle excitation with momentum $k$ and energy $\mathcal{E}_{m}(k)$. They are normalized as $$\label{eq:Norm}
\braket{{\Psi_u[{\overline{A(1)A(2)}}]}\vert{{\Psi_u[{A(1)A(2)}]}}} = 1, \qquad \braket{{\Psi_u[{\overline{A(1)A(2)}}]}\vert{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}} = 0,$$ and $$\begin{gathered}
\braket{{\Phi_{k'}[\overline{B^{(n,k')}},\overline{A(1)A(2)}]}\vert {\Phi_{k}[B^{(m,k)},A(1)A(2)]}} \\ = 2\pi \delta(k - k')\delta_{n,m}.
\end{gathered}$$ The Dirac delta functions originate from the infinite lattice length and have to be read as, see Eq. (\[eq:diracreg0\]), $$\label{eq:diracreg}
\delta(k-k') = \lim_{N \rightarrow + \infty} \frac{2N}{2\pi}\,\delta_{k,k'},$$ where $2N$ ($N \rightarrow + \infty$) is the number of sites on the lattice. Within this approximation we have that $$\mathcal{H}_{\alpha}\ket{{\Psi_u[{A(1)A(2)}]}} = 0,$$ $$\begin{gathered}
\mathcal{H}_{\alpha} \ket{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}} \\ = \mathcal{E}_m(k) \ket{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}},\end{gathered}$$ and $$\label{eq:creationoperator}
{{\mathrm{a}}}_m^\dagger(k)\ket{{\Psi_u[{A(1)A(2)}]}} = \frac{1}{\sqrt{2\pi}}\,\ket{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}, \qquad {{\mathrm{a}}}_m(k)\ket{{\Psi_u[{A(1)A(2)}]}} = 0.$$ We now want to express the ground state $\ket{\Psi(0)}$ of $\mathcal{H}_{\alpha_0}$ in terms of the ground state $\ket{{\Psi_u[{A(1)A(2)}]}}$ and the single-particle excitations $\ket{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}$ of $\mathcal{H}_{\alpha}$.
We will expand $\mathcal{H}_{\alpha_0}$ in a series of powers of $({{\mathrm{a}}}_m(k),{{\mathrm{a}}}_m^\dagger(k))$: $$\begin{gathered}
\label{eq:halpha0app}
\mathcal{H}_{\alpha_0} \approx \lambda_0 \mathbbm{1} + \int dk \left( \sum_m c_m(k){{\mathrm{a}}}_m(k) + \sum_m\bar{c}_m(k){{\mathrm{a}}}_m^\dagger (k) \right) \\ + \int dk \int dk' \left(\; \sum_{m,n}\mu_{m,n}(k,k') {{\mathrm{a}}}_m^\dagger(k) {{\mathrm{a}}}_n(k')\right) + \ldots
\end{gathered}$$ where $\lambda_0, c_m(k), \mu_{m,n}(k,k') \in \mathbb{C}$. The integrals over $k$ and $k'$ run over all momenta $k,k' \in [-\pi,\pi[$. Note that we only displayed the operators that are non-trivial within the single-particle subspace. Indeed, in higher-order terms there appear products of the form $a_{m_1}(k_1)\ldots a_{m_n}(k_n)$ or of the form $a_{m_1}^\dagger (k_1)\ldots a_{m_n}^\dagger (k_n)$ for $n \geq 2$, and such operators become trivial when projected onto the single-particle subspace. As we only have MPS approximations for the ground state and the single-particle excitations, we need to restrict ourselves to the terms displayed in Eq. (\[eq:halpha0app\]). Physically this means that we ignore the contributions of multi-particle eigenstates of $\mathcal{H}_\alpha$. Because $\mathcal{H}_{\alpha_0}$ is Hermitian, the kernel $\mu_{m,n}$ must be Hermitian as well: $$\mu_{m,n}(k,k') = \overline{\mu_{n,m}(k',k)}.$$ Using the ground state $\ket{{\Psi_u[{A(1)A(2)}]}}$ and the single-particle excitations $\ket{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}$ of $\mathcal{H}_{\alpha}$, it follows from Eq. (\[eq:creationoperator\]) that $$\lambda_0 = \bra{{\Psi_u[{\overline{A(1)A(2)}}]}} \mathcal{H}_{\alpha_0} \ket{{\Psi_u[{A(1)A(2)}]}}.$$ As the energy is only determined up to a constant, we can renormalize $\mathcal{H}_{\alpha_0}$ such that $$\lambda_0 = \bra{{\Psi_u[{\overline{A(1)A(2)}}]}} \mathcal{H}_{\alpha_0} \ket{{\Psi_u[{A(1)A(2)}]}} = 0.$$ With this convention, it follows from Eq.
(\[eq:creationoperator\]) that we can compute the coefficients $\mu_{m,n}$ and $c_m$: \[eq3\] $$\begin{gathered} \mu_{m,n}(k,k') = \\ \frac{1}{2\pi}\braket{{\Phi_{k}[\overline{B^{(m,k)}},\overline{A(1)A(2)}]} \vert \mathcal{H}_{\alpha_0} \vert {\Phi_{k'}[B^{(n,k')},A(1)A(2)]}}\end{gathered}$$ $$\begin{gathered} c_m(k) = \\ \frac{1}{\sqrt{2\pi}} \braket{{\Psi_u[{\overline{A(1)A(2)}}]} \vert \mathcal{H}_{\alpha_0} \vert {\Phi_{k}[B^{(m,k)},A(1)A(2)]}}\end{gathered}$$ and as the states are normalized according to Eq. (\[eq:normSP\]), it follows from Eq. (\[eq:overlapSP\]) that: \[eq:overlaphalpha0\] $$c_m(k) = \sqrt{2\pi}\,\delta(k)\, H_{eff}^2[\overline{A(1)A(2)},B^{(m,k)}], \quad \mu_{m,n}(k,k') = \delta(k-k')\, H_{eff}^1[\overline{B^{(m,k)}},B^{(n,k')}],$$ where $H_{eff}^1[\overline{B^{(m,k)}},B^{(n,k')}]$ and $H_{eff}^2[\overline{A(1)A(2)},B^{(m,k)}]$ are finite quantities that we can compute efficiently (see [@Haegeman2013] for the details). Using Eqs. (\[eq3\]) and (\[eq:overlaphalpha0\]) we now rewrite $\mathcal{H}_{\alpha_0}$, Eq. (\[eq:halpha0app\]), as $$\begin{gathered} \mathcal{H}_{\alpha_0} = \int dk \; \left(\sum_m c_m(k) {{\mathrm{a}}}_m(k) + \sum_m \bar{c}_m(k) {{\mathrm{a}}}_m^\dagger (k)\right. \\ \left.+ \sum_{m,n} M_{m,n}(k) {{\mathrm{a}}}_m^\dagger (k) {{\mathrm{a}}}_n(k) \right) \end{gathered}$$ where \[eq5\] $$M_{m,n}(k) = H_{eff}^1[\overline{B^{(m,k)}},B^{(n,k)}], \quad c_m(k) = \sqrt{2\pi}\, H_{eff}^2[\overline{A(1)A(2)},B^{(m,0)}]\,\delta(k).$$ $\mathcal{H}_{\alpha_0}$ is now diagonalized by the following transformation: $${{\mathrm{b}}}_r(k) = \sum_{m}\left( U_{r,m}(k){{\mathrm{a}}}_m(k) + \frac{U_{r,m}(k)}{\mathcal{E}_{r}(k)}\bar{c}_m(k) \right)$$ where $U(k)$ is the unitary transformation which diagonalizes $M(k)$ and $\mathcal{E}(k)$ is the diagonal matrix containing the eigenvalues of $M(k)$, i.e. $M(k) = U(k)^\dagger \mathcal{E}(k)U(k)$.
In vector notation we can write this transformation as \[BT\] $$\vec{{{\mathrm{b}}}}(k) = U(k)\vec{{{\mathrm{a}}}}(k) + \mathcal{E}^{-1}(k)U(k)\vec{\bar{c}}(k)$$ or $$\vec{{{\mathrm{a}}}}(k) = U^\dagger(k) \vec{{{\mathrm{b}}}}(k) - U^\dagger(k)\mathcal{E}^{-1}(k)U(k)\vec{\bar{c}}(k) .$$ One easily verifies now that $$\begin{gathered} \mathcal{H}_{\alpha_0} = \int dk\; \left(\sum_{r}\mathcal{E}_r(k){{\mathrm{b}}}_r^\dagger(k){{\mathrm{b}}}_r(k) \right. \\ \left.- \sum_{m,n}[M^{-1}]_{m,n}(k)c_m(k)\bar{c}_n(k)\right).\end{gathered}$$ Some remarks are in order here: - The last term in $\mathcal{H}_{\alpha_0}$ is a constant (divergent) term and can be omitted. This term is only necessary if we are doing computations in the eigenbasis of $\mathcal{H}_{\alpha}$, because it is this term that assures us that $$\braket{{\Psi_u[{\overline{A(1)A(2)}}]} \vert \mathcal{H}_{\alpha_0} \vert {\Psi_u[{A(1)A(2)}]}} = 0.$$ - In the Hamiltonian $\mathcal{H}_{\alpha_0}$ there appear terms of the form $c_m(k)\bar{c}_n(k)$, which are ill-defined as $c_m(k) \varpropto \delta(k)$. One can regularize this by replacing the Dirac delta functions by $\delta(k) \rightarrow \delta_{k,0}2N/(2\pi)$ and $dk$ by $dk \rightarrow 2\pi/2N$ ($2N$ the number of sites on the lattice, $2N \rightarrow + \infty$). - $\mathcal{E}_r(k)$ should be positive; otherwise the quadratic expansion of $\mathcal{H}_{\alpha_0}$ in the creation and annihilation operators ${{\mathrm{a}}}_n^\dagger(k)$ and ${{\mathrm{a}}}_n(k)$ is certainly no longer a valid approximation. Now that we have diagonalized $\mathcal{H}_{\alpha_0}$, the ground state $\ket{\Psi(0)}$ of $\mathcal{H}_{\alpha_0}$ is found as the state for which \[eq:vacHalpha0\] $${{\mathrm{b}}}_r(k) \ket{\Psi(0)} = 0, \quad \forall k \in [-\pi, \pi[,\; \forall r, \quad \mbox{or equivalently} \quad {{\mathrm{a}}}_m(k) \ket{\Psi(0)} = d_m(k) \ket{\Psi(0)},$$ where $$d_m(k) = - \sum_r [M(k)^{-1}]_{m,r}\bar{c}_r(k),$$ as follows from Eq. (\[BT\]). Note that if $k \neq 0$ then $d_m(k) = 0$, so for non-zero momenta (in this approach) $\mathcal{H}_{\alpha_0}$ and $\mathcal{H}_{\alpha}$ have the same vacuum.
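The shift-and-rotate transformation above can be checked numerically at a fixed momentum. The following sketch (random data and a hypothetical number of excitation branches, using numpy) builds a Hermitian, positive-definite $M$, diagonalizes it as $M = U^\dagger \mathcal{E} U$, and verifies on the level of the normal-ordered symbol that $\bar{a}\cdot M a + c\cdot a + \bar{c}\cdot\bar{a} = \bar{b}\cdot\mathcal{E}\, b - \sum_{m,n}[M^{-1}]_{m,n} c_m \bar{c}_n$ with $b = U a + \mathcal{E}^{-1} U \bar{c}$, and that $a = d = -M^{-1}\bar{c}$ is annihilated by the shifted modes:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3                                        # number of excitation branches (hypothetical)

# Hermitian, positive-definite M(k) at a fixed k, and a coefficient vector c
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
M = A @ A.conj().T + n * np.eye(n)
c = rng.normal(size=n) + 1j * rng.normal(size=n)

# M = U^dagger diag(E) U with U unitary and E the (positive) eigenvalues
E, V = np.linalg.eigh(M)
U = V.conj().T

def h_quad(a):
    """Normal-ordered symbol of the quadratic Hamiltonian at fixed k."""
    return a.conj() @ M @ a + c @ a + c.conj() @ a.conj()

def h_diag(a):
    """Same quantity written in the shifted eigenmodes b."""
    b = U @ a + (U @ c.conj()) / E           # b = U a + E^{-1} U c-bar
    const = c @ np.linalg.inv(M) @ c.conj()  # sum_{m,n} [M^{-1}]_{m,n} c_m c-bar_n
    return b.conj() @ (E * b) - const

a = rng.normal(size=n) + 1j * rng.normal(size=n)
assert np.isclose(h_quad(a), h_diag(a))      # the two forms agree

d = -np.linalg.inv(M) @ c.conj()             # coherent-state eigenvalue d = -M^{-1} c-bar
b_at_d = U @ d + (U @ c.conj()) / E
assert np.allclose(b_at_d, 0)                # the shifted modes annihilate a = d
```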
This can be interpreted as the fact that a translation invariant quench cannot create particles with non-zero momentum out of the vacuum. Again, $d_m(k)$ involves a Dirac delta distribution, \[eq:defdprimem\] $$d_m(k) = \delta(k)\, d'_m, \quad d'_m \in \mathbb{C},$$ which can be regularized as in Eq. (\[eq:diracreg\]). In order that the approximation Eq. (\[eq:halpha0app\]) remains valid we must have that $\vert d'_m \vert^2 \ll \vert d'_m \vert$, i.e. that $\vert d'_m \vert \ll 1$. Note that Eq. (\[eq:vacHalpha0\]) implies that $\ket{\Psi(0)}$ is a coherent state, i.e. an eigenstate of ${{\mathrm{a}}}_m(k)$. This is only possible for non-zero $d'_m$ if the creation and annihilation operators obey boson statistics. This means that within our approximation the single-particle excitations must behave as bosons, see Eq. (\[eq:commRelA\]). In this approximation the vacuum $\ket{\Psi(0)}$ of $\mathcal{H}_{\alpha_0}$ is interpreted as the vacuum of $\mathcal{H}_\alpha$ with a small density of zero-momentum single-particle excitations on top of it. This number of single particles per site can be computed and equals $$\frac{1}{2N}\int dk \sum_m \braket{\Psi(0) \vert {{\mathrm{a}}}_m^\dagger (k) {{\mathrm{a}}}_m(k) \vert \Psi(0)} = \frac{1}{2\pi} \sum_m \vert d'_m \vert^2$$ where $2N \rightarrow + \infty$ is the number of sites and we regularized $dk = 2\pi/2N$ and the Dirac delta distribution according to Eq. (\[eq:diracreg\]).

![\[fig:appLRTEF\] $m/g= 0.25, x = 100$. Comparison of iTEBD simulations (full line) with the approximation Eq. (\[eq:appcoherenstateAppb\]) (dashed line) for the electric field $E(t)$. (a): $\alpha = 0.01$. (b): $\alpha = 0.2$. (c): $\alpha = 0.3$. (d): $\alpha = 0.4$.](ElectricFieldLRTa1e2 "fig:"){width="\textwidth"} ![](ElectricFieldLRTa2e1 "fig:"){width="\textwidth"} ![](ElectricFieldLRTa3e1 "fig:"){width="\textwidth"} ![](ElectricFieldLRTa4e1 "fig:"){width="\textwidth"}

Assume now we want to compute expectation values with respect to $\ket{\Psi(0)}$ of a translation invariant observable $$\mathcal{O} = \sum_{n = 1}^{2N} \mathcal{T}^{n-1} o \mathcal{T}^{-n+1}$$ where $o$ has support only on sites $1$ and $2$. Then we expand this operator, similarly to $\mathcal{H}_{\alpha_0}$, quadratically in the annihilation and creation operators of $\mathcal{H}_{\alpha}$: $$\begin{gathered} \mathcal{O} \approx \int dk\;\left( \sum_m o_{2,m}(k) {{\mathrm{a}}}_m(k) + \bar{o}_{2,m}(k){{\mathrm{a}}}_m^\dagger(k)\right) \\ + \int dk \int dk'\;\left(\sum_{m,n}o_{1,m,n}(k,k') {{\mathrm{a}}}_m^\dagger(k){{\mathrm{a}}}_n(k') \right) \end{gathered}$$ where we renormalized $\mathcal{O}$ such that $\braket{{\Psi_u[{\overline{A(1)A(2)}}]} \vert \mathcal{O}\vert{\Psi_u[{A(1)A(2)}]}} = 0$. The coefficients can be extracted similarly to Eq.
(\[eq:overlaphalpha0\]): $$\begin{gathered} o_{1,m,n}(k,k') \\ = \frac{1}{2\pi}\bra{{\Phi_{k}[\overline{B^{(m,k)}},\overline{A(1)A(2)}]}} \mathcal{O} \ket{{\Phi_{k'}[B^{(n,k')},A(1)A(2)]}} \\ = \delta(k-k') O_{eff}^1[\overline{B^{(m,k)}},B^{(n,k')}] \end{gathered}$$ $$\begin{gathered} o_{2,m}(k) = \\ \frac{1}{\sqrt{2\pi}} \braket{{\Psi_u[{\overline{A(1)A(2)}}]} \vert \mathcal{O} \vert {\Phi_{k}[B^{(m,k)},A(1)A(2)]}} \\ = \sqrt{2\pi} \delta(k) O_{eff}^2[\overline{A(1)A(2)},B^{(m,k)}] \end{gathered}$$ where $O_{eff}^1$ and $O_{eff}^2$ are finite quantities which we can compute efficiently. Hence, we find $$\begin{gathered} \label{eq:opHeisenbergPict} \mathcal{O} \approx \sum_m \left(o_{2,m}{{\mathrm{a}}}_m(0) + \bar{o}_{2,m}{{\mathrm{a}}}_m^\dagger(0)\right) \\ + \int dk \left(\sum_{m,n}o_{1,m,n}(k) {{\mathrm{a}}}_m^\dagger(k){{\mathrm{a}}}_n(k) \right) \end{gathered}$$ with $$o_{1,m,n}(k) = O_{eff}^1[\overline{B^{(m,k)}},B^{(n,k)}] \quad \mbox{and} \quad o_{2,m} = O_{eff}^2[\overline{A(1)A(2)},B^{(m,0)}].$$

![\[fig:appLRTCC\] $m/g= 0.25, x = 100$. Comparison of iTEBD simulations (full line) with the approximation Eq. (\[eq:appcoherenstateAppb\]) (dashed line) for $N(t)$. (a): $\alpha = 0.01$. (b): $\alpha = 0.2$. (c): $\alpha = 0.3$. (d): $\alpha = 0.4$.](CCLRTa1e2 "fig:"){width="\textwidth"} ![](CCLRTa2e1 "fig:"){width="\textwidth"} ![](CCLRTa3e1 "fig:"){width="\textwidth"} ![](CCLRTa4e1 "fig:"){width="\textwidth"}

To perform real-time evolution with $\mathcal{H}_{\alpha}$ we work in the Heisenberg picture. The creation and annihilation operators ${{\mathrm{a}}}_m^\dagger(k)$ and ${{\mathrm{a}}}_m(k)$ satisfy the following differential equations \[eq:evolveAlinresp\] $$\frac{d}{dt}{{\mathrm{a}}}_m(k) = i[\mathcal{H}_{\alpha},{{\mathrm{a}}}_m(k)], \quad \frac{d}{dt}{{\mathrm{a}}}_m^\dagger(k) = i[\mathcal{H}_{\alpha},{{\mathrm{a}}}_m^\dagger(k)] .$$ If we restrict the Hilbert space to the vacuum and the single-particle excitations we find that $$\frac{d}{dt}{{\mathrm{a}}}_m(k) = -i\mathcal{E}_m(k){{\mathrm{a}}}_m(k) , \quad \frac{d}{dt}{{\mathrm{a}}}_m^\dagger(k) = i\mathcal{E}_m(k){{\mathrm{a}}}_m^\dagger(k).$$ It follows that within this approximation: $${{\mathrm{a}}}_m(k,t) = e^{-i\mathcal{E}_m(k) t}{{\mathrm{a}}}_m(k)\mbox{ and } {{\mathrm{a}}}_m^\dagger(k,t) = e^{i\mathcal{E}_m(k) t}{{\mathrm{a}}}_m^\dagger(k).$$ In the Heisenberg picture Eq. (\[eq:opHeisenbergPict\]) becomes $$\begin{gathered} \mathcal{O}(t) = \sum_m \left(o_{2,m}{{\mathrm{a}}}_m(0,t) + \bar{o}_{2,m}{{\mathrm{a}}}_m^\dagger(0,t)\right) \\ + \int dk \left(\sum_{m,n}o_{1,m,n}(k) {{\mathrm{a}}}_m^\dagger(k,t){{\mathrm{a}}}_n(k,t) \right) \end{gathered}$$ and the expectation value with respect to $\ket{\Psi(0)}$, the vacuum of $\mathcal{H}_{\alpha_0}$, see Eq. (\[eq:vacHalpha0\]), then reads $$\begin{gathered} \braket{\Psi(0) \vert \mathcal{O}(t) \vert \Psi(0)} = \\ \sum_m o_{2,m} d_m(0) e^{-i\mathcal{E}_m(0) t} + \sum_m \bar{o}_{2,m} \bar{d}_m(0) e^{i\mathcal{E}_m(0) t} \\ + \int dk \left(\sum_{m,n}o_{1,m,n}(k)e^{i(\mathcal{E}_m(k) - \mathcal{E}_n(k))t}\bar{d}_m(k){d}_n(k)\right) \end{gathered}$$ where we used Eqs. (\[eq:vacHalpha0\]) and (\[eq:evolveAlinresp\]). As already noted before, $d_m(k)$ involves a Dirac delta contribution: $d_m(k) = \delta(k)d'_m$. The expression $\braket{\Psi(0) \vert \mathcal{O}(t) \vert \Psi(0)}$ is regularized by $\delta(k) \rightarrow \delta_{k,0}2N/(2\pi)$ and $dk = 2\pi/2N$.
This yields the following result: $$\begin{gathered} \braket{\Psi(0) \vert \mathcal{O}(t) \vert \Psi(0)} = \\ \frac{2N}{2\pi}\left[\sum_m o_{2,m} d'_m e^{-i\mathcal{E}_m(0) t} + \sum_m \bar{o}_{2,m} \bar{d}'_m e^{i\mathcal{E}_m(0) t} \right. \\ \left. + \left(\sum_{m,n}o_{1,m,n}(0)e^{i(\mathcal{E}_m(0) - \mathcal{E}_n(0))t}\bar{d}'_m{d}'_n\right)\right].\end{gathered}$$ Because $\mathcal{O} = \sum_{n = 1}^{2N} \mathcal{T}^{n-1} o \mathcal{T}^{-n+1}$, $\braket{\Psi(0) \vert \mathcal{O}(t) \vert \Psi(0)}$ scales with the number of lattice sites ($2N$). It follows that $$\begin{gathered} \label{eq:appcoherenstateAppb} \frac{1}{2N} \braket{\Psi(0) \vert \mathcal{O}(t) \vert \Psi(0)} = \\ \frac{1}{2\pi}\left[\sum_m o_{2,m} d'_m e^{-i\mathcal{E}_m(0) t} + \sum_m \bar{o}_{2,m} \bar{d}'_m e^{i\mathcal{E}_m(0) t}\right. \\ \left.+ \left(\sum_{m,n}o_{1,m,n}(0)e^{i(\mathcal{E}_m(0) - \mathcal{E}_n(0))t}\bar{d}'_m{d}'_n\right)\right] \end{gathered}$$ is the expectation value per site and is finite.\ \ Within this approximation all coefficients appearing above can be computed from the MPS approximations $\ket{{\Psi_u[{A(1)A(2)}]}}$ and $\ket{{\Phi_{k}[B^{(m,k)},A(1)A(2)]}}$ for the ground state and the single-particle excitations of $\mathcal{H}_\alpha$. In our case, for $m/g = 0.25$ and $x = 100$, there are two single-particle excitations for the values of $\alpha$ considered here. Hence, the sum over $m$ runs from 1 to 2. We expect the above approximation to be valid as long as the contribution of the multi-particle excitations of $\mathcal{H}_\alpha$ is negligible. Physically this means that the ground state $\ket{\Psi(0)}$ of $\mathcal{H}_{\alpha_0}$ is a coherent state of the creation and annihilation operators of $\mathcal{H}_\alpha$. This can be interpreted as the fact that $\ket{\Psi(0)}$ is constructed from the ground state of $\mathcal{H}_{\alpha}$ with a small density of single particles of $\mathcal{H}_\alpha$ on top of it.
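Eq. (\[eq:appcoherenstateAppb\]) transcribes directly into code. The sketch below (numpy, with hypothetical values for $o_{2,m}$, $o_{1,m,n}(0)$, $d'_m$ and $\mathcal{E}_m(0)$ for two excitation branches — not the actual Schwinger-model data) evaluates the per-site expectation value as a function of time:

```python
import numpy as np

# Hypothetical input data for two single-particle branches (m = 1, 2)
E0 = np.array([1.3, 2.1])                         # excitation energies E_m(0)
dprime = np.array([0.05 + 0.02j, -0.03 + 0.01j])  # coherent amplitudes d'_m (|d'_m| << 1)
o2 = np.array([0.4 - 0.1j, 0.2 + 0.3j])           # linear coefficients o_{2,m}
o1 = np.array([[0.5, 0.1 - 0.2j],
               [0.1 + 0.2j, 0.7]])                # Hermitian matrix o_{1,m,n}(0)

def expectation_per_site(t):
    """<O(t)> per site, as in Eq. (eq:appcoherenstateAppb)."""
    phase = np.exp(-1j * E0 * t)                  # e^{-i E_m(0) t}
    v = dprime * phase                            # d'_m e^{-i E_m(0) t}
    linear = o2 @ v                               # sum_m o_{2,m} d'_m e^{-i E_m(0) t}
    quad = v.conj() @ o1 @ v                      # sum_{m,n} o_{1,m,n}(0) e^{i(E_m-E_n)t} d'_m-bar d'_n
    return float(((linear + linear.conjugate()) + quad).real) / (2 * np.pi)

val0 = expectation_per_site(0.0)                  # real number, oscillating with the gap frequencies
```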
We can indeed expect this to be the case for small values of $\alpha$ and at early times. In Figs. \[fig:appLRTEF\] and \[fig:appLRTCC\] we compare the real-time iTEBD simulations (full line) with this approximation, Eq. (\[eq:appcoherenstateAppb\]) (dashed line), and find agreement for $\alpha \lesssim 0.2$, while for $\alpha = 0.4$ the difference between both results is quite large. A discussion is provided in Sec. \[subsec:weakfieldRegime\]. Predicting the asymptotic thermal values of real-time evolution {#subsec:determineGibbsState} --------------------------------------------------------------- In [@Buyens2016] we succeeded in approximating the Gibbs state $\rho(\beta)$ at temperature $T = 1/\beta$ by using Matrix Product Operators (MPO) with $$\rho(\beta) = \frac{\mathcal{P}e^{-\beta \mathcal{H}_\alpha}}{\mbox{tr}\left(\mathcal{P}e^{-\beta \mathcal{H}_\alpha}\right)}$$ where $\mathcal{P}$ is the orthogonal projector onto the $(G(n) = 0)$-subspace. If the state $\ket{\Psi(t)} = e^{- i\mathcal{H}_\alpha t}\ket{\Psi(0)}$ eventually equilibrates to a Gibbs state, then we can estimate its inverse temperature $\beta_0$ from the requirement that $$\braket{\Psi(0) \vert \mathcal{H}_{\alpha} \vert \Psi(0)} = \frac{\mbox{tr}\left(\mathcal{H}_{\alpha} \mathcal{P}e^{-\beta_0 \mathcal{H}_\alpha}\right)} {\mbox{tr}\left( \mathcal{P}e^{-\beta_0 \mathcal{H}_\alpha}\right)},$$ as follows from energy conservation during real-time evolution. In Fig. \[fig:findTemp\] we show the energy per unit of length $\mathcal{E}_\beta$ of the Gibbs state $\rho(\beta)$ as a function of $\beta$, together with the (conserved) energy per unit of length $\mathcal{E}(t)$ of the state $\ket{\Psi(t)}$. We subtracted from both quantities the energy per unit of length of $\ket{\Psi(0)}$. The intersection of the two curves determines the value of $\beta_0$. Because we simulated the thermal evolution with steps $d\beta = 0.05$ we can only determine $\beta_0$ up to $0.05/g$.
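The intersection can be located numerically by linear interpolation of the tabulated Gibbs energies. The sketch below uses a synthetic, monotonically decaying $\mathcal{E}_\beta$ curve and a hypothetical conserved energy (stand-ins for the actual data, chosen so that the exact crossing is known in closed form):

```python
import numpy as np

# Hypothetical tabulated Gibbs-state energies E_beta on a grid with step d(beta) = 0.05
betas = np.arange(0.05, 3.0, 0.05)
E_beta = 2.0 * np.exp(-betas)    # synthetic monotone decay; stands in for tr(H rho)/tr(rho)
E_conserved = 0.5                # synthetic conserved energy <Psi(0)|H_alpha|Psi(0)>

# Find the grid interval where E_beta crosses E_conserved, then interpolate linearly
i = np.where(np.diff(np.sign(E_beta - E_conserved)))[0][0]
b0, b1 = betas[i], betas[i + 1]
e0, e1 = E_beta[i], E_beta[i + 1]
beta0 = b0 + (E_conserved - e0) * (b1 - b0) / (e1 - e0)
# For this synthetic curve the exact crossing is beta0 = log(4) ~ 1.386,
# which the interpolation recovers to well within the grid spacing of 0.05.
```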
For $\alpha = 0.75$ we find $\beta_0g = 1.35 (\pm 0.05)$, for $\alpha = 1.25$ we find $\beta_0g = 0.85 (\pm 0.05)$ and for $\alpha = 1.5$ we find $\beta_0 g = 0.70 (\pm 0.05)$.

![\[fig:findTemp\] Results for $m/g= 0.25, x = 100$. Determination of the temperature of the asymptotic state in thermal equilibrium by finding the intersection of the conserved energy $\mathcal{E}_t$ (dashed line) with the energy of the Gibbs state $\mathcal{E}_\beta$ (full line). (a): $\alpha = 1.25$. (b): $\alpha = 1.5$.](FindTempa "fig:"){width="\textwidth"} ![](FindTempb "fig:"){width="\textwidth"}

Scaling to the continuum limit of the real-time results {#appsec:continuum} ======================================================= In this paper we consider the following quantities: - The electric field: $$\begin{gathered} \label{eq:ElectricFieldRT} E(t) = \braket{\Psi(t) \vert E \vert \Psi(t) } \\ = \frac{1}{2N}\sum_{n=1}^{2N}\braket{\Psi(t) \vert L(n) + \alpha \vert \Psi(t) }\end{gathered}$$ - The current: $$\begin{gathered} \label{eq:CurrentRT} j^1(t) = \braket{\Psi(t) \vert j^1 \vert \Psi(t) } \\ = \frac{-i\sqrt{x}g}{2N}\sum_{n = 1}^{2N}\Braket{\Psi(t) \vert \sigma^+(n)e^{i\theta(n)}\sigma^{-}(n+1) - h.c.
\vert\Psi(t)},\end{gathered}$$ - The particle number $N(t) = (\Sigma(t) - \Sigma(0))/g$ with $$\begin{gathered} \Sigma(t) = \braket{\Psi(t) \vert \bar{\psi}(0)\psi(0) \vert\Psi(t)} \\ = \Braket{\Psi(t) \left\vert g\frac{\sqrt{x}}{2N}\sum_{n=1}^{2N} \frac{\sigma_z(n) + (-1)^n}{2}\right\vert \Psi(t)} \end{gathered}$$ which counts, in the weak coupling limit ($m/g \gg 1$), the number of electrons and positrons per unit of length that are created out of the vacuum or destroyed in the vacuum due to turning on the electric background field $\alpha$ at $t = 0$. - From the Schmidt spectrum $\{\lambda_{\alpha_q}^q\}$ associated to a cut between an even and an odd site, we can compute the half chain entanglement entropy $$S = -\sum_{q} \sum_{\alpha_q = 1}^{D_q}\lambda_{\alpha_q}^q \log (\lambda_{\alpha_q}^q).$$ As we will show below, a UV-finite quantity is obtained by considering the renormalized half chain entanglement entropy $$\Delta S(t) = S(t) - S(0).$$

![\[fig:RTscaling\] $m/g= 0.25, \alpha = 0.75$. Scaling of the quantities to $x \rightarrow + \infty$. (a) Electric field $E(t,x)$. (b) Particle number $N(t,x)$. (c) Renormalized entropy $\Delta S(t,x)$. (d) Polynomial extrapolation in $1/\sqrt{x}$ of the renormalized entropy to $x \rightarrow + \infty$.](RTEFScaling "fig:"){width="\textwidth"} ![](RTCCScaling "fig:"){width="\textwidth"} ![](RTEntropyScaling "fig:"){width="\textwidth"} ![](RTEntropyContExtr "fig:"){width="\textwidth"}

The fact that these quantities are UV finite is corroborated by Fig. \[fig:RTscaling\], where we show the evolution of the electric field $E(t,x)$, the particle number $N(t,x)$ and the renormalized entropy $\Delta S(t,x)$ as a function of time for $x = 100,$ $200,$ $300,$ $400$. Note that here we explicitly denote the $x$-dependence of the quantities. We observe that for all these quantities the graphs are almost on top of each other, see Figs. \[fig:RTscaling\] (a) - (c). One can also obtain a continuum estimate for these quantities by a polynomial extrapolation, see Fig. \[fig:RTscaling\] (d), where we perform a polynomial extrapolation in $1/\sqrt{x}$ for $\Delta S(t)$ at $tg = 5$. It is also clear from this example that already at $x = 100$ we can expect to be close to the continuum limit. (For the current $j^1(t)$ this follows from Ampère's law: $\dot{E} = - gj^1$.) This justifies that we restrict ourselves to $x = 100$ for the discussion of the continuum results.
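The continuum extrapolation in panel (d) amounts to fitting the lattice values against $1/\sqrt{x}$ and reading off the intercept. A minimal sketch with synthetic data (the coefficients below are hypothetical, not the measured entropies):

```python
import numpy as np

# Synthetic Delta-S values at fixed tg for x = 100, 200, 300, 400,
# generated from an assumed form S_cont + a/sqrt(x) + b/x
xs = np.array([100.0, 200.0, 300.0, 400.0])
S_cont, a, b = 0.80, 0.30, -0.50     # hypothetical coefficients
u = 1.0 / np.sqrt(xs)
dS = S_cont + a * u + b * u**2

# Quadratic fit in u = 1/sqrt(x); the constant term is the x -> infinity estimate
coeffs = np.polyfit(u, dS, 2)
dS_continuum = coeffs[-1]
# Recovers the assumed S_cont = 0.80 for this synthetic data set.
```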
--- abstract: 'Effective fusion of the complementary information captured by multi-modal sensors (visible and infrared cameras) enables robust pedestrian detection under various surveillance situations (e.g. daytime and nighttime). In this paper, we present a novel box-level segmentation supervised learning framework for accurate and real-time multispectral pedestrian detection by incorporating features extracted in the visible and infrared channels. Specifically, our method takes pairs of aligned visible and infrared images with easily obtained bounding box annotations as input and estimates accurate prediction maps to highlight the existence of pedestrians. It offers two major advantages over the existing anchor box based multispectral detection methods. Firstly, it overcomes the hyperparameter setting problem that occurs during the training phase of anchor box based detectors and can obtain more accurate detection results, especially for small and occluded pedestrian instances. Secondly, it is capable of generating accurate detection results using small-size input images, leading to improved computational efficiency for real-time autonomous driving applications. Experimental results on the KAIST multispectral dataset show that our proposed method outperforms state-of-the-art approaches in terms of both accuracy and speed.'
address: - 'State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China' - 'Key Laboratory of Advanced Manufacturing Technology of Zhejiang Province, School of Mechanical Engineering, Zhejiang University, Hangzhou, China' - 'Scene Understanding Group, University of Twente, Hengelosestraat 99, 7514 AE Enschede, The Netherlands' author: - Yanpeng Cao - Dayan Guan - Yulun Wu - Jiangxin Yang - Yanlong Cao - Michael Ying Yang bibliography: - 'manuscript.bib' title: 'Box-level Segmentation Supervised Deep Neural Networks for Accurate and Real-time Multispectral Pedestrian Detection' --- Multispectral data, Pedestrian detection, Deep neural networks, Box-level segmentation, Real-time application INTRODUCTION ============ Pedestrian detection has received much attention within the fields of computer vision and robotics in recent years [@oren1997pedestrian; @dalal2005histograms; @dollar2012pedestrian; @angelova2015pedestrian; @geiger2012we; @jafari2016real; @cordts2016cityscapes; @Zhang2017CVPR]. Given images captured in various real-world surveillance situations, pedestrian detectors are required to accurately locate human regions. This provides an important functionality to facilitate human-centric applications such as autonomous driving, video surveillance, and urban monitoring [@wu2016squeezedet; @li2017unified; @zhang2017towards; @wang2014scene; @li2017accurate; @bu2005pedestrian; @shirazi2017looking]. \ Although significant improvements have been accomplished in recent years, it still remains a challenging task to develop a robust pedestrian detector ready for practical applications. It can be noticed that most existing pedestrian detection methods are based on visible information alone, so their performance is sensitive to changes in the environmental brightness (daytime or nighttime).
To overcome the aforementioned limitations, multispectral information (*e.g.* visible and infrared), which supplies complementary information about the targets of interest, is being considered for building more robust pedestrian detectors under various illumination conditions. In the past few years, many research works have developed multispectral pedestrian detection solutions to achieve more accurate and stable pedestrian detection results for around-the-clock applications [@leykin2007thermal; @krotosky2008person; @torabi2012iterative; @oliveira2015multimodal; @hwang2015multispectral; @gonzalez2016pedestrian]. It is noted that most existing multispectral pedestrian detection approaches are built upon anchor box based detectors such as region proposal networks (RPN) [@zhang2016faster] or Faster R-CNN [@ren2017faster], localizing each human target using a bounding box. During the training phase, a large number of anchor boxes are needed to ensure sufficient overlap with most ground truth boxes, which causes severe imbalance between positive and negative anchor boxes and slows down the training process [@lin2018focal]. Moreover, the state-of-the-art pedestrian detection techniques only perform well on large-size input images. Their performance drops significantly when they are applied to small-size images, since it is difficult to make use of anchor boxes to generate positive samples for small-size targets. A simple solution is to increase the size of input images and human targets through image up-scaling; however, such practice adversely decreases the computational efficiency, which is critical for real-time autonomous driving applications. To overcome the problems mentioned above, we present a novel box-level segmentation supervised learning framework for accurate and real-time multispectral pedestrian detection.
Our approach takes pairs of aligned visible and infrared images with easily obtained bounding box annotations as input and computes heat maps to predict the existence of human targets. In Fig. \[fig\_1\], we show some comparative detection results of our method and the state-of-the-art anchor box based detector. It is noticed that the proposed box-level segmentation supervised learning framework produces more accurate detection results, successfully locating far-scale human targets even when the input is small-size images. It is also worth mentioning that our proposed method can process more than 30 images per second on a single NVIDIA Geforce Titan X GPU, which is sufficient for real-time applications in autonomous vehicles. Overall, the **contributions** of this paper are summarized as follows: - Our box-level segmentation supervised framework completely eliminates the complex hyperparameter settings of anchor boxes (e.g., box size, aspect ratio, stride, and intersection-over-union threshold) required in existing anchor box based detectors. To the best of our knowledge, this is the first attempt to train deep learning based multispectral pedestrian detectors without using anchor boxes. - We demonstrate that box-level approximate segmentation masks provide better supervision information than anchored boxes to train two-stream deep neural networks for distinguishing pedestrians from the background, particularly for small human targets. As a result, our method is capable of generating accurate detection results even using small-size input images. - Our method achieves significantly higher detection accuracy compared with the state-of-the-art multispectral pedestrian detectors [@konig2017fully; @Liu2016BMVC; @guan2018fusion; @guan2018exploiting; @li2018multispectral].
Moreover, this efficient framework can process more than 30 images per second on a single NVIDIA Geforce Titan X GPU to facilitate real-time applications in autonomous vehicles. The remainder of our paper is structured as follows. Section \[related\] reviews existing research work on multispectral pedestrian detection. The details of our proposed box-level segmentation supervised deep neural networks are presented in Section \[method\]. An extensive evaluation of our method and an experimental comparison with methods for multispectral pedestrian detection are provided in Section \[experiment\]. We conclude our paper in Section \[conclusion\]. RELATED WORKS {#related} ============= Pedestrian detection facilitates various applications in robotics, automotive safety, surveillance, and autonomous vehicles. A large variety of visible-channel pedestrian detectors have been proposed. Schindler et al. [@schindler2010automatic] developed a visual stereo system, which consists of various probabilistic models to fuse evidence from 3D points and 2D images, for accurate detection and tracking of pedestrians in urban traffic scenes. Dollár et al. [@Piotr2009ICF] developed the Integral Channel Features (ICF) detector using feature pyramids and boosted classifiers for visible images. [The feature representations of ICF have been further improved through various techniques, including aggregated channel features (ACF) [@dollar2014fast], locally decorrelated channel features (LDCF) [@nam2014local], Checkerboards [@Zhang2015Checkerboards], etc. ]{} Klinger et al. [@klinger2017probabilistic] addressed the problems of target occlusion and imprecise visual observation by building up a new predictive model on the basis of Gaussian process regression, and by combining generic object detection with instance-specific classification for refined localization.
Object detection based on deep neural networks [@girshick2015fast; @ren2017faster; @He2017ICCV] has achieved state-of-the-art results on various challenging benchmarks, and such networks have thus been adopted for the task of human-target detection. Li et al. [@li2015scale] developed a scale-aware fast region-based convolutional neural network (SAF R-CNN) method which combines a large-size sub-network and a small-size one into a unified architecture using a scale-aware weighting mechanism to capture unique pedestrian features at different scales. Zhang et al. [@zhang2016faster] proposed an effective baseline for pedestrian detection using region proposal networks (RPN) followed by boosted classifiers, which utilizes high-resolution convolutional feature maps generated by the RPN for classification. Mao et al. [@Mao2017CVPR] proposed a powerful deep neural network framework by implementing representations of channel features to boost pedestrian detection accuracy without extra inputs in inference. Brazil et al. [@Brazil2017ICCV] developed an effective segmentation infusion network to improve pedestrian detection performance through the joint training of target detection and semantic segmentation. Recently, multispectral pedestrian detection has become a promising solution to narrow the gap between automatic pedestrian detectors and human observers. Multi-modal sensors (visible and infrared) supply complementary information about the targets of interest and thus lead to more robust and accurate detection results. Hwang et al. [@hwang2015multispectral] published the first large-scale multispectral pedestrian dataset (KAIST), which contains well-aligned visible and infrared image pairs with dense pedestrian annotations. Wagner et al. [@wagner2016multispectral] presented the first application of deep neural networks for multispectral pedestrian detection.
Two decision networks, one for early-fusion and the other for late-fusion, were proposed to classify the proposals generated by ACF+T+THOG [@hwang2015multispectral] and achieved more accurate detections. Liu et al. [@Liu2016BMVC] systematically evaluated the performance of four ConvNet fusion architectures which integrate two-branch ConvNets at different DNN stages, and found that the optimal architecture is the Halfway Fusion model, which merges the two-branch ConvNets on the middle-level convolutional features. K[ö]{}nig et al. [@konig2017fully] adopted the architecture of RPN+BDT [@zhang2016faster] to build Fusion RPN+BDT, which merges the two-branch RPN on the middle-level convolutional features, for multispectral pedestrian detection. Recently, researchers have explored the illumination information of a scene and proposed illumination-aware weighting mechanisms to boost multispectral pedestrian detection performance [@guan2018fusion; @li2018illumination]. Guan et al. [@guan2018exploiting] presented a unified multispectral fusion framework for the joint training of semantic segmentation and target detection. More accurate detection results were obtained by infusing the multispectral semantic segmentation masks as supervision for learning human-related features. Li et al. [@li2018multispectral] further deployed a subsequent multispectral classification network to distinguish pedestrian instances from hard negatives. It is noted that most existing multispectral pedestrian detection approaches are built upon anchor box based detectors such as region proposal networks (RPN) [@zhang2016faster] or Faster R-CNN [@ren2017faster], using a number of bounding boxes to localize human pedestrians. However, the use of anchor boxes causes severe imbalance between positive and negative training samples [@lin2018focal] and involves complex hyperparameter settings (e.g., box size, aspect ratio, stride, and intersection-over-union threshold) [@law2018cornernet].
Our method differs from the existing anchor box based multispectral pedestrian detectors [@konig2017fully; @Liu2016BMVC; @li2018illumination; @guan2018fusion; @guan2018exploiting; @li2018multispectral] in two major aspects. Firstly, we make use of the manually annotated ground-truth bounding boxes to generate coarse box-level segmentation masks, which replace the anchor bounding boxes in the training of two-stream deep neural networks to learn human-related features. Secondly, our method estimates a prediction heat map instead of a number of bounding boxes to localize pedestrians in the surrounding space, which can easily support perceptive autonomous driving applications such as path planning or collision avoidance. It is worth mentioning that a large number of semantic segmentation techniques have been proposed to generate accurate boundaries between foreground objects and background regions without using anchor boxes [@ha2017mfnet; @balloch2018unbiasing; @jegou2017one]. However, these methods typically require the supervision of pixel-level accurate mask annotations, which are very time-consuming to obtain. Many researchers have attempted to achieve competitive semantic segmentation accuracy by only using the easily obtained bounding box annotations [@dai2015boxsup; @rajchl2017deepcut]. These methods involve iterative updates to gradually improve the accuracy of segmentation masks, which are slow and not suitable for real-time autonomous driving applications.

Our Approach
============

We propose a novel box-level segmentation supervised framework for multispectral pedestrian detection. Given pairs of well-aligned visible and infrared images, we make use of two-stream deep neural networks to extract semantic features in individual channels.
Visible and infrared feature maps are combined through the concatenation operation and then utilized to estimate heat maps that predict the existence of pedestrians, as illustrated in Fig. \[fig2\]. [Note that image regions corresponding to human targets produce high confidence scores (larger than 0.5).]{}

\[method\]

Network Architecture
--------------------

Fig. \[fig3\] (a) shows the baseline architecture of our proposed multispectral feature fusion network (MFFN) for pedestrian detection. Given a pair of well-aligned visible and infrared images, we make use of the two-stream deep convolutional neural networks presented by Liu *et al.* [@liu2016multispectral] to extract semantic feature maps in individual channels. Note that each feature extraction stream consists of five convolutional layers and pooling layers (Conv1-V to Conv5-V in the visible stream and Conv1-I to Conv5-I in the infrared stream) which adopt the architecture of Conv1-5 from VGG-16 [@simonyan2014very]. The two single-channel feature maps are then fused using a concatenation layer followed by a $1\times1$ convolutional layer (Conv-Mul) to learn two-channel multispectral semantic features. We use a softmax layer (Det-Mul) to estimate the heat map that predicts the location of pedestrians.

[(a) MFFN]{} [(b) HMFFN]{}

Inspired by the recent success of top-down architectures with lateral connections for object detection and segmentation [@pinheiro2016learning; @lin2017feature], we design a hierarchical multispectral feature fusion network (HMFFN), whose architecture is shown in Fig. \[fig3\] (b). The HMFFN architecture makes use of skip connections to associate the middle-level feature maps (output of the Conv4-V/I layers) with the high-level ones (output of the Conv5-V/I layers). Deconvolutional layers (Deconv5-V/I) are deployed to increase the spatial resolution of the high-level feature maps by a factor of 2.
Then, the upsampled high-level feature maps are merged with the corresponding middle-level ones (which undergo $1\times1$ convolutional layers Conv4x-V/I to reduce channel dimensions) by element-wise addition. In deep convolutional neural networks, outputs of deeper layers encode high-level semantic information while outputs of shallower layers capture rich low-level spatial patterns [@lin2017feature; @hou2017deeply]. Therefore, the proposed HMFFN architecture, combining feature maps from different levels, is capable of extracting informative multi-scale feature maps to achieve more accurate detection results. A comparative evaluation of the MFFN and HMFFN architectures is provided in Sec. \[MFFNvsHMFFN\].

Box-level Segmentation for Supervised Training
----------------------------------------------

A common step of state-of-the-art anchor box based detectors is to generate a large number of anchor boxes with various sizes and aspect ratios as potential detection candidates, as illustrated in Fig. \[fig4\] (a). However, the use of anchor boxes involves complex hyperparameter settings (e.g., box size, aspect ratio, stride, and intersection-over-union threshold) [@law2018cornernet] and causes severe imbalance between positive and negative training samples [@lin2018focal]. Moreover, it is difficult to make use of discretely distributed anchor boxes (using a large stride) to generate positive samples for small-size targets. In comparison, our proposed method takes the easily obtained bounding box annotations as input and generates an unambiguous box-level segmentation mask for the training of two-stream deep neural networks to learn human-related features, as illustrated in Fig. \[fig4\] (b). In our implementation, the obtained box-level segmentation masks are down-scaled to match the size of the final multispectral feature maps (outputs of the concatenation layer) through bilinear interpolation.
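To make this construction concrete, the following is a minimal NumPy sketch of rasterizing ground-truth boxes into a binary box-level mask and bilinearly down-scaling it to the feature-map resolution. The function names and the `(x1, y1, x2, y2)` box format are our own illustrative choices, not part of the released implementation.

```python
import numpy as np

def boxes_to_mask(boxes, height, width):
    """Rasterize ground-truth bounding boxes (x1, y1, x2, y2) into a
    binary box-level segmentation mask: 1 inside any box, 0 elsewhere."""
    mask = np.zeros((height, width), dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        mask[max(0, y1):min(height, y2), max(0, x1):min(width, x2)] = 1.0
    return mask

def downscale_mask(mask, out_h, out_w):
    """Bilinear down-scaling of the mask to the feature-map resolution."""
    h, w = mask.shape
    # sample at pixel centers of the output grid
    ys = (np.arange(out_h) + 0.5) * h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * w / out_w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    tl = mask[np.ix_(y0, x0)]        # top-left neighbors
    tr = mask[np.ix_(y0, x0 + 1)]    # top-right neighbors
    bl = mask[np.ix_(y0 + 1, x0)]    # bottom-left neighbors
    br = mask[np.ix_(y0 + 1, x0 + 1)]
    return (tl * (1 - wy) * (1 - wx) + tr * (1 - wy) * wx
            + bl * wy * (1 - wx) + br * wy * wx)
```

The down-scaled mask is soft at box borders (values between 0 and 1) and can be thresholded or used as-is as the training target.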
It is worth mentioning that obtaining pixel-level accurate annotations for visible and infrared image pairs is challenging, since it is difficult to obtain perfectly aligned and synchronized multispectral data [@hwang2015multispectral]. Therefore, we explore the easily obtained bounding box annotations as an alternative source of supervision to train deep convolutional neural networks for multispectral target detection. Let $\{(X,Y)\}$ denote the training images $X=\{x_{i},i=1,...,M\}$ ($M$ pixels) with box-level approximate segmentation masks $Y=\{y_{i},i=1,...,M\}$, where $y_{i}=1$ denotes a foreground pixel and $y_{i}=0$ a background pixel. The parameters $\theta$ of the multispectral pedestrian detector are updated by minimizing the cross-entropy loss, defined as $$\begin{aligned} \mathcal{L}(\theta) = -\sum_{i\in{Y_{+}}}{\text{log }\text{Pr}(y_{i}=1|X;\theta)} \\ -\sum_{i\in{Y_{-}}}{\text{log }\text{Pr}(y_{i}=0|X;\theta)} , \label{eq1} \end{aligned}$$ where $Y_{+}$ and $Y_{-}$ represent the foreground and background pixels respectively, and $\text{Pr}(y_{i}|X;\theta)\in[0,1]$ is the confidence score of the prediction, measuring the probability that the pixel belongs to pedestrian regions. The confidence score is calculated using the softmax function as $$\text{Pr}(y_{i}=1|X;\theta)=\frac{e^{s_{1}}}{e^{s_{0}}+e^{s_{1}}}, \label{eq2}$$ $$\text{Pr}(y_{i}=0|X;\theta)=\frac{e^{s_{0}}}{e^{s_{0}}+e^{s_{1}}}, \label{eq3}$$ where $s_{0}$ and $s_{1}$ are the computed values in our two-channel feature maps. The optimal parameters $\theta^{*}$ are obtained by minimizing the loss function $\mathcal{L}(\theta)$ through the gradient descent optimization algorithm as $$\theta^{*}=\mathop{\arg\min}_{\theta}\mathcal{L}(\theta). \label{eq4}$$ The output of our method is a full-size prediction heat map in which human target regions yield high confidence scores [(larger than 0.5)]{} while background regions produce low ones.
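The softmax confidence scores and the cross-entropy loss above can be sketched in a few lines of NumPy. This is a numerically stabilized illustration only; `s0` and `s1` stand for the two-channel score maps, and the helper names are ours.

```python
import numpy as np

def confidence_map(s0, s1):
    """Per-pixel softmax over the two-channel score maps: Pr(y_i = 1 | X)."""
    m = np.maximum(s0, s1)                 # subtract max to stabilize exp
    e0, e1 = np.exp(s0 - m), np.exp(s1 - m)
    return e1 / (e0 + e1)

def cross_entropy_loss(s0, s1, mask, eps=1e-12):
    """Summed log-loss over foreground (mask == 1) and background pixels."""
    p1 = confidence_map(s0, s1)
    fg = -np.sum(np.log(p1[mask == 1] + eps))        # sum over Y_+
    bg = -np.sum(np.log(1.0 - p1[mask == 0] + eps))  # sum over Y_-
    return fg + bg
```

Thresholding `confidence_map` at 0.5 reproduces the foreground/background decision used throughout the paper.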
Such perceptive information is useful for many autonomous driving applications such as path planning or collision avoidance. In comparison, it is difficult or impractical to use a number of bounding boxes to identify individual pedestrians in crowded urban scenes. Visual comparisons are provided in Fig. \[fig\_1\].

EXPERIMENTS {#experiment}
===========

| Model | Reasonable all | Reasonable day | Reasonable night | Near scale | Medium scale | Far scale | No occlusion | Partial occlusion | Heavy occlusion | Inference speed (fps) |
|-------|------|------|------|------|------|------|------|------|------|------|
| MFFN-640 | 0.844 | 0.849 | 0.836 | 0.812 | 0.736 | 0.163 | 0.816 | 0.373 | 0.169 | 12.4 |
| HMFFN-640 | 0.854 | 0.865 | 0.836 | 0.797 | 0.785 | 0.166 | 0.832 | 0.391 | 0.171 | 10.8 |
| MFFN-480 | 0.825 | 0.837 | 0.812 | 0.799 | 0.705 | 0.100 | 0.790 | 0.328 | 0.152 | 20.3 |
| HMFFN-480 | 0.843 | 0.866 | 0.805 | 0.796 | 0.764 | 0.148 | 0.818 | 0.373 | 0.152 | 18.5 |
| MFFN-320 | 0.748 | 0.757 | 0.740 | 0.756 | 0.546 | 0.043 | 0.697 | 0.243 | 0.110 | 40.0 |
| HMFFN-320 | 0.817 | 0.825 | 0.808 | 0.779 | 0.696 | 0.111 | 0.779 | 0.345 | 0.140 | 38.3 |

\[tab1\]

[**(a) Daytime**]{} [**(b) Nighttime**]{}

| Model | Reasonable all | Reasonable day | Reasonable night | Near scale | Medium scale | Far scale | No occlusion | Partial occlusion | Heavy occlusion | Inference speed (fps) |
|-------|------|------|------|------|------|------|------|------|------|------|
| RPN-HMFFN-640 | 0.756 | 0.761 | 0.741 | 0.607 | 0.662 | 0.065 | 0.705 | 0.263 | 0.149 | 9.4 |
| HMFFN-640 | 0.854 | 0.865 | 0.836 | 0.797 | 0.785 | 0.166 | 0.832 | 0.391 | 0.171 | 10.8 |
| RPN-HMFFN-480 | 0.750 | 0.755 | 0.743 | 0.591 | 0.640 | 0.046 | 0.700 | 0.282 | 0.142 | 16.5 |
| HMFFN-480 | 0.843 | 0.866 | 0.805 | 0.796 | 0.764 | 0.148 | 0.818 | 0.373 | 0.152 | 18.5 |
| RPN-HMFFN-320 | 0.718 | 0.717 | 0.713 | 0.638 | 0.571 | 0.057 | 0.672 | 0.225 | 0.124 | 32.0 |
| HMFFN-320 | 0.817 | 0.825 | 0.808 | 0.779 | 0.696 | 0.111 | 0.779 | 0.345 | 0.140 | 38.3 |

\[tab12\]

[**(a) Daytime**]{} [**(b) Nighttime**]{}

Dataset and Evaluation Metric
-----------------------------

All the detectors are evaluated using the public KAIST multispectral pedestrian benchmark [@hwang2015multispectral]. We notice that CVC-14 [@gonzalez2016pedestrian] is another newly published multispectral pedestrian benchmark consisting of infrared and visible gray image pairs. However, its multispectral image pairs are not properly aligned, so the pedestrian annotations are labeled individually in the infrared and visible images. It should also be noted that some annotations are only available in the infrared or the visible image of the CVC-14 dataset. To the best of our knowledge, the KAIST multispectral pedestrian benchmark is the only available pedestrian dataset which contains large-scale, well-aligned visible-infrared image pairs with accurate manual annotations. In total, the KAIST training dataset consists of 50,172 well-aligned visible-infrared image pairs ($640 \times 512$ resolution) captured in all-day traffic scenes, with 13,853 pedestrian annotations. The training images are sampled every 2 frames, following the other multispectral pedestrian detection methods [@liu2016multispectral; @konig2017fully; @guan2018fusion; @guan2018exploiting; @li2018multispectral]. The KAIST testing dataset contains 2,252 image pairs with 1,356 pedestrian annotations. Since the original KAIST testing dataset contains many problematic annotations (e.g., inaccurate bounding boxes and missed human targets), we make use of the improved annotations provided by Liu et al. [@liu2018improved] for quantitative and qualitative evaluation.
Specifically, we consider all reasonable, scale, and occlusion subsets of the KAIST testing dataset [@hwang2015multispectral]. The output of our method is a full-size prediction heat map in which human target regions yield high confidence scores while background regions produce low ones. For a fair comparison, we transform the bounding box detection results with different prediction scores to the heat map representation, and the pixel-level average precision (AP) [@salton1986introduction; @cordts2016cityscapes] is utilized to quantitatively evaluate multispectral pedestrian detectors at the pixel level. The computed detection results are compared with the ground-truth annotation masks, which are generated from the manually labeled bounding boxes: pixels located inside the ground-truth bounding boxes are defined as foreground, while all other pixels are defined as background. Given the heat map predictions, true positive (TP) is the number of correctly predicted foreground pixels, false positive (FP) is the number of background pixels incorrectly predicted as foreground, and false negative (FN) is the number of foreground pixels incorrectly predicted as background. Precision is calculated as TP/(TP+FP) and recall is computed as TP/(TP+FN). The AP depicts the shape of the precision/recall curve, and is defined as the mean precision at a number of equally spaced recall levels obtained by varying the threshold on detection scores. In our implementation, we average the precision values at 100 recall levels equally spaced between 0 and 1.

Implementation Details
----------------------

An image-centric training and testing strategy is applied to generate mini-batches without using image pyramids. The batch size is set to 1, following the method presented by Guan *et al.* [@guan2018exploiting].
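Under the definitions above, the pixel-level AP computation can be sketched as follows. This is an illustrative NumPy version that uses the interpolated precision (maximum precision at each recall level, a common convention); the exact averaging scheme of our implementation may differ slightly.

```python
import numpy as np

def pixel_average_precision(scores, gt_mask, num_levels=100):
    """Pixel-level AP: mean precision at equally spaced recall levels,
    obtained by sweeping a threshold over the heat-map scores."""
    scores, gt = scores.ravel(), gt_mask.ravel().astype(bool)
    order = np.argsort(-scores)          # descending confidence
    tp = np.cumsum(gt[order])            # foreground pixels accepted so far
    fp = np.cumsum(~gt[order])           # background pixels accepted so far
    recall = tp / max(gt.sum(), 1)
    precision = tp / (tp + fp)
    levels = np.linspace(0, 1, num_levels)
    ap = 0.0
    for r in levels:
        reachable = recall >= r
        # interpolated precision: best precision among points with recall >= r
        ap += precision[reachable].max() if reachable.any() else 0.0
    return ap / num_levels
```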
Each stream of the feature extraction layers in MFFN and HMFFN is initialized using the weights and biases of the VGG-16 net [@simonyan2014very] pre-trained on the ImageNet dataset [@russakovsky2015imagenet]. All the other convolutional layers use normalized (Xavier) initialization, following Glorot and Bengio [@glorot2010understanding]. We utilize the Caffe [@jia2014caffe] deep learning framework to train and test our proposed multispectral pedestrian detectors. All the models are fine-tuned using stochastic gradient descent (SGD) [@zinkevich2010parallelized] for the first two epochs with a learning rate of 0.001 and one more epoch with a learning rate of 0.0001. The adjustable gradient clipping technique is used during training to suppress exploding gradients [@pascanu2013difficulty].

| Model | Reasonable all | Reasonable day | Reasonable night | Near scale | Medium scale | Far scale | No occlusion | Partial occlusion | Heavy occlusion | Inference speed (fps) |
|-------|------|------|------|------|------|------|------|------|------|------|
| Halfway Fusion [@liu2016multispectral] | 0.702 | 0.708 | 0.691 | 0.623 | 0.583 | 0.062 | 0.695 | 0.128 | 0.037 | 2.5 |
| Fusion RPN+BDT [@konig2017fully] | 0.755 | 0.767 | 0.731 | 0.663 | 0.681 | 0.027 | 0.700 | 0.165 | 0.030 | 1.3 |
| IATDNN+IAMSS [@guan2018fusion] | 0.766 | 0.772 | 0.756 | 0.614 | 0.643 | 0.043 | 0.715 | 0.263 | 0.106 | 4.0 |
| FRPN-Sum+TSS [@guan2018exploiting] | 0.765 | 0.767 | 0.750 | 0.626 | 0.638 | 0.045 | 0.714 | 0.277 | 0.116 | 4.4 |
| MSDS-RCNN [@li2018multispectral] | 0.744 | 0.750 | 0.721 | 0.670 | 0.673 | 0.068 | 0.712 | 0.206 | 0.070 | 4.4 |
| **HMFFN-640 (ours)** | 0.854 | 0.865 | 0.836 | 0.797 | 0.785 | 0.166 | 0.832 | 0.391 | 0.171 | 10.8 |
| **HMFFN-320 (ours)** | 0.817 | 0.825 | 0.808 | 0.779 | 0.696 | 0.111 | 0.779 | 0.345 | 0.140 | 38.3 |

\[tab2\]

[Ground Truth]{} [FRPN+BDT [@konig2017fully]]{} [IATDNN+IAMSS [@guan2018fusion]]{} [MSDS-RCNN [@li2018multispectral]]{} [**HMFFN-640 (ours)**]{} [**HMFFN-320 (ours)**]{}

[**(a) Daytime**]{} [**(b) Nighttime**]{}

Evaluation of Multispectral Feature Fusion Schemes {#MFFNvsHMFFN}
--------------------------------------------------

In this paper, we design two multispectral feature fusion schemes (MFFN and HMFFN). The HMFFN model makes use of skip connections to associate the middle-level feature maps (output of the Conv4-V/I layers) with the high-level ones (output of the Conv5-V/I layers). We experimentally evaluate the performance gain of incorporating middle-level feature maps into the baseline MFFN model. The quantitative performance [(pixel-level AP [@salton1986introduction])]{} of MFFN and HMFFN for different sizes of input images ($640 \times 512$, $480 \times 384$, and $320 \times 256$) is compared in Tab. \[tab1\]. We observe that better detection performance is achieved through hierarchical multispectral feature fusion. Moreover, the performance gain is more obvious when handling small-size input images. By incorporating the middle-level feature maps, the AP index significantly increases from 0.748 (MFFN-320) to 0.817 (HMFFN-320) for $320 \times 256$ resolution input images in the Reasonable-all subset, while the improvement is less obvious for $640 \times 512$ resolution input images (increasing from 0.844 to 0.854). The underlying reason is that the middle-level features from shallower layers (Conv4-V/I) encode rich small-scale image characteristics which are essential for accurate detection of small-size targets. Using smaller input images significantly improves the computational efficiency for real-time autonomous driving applications. Furthermore, we conduct a qualitative comparison of the two multispectral feature fusion networks (MFFN-320 and HMFFN-320) by displaying detection results in various scenes in Fig. \[fig7\].
It is observed that performance gains can generally be achieved (in both daytime and nighttime scenes and on different scale and occlusion subsets) by integrating middle-level feature maps with high-level ones. We evaluate the MFFN-320 and HMFFN-320 models on testing subsets of different scales. Although both MFFN-320 and HMFFN-320 work well on the near scale subset, HMFFN-320 can better identify pedestrian targets in the medium and far scale subsets by incorporating image details extracted in the middle-level layers (Conv4-V/I). Moreover, we test the MFFN-320 and HMFFN-320 models on different occlusion subsets and observe that HMFFN-320 generates more accurate detection results when target objects are partially or heavily occluded. A reasonable explanation of this improvement is that low-level features extracted in shallower layers (Conv4-V/I) provide useful information about human parts and their relationships, which helps to handle the challenging target occlusion problem [@shu2012part]. The experimental results verify the effectiveness of the proposed HMFFN architecture, which is capable of extracting informative multi-scale feature maps to achieve more precise object detection and to remain more robust against scene variations.

Evaluation of Box-level Segmentation Supervised Framework
---------------------------------------------------------

In this subsection, we evaluate the performance gain of using box-level segmentation masks instead of anchor boxes to train deep convolutional neural networks for multispectral target detection. For a fair comparison, we make use of the same HMFFN architecture for multispectral feature extraction/fusion, as shown in Fig. \[fig3\] (b). Given the multispectral semantic features from the Conv-Mul layer, the anchor box based detector RPN [@zhang2016faster] is utilized to generate confidence scores and bounding boxes as detection results.
In comparison, our proposed segmentation mask supervised method computes a prediction heat map to highlight the existence of human targets in a scene. The performances [(pixel-level AP [@salton1986introduction])]{} of our proposed box-level segmentation supervised method (HMFFN) and the anchor box based one (RPN-HMFFN) on different sizes of input images ($640 \times 512$, $480 \times 384$, and $320 \times 256$) are quantitatively compared in Tab. \[tab12\]. It is observed that HMFFN, based on box-level segmentation masks, performs better than RPN-HMFFN, based on anchor boxes, achieving significantly higher AP indexes on various testing subsets and on images of different sizes (HMFFN-640 0.854 AP vs. RPN-HMFFN-640 0.756 AP on the reasonable all subset). Such improvements are particularly evident on some challenging detection tasks (HMFFN-640 0.166 AP vs. RPN-HMFFN-640 0.065 AP for far scale human target detection). Another advantage of our proposed HMFFN is that it directly computes a prediction heat map instead of confidence scores and coordinates of bounding boxes, achieving faster inference speed (HMFFN-320 38.3 fps vs. RPN-HMFFN-320 32.0 fps). Furthermore, we qualitatively show some sample detection results of HMFFN-640 and RPN-HMFFN-640 in Fig. \[fig8\]. [The output of our method is a full-size prediction heat map in which human target regions yield high confidence scores. For a fair comparison, we also transform the bounding box detection results with different prediction scores to the heat map representation, utilizing different colors to show the prediction scores of bounding boxes. Note that we only show regions with confidence scores larger than 0.5.]{} It is noted that HMFFN-640 generates more precise detection results and fewer false positives compared with RPN-HMFFN-640.
The use of anchor boxes involves complex hyperparameter settings (e.g., box size, aspect ratio, stride, and intersection-over-union threshold) [@law2018cornernet] and causes severe imbalance between positive and negative training samples, which damages the learning of human-related features. Moreover, we observe that HMFFN-640 can successfully identify some pedestrian instances on the far scale and heavy occlusion subsets which are difficult to detect using the anchor box based RPN-HMFFN-640, or even based on visual observation. For small or occluded targets, it is difficult to generate enough positive samples using discretely distributed anchor boxes. In comparison, our proposed HMFFN takes the easily obtained bounding box annotations as input and produces an unambiguous box-level segmentation mask for learning to distinguish target objects from the background. Overall, our experimental results demonstrate that box-level approximate segmentation masks provide better supervision information than anchor boxes for the training of two-stream deep neural networks to learn human-related features.

Comparison with the State-of-the-art
------------------------------------

We compare the proposed HMFFN-640 and HMFFN-320 models with a number of state-of-the-art multispectral pedestrian detectors, including Halfway Fusion [@liu2016multispectral], Fusion RPN+BDT [@konig2017fully], IATDNN+IAMSS [@guan2018fusion], FRPN-Sum+TSS [@guan2018exploiting], and MSDS-RCNN [@li2018multispectral]. The Fusion RPN+BDT [@konig2017fully] model is re-implemented and trained according to the original paper, and the detection results of Halfway Fusion [@liu2016multispectral], IATDNN+IAMSS [@guan2018fusion], FRPN-Sum+TSS [@guan2018exploiting], and MSDS-RCNN [@li2018multispectral] are kindly provided by the authors. The quantitative evaluation results of different multispectral pedestrian detectors are shown in Tab. \[tab2\].
Our proposed HMFFN-640 and HMFFN-320 models both achieve higher AP values in all reasonable, scale, and occlusion subsets of the KAIST testing dataset. These comparative results indicate that our proposed multispectral pedestrian detector achieves more robust performance under various surveillance situations. We qualitatively compare different multispectral pedestrian detectors by visualizing some sample detection results in Fig. \[fig6\]. [The output of our method is a full-size prediction heat map in which human target regions yield high confidence scores, while the bounding box detection results with different prediction scores are transformed to the heat map representation, utilizing different colors to show the prediction scores of bounding boxes. Note that we only show regions with confidence scores larger than 0.5.]{} Different from the existing multispectral pedestrian detection methods, which generate a number of bounding boxes, our method estimates a full-size prediction heat map to highlight the existence of pedestrians in a scene. It is observed that our approach is capable of generating accurate detection results even for small human targets and using small-size input images. We also compare the computational efficiency of HMFFN-640 and HMFFN-320 with state-of-the-art methods. A single Titan X GPU is utilized to evaluate the computational efficiency. Please note that the current state-of-the-art multispectral pedestrian detectors [@konig2017fully; @Liu2016BMVC; @guan2018fusion; @guan2018exploiting; @li2018multispectral] typically perform image up-scaling to achieve their optimal detection performances. For instance, the input sizes of the Halfway Fusion [@liu2016multispectral], Fusion RPN+BDT [@konig2017fully], IATDNN+IAMSS [@guan2018fusion], FRPN-Sum+TSS [@guan2018exploiting], and MSDS-RCNN [@li2018multispectral] models are $750 \times 600$, $960 \times 768$, $960 \times 768$, $960 \times 768$, and $750 \times 600$, respectively.
In comparison, HMFFN-640 directly takes $640 \times 512$ multispectral data as input without image up-scaling and thus runs much faster (10.8 fps vs. 4.4 fps). Moreover, our HMFFN-320 model takes small-size $320 \times 256$ images as input and achieves 38.3 fps, which is sufficient for real-time autonomous driving applications. Note that HMFFN-320 still achieves more accurate detection results than the current state-of-the-art multispectral pedestrian detection methods.

CONCLUSIONS {#conclusion}
===========

In this paper, we propose a powerful box-level segmentation supervised learning framework for accurate and real-time multispectral pedestrian detection. To the best of our knowledge, this represents the first attempt to train multispectral pedestrian detectors without using anchor boxes. Extensive experimental results verify that box-level approximate segmentation masks provide useful information for distinguishing human targets from the background. Also, we design a hierarchical multispectral feature fusion scheme in which the middle-level feature maps (small-scale image characteristics) and the high-level ones (semantic information) are incorporated to achieve more accurate detection results, particularly for far-scale human targets. Experimental results on the KAIST benchmark show that our proposed method achieves higher detection accuracy compared with the state-of-the-art multispectral pedestrian detectors. Moreover, this efficient framework achieves real-time processing speed, processing more than 30 images per second on a single NVIDIA GeForce Titan X GPU. The proposed method can be generalized to other object detection tasks with multispectral input and can facilitate potential applications (e.g., path planning, collision avoidance, and target tracking) in autonomous vehicles.
---
author:
- 'Khoat Than,  Tu Bao Ho, '
bibliography:
- '../topic-models-all.bib'
- '../other-all.bib'
title: 'Inference in topic models: sparsity and trade-off'
---

Topic modeling has matured into an attractive research area. Originally motivated by textual applications, it has gone far beyond text, touching upon many applications in computer vision, bioinformatics, software engineering, and forensics, to name a few. Recent developments [@SmolaS2010; @NewmanASW2009; @AsuncionSW2011; @MimnoHB12; @Hoffman2013SVI; @Broderick2013streaming] in this area enable us to easily work with big text collections or stream data. Posterior inference is an integral part of probabilistic topic models, e.g., latent Dirichlet allocation (LDA) [@BNJ03]. It often refers to the problem of estimating the posterior distribution of latent variables, such as ${\boldsymbol{z}}$ (topic indices) or ${\boldsymbol{\theta}}$ *(topic proportions)*, for an individual document ${\boldsymbol{d}}$. Knowing ${\boldsymbol{z}}$ or ${\boldsymbol{\theta}}$ (or their distributions) is vital in many tasks, such as understanding individual texts, dimensionality reduction, and prediction. More importantly, posterior inference is the core step when designing efficient algorithms for learning topic models from large-scale data. Unfortunately, the problem is often intractable [@SontagR11].

The topic and contributions in this paper
-----------------------------------------

We consider the MAP inference problem: $${\boldsymbol{\theta}}^* = \arg \max_{\boldsymbol{\theta}} \Pr({\boldsymbol{\theta, d}} | \mathcal{M}),$$ given a document ${\boldsymbol{d}}$ and a model $\mathcal{M}$. We investigate the benefits of the Frank-Wolfe algorithm (FW) by [@Clarkson2010] when used to do posterior inference in topic models. On the one hand, this algorithm has a fast rate of convergence to optimal solutions.
On the other hand, FW can swiftly recover sparse ${\boldsymbol{\theta}}$'s and provides a way to directly trade off the sparsity of solutions against their quality. Those properties are essential for resolving large-scale settings. Note that sparsity in topic models has been receiving considerable attention recently. FW provides a very simple way to deal with sparsity. Therefore, FW seems to have many more attractive properties than traditional inference methods. A more detailed comparison is summarized in Table \[table 1: theoretical comparison\]. Our second contribution is the introduction of 3 novel algorithms for learning LDA at large scales: *Online-FW*, which borrows ideas from online learning [@Hoffman2013SVI]; *Streaming-FW*, which borrows ideas from stream learning [@Broderick2013streaming]; and *ML-FW*, which is regularized online learning. Those algorithms employ FW as the core step to do inference for individual documents, and learn LDA in a stochastic way. While Online-FW can only work with big datasets, Streaming-FW and ML-FW can work with both big collections and data streams. Extensive experiments demonstrate that those methods are much more efficient than the state-of-the-art learning methods, while keeping comparable generalizability and quality. In particular, to reach the same level of predictiveness, ML-FW can perform tens to thousands of times faster than existing methods. Therefore, our study results in efficient tools for learning LDA at large scales.

Related work {#sec: related work}
------------

Various methods for inference have been proposed, such as variational Bayes (VB) [@BNJ03], collapsed variational Bayes (CVB) [@TehNW2007collapsed; @Asuncion+2009smoothing], and collapsed Gibbs sampling (CGS) [@MimnoHB12; @GriffithsS2004]. Sampling-based methods may converge to the underlying distributions. VB and CVB are much faster, and CVB0 [@Asuncion+2009smoothing] often performs best.
Although these inference methods are significant developments for topic models, they retain two common limitations that should be further studied in both theory and practice. First, there has been no theoretical bound on convergence rate and inference quality. Second, the inferred topic proportions of documents are dense, which requires considerable memory for storage. [^1] Previous studies that have attacked the sparsity problem can be categorized into two main directions. The first direction is probabilistic [@WilliamsonWHB2010], in which some probability distributions or stochastic processes are employed to control sparsity. The other direction is non-probabilistic, in which regularization techniques are employed to induce sparsity [@ZhuX2011; @ShashankaRS2007; @LarssonU11]. Although those approaches have gained important successes, they suffer from some severe drawbacks. Indeed, the probabilistic approach often requires extending core topic models to be more complex, thus complicating learning and inference. Meanwhile, the non-probabilistic one often changes the objective function of inference to be non-smooth, which complicates inference, and requires some additional auxiliary parameters associated with the regularization terms. Such parameters necessarily require us to do model selection to find an acceptable setting for a given dataset, which is sometimes expensive. Furthermore, a common limitation of these two approaches is that the sparsity level of the latent representations is a priori unpredictable, and cannot be directly controlled. There is inherently a tension between sparsity and time in previous inference approaches. Some approaches focusing on speeding up inference [@BNJ03; @TehNW2007collapsed; @Asuncion+2009smoothing] often ignore the sparsity problem.
The main reason may be that a zero contribution of a topic to a document is implicitly prohibited in some models, in which Dirichlet distributions [@BNJ03] or the logistic function [@BleiL07] are employed to model latent representations of documents. Meanwhile, the approaches dealing with the sparsity problem often require more time-consuming inference, e.g., [@WilliamsonWHB2010; @LarssonU11].[^2] Note that in many practical applications, e.g., information retrieval and computer vision, fast inference of sparse latent representations of documents is of substantial significance. Hence resolving this tension is necessary.

Roadmap
-------

We review briefly in Section \[sec:post-inference\] some of the most common methods for doing inference in topic models. Section \[sec:FW\] presents the Frank-Wolfe algorithm, discusses how to employ it in topic models, and then presents some interesting benefits of FW. We present 3 new stochastic algorithms for learning LDA in Section \[sec:stochasticLDA\], followed by empirical evaluations in Section \[sec:evaluation\]. Some conclusions are in the final section.

<span style="font-variant:small-caps;">Notation:</span>

- $\mathcal{V}$: a vocabulary of $V$ terms, often written as $\{1, 2,...,V\}$.
- $\boldsymbol{d}$: a document represented as a count vector, ${\boldsymbol{d}} = (d_1, ..., d_V)$, where $d_j$ is the frequency of term $j$.
- $n_d$: the number of different terms in ${\boldsymbol{d}}$.
- $\ell_d$: the length of ${\boldsymbol{d}}$.
- $\mathcal{C}$: a corpus consisting of $M$ documents, $\{\boldsymbol{d}_1, ..., \boldsymbol{d}_M\}$.
- $\boldsymbol{\beta}_k$: a topic, which is a distribution over the vocabulary $\mathcal{V}$: $\boldsymbol{\beta}_k = (\beta_{k1},...,\beta_{kV})^t,$ $\beta_{kj} \ge 0, \sum_{j=1}^{V} \beta_{kj} =1$.
- $N_{kj}$: the expected number of times that term $j$ appears in topic $k$.
- $\lambda_{kj}$: the variational parameter showing the contribution of term $j$ to topic $k$.
- $\phi_{jk}$: the variational parameter showing the probability that term $j$ is generated from topic $k$.
- $\phi_{ik}$: the variational parameter showing the probability that token $i$ is generated from topic $k$.
- $\gamma_k$: the variational parameter showing the expected contribution of topic $k$.
- $\psi(\cdot)$: the digamma function.
- $K$: the number of topics.
- ${\boldsymbol{e}}_i$: the $i$th unit vector in $\mathbb{R}^K$.
- $\Delta_K$: the unit simplex $\Delta_K = conv(\boldsymbol{e}_1, ..., \boldsymbol{e}_K)$, or equivalently $\Delta_K = \{\boldsymbol{x} \in \mathbb{R}^K: ||\boldsymbol{x}||_1 = 1, \boldsymbol{x} \ge 0\}$.
- $\mathbb{I}(x)$: the indicator function, which returns 1 if $x$ is true, and 0 otherwise.
- $\nabla f$: the gradient of function $f$.

Backgrounds on posterior inference {#sec:post-inference}
==================================

A topic model often assumes that a corpus is composed from $K$ topics, $\boldsymbol{\beta} = (\boldsymbol{\beta}_1, ..., \boldsymbol{\beta}_K)$. Each document ${\boldsymbol{d}}$ is a mixture of those topics and is assumed to arise from the following generative process. For the $i^{th}$ word of ${\boldsymbol{d}}$:

- draw topic index $z_{i} | {\boldsymbol{\theta}} \sim Multinomial({\boldsymbol{\theta}})$
- draw word $w_{i}| z_{i}, {\boldsymbol{\beta}} \sim Multinomial({\boldsymbol{\beta}}_{z_{i}})$.
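For concreteness, the generative process above can be simulated directly. The following sketch is purely illustrative: the sizes $K$, $V$ and the document length are hypothetical, and the snippet shows only the sampling steps, not any learning method.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, doc_len = 3, 10, 50          # hypothetical numbers of topics, terms, tokens

beta = rng.dirichlet(np.ones(V), size=K)   # K topics, each a distribution over V terms
theta = rng.dirichlet(np.ones(K))          # topic mixture of one document

words = []
for _ in range(doc_len):
    z = rng.choice(K, p=theta)      # draw topic index z_i ~ Multinomial(theta)
    w = rng.choice(V, p=beta[z])    # draw word w_i ~ Multinomial(beta_{z_i})
    words.append(w)

d = np.bincount(words, minlength=V)  # count-vector representation d = (d_1, ..., d_V)
```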
Each topic mixture ${\boldsymbol{\theta}} = (\theta_{1}, ..., \theta_{K})$ represents the contributions of topics to document ${\boldsymbol{d}}$, i.e., $\theta_k = \Pr(z=k | {\boldsymbol{d}})$. Each $\beta_{kj} = \Pr(w=j | z=k)$ shows the contribution of term $j$ to topic $k$. Note that ${\boldsymbol{\theta}} \in \Delta_K, {\boldsymbol{\beta}}_k \in \Delta_V, \forall k$. Both ${\boldsymbol{\theta}}$ and ${\boldsymbol{z}}$ are hidden variables and are local for each document. The generative process above essentially describes probabilistic latent semantic analysis (PLSA) [@Hof01]. Latent Dirichlet allocation (LDA) [@BNJ03] further assumes that ${\boldsymbol{\theta}}$ and ${\boldsymbol{\beta}}$ are samples of some Dirichlet distributions. More specifically, ${\boldsymbol{\theta}} \sim Dirichlet(\alpha)$ and ${\boldsymbol{\beta}}_k \sim Dirichlet(\eta)$ for any topic. According to [@TehNW2007collapsed], *the problem of posterior inference* for each document ${\boldsymbol{d}}$, given a model $\{{\boldsymbol{\beta}}, \alpha\}$, is to estimate the full joint distribution $p({\boldsymbol{z}}, {\boldsymbol{\theta}}, {\boldsymbol{d}} | {\boldsymbol{\beta}}, \alpha)$. Direct estimation of this distribution is intractable, i.e., NP-hard in the worst case [@SontagR11]. Hence existing inference approaches use different schemes. VB, CVB, and CVB0 try to estimate the distribution by maximizing a lower bound of the likelihood $p({\boldsymbol{d}} | {\boldsymbol{\beta}}, \alpha)$, whereas CGS [@MimnoHB12] tries to estimate $p({\boldsymbol{z}} | {\boldsymbol{d}}, {\boldsymbol{\beta}}, \alpha)$. We will revisit those methods briefly in the next subsections, with LDA as the base model.

Variational Bayes (VB)
----------------------

VB by [@BNJ03] is one of the first methods to do posterior inference for LDA. The learning problem of LDA is to estimate the full joint distribution $\Pr({\boldsymbol{z, \theta, \beta}} | \mathcal{C})$ given a corpus $\mathcal{C}$.
This problem is intractable in the worst case [@SontagR11]. To overcome intractability, VB assumes that the latent variables are independent. Specifically, we use a simpler factorized distribution $Q$ to estimate the joint distribution $\Pr({\boldsymbol{z, \theta, \beta}} | \mathcal{C})$, where $$\label{eq-vb-01} Q({\boldsymbol{z, \theta, \beta}}) = \prod_{d \in \mathcal{C}} Q({\boldsymbol{z}}_d | {\boldsymbol{\phi}}_d) \prod_{d \in \mathcal{C}} Q({\boldsymbol{\theta}}_d | {\boldsymbol{\gamma}}_d) \prod_k Q({\boldsymbol{\beta}}_k | {\boldsymbol{\lambda}}_k).$$ The learning problem then reduces to estimating the variational parameters $\{{\boldsymbol{\phi, \gamma, \lambda}}\}$ by maximizing an evidence lower bound (ELBO) on the likelihood $\Pr(\mathcal{C} | \alpha, \eta)$, i.e., $$\label{eq-vb-02} \max \mathbb{E}_{Q({\boldsymbol{z,\theta, \beta}})} \left[ \log \Pr({\boldsymbol{z, \theta, \beta}}, \mathcal{C} | \alpha, \eta) \right] + H(Q({\boldsymbol{z, \theta, \beta}})),$$ where $H(x)$ denotes the entropy of $x$. Note that VB implicitly assumes ${\boldsymbol{\beta}}_k \sim Dir({\boldsymbol{\lambda}}_k)$. Owing to the modular nature of VB, individual documents can be dealt with independently. Algorithm \[alg:-VB\] describes in detail how VB estimates $\Pr({\boldsymbol{z, \theta}} | {\boldsymbol{d}}, {\boldsymbol{\beta}}, \alpha)$ to do posterior inference for a document. It is easy to observe that VB requires $O(Kn_d + K)$ memory to store the variational parameters for each document. Each iteration needs $O(Kn_d + K)$ arithmetic computations to update ${\boldsymbol{\gamma}}$ and ${\boldsymbol{\phi}}$. VB also requires the computation of some expensive functions, including the digamma and exponential functions. In particular, for each iteration VB needs $O(Kn_d + K)$ evaluations of the digamma and exponential functions. Those computations cause VB to consume significant time in practice.

*Algorithm VB.* Input: document $\boldsymbol{d}$, model $\{{\boldsymbol{\lambda}}, \alpha\}$. Output: ${\boldsymbol{\phi}}$.

- Initialize ${\boldsymbol{\phi}}$ randomly.
- Repeat until convergence:
    - $\gamma_k := \alpha + \sum_{d_j > 0} \phi_{jk} d_j$
    - $\phi_{jk} \propto \exp \psi(\gamma_{k}) \cdot \exp[ \psi(\lambda_{kj}) - \psi(\sum_t \lambda_{kt}) ]$

*Algorithm CVB.* Input: document $\boldsymbol{d}$, model $\{{\boldsymbol{N}}, \alpha, \eta\}$. Output: ${\boldsymbol{\phi}}$.

- Initialize ${\boldsymbol{\phi}}$ randomly.
- Repeat until convergence, for each token $i$:
    - $\gamma^{-i}_{k} := \alpha + \sum_{t \neq i} \phi_{tk}$
    - $V^{-i}_{k} := \sum_{t \neq i} \phi_{tk} (1-\phi_{tk})$
    - $N^{-i}_{kz_i} := N^{-i}_{kz_i} + \phi_{ik}$
    - $a_k^{-i} := \sum_t N_{kt}^{-i}$
    - $X := - \frac{V_k^{-i}}{2(\gamma_{k}^{-i})^2} - \frac{V_{kz_i}^{-i}}{2(N_{kz_i}^{-i} + \eta)^2} + \frac{V_k^{-i}}{2(a_{k}^{-i} + V\eta)^2}$
    - $\phi_{ik} \propto \gamma_{k}^{-i} (N_{kz_i}^{-i} + \eta) (a_{k}^{-i} + V\eta)^{-1} \exp X$

*Algorithm CVB0.* Input: document $\boldsymbol{d}$, model $\{{\boldsymbol{N}}, \alpha, \eta\}$. Output: ${\boldsymbol{\phi}}$.

- Initialize ${\boldsymbol{\phi}}$ randomly.
- Repeat until convergence, for each token $i$:
    - $\gamma^{-i}_{k} := \alpha + \sum_{t \neq i} \phi_{tk}$
    - $N^{-i}_{kz_i} := N^{-i}_{kz_i} + \phi_{ik}$
    - $a_k^{-i} := \sum_t N_{kt}^{-i}$
    - $\phi_{ik} \propto \gamma_{k}^{-i} (N_{kz_i}^{-i} + \eta) (a_{k}^{-i} + V\eta)^{-1}$

*Algorithm CGS.* Input: document $\boldsymbol{d}$, model $\{{\boldsymbol{\lambda}}, \alpha\}$. Output: ${\boldsymbol{\phi}}$.

- Initialize ${\boldsymbol{z}}$ randomly. Discard $B$ burn-in sweeps.
- For each sample and each token $i$:
    - $\gamma^{-i}_{k} := \alpha + \sum_{t \neq i} \mathbb{I}(z_t = k)$
    - $\phi_{ik} \propto \gamma_{k}^{-i} \exp [ \psi(\lambda_{kz_i}) - \psi(\sum_t \lambda_{kt}) ]$
    - Sample $z_i$ from $Multinomial({\boldsymbol{\phi}}_i)$.

*Algorithm FW.* Input: document $\boldsymbol{d}$, model ${\boldsymbol{\beta}}$, objective function $f({\boldsymbol{\theta}}) = \sum_{j} d_j \log \sum_{k=1}^K \theta_k \beta_{kj}$. Output: $\boldsymbol{\theta}$ that maximizes $f(\boldsymbol{\theta})$ over $\Delta_K$.

- Pick as $\boldsymbol{\theta}_{0}$ the vertex of $\Delta_K$ with largest $f$ value.
- For $\ell = 0, 1, 2, ...$:
    - $i' := \arg \max_i \nabla f(\boldsymbol{\theta}_{\ell})_{i}$;
    - $\alpha := 2 / (\ell+3)$;
    - $\boldsymbol{\theta}_{\ell +1} := \alpha \boldsymbol{e}_{i'} +(1-\alpha)\boldsymbol{\theta}_{\ell}$.
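As a minimal illustration, the VB updates for a single document can be sketched as follows. This is a simplified sketch, not the authors' implementation; it assumes `scipy` for the digamma function, and the function name and fixed iteration count are illustrative.

```python
import numpy as np
from scipy.special import digamma

def vb_infer(d, lam, alpha, iters=50):
    """One-document VB inference (sketch): d is a length-V count vector,
    lam is the K x V variational topic parameter, alpha the Dirichlet prior."""
    nz = np.flatnonzero(d)              # terms with d_j > 0
    K = lam.shape[0]
    # E_q[log beta_kj] for the terms present in d
    elog_beta = digamma(lam[:, nz]) - digamma(lam.sum(axis=1, keepdims=True))
    phi = np.full((len(nz), K), 1.0 / K)     # uniform initialization
    for _ in range(iters):
        gamma = alpha + phi.T @ d[nz]                        # gamma_k update
        phi = np.exp(digamma(gamma)[None, :] + elog_beta.T)  # phi_jk update
        phi /= phi.sum(axis=1, keepdims=True)                # normalize over k
    return gamma, phi
```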
Collapsed variational Bayes (CVB)
---------------------------------

Instead of using a fully factorized distribution, CVB by [@TehNW2007collapsed] uses $$\label{eq-cvb-03} Q({\boldsymbol{z, \theta, \beta}}) = Q({\boldsymbol{\theta, \beta}} | {\boldsymbol{z, \gamma, \lambda}}) \prod_{d \in \mathcal{C}} Q({\boldsymbol{z}}_d | {\boldsymbol{\phi}}_d)$$ to approximate $\Pr({\boldsymbol{z, \theta, \beta}} | \mathcal{C})$. The resulting problem is $$\label{eq-cvb-04} \max \mathbb{E}_{Q({\boldsymbol{z}}) Q({\boldsymbol{\theta, \beta}} | {\boldsymbol{z}})} \left[ \log \Pr({\boldsymbol{z, \theta, \beta}}, \mathcal{C} | \alpha, \eta) \right] + H(Q({\boldsymbol{z}}) Q({\boldsymbol{\theta, \beta}} | {\boldsymbol{z}})).$$ We maximize the objective function with respect to $Q({\boldsymbol{\theta, \beta}} | {\boldsymbol{z}})$ first, and then with respect to $Q({\boldsymbol{z}})$, until convergence. Note that CVB can give better approximations than VB because it maintains the dependency between ${\boldsymbol{z}}$ and $({\boldsymbol{\theta, \beta}})$. Borrowing ideas from Gibbs sampling [@GriffithsS2004], CVB exploits individual tokens in documents to do inference. For example, while VB maintains a variational distribution ${\boldsymbol{\gamma}} = (\gamma_1, ..., \gamma_K)$ for each document, CVB maintains a ${\boldsymbol{\gamma}}$ for each token. Such a deeper treatment probably helps CVB work better than VB. When adapting it to inference for a specific document ${\boldsymbol{d}}$, we find that CVB in fact tries to estimate $\Pr({\boldsymbol{z}} | {\boldsymbol{d}}, \alpha, \eta)$, which is simpler than $\Pr({\boldsymbol{z, \theta}} | {\boldsymbol{d}}, \alpha, \eta)$ in VB. However, posterior inference by CVB is not local to a particular document, and requires some updates to global variables. Details of posterior inference by CVB are presented in Algorithm \[alg:-CVB\]. Note that $N_{kj}$ plays a similar role to $\lambda_{kj}$ in VB.
In comparison with VB, CVB requires significantly more computation and memory for storing temporary parameters. Since CVB works with individual tokens in a document, memory for the variational parameters is $O(K\ell_d)$, where $\ell_d$ denotes the number of tokens in document ${\boldsymbol{d}}$. Note that we often have $\ell_d \ge n_d$. CVB further needs to maintain the variance vector ($V^{-i}$) for each token, which also requires $O(K\ell_d)$ memory. From those observations, one can see that each iteration of CVB requires $O(K\ell_d)$ computations. One important property of CVB is that each update of the local variables w.r.t. a token requires some modifications to the global variables (${\boldsymbol{N}}$). It may help the model update more quickly as it observes individual tokens. Nonetheless, this property is not ideal for some practical cases, such as parallel/distributed inference for individual documents, as the communication overhead will be very high.

Fast collapsed variational Bayes (CVB0)
---------------------------------------

CVB0 [@Asuncion+2009smoothing] is an improved version of CVB. The update for $\phi_{ik}$ in CVB makes use of a second-order Taylor expansion, and is quite involved. Asuncion et al. [@Asuncion+2009smoothing] propose to use only the zeroth-order information for approximation, which makes the update of $\phi_{ik}$ significantly simpler. Algorithm \[alg:-CVB0\] shows details of CVB0 for doing posterior inference for a given document. Similar to CVB, we still have to make some updates to global variables $(N_{kj})$ when doing inference for individual documents in CVB0. Nonetheless, CVB0 does not have to maintain any variance for individual tokens. This property makes CVB0 much more efficient than CVB in both computation and memory. Due to its simplicity, CVB0 requires much less computation and storage than the original CVB. No computation of exponential or digamma functions is necessary.
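The CVB0 update for one token, including the write-back to the global counts ${\boldsymbol{N}}$ discussed above, can be sketched as follows. The variable names and the statistics bookkeeping are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

def cvb0_update_token(i, w_i, phi, N, alpha, eta):
    """CVB0 update for token i with word id w_i (sketch).
    phi: L x K token-topic responsibilities, N: K x V expected counts
    (assumed to include token i's current contribution in column w_i)."""
    K, V = N.shape
    # remove token i's current contribution from the statistics
    gamma_minus = alpha + phi.sum(axis=0) - phi[i]   # gamma_k^{-i}
    N_minus_w = N[:, w_i] - phi[i]                   # N_{k,w_i}^{-i}
    a_minus = N.sum(axis=1) - phi[i]                 # a_k^{-i}
    new_phi = gamma_minus * (N_minus_w + eta) / (a_minus + V * eta)
    new_phi /= new_phi.sum()
    # write back: the global counts move together with phi in CVB0
    N[:, w_i] += new_phi - phi[i]
    phi[i] = new_phi
    return phi, N
```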
By a careful enumeration, we find that the complexity of CVB0 in both computation and memory is $O(K\ell_d)$. Similar to CVB, we still need to make some modifications to global variables when doing local inference for individual documents.

Collapsed Gibbs sampling (CGS)
------------------------------

Originally, CGS was proposed by [@GriffithsS2004] for learning LDA from data. It has recently been successfully adapted to posterior inference for individual documents by [@MimnoHB12]. It tries to estimate $\Pr({\boldsymbol{z}} | {\boldsymbol{d}}, \alpha, \eta)$ by iteratively resampling the topic indicator at each token in ${\boldsymbol{d}}$ from the conditional distribution over that position given the remaining topic indicator variables (${\boldsymbol{z}}^{-i}$): $$\label{eq-cgs-05} \Pr(z_i = k | {\boldsymbol{z}}^{-i}) \propto \left( \alpha + \sum_{t \neq i} \mathbb{I}(z_t = k) \right) \exp [ \psi(\lambda_{kz_i}) - \psi(\sum_t \lambda_{kt}) ].$$ Note that this adaptation makes the inference more local, i.e., posterior inference for a document does not need to modify any global variable. This property is similar to VB, but very different from CVB and CVB0. Details are presented in Algorithm \[alg:-CGS\]. To take a random sample, CGS needs $O(K\ell_d)$ computations to compute all $\phi_{ik} = \Pr(z_i = k | {\boldsymbol{z}}^{-i})$. Note that CGS also needs $O(K\ell_d)$ evaluations of the exponential and digamma functions, which are expensive. In total, CGS requires $O((S+B)K\ell_d)$ computations for the whole sampling procedure with $B$ burn-in sweeps and $S$ samples. Storing ${\boldsymbol{\phi}}$ requires $O(K\ell_d)$ memory.

The Frank-Wolfe algorithm for posterior inference {#sec:FW}
=================================================

This section reviews the Frank-Wolfe algorithm for concave maximization over the simplex. We then discuss how to employ it to do inference of ${\boldsymbol{\theta}}$ in LDA. Its interesting properties will be discussed and compared with those of common inference methods.
Concave maximization over simplex and sparse approximation
----------------------------------------------------------

Consider a concave function $f(\boldsymbol{\theta}): \mathbb{R}^K \rightarrow \mathbb{R}$ which is twice differentiable over $\Delta_K$. We are interested in the following problem, *concave maximization over the unit simplex*, $$\label{eq-fw-01} \boldsymbol{\theta}^* = \arg \max_{\boldsymbol{\theta} \in \Delta_K} f(\boldsymbol{\theta}).$$ Convex/concave optimization has been extensively studied in the optimization literature. There have been various excellent results, such as [@Nesterov2005; @Lan2012]. However, we are interested in sparse approximation algorithms specialized for problem (\[eq-fw-01\]). More specifically, we focus on the Frank-Wolfe algorithm [@Clarkson2010]. Loosely speaking, the Frank-Wolfe algorithm is an approximation algorithm for problem (\[eq-fw-01\]). Starting from a vertex of the simplex $\Delta_K$, it iteratively selects the most promising vertex of $\Delta_K$ and moves the current solution closer to that vertex in order to maximize $f(\boldsymbol{\theta})$. Details are presented in Algorithm \[alg:-Frank-Wolfe\]. It has been shown that the algorithm converges to the optimal solution at a rate of $O(1/\ell)$. Moreover, at each iteration, the algorithm finds a provably good approximate solution lying in a face of $\Delta_K$.

[@Clarkson2010] \[thm-FW\] Let $f$ be a continuously differentiable, concave function over $\Delta_K$, and denote by $C_f$ the largest constant such that $\forall \boldsymbol{\theta}, \boldsymbol{\theta}' \in \Delta_K, a \in [0, 1] $ we have $f(a \boldsymbol{\theta}' + (1-a)\boldsymbol{\theta}) \ge f(\boldsymbol{\theta}) + a(\boldsymbol{\theta}' - \boldsymbol{\theta})^t \nabla f(\boldsymbol{\theta}) - a^2 C_f$.
After $\ell$ iterations, the Frank-Wolfe algorithm finds a point $\boldsymbol{\theta}_{\ell}$ on an $(\ell+1)-$dimensional face of $\Delta_K$ such that $$\max_{\boldsymbol{\theta} \in \Delta_K} f(\boldsymbol{\theta}) - f(\boldsymbol{\theta}_{\ell}) \le \frac{4C_f}{(\ell +3)}.$$ It is worth noting some observations about the algorithm:

- It achieves a convergence rate of $O(1/\ell)$, and has provable bounds on the goodness of approximate solutions. These are crucial for practical applications.
- The overall running time mostly depends on how complicated $f$ and $\nabla f$ are.
- It provides an explicit bound on the dimensionality of the face of $\Delta_K$ in which an approximate solution lies. After $\ell$ iterations, Theorem \[thm-FW\] ensures that at most $\ell+1$ out of $K$ components of $\boldsymbol{\theta}_{\ell}$ are non-zero.
- It is easy to directly control the sparsity level of ${\boldsymbol{\theta}}$ by trading off sparsity against quality. The fewer the iterations, the sparser the solution. This characteristic makes the algorithm very attractive for resolving high-dimensional problems.

How to employ FW in topic models {#sec:Employment-FW}
--------------------------------

Posterior inference for a document in LDA and many other models often concerns the latent variables ${\boldsymbol{z}}$ and ${\boldsymbol{\theta}}$. We sometimes want to know the full joint distribution $\Pr({\boldsymbol{z, \theta | d}})$, or $\Pr({\boldsymbol{z | d}})$, or $\Pr({\boldsymbol{\theta | d}})$, or even the individual variables ${\boldsymbol{z}}$ or ${\boldsymbol{\theta}}$. Estimation of the individual variables ${\boldsymbol{z}}$ or ${\boldsymbol{\theta}}$ is often maximum a posteriori (MAP) estimation. Here we discuss how to do inference of ${\boldsymbol{\theta}}$ using FW. Note that one can approximate $\Pr({\boldsymbol{z | d}})$ from ${\boldsymbol{\theta}}$ and vice versa.
### MAP inference of ${\boldsymbol{\theta}}$

We now consider LDA and the MAP estimation of topic mixture for a given document ${\boldsymbol{d}}$: $$\label{eq-map-1} {\boldsymbol{\theta}}^* = \arg \max_{{\boldsymbol{\theta}} \in \Delta_K} \Pr({\boldsymbol{\theta}}, {\boldsymbol{d}}|{\boldsymbol{\beta}},\alpha) = \arg \max_{{\boldsymbol{\theta}} \in \Delta_K} \Pr({\boldsymbol{d}}|{\boldsymbol{\theta}},{\boldsymbol{\beta}}) \Pr({\boldsymbol{\theta}}|\alpha).$$ For a given document $\boldsymbol{d}$, the probability that a term $j$ appears in $\boldsymbol{d}$ can be expressed as $\Pr(w = j | \boldsymbol{d}) = \sum_{k=1}^K \Pr(w=j | z=k).\Pr(z=k | \boldsymbol{d}) = \sum_{k=1}^K \beta_{kj} \theta_k$. Hence the log likelihood of $\boldsymbol{d}$ is $$\begin{aligned} \nonumber & & \log \Pr(\boldsymbol{d} | {\boldsymbol{\theta}},{\boldsymbol{\beta}}) = \log \prod_{j} \Pr(w=j | \boldsymbol{d})^{d_j} \\ &=& \sum_{j } d_j \log \Pr(w=j | \boldsymbol{d}) = \sum_{j} d_j \log \sum_{k=1}^K \theta_k \beta_{kj}.\end{aligned}$$ Remember that the density of the $K$-dimensional Dirichlet distribution with parameter $\alpha$ is $p({\boldsymbol{\theta}} | \alpha) \propto \prod_{k=1}^{K} \theta_k^{\alpha -1}$. Therefore problem (\[eq-map-1\]) is equivalent to the following: $$\label{eq-map-2} {\boldsymbol{\theta}}^* = \arg \max_{{\boldsymbol{\theta}} \in \Delta_K} \sum_j d_j \log\sum_{k = 1}^K\theta_k\beta_{kj} + (\alpha - 1)\sum_{k = 1}^K \log\theta_k.$$ When $\alpha=1$, it is easy to show that problem (\[eq-map-2\]) is concave. Hence we can employ FW to efficiently solve for ${\boldsymbol{\theta}}$. In other words, FW can be used to find ${\boldsymbol{\theta}}^*$ by maximizing the function $\sum_j d_j \log\sum_{k = 1}^K\theta_k\beta_{kj}$ over the unit simplex. By using Algorithm \[alg:-Frank-Wolfe\] to do inference, we implicitly assume that ${\boldsymbol{\theta}}^*$ follows the distribution $Dirichlet(1)$.
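A direct sketch of Frank-Wolfe applied to this objective follows. This is a simplified illustration, not the authors' implementation; the small additive constant guards against $\log 0$ and the fixed iteration count is an assumption.

```python
import numpy as np

def fw_infer(d, beta, n_iters=20):
    """Frank-Wolfe inference of theta (sketch). d: length-V counts,
    beta: K x V topic matrix. Returns a point on the unit simplex with
    at most n_iters + 1 non-zero components."""
    nz = np.flatnonzero(d)
    B = beta[:, nz]                          # K x n_d, only observed terms
    counts = d[nz]
    f = lambda x: counts @ np.log(x @ B + 1e-100)
    K = beta.shape[0]
    # start from the vertex e_k with the largest objective value
    theta = np.zeros(K)
    theta[np.argmax([f(row) for row in np.eye(K)])] = 1.0
    for l in range(n_iters):
        x = theta @ B                        # Pr(w=j | d) for observed terms
        grad = B @ (counts / x)              # partial derivatives of f
        i_star = np.argmax(grad)             # most promising vertex
        a = 2.0 / (l + 3)                    # step size from Algorithm FW
        theta *= (1 - a)
        theta[i_star] += a
    return theta
```

Note how sparsity arises for free: each iteration touches at most one new vertex, so the solution after $\ell$ iterations has at most $\ell+1$ non-zero components.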
Another interpretation is that we remove the Dirichlet prior over ${\boldsymbol{\theta}}$. This seems strange and uncommon. No prior endowment over ${\boldsymbol{\theta}}$ might cause some overfitting in practice [@BNJ03]. However, we will show that such an approach to inference provides us many practical benefits, and that there is an *implicit sparse prior* over topic mixtures to avoid overfitting, as discussed in the next subsection.

### Recovery of ${\boldsymbol{z}}$ from ${\boldsymbol{\theta}}$ and vice versa

We can easily make a connection between ${\boldsymbol{\theta}}$ and ${\boldsymbol{z}}$. Note that estimation of ${\boldsymbol{z}}$ is intractable in the worst case [@SontagR11]. Instead, we discuss a connection between ${\boldsymbol{\theta}}$ and the distribution of ${\boldsymbol{z}}$, as it is enough for deriving various fast algorithms for learning topic models, which will be discussed in Section \[sec:stochasticLDA\]. Denote by $\phi_{jk} = \Pr(z=k | w=j, {\boldsymbol{d}})$ the probability that topic $k$ generates term $j$ in document ${\boldsymbol{d}}$. It connects to ${\boldsymbol{\theta}}$ by the following formula [@Asuncion+2009smoothing] $$\label{eq-map-10} \phi_{jk} \propto \theta_k \beta_{kj}.$$ When further assuming ${\boldsymbol{\beta}}$ to be a random variable, we have $$\label{eq-map-11} \phi_{jk} \propto \theta_k \exp \mathbb{E}_Q (\log \beta_{kj}).$$ If both ${\boldsymbol{\beta}}$ and ${\boldsymbol{\theta}}$ are random variables, as in LDA, we have $$\label{eq-map-12} \phi_{jk} \propto \exp\mathbb{E}_Q (\log \theta_k) .\exp \mathbb{E}_Q (\log \beta_{kj}),$$ where $Q$ is a certain distribution. Sometimes $Q$ is a variational distribution of $\Pr({\boldsymbol{z, \theta, \beta}})$, but in other situations $Q$ is the distribution of $({\boldsymbol{z}}^{-i}, {\boldsymbol{\theta, \beta}})$ with some token $i$ removed.
Note that the expectations in (\[eq-map-12\]) are often intractable to compute, because both ${\boldsymbol{\beta}}$ and ${\boldsymbol{\theta}}$ are hidden. Some popular approaches to dealing with these quantities are based on VB [@BNJ03] and CGS [@GriffithsS2004]. The formulas of $\phi$ in Algorithms \[alg:-VB\]–\[alg:-CVB0\] are the results of different approaches to approximating the intractable expectations in (\[eq-map-12\]), and provide some specific ways to approximate the distribution of ${\boldsymbol{z}}$ given ${\boldsymbol{\theta}}$. We can make an approximation to ${\boldsymbol{\theta}}$ once ${\boldsymbol{\phi}}$ is known. Indeed, we observe that ${\boldsymbol{\gamma}}$ in Algorithms \[alg:-VB\]–\[alg:-CVB0\] plays the role of sufficient statistics for the Dirichlet distribution over ${\boldsymbol{\theta}}$. Hence, we can use the following approximation $$\label{eq-map-13} \theta_k = \frac{\gamma_k} {\sum_{t=1}^K \gamma_t}.$$

Benefits from FW
----------------

In this section we elucidate the main benefits of using FW, accompanied by a comparison with existing methods for posterior inference. The benefits come from both theoretical and practical perspectives. Table \[table 1: theoretical comparison\] summarizes the main properties of the inference methods of interest.

  Method                             FW                                               VB                                                  CVB                                        CVB0                                       CGS
  ---------------------------------- ------------------------------------------------ --------------------------------------------------- ------------------------------------------ ------------------------------------------ ------------------------------------------
  Posterior probability              $\Pr({\boldsymbol{\theta, d}} | \mathcal{M})$    $\Pr({\boldsymbol{\theta, z, d}} | \mathcal{M})$    $\Pr({\boldsymbol{z, d}} | \mathcal{M})$   $\Pr({\boldsymbol{z, d}} | \mathcal{M})$   $\Pr({\boldsymbol{z, d}} | \mathcal{M})$
  Approach                           ML                                               ELBO                                                ELBO                                       ELBO                                       Sampling
  Sparse solution                    Yes                                              -                                                   -                                          -                                          Yes
  Sparsity control                   direct                                           -                                                   -                                          -                                          -
  Trade-off: sparsity vs. quality    Yes                                              -                                                   -                                          -                                          -
  Trade-off: sparsity vs. time       Yes                                              -                                                   -                                          -                                          -
  Quality bound                      Yes                                              -                                                   -                                          -                                          -
  Convergence rate                   $O(1/L)$                                         -                                                   -                                          -                                          -
  Iteration complexity               $O(K n_d)$                                       $O(K n_d)$                                          $O(K \ell_d)$                              $O(K \ell_d)$                              $O(K \ell_d)$
  Storage                            $O(K)$                                           $O(K n_d)$                                          $O(K \ell_d)$                              $O(K \ell_d)$                              $O(K \ell_d)$
  Digamma evaluations                0                                                $O(K n_d)$                                          0                                          0                                          $O(K n_d)$
  Exp or Log evaluations             $O(K n_d)$                                       $O(K n_d)$                                          $O(K \ell_d)$                              0                                          $O(K n_d)$
  Modification of global variables   No                                               No                                                  Yes                                        Yes                                        No

  \[table 1: theoretical comparison\]

### Complexity and quality of inference

One can easily observe that the initialization step and the selection of a maximum-gradient direction are the most expensive steps in Algorithm \[alg:-Frank-Wolfe\]. Initialization requires $K$ evaluations of $f({\boldsymbol{\theta}})$, one for each of the $K$ vertices of the simplex $\Delta_K$. For $f({\boldsymbol{\theta}}) = \sum_j d_j \log\sum_{k = 1}^K\theta_k\beta_{kj}$, we need $O(Kn_d)$ computations to do the initialization. Taking $K$ partial derivatives of $f$ and then finding the maximal one also needs $O(Kn_d)$. As a consequence, $O(Kn_d)$ computations are sufficient for one iteration of FW. FW requires a modest amount of memory for storage, namely $O(K)$ for maintaining the solution and gradient. Such memory consumption is significantly less than that of VB, CVB, CVB0, and CGS, as Table \[table 1: theoretical comparison\] demonstrates. Therefore FW is expected to be much more efficient than other methods in both memory and computation. Theorem \[thm-FW\] suggests that FW converges very fast to the optimal solution. After $\ell$ iterations, it finds an approximate solution ${\boldsymbol{\theta}}_\ell$ which is provably good, with a bounded error of $4C_f / (\ell + 3)$ in inference quality. This property of FW is very different from existing methods. To the best of our knowledge, no theory has been established on the convergence rate and inference quality of VB, CVB, CVB0, and CGS.
Hence, in practice, we are not sure about the quality of posterior inference by VB, CVB, CVB0, and CGS. In these theoretical aspects, FW behaves better than existing methods.

### Managing sparsity level and trade-off

Good solutions are often necessary for practical applications. In practice, we may have to spend intensive time and significant memory to find such solutions. This sometimes is not necessary, or is impossible, in time/memory-limited settings. Hence one would prefer to trade off the quality of solutions against time/memory. Searching for sparse solutions is a common approach in machine learning to reduce memory for storage and to enable efficient processing. Most previous works have tried to learn sparse solutions by imposing regularization which induces sparsity, e.g., L1 regularization [@ZhuX2011; @WangXLC2011] and entropic regularization [@ShashankaRS2007]. Nevertheless, those techniques are severely limited in the sense that we cannot directly control the sparsity level of solutions (e.g., one cannot decide how many non-zero components solutions should have). In other words, the sparsity level of solutions is a priori unpredictable. This limitation makes regularization techniques inferior in memory-limited settings. It is also the case with other works that employ some probabilistic distributions to induce sparsity [@WilliamsonWHB2010; @WangB2009] or that exploit the sparsity of sufficient statistics of Gibbs samples [@MimnoHB12]. Unlike prior approaches, FW naturally provides a principled way to control sparsity. Theorem \[thm-FW\] implies that if stopped at the $L$th iteration, the inferred solution has at most $L+1$ non-zero components. Hence one can control the sparsity level of solutions by simply limiting the number of iterations. It means that we can predict a priori how sparse and how good the inferred solutions are. Fewer iterations yield sparser (but probably worse) solutions. Besides, we can trade off sparsity against inference time.
More iterations imply more time and probably denser solutions.

### Implicit prior over $\boldsymbol{\theta}$

Note that FW allows us to easily trade off sparsity of solutions against quality and time. If one insists on solutions with at most $t$ non-zero components, the inference algorithm can be modified accordingly. In this case, it mimics finding a solution to the problem $\max_{\boldsymbol{\theta} \in \Delta_K} \{f(\boldsymbol{\theta}): ||\boldsymbol{\theta}||_0 \le t\}$. We recall the well-known fact that the constraint $||\boldsymbol{\theta}||_0 \le t$ is equivalent to the addition of a penalty term $\lambda.||\boldsymbol{\theta}||_0$ to the objective function [@Murray+1981], for some constant $\lambda$. Therefore, one is trying to solve $$\begin{aligned} \nonumber \boldsymbol{\theta}^* &=& \arg \max_{\boldsymbol{\theta} \in \Delta_K} \{f(\boldsymbol{\theta})- \lambda.||\boldsymbol{\theta}||_0 \} = \arg \max_{\boldsymbol{\theta} \in \Delta_K} P(\boldsymbol{d}| \boldsymbol{\theta}).P(\boldsymbol{\theta}) \\ \nonumber &=& \arg \max_{\boldsymbol{\theta} \in \Delta_K} P(\boldsymbol{\theta}| \boldsymbol{d}),\end{aligned}$$ where $p(\boldsymbol{\theta}) \propto \exp(-\lambda.||\boldsymbol{\theta}||_0)$. Notice that the last problem, $\boldsymbol{\theta}^*= \arg \max_{\boldsymbol{\theta} \in \Delta_K} P(\boldsymbol{\theta}| \boldsymbol{d})$, is a MAP inference problem. Hence, these observations basically show that inference by Algorithm \[alg:-Frank-Wolfe\] for sparse solutions mimics MAP inference. As a result, there exists an implicit prior, with density function $p(\boldsymbol{\theta}; \lambda) \propto \exp(-\lambda.||\boldsymbol{\theta}||_0)$, over latent topic proportions.

Stochastic algorithms for learning LDA {#sec:stochasticLDA}
======================================

We have seen many interesting properties of FW.
In this section, we show the simplicity of using FW to design efficient algorithms for learning topic models at large scales. More specifically, we present 3 different ways to encode FW as an internal step of online learning [@Hoffman2013SVI] and stream learning [@Broderick2013streaming]. Those encodings result in 3 novel methods which are fast and effective.

Online-FW for learning LDA from large corpora
---------------------------------------------

Hoffman et al. [@Hoffman2013SVI] show that LDA can be learned efficiently in a stochastic manner. Note that the batch VB by [@BNJ03] learns LDA by iteratively maximizing an ELBO on the data likelihood using coordinate ascent. Each iteration of batch VB requires access to all the available training data. Such a requirement causes batch VB to be impractical for large corpora or stream environments. Fortunately, a simple modification can help us learn topic models in an online fashion. Indeed, *stochastic variational inference* (SVI) by [@Hoffman2013SVI] learns LDA iteratively from a corpus of size $D$ as follows:

- Sample a set $\mathcal{C}_t$ consisting of $S$ documents. Use Algorithm \[alg:-VB\] to do posterior inference for each document ${\boldsymbol{d}} \in \mathcal{C}_t$, given the global variable ${\boldsymbol{\lambda}}^{(t - 1)}$ from the last step, to get variational parameters ${\boldsymbol{\phi}}_d$.
- For each $k \in \{1, 2, ..., K\}$, form an intermediate global variable $\hat{{\boldsymbol{\lambda}}}_k$ for $\mathcal{C}_t$ by $$\label{eq-14-ovb} \hat{{\boldsymbol{\lambda}}}_k = \eta + \frac{D}{S} \sum_{{\boldsymbol{d}} \in \mathcal{C}_t} \sum_j d_j \phi_{djk}$$ - Update the global variable to be a weighted average of $\hat{{\boldsymbol{\lambda}}}$ and ${\boldsymbol{\lambda}}^{(t - 1)}$ by $$\label{eq-15-ovb} {\boldsymbol{\lambda}}^{(t)} := (1-\rho_t) {\boldsymbol{\lambda}}^{(t-1)} + \rho_t \hat{{\boldsymbol{\lambda}}}.$$ $\rho_t$ is called the step size of the learning algorithm, and should satisfy two conditions: $\sum_t^\infty \rho_t = \infty$ and $\sum_t^\infty \rho^2_t < \infty$. These two conditions ensure that the learning algorithm will converge to a stationary point. In practice, we often choose $$\rho_t = (\tau + t)^{-\kappa}$$ where $\kappa \in (0.5, 1]$ is the forgetting rate, which determines how fast the algorithm forgets past observations, and $\tau$ is a positive constant. It is easy to modify SVI to employ FW instead of VB. Remember that FW infers a vector ${\boldsymbol{\theta}}$, whereas VB infers a matrix ${\boldsymbol{\phi}}$. Fortunately, equation (\[eq-map-11\]) shows that we can recover ${\boldsymbol{\phi}}$ from ${\boldsymbol{\theta}}$. Therefore, we arrive at a novel algorithm (namely, Online-FW) for learning LDA stochastically, as described in Algorithm \[alg:-online-FW\]. training data $\mathcal{C}$ with $D$ documents, hyperparameter $\eta$ ${\boldsymbol{\lambda}}$ Initialize ${\boldsymbol{\lambda}}^{(0)}$ randomly Sample a set $\mathcal{C}_t$ consisting of $S$ documents. Use Algorithm \[alg:-Frank-Wolfe\] to do posterior inference for each document ${\boldsymbol{d}} \in \mathcal{C}_t$, given the global variable ${\boldsymbol{\beta}}^{(t-1)} \propto {\boldsymbol{\lambda}}^{(t - 1)}$ from the last step, to get topic mixture ${\boldsymbol{\theta}}_d$.
Then compute ${\boldsymbol{\phi}}_d$ as $$\phi_{djk} \propto \theta_{dk} \beta_{kj}.$$ For each $k \in \{1, 2, ..., K\}$, form an intermediate global variable $\hat{{\boldsymbol{\lambda}}}_k$ for $\mathcal{C}_t$ by $$\hat{\lambda}_{kj} = \eta + \frac{D}{S} \sum_{{\boldsymbol{d}} \in \mathcal{C}_t} d_j \phi_{djk}.$$ Update the global variable to be a weighted average of $\hat{{\boldsymbol{\lambda}}}$ and ${\boldsymbol{\lambda}}^{(t - 1)}$ by $${\boldsymbol{\lambda}}^{(t)} := (1-\rho_t) {\boldsymbol{\lambda}}^{(t-1)} + \rho_t \hat{{\boldsymbol{\lambda}}}.$$ A careful look at the algorithm reveals that Online-FW is in fact a hybrid of FW and variational Bayes [@BNJ03], in which the global variables $({\boldsymbol{\beta}})$ are approximated by variational Bayes, while the local variables $({\boldsymbol{\theta}})$ are estimated by FW. Note that our adaptation of FW to posterior inference of local variables is similar in spirit to the adaptation of CGS by [@MimnoHB12]. One important property of Online-FW is that the quality of MAP inference of ${\boldsymbol{\theta}}$ is theoretically guaranteed. In contrast, posterior inference of local variables by VB or CGS comes with no such guarantee. Streaming-FW for learning LDA from data streams ----------------------------------------------- A disadvantage of SVI and Online-FW is that the number of training documents has to be known a priori. In practice, one may have no way to know how many documents are to be processed. In those cases, the scheme proposed by [@Hoffman2013SVI] cannot apply. Fortunately, [@Broderick2013streaming] shows a simple way to help SVI work in a real online/stream environment. Imagine the data arrive sequentially in some order. Our task is to estimate a posterior distribution from this data sequence without knowing how many instances there are. [@Broderick2013streaming] suggest that we should treat the posterior of the previous data as the new prior for the incoming data points.
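For conjugate models, this "posterior becomes prior" recursion is particularly simple: the posterior after each batch is obtained by adding the batch's sufficient statistics to the current parameters. A minimal sketch with a single Dirichlet–multinomial pair (illustrative names, not the paper's code):

```python
import numpy as np

def streaming_posterior_update(dir_params, counts):
    """The Dirichlet is conjugate to the multinomial, so treating the
    current posterior as the prior for the next batch amounts to
    accumulating the word counts (the sufficient statistics)."""
    return dir_params + counts

params = np.ones(4)  # initial prior Dir(1, 1, 1, 1)
for batch in [np.array([2., 0., 1., 0.]), np.array([0., 3., 0., 1.])]:
    params = streaming_posterior_update(params, batch)
p_hat = params / params.sum()  # posterior-mean estimate of the multinomial
```

No knowledge of the total number of batches is needed at any point, which is what makes the scheme suitable for true streams.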
In this way, we can estimate the posterior in a real online/stream environment. When applying this scheme to models with conjugate priors such as LDA, it suffices to save and update the sufficient statistics of the posterior. We now discuss how to modify Online-FW to work with data streams, following the suggestion by [@Broderick2013streaming]. Note that the intermediate variable $\hat{{\boldsymbol{\lambda}}}$ in Algorithm \[alg:-online-FW\] plays the role of the variational parameters of the distribution over words with respect to the current minibatch. It contains the sufficient statistics ($\sum_{{\boldsymbol{d}} \in \mathcal{C}_t} d_j \phi_{djk}$) of the posterior of the current minibatch. Following the arguments by [@Broderick2013streaming], we just need to add those statistics to the sufficient statistics of the global posterior over topics. Nonetheless, we find that such an update of the global posterior would quickly forget the role of the prior $Dir(\eta)$ over topics, which is a crucial ingredient that helps LDA work in practice. To maintain the regularization role of this prior in streaming environments, we propose to keep $\eta$ as part of the sufficient statistics to be used in each minibatch. Therefore, we arrive at a new algorithm (namely, *Streaming-FW*) for learning LDA, as described in Algorithm \[alg:-stream-FW\]. data sequence, hyperparameter $\eta$ ${\boldsymbol{\lambda}}$ Initialize ${\boldsymbol{\lambda}}^{(0)}$ randomly Sample a set $\mathcal{C}_t$ of documents. Use Algorithm \[alg:-Frank-Wolfe\] to do posterior inference for each document ${\boldsymbol{d}} \in \mathcal{C}_t$, given the global variable ${\boldsymbol{\beta}}^{(t-1)} \propto {\boldsymbol{\lambda}}^{(t - 1)}$ from the last step, to get topic mixture ${\boldsymbol{\theta}}_d$.
Then compute $$\phi_{djk} \propto \theta_{dk} \beta_{kj}.$$ For each $k \in \{1, 2, ..., K\}$, compute the sufficient statistics $\hat{{\boldsymbol{\lambda}}}_k$ for $\mathcal{C}_t$ by $$\hat{\lambda}_{kj} = \eta + \sum_{{\boldsymbol{d}} \in \mathcal{C}_t} d_j \phi_{djk}.$$ Update the global variable by $${\boldsymbol{\lambda}}^{(t)} := {\boldsymbol{\lambda}}^{(t-1)} + \hat{{\boldsymbol{\lambda}}}.$$ ML-FW for learning LDA from large corpora or data streams --------------------------------------------------------- data sequence, parameter $\{\kappa, \tau \}$ ${\boldsymbol{\beta}}$ Initialize ${\boldsymbol{\beta}}^{(0)}$ randomly in $\Delta_V$ Sample a set $\mathcal{C}_t$ of documents. Use Algorithm \[alg:-Frank-Wolfe\] to do posterior inference for each document ${\boldsymbol{d}} \in \mathcal{C}_t$, given the global variable ${\boldsymbol{\beta}}^{(t-1)}$ from the last step, to get topic mixture ${\boldsymbol{\theta}}_d$. For each $k \in \{1, 2, ..., K\}$, compute the intermediate topic $\hat{{\boldsymbol{\beta}}}_k$ for $\mathcal{C}_t$ by $$\hat{\beta}_{kj} \propto \sum_{{\boldsymbol{d}} \in \mathcal{C}_t} d_j \theta_{dk}.$$ Update the global variable, where $\rho_t = (t + \tau)^{-\kappa}$, by $${\boldsymbol{\beta}}^{(t)} := (1-\rho_t) {\boldsymbol{\beta}}^{(t-1)} + \rho_t \hat{{\boldsymbol{\beta}}}.$$ It is worth noticing that Online-FW and Streaming-FW are hybrid algorithms which combine FW with variational Bayes for estimating the posterior of the global variables. They have to maintain variational parameters (${\boldsymbol{\lambda}}$) for the Dirichlet distribution over topics, instead of the topics themselves. Nonetheless, the combinations are not very natural, since we have to compute ${\boldsymbol{\phi}}$ from ${\boldsymbol{\theta}}$ in order to update the model. Such a conversion might incur some information loss. It would be more natural if we could use ${\boldsymbol{\theta}}$ directly in the update of the model at each minibatch.
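The Streaming-FW global step above can be summarized in a few lines (an illustrative sketch with our own naming, not the released code):

```python
import numpy as np

def streaming_fw_global_step(lam, theta_batch, docs, eta=0.01):
    """One global step of Streaming-FW (sketch).
    lam:         running global statistics, shape (K, V)
    theta_batch: FW topic mixtures for the minibatch, shape (S, K)
    docs:        word-count matrix of the minibatch, shape (S, V)
    phi is recovered via phi_djk ~ theta_dk * beta_kj, and eta is
    re-added per minibatch to preserve the prior's regularization."""
    beta = lam / lam.sum(axis=1, keepdims=True)   # beta proportional to lambda
    lam_hat = np.full_like(lam, eta)
    for d in range(docs.shape[0]):
        phi = theta_batch[d][:, None] * beta      # shape (K, V)
        phi /= phi.sum(axis=0, keepdims=True)     # normalize over topics
        lam_hat += docs[d][None, :] * phi
    return lam + lam_hat                          # statistics only accumulate
```

Unlike Online-FW's weighted average, there is no step size and no corpus size $D$ here: the statistics only grow as data arrive.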
To this end, we use an idea from [@ThanH2012fstm]. Instead of following a Bayesian approach to estimate the distribution over topics, one can consider the topics as parameters and estimate them directly from data. [@ThanH2012fstm] show that we can estimate the topics from a given corpus $\mathcal{C}_t$ by ${\boldsymbol{\beta}}_{kj} \propto \sum_{{\boldsymbol{d}} \in \mathcal{C}_t} d_j \theta_{dk}$. Combining this with the idea of online learning [@Bottou1998stochastic], we arrive at a new algorithm (namely, *ML-FW*) for learning LDA, as described in Algorithm \[alg:-ML-FW\]. Unlike Online-FW, we need not know a priori how many documents are to be processed. Hence, ML-FW can deal well with stream/online environments in a realistic way. Note that ML-FW ignores the priors over topics (${\boldsymbol{\beta}}$) and topic mixtures (${\boldsymbol{\theta}}$) when learning LDA. This means we learn topics and topic mixtures by the maximum likelihood approach. Further, the magnitude of the global parameters (${\boldsymbol{\lambda}}$) in Online-FW and Streaming-FW can grow arbitrarily as data keep arriving, but the topics ${\boldsymbol{\beta}}$ in ML-FW are regularized to belong to the unit simplex $\Delta_V$. Such a regularization might help ML-FW avoid overfitting. Note that since ML-FW needs not compute any matrix ${\boldsymbol{\phi}}$, it would be much more efficient than both Online-FW and Streaming-FW. Those properties make ML-FW very different from Online-FW and Streaming-FW. Empirical evaluation {#sec:evaluation} ==================== This section is devoted to investigating the practical behaviors of FW, and how useful it is when FW is employed to design large-scale algorithms for learning topic models. To this end, we take the following methods, datasets, and performance measures into investigation. <span style="font-variant:small-caps;">Inference methods:</span> - *Frank-Wolfe* (FW). - *Variational Bayes* (VB) [@BNJ03].
- *Collapsed variational Bayes* (CVB0) [@Asuncion+2009smoothing]. - *Collapsed Gibbs sampling* (CGS) [@MimnoHB12]. CVB0 and CGS have been observed to work best in several previous studies [@Asuncion+2009smoothing; @MimnoHB12; @Foulds2013stochastic; @GaoSWYZ15]. Therefore, they can be considered state-of-the-art inference methods. It is worth observing that VB, CVB, and CVB0 never return sparse solutions or sufficient statistics (encoded by ${\boldsymbol{\gamma}} -\alpha$ in Algorithms \[alg:-VB\]–\[alg:-CVB0\]) when doing inference for individual documents; but CGS and FW do. <span style="font-variant:small-caps;">Large-scale learning methods:</span> - Our new algorithms: *Online-FW*, *Streaming-FW*, *ML-FW* - *Online-CGS* by [@MimnoHB12] - *Online-CVB0* by [@Foulds2013stochastic] - *Online-VB* by [@Hoffman2013SVI], which is often known as SVI - *Streaming-VB* by [@Broderick2013streaming], originally named SSU Online-CGS [@MimnoHB12] is a hybrid algorithm, in which CGS is used to estimate the distribution of local variables (${\boldsymbol{z}}$) in a document, and VB is used to estimate the distribution of global variables (${\boldsymbol{\lambda}}$). Online-CVB0 [@Foulds2013stochastic] is an online version of the batch algorithm by [@Asuncion+2009smoothing], where local inference for a document is done by CVB0. Online-VB [@Hoffman2013SVI] and Streaming-VB [@Broderick2013streaming] are two stochastic algorithms in which local inference for a document is done by VB. To avoid any possible bias in our investigation, we implemented those 6 methods in Python in a unified framework with our best efforts; Online-VB was taken from <http://www.cs.princeton.edu/~blei/downloads/onlineldavb.tar>. <span style="font-variant:small-caps;">Data for experiments:</span> The following two large corpora were used in our experiments.
*Pubmed* consists of 8.2 million medical articles from PubMed Central; *New York Times* consists of 300K news articles.[^3] The vocabulary size ($V$) of each corpus is more than 110,000. For each corpus we randomly set aside 1000 documents for testing, and used the remainder for learning. <span style="font-variant:small-caps;">Parameter settings:</span> - *Model parameters:* $K=100, \alpha = 1/K, \eta = 1/K$, which were frequently used in previous studies [@GriffithsS2004; @Foulds2013stochastic; @Hoffman2013SVI]. - *Inference parameters:* at most 50 iterations were allowed for FW and VB to do inference. We terminated VB if the relative improvement of the lower bound on likelihood was no better than $10^{-4}$. 50 samples were used in CGS, of which the first 25 were discarded and the rest were used to approximate the posterior distribution. 50 iterations were used to do inference in CVB0, of which the first 25 iterations were burned in. These numbers of samples/iterations are often enough to get a good inference solution, according to [@MimnoHB12; @Foulds2013stochastic]. - *Learning parameters:* minibatch size $S = | \mathcal{C}_t | =5000$, $\kappa = 0.9, \tau=1$. This choice of learning parameters has been found to result in competitive performance of Online-VB [@Hoffman2013SVI] and Online-CVB0 [@Foulds2013stochastic]. Therefore, it was used in our investigation to avoid any possible bias. We used default values for some other parameters in Online-CVB0. <span style="font-variant:small-caps;">Performance measures:</span> We used *NPMI* and *Predictive Probability* to assess the performance of the learning methods. NPMI [@Lau2014npmi] measures the semantic quality of individual topics. From extensive experiments, [@Lau2014npmi] found that NPMI agrees well with human evaluation on the interpretability and coherence of topic models. Predictive probability [@Hoffman2013SVI] measures the predictiveness and generalization of a model to new data.
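As a concrete reference, the NPMI of a word pair $(w_i, w_j)$ is commonly computed from document-level co-occurrence as $\mathrm{NPMI}(w_i,w_j) = \log\frac{P(w_i,w_j)}{P(w_i)P(w_j)} \big/ \left(-\log P(w_i,w_j)\right)$, averaged over the pairs of a topic's top words. The sketch below assumes a binary document-word presence matrix and scores never-co-occurring pairs with the limiting value $-1$; it is an illustration, not the exact evaluation script used in our experiments.

```python
import numpy as np
from itertools import combinations

def topic_npmi(top_words, doc_word_presence, eps=1e-12):
    """Average NPMI over the word pairs of a topic (illustrative sketch).
    doc_word_presence: binary matrix of shape (num_docs, V);
    top_words: vocabulary indices of the topic's top words."""
    p = doc_word_presence.mean(axis=0)                 # P(w) per word
    scores = []
    for i, j in combinations(top_words, 2):
        p_ij = np.mean(doc_word_presence[:, i] * doc_word_presence[:, j])
        if p_ij == 0:
            scores.append(-1.0)   # limiting value: pair never co-occurs
            continue
        pmi = np.log(p_ij / (p[i] * p[j] + eps))
        scores.append(pmi / (-np.log(p_ij)))           # normalized into [-1, 1]
    return float(np.mean(scores))
```

A coherent topic (top words co-occurring often) scores near $1$, while a topic of unrelated words scores near $-1$.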
Detailed descriptions of these measures are presented in Appendix \[appendix–perp\]. Sparsity and time by inference methods -------------------------------------- Our first investigation focuses on the inference time of FW, VB, CVB0, and CGS. To see how fast they are, we used ML-FW, Online-VB, Online-CVB0, and Online-CGS to learn LDA from the two datasets, and then calculated the average time per document that FW, VB, CVB0, and CGS, respectively, take to do inference. ![image](inf-time-pm-nyt-FW.pdf){width="60.00000%"}\ Figure \[fig=inference-time\] shows the speed of the four methods. We observe that FW worked fastest, followed by CGS. VB and CVB0 required significant computation time to do inference. For example, on New York Times, VB was approximately 390 times slower than FW, while CVB0 was 70 times slower. Such slow inference by CVB0 and VB is due to various reasons. Remember that VB requires many evaluations of the digamma, logarithm, and exponential functions, which are often expensive (see Table \[table 1: theoretical comparison\]). Further, VB has to check convergence when doing inference, which was observed to be extremely expensive. That is why VB consumed intensive time in our experiments. CGS worked much faster than CVB0 and VB owing to sparse updates of the counts from samples and to few evaluations of digamma/exponential functions. Although CVB0 requires no evaluation of expensive functions, it has to update all the local and global parameters (${\boldsymbol{\gamma, \phi, N}}$) with respect to each token in the inferred document. Therefore, in total, the number of computations may increase very quickly for long documents. That is why CVB0 often works significantly more slowly than CGS and FW. Different from the other methods, FW just requires computing the gradient vector of the log likelihood and then updating the solution.
Hence, FW worked very fast, as depicted in Figure \[fig=inference-time\]. ![image](sparsity-nyt-pm-FW.pdf){width="60.00000%"}\ We next ask: *how sparse are the solutions returned by the inference methods?* Sparsity refers to the number of topics that an inference method assigns to a document. Note that a document often relates to few topics; therefore, sparsity measures the fitness of inference results on real texts. It was computed as the fraction of the number of nonzero elements in ${\boldsymbol{\theta}}$ or ${\boldsymbol{\gamma}}$ in Algorithms \[alg:-VB\]–\[alg:-Frank-Wolfe\]. It is worth noting that VB, CVB, and CVB0 never return sparse solutions; but CGS does, when not accounting for the hyperparameter $\alpha$. Figure \[fig=sparsity-fw-cgs\] shows the sparsity of FW and CGS, for which we counted the number of topics in ${\boldsymbol{\theta}}$ for FW and the number of topics appearing in samples of CGS. We see that both methods can find sparse solutions/statistics. It is worth noting that on average FW inferred 5-7 topics while CGS inferred 8-10 topics per document. A text written by a human often talks about few topics, which suggests that 8-10 topics in a document is unrealistic. Furthermore, Figure \[fig=sparsity-fw-cgs\] shows that the solutions by CGS tend to become denser as learning proceeds, which is unrealistic. In contrast, on average the sparsity in FW is quite stable as learning proceeds, and inferring 5-7 topics seems to fit better with common texts. From those observations, FW seems to be better than CGS in both sparsity and fitness to real texts. *Convergence rate of FW:* We have seen that FW does inference very fast compared with existing methods. Our next investigation is to see how fast FW converges to the optimal solution in practice. Theorem \[thm-FW\] ensures a linear rate of convergence for FW. Figure \[fig=sensitivity-fw\] tells us more about the performance of FW in practice.
We observe that more iterations may lead to denser solutions, but do not produce significantly denser ones. When FW is encoded in ML-FW for learning LDA, Figure \[fig=sensitivity-fw\] shows that allowing more iterations for FW does not always yield better models. 30 iterations seem to be enough for FW to help us learn a good model, since there was no statistically significant difference in predictiveness between settings once the number of iterations was at least 20. These observations suggest that FW converges very fast in practice. ![image](FW-sensitivity-with-loops.pdf){width="60.00000%"}\ Performance of learning algorithms ---------------------------------- In this section, we investigate the performance of our new algorithms for learning LDA at large scales, and the benefits of employing FW to do posterior inference in topic models. We took 4 existing methods into investigation: Online-CGS, Online-CVB0, Online-VB, and Streaming-VB. Following previous studies, we set $\alpha = 1/K = 0.01$ for the Dirichlet prior over ${\boldsymbol{\theta}}$ to get competitive performance for those four methods. Remember that employing FW implies the use of a Dirichlet prior with $\alpha=1$ in ML-FW, Online-FW, and Streaming-FW. This means that our new methods learn a different LDA model. Therefore, to make a better comparison, we also did experiments with Online-CGS, Online-CVB0, Online-VB, and Streaming-VB for the case of $\alpha=1$. ### Predictiveness Figure \[fig=perplexity-learning-methods\] depicts the performance of the 11 learning methods as more time is spent on learning. Observing the figure, we see that ML-FW, Online-FW, Streaming-FW, and Online-CGS are among the most efficient methods. They reached a good predictiveness level very quickly. To reach the same level, other methods required substantially more learning time. It is worth noticing that for the same LDA model with $\alpha=1$, FW-based methods reached a higher predictiveness level than the others.
This result suggests that FW can do inference significantly better than VB, CVB0, and CGS for the same models. ![image](FW-perp-pm-nyt.pdf){width="80.00000%"}\ In the case of $\alpha=0.01$, Online-CGS and Online-CVB0 can reach a very high predictiveness level, which agrees well with previous studies [@MimnoHB12; @Foulds2013stochastic; @GriffithsS2004]. Online-VB and Streaming-VB can perform well, but require intensive learning time due to the expensive computation of VB. Among the 11 methods for learning LDA, the following three reached top performance: ML-FW, Online-CGS, and Online-CVB0. It is easy to observe from Figure \[fig=perplexity-learning-methods\] that Online-CVB0 required significantly more time than Online-CGS and ML-FW. The reason comes from the intensive computation of CVB0, as analyzed before. Both FW and CGS require light computation, and hence they can help ML-FW and Online-CGS learn very fast. It is worth noticing that ML-FW performed best among the 11 methods on both New York Times and Pubmed. For a given learning time budget, ML-FW often reached a very high predictiveness level compared with other methods. The superior performance of ML-FW might come from the facts that the solutions (${\boldsymbol{\theta}}$) from FW are provably good, and that the quality of solutions from FW is inherited directly in ML-FW to update the global variables (${\boldsymbol{\beta}}$). VB, CVB0, and CGS do not have any guarantee on quality, and may require a large number of iterations/samples to get a good solution. Unlike ML-FW, Online-FW and Streaming-FW do not always perform better than other methods. The reason might come from the indirect use of ${\boldsymbol{\theta}}$ to update the global variables (${\boldsymbol{\lambda}}$). The indirect use of high-quality ${\boldsymbol{\theta}}$ in Online-FW and Streaming-FW might incur some loss. This could be one of the main reasons for the inferior performance of those two methods.
### Semantic quality We next want to see the semantic quality of the models learned by different methods. We used NPMI as a standard measure, because it has been observed to agree well with human evaluation of the interpretability of topics. Figure \[fig=npmi-learning-methods\] presents the results of the 11 methods. As with predictiveness, FW-based methods often resulted in better models than the other methods when the same models ($\alpha=1$) are in consideration. ML-FW and Online-FW did consistently better than Streaming-FW. It seems that $\alpha=1$ is not a favorable condition for traditional inference methods such as VB, CVB0, and CGS. On the contrary, FW exploits this condition well to infer topic proportions (${\boldsymbol{\theta}}$) optimally. That might be why FW-based learning methods performed significantly better than the others. Among the 11 learning methods, and in unrestricted settings (such as $\alpha=0.01$), Online-CVB0 seems to perform best if it is allowed enough learning time. Online-VB and Streaming-VB often work very slowly, while FW-based methods and Online-CGS can quickly learn a good LDA model. It is interesting that Online-CVB0 performed well with respect to both measures (Predictive Probability and NPMI). The reasons might come from the facts that CVB0 approximates the likelihood better than VB [@TehNW2007collapsed; @Asuncion+2009smoothing; @SatoN2012CVB0], and that the ability to exploit individual tokens can help CVB0 infer better. Our experimental results here agree well with previous studies on CVB0 and CGS [@Foulds2013stochastic; @MimnoHB12; @Asuncion+2009smoothing; @GaoSWYZ15; @SatoN2015SCVB0]. ![image](FW-npmi-pm-nyt.pdf){width="80.00000%"}\ ![image](FW-perp-npmi-pm-nyt.pdf){width="80.00000%"}\ Figure \[fig=perp-npmi-learning-methods\] shows another perspective on the performance of the large-scale learning methods.
We find that ML-FW, Online-FW, Online-CGS, and Online-CVB0 can reach a high predictiveness level after seeing just 100K documents. More texts always improve their predictiveness. In terms of semantic quality (NPMI), ML-FW was often among the top performers, but neither Streaming-FW nor Online-FW was. For New York Times, Online-FW could outperform the others. However, its performance was not stable enough to consistently reach top performance. Some information loss could occur when recovering ${\boldsymbol{\phi}}$ from ${\boldsymbol{\theta}}$ in Online-FW and Streaming-FW. In summary, Figures \[fig=perplexity-learning-methods\]–\[fig=perp-npmi-learning-methods\] show that ML-FW and Online-FW can reach performance comparable to state-of-the-art methods for learning LDA. ML-FW often outperforms the others in both efficiency and effectiveness (predictiveness). Those results clearly illustrate the practical benefits of FW. Conclusion ========== We have investigated the use of the Frank-Wolfe algorithm (FW) [@Clarkson2010] to do posterior inference in topic modeling. By detailed comparisons with existing inference methods from both theoretical and practical perspectives, we elucidated many interesting benefits of FW when employed in topic modeling and in large-scale learning. FW is theoretically guaranteed on inference quality, can swiftly infer sparse solutions, and enables us to easily design efficient large-scale methods for learning topic models. Our investigation resulted in 3 novel stochastic methods for learning LDA at large scales, among which ML-FW reaches state-of-the-art performance. ML-FW can work with big collections and text streams, and therefore provides a new efficient tool to the community.
The code of those methods is available at <http://github.com/Khoat/OPE/>. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED Project No. 102.05-2014.28), and by AOARD (U.S. Air Force) and ITC-PAC (U.S. Army) under agreement number FA2386-15-1-4011. [Khoat Than]{} received a B.A. in Applied Mathematics and Informatics (2004) from Vietnam National University, an M.A. in Computer Science (2009) from Hanoi University of Science and Technology, and a Ph.D. in Computer Science (2013) from Japan Advanced Institute of Science and Technology. His research interests include topic modeling, dimension reduction, manifold learning, large-scale modeling, and graphical models. [Tu Bao Ho]{} is currently a professor in the School of Knowledge Science, Japan Advanced Institute of Science and Technology. He received a BT in applied mathematics from Hanoi University of Science and Technology (1978), and an MS and PhD in computer science from Pierre and Marie Curie University, Paris (1984, 1987). His research interests include knowledge-based systems, machine learning, knowledge discovery, and data mining. [^1]: Some attempts have been initiated to speed up inference time and to attack the sparsity problem for Gibbs sampling [@MimnoHB12]. Sparsity in those methods does not lie in the topic proportions of documents, but in the sufficient statistics of Gibbs samples. [^2]: The method by Zhu and Xing [@ZhuX2011] is an exception, for which inference is potentially fast. Nonetheless, their inference method cannot be applied to probabilistic topic models, since unnormalized latent representations are required. [^3]: The data were retrieved from <http://archive.ics.uci.edu/ml/datasets/>
--- abstract: 'We have conducted the largest systematic search so far for stellar disk truncations in disk-like galaxies at intermediate redshift ($z$$<$1.1), using the Great Observatories Origins Deep Survey South (GOODS-S) data from the *Hubble Space Telescope* - ACS. Focusing on Type II galaxies (i.e. downbending profiles) we explore whether the position of the break in the rest-frame $B$-band radial surface brightness profile (a direct estimator of the extent of the disk where most of the massive star formation is taking place) evolves with time. The number of galaxies under analysis (238 of a total of 505) is an order of magnitude larger than in previous studies. For the first time, we probe the evolution of the break radius for a given stellar mass (a parameter well suited to address evolutionary studies). Our results suggest that, for a given stellar mass, the radial position of the break has increased with cosmic time by a factor 1.3$\pm$0.1 between $z$$\sim$1 and $z$$\sim$0. This is in agreement with a moderate inside-out growth of the disk galaxies in the last $\sim$ 8 Gyr. In the same period of time, the surface brightness level in the rest-frame $B$-band at which the break takes place has increased by 3.3$\pm$0.2 [mag/$arcsec^{2}$]{} (a decrease in brightness by a factor of 20.9$\pm$4.2). We have explored the distribution of the scale lengths of the disks in the region inside the break, and how this parameter relates to the break radius. We also present results of the statistical analysis of profiles of artificial galaxies, to assess the reliability of our results.' author: - 'R. Azzollini, I. Trujillo and J. E.
Beckman' title: | Cosmic Evolution of Stellar Disk Truncations:\ From $z$$\sim$1 to the Local Universe --- Introduction {#sec1} ============ Early studies of the disks of spiral galaxies [@Patterson40; @deVaucouleurs59; @Freeman70] showed that this component generally follows an exponential radial surface-brightness profile, with a certain scale length, usually taken as the characteristic size of the disk. @Freeman70 pointed out, though, that not all disks follow this simple exponential law. Another repeatedly reported feature of disks is that of a truncation of the stellar population at large radii, typically at 2-4 exponential scale lengths [see e.g. the review by @Pohlen04]. @KruitSearle81a [@KruitSearle81b] first drew attention to this phenomenon, which they inferred primarily from the major axis profiles of edge-on, late-type spirals. Though the term “truncation” is used, not even in the original studies was a complete absence of stars beyond the truncation radius suggested. In fact, @Pohlen02 showed that the truncation actually adopts the form of a fairly sharp change in slope, from the shallow exponential of the main disk to a steeper exponential at larger radii [see also @deGrijs01]. @Erwin05 denoted galaxies with this feature as Type II objects, generalizing a classification scheme initially proposed by @Freeman70. Though the truncation phenomenon appears to be very widespread [see for example @PT06, hereinafter cited as PT06], there are many cases in which it simply does not appear to happen, even at extremely faint surface brightness levels. To give an example, @Bland-Hawthorn05 found (using star counts) a galaxy (NGC 300) for which the single exponential decline simply continues down to $\sim$ 10 radial scale lengths ($\sim$ 30.5 [mag/$arcsec^{2}$]{} in the $r'$ band). Together with earlier measurements by e.g.
@BartonThompson97 or @Weiner01 (using surface photometry), this provides evidence that there are indeed prototypical exponential disks [Type I objects, following the nomenclature in @Freeman70]. More strikingly, perhaps, there even exists evidence for a third type of profile, presented by @Erwin05 [@Erwin07; @Erwin08] for early-type disks, by @HunterElmegreen06 and PT06 for late-type disks, and for extreme late-type spirals by @MatthewsGallagher97. In this class, named “Type III” (also “antitruncations”) by @Erwin05, the inner profile is a relatively steep exponential, which gives way to a shallower profile beyond the break radius. This profile is thus something like the “inverse” of a Type II profile, bending “up” instead of “down” beyond the break radius (“upbending” profile). In this paper we will concentrate on the Type II galaxies. Several possible break-forming mechanisms have been investigated to explain their truncations. There have been ideas based on maximum angular momentum distribution: @Kruit87 proposed that angular momentum conservation in a collapsing, uniformly rotating cloud naturally gives rise to disk breaks at roughly 4.5 scale radii. It has also been suggested that the breaks are due to angular momentum cut-offs of the cooled gas. On the other hand, breaks have also been attributed to a threshold for star formation (SF), due to changes in the gas density [@Kennicutt89], or to an absence of equilibrium in the cool Interstellar Medium phase [@ElmegreenParravano94; @Schaye04]. Following this, and using a semi-analytic model, @ElmegreenHunter06 demonstrated that a double-exponential profile may arise from a multi-component star formation prescription. The above two “simple” scenarios (angular momentum vs. star formation threshold), however, are challenged by observational results. It is becoming increasingly clear that galaxies have a significant density of stars beyond the break radius.
In addition, the existence of extended UV disks [@Thilker05; @Gildepaz05] and the lack of a clear correlation between H$\alpha$ cut-offs and optical disk breaks [@Pohlen04; @HunterElmegreen06] further complicate the picture. Even though a sharp star formation or angular momentum cut-off may explain a disk truncation, it does not provide a compelling explanation for extended outer exponential components. More elaborate models, such as that by @Debattista06, demonstrated using collisionless N-body simulations that the redistribution of angular momentum by spirals during bar formation also produces realistic breaks. In a further elaboration of this idea, @Roskar08 have performed high resolution simulations of the formation of a galaxy embedded in a dark matter halo [see also @Bournaud07; @Foyle08]. They are able to reproduce Type II profiles, which they claim to be in good agreement with observations. In their model, breaks are the result of the interplay between a radial star formation cut-off and redistribution of stellar mass by secular processes. Independently of the formation mechanism of the truncation, it seems reasonable to assume that the structural properties of the faintest regions of galactic disks must be intimately linked to the processes involved in the growth and shaping of galaxies. These outer edges are easily affected by interactions with other galaxies and, consequently, their characteristics must be closely connected with the evolutionary paths followed by the galaxies [@PT06; @Erwin07]. Together with their stellar halos, the study of the outer edges allows the exploration of the “fossil” evidence imprinted by the galaxy formation process [@deJong07]. Furthermore, addressing the question of how the radial truncation evolves with $z$ is strongly linked to our understanding of how galactic disks grow and where star formation takes place. @Perez04 showed that it is possible to detect stellar truncations even out to $z\sim1$.
Using the radial position of the truncation (hereafter [$R_{Br}$]{}) as a direct estimator of the size of the stellar disk, @TP05 inferred a moderate ($\sim$25%) inside-out growth of disk galaxies since $z\sim1$. An important point, however, was missing from the previous analyses: the evolution with redshift of the radial position of the break at a given stellar mass. The stellar mass is a much better parameter for exploring the growth of galaxies, since the luminosity evolution of the stellar populations can mimic a size evolution [@Trujillo04; @Trujillo06]. Addressing this point here, one aim of the present work is to probe whether the galaxies are growing from the inside outwards, with star formation propagating radially outward with time. Another point to mention is that while @TP05 worked with imaging data from the Ultra Deep Field [@Beckwith06], we make use of GOODS-HST/ACS data [@Giavalisco04]. This field provides a much wider sky coverage, and so increases the size of the sample under study by roughly an order of magnitude, though at the cost of using images with a somewhat lower signal-to-noise ratio. We take care to address the implications of this caveat in the paper as well. In this work (to which we will refer hereinafter as ATB08) we also broach other issues related to the detailed characterization of the surface brightness profiles ([$\mu$-r]{} profiles). We present results on the distribution of the $B$-band rest-frame surface brightness at the position of the break (hereafter [$\mu_{Br}$]{}). We discuss the distribution of the ratio [$R_{Br}$]{}/ [$h_{1}$]{}, where [$h_{1}$]{} is the scale length of the disk in the region interior to the break. Also included are results on the relation between [$h_{1}$]{} and absolute magnitude and stellar mass. Finally, for the first time in studies of this kind, we make an extended analysis of the profiles of simulated galaxies, in an attempt to better constrain the accuracy of the reported figures.
The outline of this paper is as follows. In Section \[sec2\] we describe the galaxy sample and give a brief description of the imaging data we use. In Section \[sec3\] we describe how we obtained the radial surface-brightness profiles of the galaxies, and the methodology of the analysis we apply for their characterization. Section \[sec4\] is focused on the completeness of the samples under study and on assessing the accuracy level of the parameters retrieved. Section \[sec5\] presents results on the classification and characterization of the [$\mu$-r]{} profiles, and a more extensive analysis of how the Break Radii of Type II disks relate to various other properties of the galaxies at different redshifts. In Section \[sec6\] we summarize our results and conclusions. Finally, we include an Appendix on the description and analysis of results from an exercise of classification of profiles performed on a sample of artificial model *galaxies*. Throughout, we assume a flat $\Lambda$-dominated cosmology ($\Omega_{M}$ = 0.30, $\Omega_{\Lambda}$ = 0.70, and $H_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$). DATA & SAMPLE SELECTION {#sec2} ======================= We have worked with objects present in imaging data from HST-ACS observations of the GOODS-South field [@Giavalisco04], which subtends an area in the sky of roughly 170 square arcmin. These data are publicly available[^1], and consist of imaging in the $F435W$, $F606W$, $F775W$ and $F850LP$ HST pass-bands, hereafter referred to as [$B_{435}$]{}, [$V_{606}$]{}, [$i_{775}$]{} and [$z_{850}$]{}, whose total exposure times are approximately 7200, 3040, 3040, and 6280 seconds, respectively. The images have an angular scale of 0.03”/pixel, and the FWHM of the PSF in [$z_{850}$]{} measures 0.09”. To make the selection of our sample we have profited from the fact that the GOODS-South field is within the *Galaxy Evolution from Morphologies and SEDs* imaging survey [GEMS; @Rix04].
In the redshift range 0.1$\leq$$z$$\leq$1.1, GEMS provides morphologies and structural parameters for nearly 10,000 galaxies [@Barden05; @McIntosh05]. For many of these objects there also exist photometric redshift and luminosity estimates, and SEDs from COMBO-17 [*Classifying Objects by Medium-Band Observation in 17 filters*; @Wolf01; @Wolf03]. The COMBO-17 team made this information publicly available through a catalog with precise redshift estimates (with errors $\delta$$z$/(1+$z$)$\sim$0.02) for approximately 9000 galaxies down to $m_{R}<24$ [@Wolf04]. The same data release included rest-frame absolute magnitudes and colors (accurate to $\sim$ 0.1 mag). We have also used the stellar mass estimates published in @Barden05, which are taken from @Borch04, and are deduced from the COMBO-17 photometric data. @Barden05 conducted the morphological analysis of the late-type galaxies in the GEMS field by fitting Sérsic $r^{1/n}$ [@Sersic68] profiles to the surface brightness distributions. @Ravindranath04 showed that, using the Sérsic index $n$ as the criterion, it is feasible to distinguish between late- and early-type galaxies at intermediate redshifts. Late-types (Sab-Sdm) are defined as having $n$$<$2-2.5. Moreover, the morphological analysis conducted by Barden et al. provides information about the inclination of the galaxies. This is particularly important, since we want to study the truncation of the stellar disks in objects with low inclination. The edge-on view facilitates the discovery of truncations but introduces severe quantitative problems caused by the effects of dust and line-of-sight integration, which we want to avoid [@Pohlen02]. We selected objects from the Barden et al.
sample within the following ranges of parameters: Sérsic index $n$$\leq$2.5 to isolate disk-dominated galaxies [@Barden05; @Shen03; @Ravindranath04]; axial ratio $q$ $>$ 0.5 to select objects with inclination $<$ 60$\arcdeg$; and [$M_{B}$]{} $<$ -18.5 magnitudes, as in @TP05. Moreover, only objects with $z$$<$1.1 were selected in order to keep our analysis in the optical rest-frame bands. Finally, the resulting sample was matched to a photometric catalog derived by ourselves from the GOODS-South HST-ACS data (GOODS data hereafter), using SExtractor[^2] [@BA96]. The resulting sample contains 505 objects. To analyse the surface brightness profiles of our galaxies in a similar rest-frame band within the explored redshift range (0.1$<$$z$$<$1.1), we have extracted the profiles in the following bands: the $V_{606}$ band for galaxies with 0.1$<$$z$$\leq$0.5, the $i_{775}$ band for 0.5$<$$z$$\leq$0.8, and the $z_{850}$ band for 0.8$<$$z$$\leq$1.1. This allows us to explore the surface brightness distribution at a wavelength close to the $B$-band rest-frame. RADIAL PROFILE ANALYSIS {#sec3} ======================= The surface brightness ([$\mu$-r]{}) profiles were extracted using photometry on quasi-isophotal elliptical apertures. The intensities were estimated as the median of the flux in the area between elliptical apertures of increasing semi-major axis length (hereafter we refer to these lengths as “radii”). The ellipticity and position angle of the apertures were fixed to those retrieved by SExtractor for the whole distribution of light of the object. The center of the apertures was also fixed: in a first iteration, the first moments (in ‘$x$’ and ‘$y$’) of the brightness distribution of flux of the object were used for this purpose. After visual inspection of the resulting profile and the image of the object, the center was refined, if needed, with the help of the task “imexam” from iraf[^3] to match what was visually estimated, in each case, as the dynamical center of the object.
In a regular disk galaxy, as are those under study, this center coincides with the central bulge or nucleus. The radii of the annular apertures were linearly increased at constant steps of 1 pixel (0.03”), up to a radius 50% larger than the radius of a circle with the same area as the isophotal area of the object in the $z_{850}$ band, as given by SExtractor. The isophotal area is that covered by the set of connected pixels with intensities above the detection threshold which constitute a detection (in our case 25.4 [mag/$arcsec^{2}$]{} in [$z_{850}$]{}). For objects with a more or less regular morphology, as are those selected, the median intensities in these annuli are a good approximation to isophotal intensities. The error in the intensity, $\delta$I, is given by the $\sigma$ of the distribution of fluxes inside the annulus, divided by the square root of the corresponding pixel area. The intensities (“$I$”) were transformed to surface brightnesses ($\mu$), expressed in [mag/$arcsec^{2}$]{} in the AB system, using the magnitude zero points posted on the GOODS-HST/ACS website for each filter and the angular scale per pixel, through the expression $\mu = zero - 2.5 \cdot log(I/scale^{2})$. For the errors in $\mu$, $\delta\mu$, the formula used was $\delta\mu$ = 2.5 $\cdot$ log( 1 + $\delta$$I$/$I$). In producing these profiles, the SExtractor, DS9[^4] and Iraf software packages were used, “glued” together by a script written by ourselves in the “Python”[^5] language. We produced radial profiles, and characterized them in terms of the properties of the exponential laws which best fit them. For each object, the image and profile obtained are visually inspected.
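The photometry and calibration steps above can be condensed into a short sketch. This is our own illustration, not the actual pipeline script: for brevity it uses circular rather than elliptical annuli, and `ZP` is a placeholder zero point, not the GOODS calibration value.

```python
import numpy as np

SCALE = 0.03   # arcsec / pixel (GOODS-HST/ACS angular scale)
ZP = 24.85     # placeholder AB zero point; real values come from the GOODS website

def annulus_profile(img, cx, cy, r_edges):
    """Median flux I and its error dI = sigma / sqrt(pixel area) in annuli
    between consecutive radii in r_edges (circular here for simplicity)."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - cx, yy - cy)
    prof = []
    for r_in, r_out in zip(r_edges[:-1], r_edges[1:]):
        pix = img[(r >= r_in) & (r < r_out)]
        I = np.median(pix)
        dI = np.std(pix) / np.sqrt(pix.size)
        prof.append((I, dI))
    return np.array(prof)

def to_surface_brightness(I, dI):
    """mu = ZP - 2.5 log10(I / scale^2);  dmu = 2.5 log10(1 + dI / I)."""
    mu = ZP - 2.5 * np.log10(I / SCALE**2)
    dmu = 2.5 * np.log10(1.0 + dI / I)
    return mu, dmu
```

For a pure exponential disk the resulting $\mu(r)$ is linear in $r$, which is what the exponential fits of the break analysis exploit.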
If the object is affected by artifacts (which mostly occur near the edges of the GOODS-HST-ACS field), or the object is suspected of being a merger, or has an irregular morphology, or lies between two tiles of the survey, or part of the object falls outside the field, the object is rejected from further analysis. Of the initial 505 objects, 70 were rejected for these reasons (14%). For those objects which are suitable for analysis, the profile was tested for the existence of breaks of any kind, trying the 3 methods we describe next and comparing results, and also by visually judging the apparent morphology of the galaxy. If no break is apparent (Type I), the profile is fitted to a single exponential. If there is a break, it is characterized by using one of the two following methods. The first, which we call the “Intersection method”, was the most commonly applied. It was used in 84% of the 289 objects which show a “break” (i.e. Types II and III). In this method two exponentials were independently fitted to non-overlapping sections of the profile. The point of intersection of the two lines gives the Break Radius, [$R_{Br}$]{}, while the surface brightness on the profile, linearly interpolated at that radius, is the surface brightness at the break, [$\mu_{Br}$]{}. The radial scales of the exponentials ([$h_{1}$]{} and [$h_{2}$]{}) are used to distinguish between Types II ([$h_{1}$]{} $>$ [$h_{2}$]{}) and III ([$h_{1}$]{} $<$ [$h_{2}$]{}). In 47 cases (16% of the objects which show a break), a slightly more involved process is required to measure the position and brightness of the break. We have found some objects which show a change in slope of some Type (mostly II, but also III), in which there is a small zone between the 2 exponential regions whose slope differs from that of the two more extended zones. An example of this phenomenology is shown in Fig. \[figJump\].
If the length of this intermediate zone were increased, and if its slope were constant, it would be classified as a “mixed” Type [II+III or III+II, @Erwin05 PT06], but we refer here to those cases in which the radial extent of this intermediate zone is so small that it seems to be showing a specific morphology. Measuring the position of the break in these cases (termed “Jump Truncations” from now on in this paper) cannot be done with the “Intersection method”. In these cases, the intersection point does not match the point at which the change of slope in the profile occurs, as seen in Fig. \[figJump\]. Instead, we use the “Equal Deviation” (ED) method, which we have devised for this purpose. It consists of locating the position of the break at the point at which the inner and outer fit lines are at a maximum and equal distance (in $\mu$) from the intensity profile. In most cases, there are two points of the profile which are at the same distance from the fits. One is the point of intersection of the inner and outer fit lines (a meaningless position in these cases), and the other is the point of interest, the point at which the change in slope appears to happen. By selecting the point at which the distances are equal and of maximum absolute value, the required point is usually naturally selected. Only in some cases, with more irregular profiles, must the region in which we search for the points of equal deviation be limited to find a meaningful solution. If we apply this method to a profile in which there is a normal break, i.e. without a “Jump”, the same solution is obtained as with the standard intersection technique. It is important to note that this method is used only to measure the radius of the break (understood as the change in the slope of the profile) with greater accuracy. We think that, at this point, and for the stated purposes, it is not necessary to speculate about the causes of these “Jump Truncations”.
Our standpoint is to consider them as irregularities in the profiles which make the task of estimating the position of the “break” more involved than in most other cases. In analysing the profiles we adopt the following assumption. A “break” in a stellar disk is a significant discontinuity in the slope of the exponential profile. This significance comes from both the change in slope and the persistence of the change at increasing radii. Bearing this in mind, the position of the break should be placed at the point where the change in slope takes place, and not where the lines which best fit the two subsections of the profile intersect, if these two positions do not coincide, as is the case for the “Jump Truncations”.\ Taking into account that stellar disks have their irregularities and asymmetries, it might be argued that perhaps other profile analysis techniques would be better suited to our goals. For example, using averages of profiles in different sectors of the galaxies, instead of the complete azimuthal profiles, might help to reveal problems in individual extracted profiles. There are three main reasons why we prefer the adopted methodology. First, our profiles are not azimuthally averaged; we extract median values of intensity, thus minimizing the relevance of asymmetries in the resulting intensity profiles. Second, averaging profiles in sectors would cause a severe decrease in the limiting surface brightness which could be analysed. In many cases, this could imply that we would not be able to measure the truncations. And third, selecting the sectors could add a bias which could compromise the statistical significance of the results. The complete list of objects (505), with detailed results of the profile classification and characterization, is given in Table \[tblResults\]. Also listed are the objects which were rejected from analysis. These have no information on classification, and the reason for rejection is given in the last column.
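Since an exponential disk is a straight line in the $\mu$-$r$ plane, the “Intersection method” described above reduces to two linear fits and a line intersection. A minimal sketch (function and variable names are ours, not the pipeline's):

```python
import numpy as np

def break_by_intersection(r, mu, inner, outer):
    """Fit mu = m*r + b separately on the inner and outer index ranges;
    the crossing of the two lines gives the break radius R_Br."""
    m1, b1 = np.polyfit(r[inner], mu[inner], 1)
    m2, b2 = np.polyfit(r[outer], mu[outer], 1)
    r_br = (b2 - b1) / (m1 - m2)      # where m1*r + b1 = m2*r + b2
    mu_br = np.interp(r_br, r, mu)    # profile value interpolated at R_Br
    # the slope in mag/arcsec^2 per unit radius is ~1.086/h, so a steeper
    # outer slope (m2 > m1) means h1 > h2, i.e. a Type II ("downbending") break
    profile_type = "II" if m2 > m1 else "III"
    return r_br, mu_br, profile_type
```

The “Equal Deviation” variant differs only in the last step: instead of intersecting the lines, one scans the profile for the radius at which both fit lines deviate maximally, and equally, from the measured $\mu$.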
We will discuss the results in the following sections. COMPLETENESS & STRUCTURAL PARAMETERS RELIABILITY {#sec4} ================================================ Completeness {#sec4.1} ------------ The selection criterion employed for the absolute B magnitude ([$M_{B}$]{} $<$ -18.5) is the same as that used in @TP05. They adopted this “cut” as a compromise between maximizing the number of objects in their final sample and assuring as much homogeneity (equally luminous objects) as possible through the range of redshifts they explored, which is the same as ours. In Fig. \[figComplete\] (left) we see that at redshift $z$$\sim$0.7, which is the mean/median redshift of our sample, the completeness level is at [$M_{B}$]{} $\sim$ -19.5 mag, a magnitude below the selected level. This makes our sample incomplete at fainter magnitudes beyond $z$$\sim$0.5. Nevertheless, we decided to maintain our absolute magnitude selection criterion for the following reasons: a) to allow a detailed comparison with TP05, and b) because otherwise our sample at low redshift would be significantly reduced. It is also interesting to note from the same figure that our sample lacks some high luminosity objects at lower redshifts because the surveyed volume is significantly smaller in that range. In the right panel of the same figure we see that the completeness level in stellar mass is [$M_{\star}$]{} $\sim$ 3$\cdot$ $10^{9}$ [$M_{\sun}$]{} at z$\sim$0.7. The incompleteness with stellar mass is less severe than with luminosity. This is because the [$M_{\star}$]{}/L relation evolves with redshift in this range, in the sense that for the same [$M_{\star}$]{}, galaxies at $z$$\sim$1 are brighter in the $B$-band than at $z$$\sim$0 [eg. @Rudnick03].
Comparison with Radial Profile analysis in TP05 {#sec4.2} ----------------------------------------------- To check the accuracy of our structural parameter determination and galaxy type classification, we made comparisons with deeper observations and with simulations. Trujillo & Pohlen 2005 (TP05) studied radial profiles for a sample of 36 late-type galaxies, using imaging data from the *Hubble Ultra Deep Field* [UDF; @Beckwith06], which is also contained in GOODS-South. Those objects are included in our sample, and 35 of them were analysed. Only one object (UDF3203) was discarded in this work, as it has [$M_{B}$]{} = -18.46 $>$ -18.5 mag, and thus fails one of our selection criteria. Of the 35 matched objects, 3 more were discarded in our analysis: one because it was near the edge of a tile (UDF7556), and the other two as being probable merger candidates (UDF8049 and UDF8275). In Fig. \[figU2G\] we show some examples of radial profiles using UDF data and GOODS-S data. We see that the profiles match well ($\vert \mu_{UDF}-\mu_{GOODS} \vert \lesssim 0.2$ [mag/$arcsec^{2}$]{}) out to a level of $\sim$ 26 [mag/$arcsec^{2}$]{}, beyond which the differences become erratic, as the UDF data are “deeper”. We take this value as a minimum surface brightness level of reliability for our profiles. Of the 32 objects from the TP05 sample that we studied, in 22 cases (69%) we assign the same classification for their profile (as Type I, II or III). The remaining 10 cases (31% of the matched sample) show varying bases for disagreement. In 4 cases (13%, UDF3268 II$_{TP05}$-I$_{ATB08}$, UDF3822 III$_{TP05}$-I$_{ATB08}$, UDF9455 II$_{TP05}$-I$_{ATB08}$, UDF6853 III$_{TP05}$-I$_{ATB08}$) the feature which could be taken as a “break” can also be identified in our profiles, but falls in a lower surface brightness part of our profiles, which for our purposes is unreliable (i.e. [$\mu_{Br}$]{} $\gtrsim$ 26 [mag/$arcsec^{2}$]{}).
In 2 cases (6%, UDF6862 III$_{TP05}$-II$_{ATB08}$, UDF 8257 III$_{TP05}$-II$_{ATB08}$), we suspect that neighboring objects caused these galaxies to be classified as Type III in TP05. Three galaxies (9%) present morphologies which are rather irregular or have profiles of difficult interpretation, because of bars (UDF2525 -II+III?- II$_{TP05}$-III$_{ATB08}$, UDF8040 -“Jump Truncation”- I$_{TP05}$-II$_{ATB08}$, UDF7559 -barred- III$_{TP05}$-II$_{ATB08}$). And only in 1 case (3%, UDF4491 II$_{TP05}$ - I$_{ATB08}$) can we see no feature in our profiles that may justify the difference in classification. More importantly for this paper, in the case of Type II galaxies there were only 2 cases out of 19 ($\sim$10%) that were misclassified due to the insufficient depth of the GOODS images. In Fig. \[figU2G\_RMU\] we present a comparison of the estimates of [$R_{Br}$]{} and [$\mu_{Br}$]{} for the 15 objects of Type II where our classification agrees with TP05. We see an overall good agreement for both parameters. We define the relative error in [$R_{Br}$]{} as 100 \* \[($R_{TP05}$-$R_{ATB08}$)/ $R_{ATB08}$\], and the relative error in intensity as 100 \* \[1 - $10^{-0.4 (\mu_{TP05}-\mu_{ATB08})}$\]. For these comparisons, the standard deviations are $\sigma_{R}$ = 0.2” ($\sigma_{R}^{err}$ = 22%) and $\sigma_{\mu}$ = 0.6 [mag/$arcsec^{2}$]{} ($\sigma_{I}^{err}$ = 43%) for [$R_{Br}$]{} and [$\mu_{Br}$]{} respectively. For [$R_{Br}$]{}, the largest disagreement is with UDF1971. The feature we identified as the break was taken by TP05 as a mere consequence of the bar, while the truncation they detect falls below our reliability level, though it can be seen in our profile. In [$\mu_{Br}$]{} the differences are below 1 [mag/$arcsec^{2}$]{} in all cases, except for UDF6974 and UDF1971.
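The two relative-error definitions above translate directly into code (a sketch with our own function names; the second definition maps the magnitude offset into a flux ratio, which is why a 0.6 [mag/$arcsec^{2}$]{} scatter corresponds to the quoted $\sim$43% in intensity):

```python
def rel_err_radius(r_tp05, r_atb08):
    """100 * (R_TP05 - R_ATB08) / R_ATB08 : relative error in break radius."""
    return 100.0 * (r_tp05 - r_atb08) / r_atb08

def rel_err_intensity(mu_tp05, mu_atb08):
    """100 * [1 - 10^(-0.4 (mu_TP05 - mu_ATB08))] : magnitude offset
    converted to a relative flux (intensity) error."""
    return 100.0 * (1.0 - 10.0 ** (-0.4 * (mu_tp05 - mu_atb08)))
```

A magnitude difference of 0.6 [mag/$arcsec^{2}$]{} gives `rel_err_intensity` $\approx$ 42%, consistent with the $\sigma_{I}^{err}$ = 43% quoted above.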
There is a hint, although based on poor statistics, that breaks at [$\mu_{Br}$]{}(TP05) $>$ 25 [mag/$arcsec^{2}$]{} are overestimated in brightness in our work relative to TP05 ([$\mu_{Br}$]{}(ATB08) $<$ [$\mu_{Br}$]{}(TP05)). Profile Analysis of Artificial Galaxies. {#sec4.3} ---------------------------------------- In addition to the previous comparison, we performed simulations in order to assess the accuracy of our results, with regard to the classification of the profiles and their characterization. We created 300 synthetic galaxies using the GALFIT program [@Peng02 see the Appendix for a detailed description of how the objects were created and analysed]. The mock objects were placed in a synthetic image, to emulate the signal-to-noise ratio, angular scale and PSF properties of the [$z_{850}$]{} band images of GOODS-S. The synthesized galaxies are exponential disks of Type I, II or III (100 of each kind). Their total apparent magnitudes, inner scale length [$h_{1}$]{} for every Type, and outer scale length [$h_{2}$]{}, [$R_{Br}$]{} and [$\mu_{Br}$]{} for Types II and III, are taken to cover the whole ranges over which those parameters vary for our real objects. The selected ranges of parameters are given in Table \[tblSim\]. The simulated objects were detected with SExtractor and analysed in a way analogous to that used in studying the real objects. Each model was visually classified as Type I, II or III. When the object had been assigned to Type II or III, the position of the break in the [$\mu$-r]{} plane was also estimated. The process was done “blindly”: the classifier did not know a priori the real classification and profile parameters of the object. In Fig. \[figSim\_mag\] (left) we show the effect of the apparent magnitude of the object on its classification. In the panels we have plotted the fraction of objects of each Type (I, II or III) which are correctly classified, against the total apparent magnitude.
In each panel the lower 2 curves represent the fraction of objects which are misclassified as being of the complementary Types. Only points which represent a population of at least 3 objects are shown. In the first panel we see how the success ratio in classifying Type I objects goes from 100% to 50% at the fainter end. The wrong classifications are divided between Types II and III in roughly equal shares. In the second panel we see that Type II objects are classified with a higher ratio of success at all surveyed magnitudes ($\gtrsim$70%), but fainter objects are principally misclassified as being of Type I. Finally, as shown in the third panel, Type III objects ($\gtrsim$80% success) are never misclassified as Type II, but only as Type I. It is interesting to note that the reliability of our classification, both for models and real galaxies (Sect. \[sec4.2\]), is at a similar level: 70-80%. Furthermore, note that more than 80% of our real galaxies have [$z_{850}$]{} $<$ 23 mag, for which the probability of a potential misclassification is lower. As an illustrative exercise, we have also tested the reliability of the visual estimation of the position of the “break” in the simulations. In Fig. \[figSim\_mag\] (right) we represent the relative error in the estimate of [$R_{Br}$]{}, against the input surface brightness at the “break”, [$\mu_{Br}$]{}$_{in}$. The relative error in [$R_{Br}$]{} is defined as 100 $\cdot$ ([$R_{Br}$]{}$_{out}$ - [$R_{Br}$]{}$_{in}$) / [$R_{Br}$]{}$_{in}$, where [$R_{Br}$]{}$_{in}$ is the actual position of the break for the model and [$R_{Br}$]{}$_{out}$ is the visually estimated position. In the Figure we plot the results for Types II and III. The points have been fitted by a line, and this gives a variation in the relative error from -9.4% to 3.9% between 22 and 26 [mag/$arcsec^{2}$]{} in [$\mu_{Br}$]{}$_{in}$. The standard deviation of the relative error is $\sigma_{err}$ = 8%.
We see that the scatter for Type II objects ($\sigma_{err}^{II}$=5%) is smaller than for Type III objects ($\sigma_{err}^{III}$=13%). These errors are both below the estimate of the error in [$R_{Br}$]{} (22%) derived from the comparison of results in this work and TP05 for objects common to GOODS and UDF observations, as reported in subsection 4.2. It is that error, derived from the analysis of real galaxies, which must be taken as a realistic estimate of the error in the estimate of [$R_{Br}$]{}. Finally, in Fig. \[figHisto2Dsim\] we present probability maps of the success in classification of simulated objects of Type II (left) and III (right), as a function of the simulated [$h_{1}$]{} and [$\mu_{Br}$]{} (top) and [$R_{Br}$]{} and [$\mu_{Br}$]{} (bottom) of the models. The success ratio has been coded in gray scale, and the portions of the planes ([$h_{1}$]{}- [$\mu_{Br}$]{} and [$R_{Br}$]{}- [$\mu_{Br}$]{}) for which no input model within the corresponding ranges of parameters was produced are marked with a cross. The points mark the position in the given planes of the real galaxies under study. Most of our real galaxies are in a region of the plane where the classification success is $>$80%. To summarize the results of this section, from a comparison with deeper observations (i.e. with the UDF) and with artificial galaxies in simulations we conclude that for Type II galaxies (the main goal of this paper) the accuracy of profile type identification is higher than 80%, for surveys as deep as GOODS-South. The nearly 20% of Type II galaxies potentially missed in our work are not found because their break positions lie at magnitudes fainter than 25-26 [mag/$arcsec^{2}$]{}. Results {#sec5} ======= Classification of Galaxies in our Sample. ----------------------------------------- We have divided our sample into 3 redshift bins or ranges; “low”: 0.1$<z\leq$0.5, “mid”: 0.5$<z\leq$0.8 and “high”: 0.8$<z\leq$1.1.
Of the 435 objects which were suitable for analysis, 146 (34%) are classified as Type I, 242 (56%) as Type II and 47 (11%) as Type III. All considered objects are visually confirmed as non-Irregular/Merger, as we stated before. There are 6 objects, of Types I and III, which are suspected of being early-type (probably S0, though it is difficult to be sure without more information), but they have entered our statistics. These objects are marked as “Early” in Table \[tblResults\], and the reason for including them is that they seem to have a genuine disk. Nonetheless, and to avoid any controversy on this issue, our results on disk sizes are based on a subsample of the Type II objects, of which none is classified as “Early”. Our work is focused on the population of late-type, non-interacting galaxies, in which the profile is “externally” truncated. This means we concentrate on Type II objects, and specifically on those in which the “downbending” break takes place in the outer parts of the visible disk, i.e. beyond the visible bars or spiral arms. A selection criterion based on the ratio [$R_{Br}$]{}/ [$h_{1}$]{}, where $h_{1}$ is the scale length of the inner part of the disk, is not effective in discriminating between the two kinds of “breaks”: the “inner” ones, which take place “inside” the stellar disk, and the proper “truncations”, those which mark the edge of the stellar disk, or a decline in the density of the stellar population. The trained judgement of the classifier is therefore needed to distinguish between them. We define, then, our “truncated” sample (T-sample hereafter) as follows: late-type objects, though not Irregular, nor involved in merger events, with a Type II-truncated profile. This sample contains 238 objects. That means only 4 out of the 242 objects which form the whole Type II sample have breaks of the “inner” kind. Additionally, 48 of the objects in the T-sample (20%) have asymmetries in their disks.
Finally, there are a few objects (15, 6.3%) in which the profile is of the mixed kind II+III. In these there is a clear Type II break, followed by a second break, of Type III; even so, we have included them in the T-sample. The distribution of the objects amongst the 3 redshift ranges is given in Table \[tblClass\]. We find that the frequency of objects classified as Type I decreases from 39% to 25% between z$\sim$1 and z$\sim$0.3. For Type II objects the opposite trend is found, their proportion increasing by almost 9% over the same period. These results are compatible with no changes in the distributions within the error bars. Finally, Type III objects show no change in their relative population within the error bars between z$\sim$1 and z$\sim$0.3. It is tempting to explore whether there is a real evolution in the populations of objects of different profile types. This would mean that, relative to the fraction of those that present truncations, more disk galaxies presented no truncations in the past than nowadays. But first we need to look again at Fig. \[figSim\_mag\] (left), in which we show results from the simulations. As we stated above, when Type I objects are fainter, they are increasingly mistaken for Types II or III in roughly equal proportions. On the other hand, genuine Type II objects are increasingly classified as Type I as they become fainter. Consequently, it is quite possible that the observed variation in the frequencies is caused by misidentifications of Type II galaxies as Type I at high $z$. We also compare the given ratios of profile types with results published in PT06 for local disk galaxies in the SDSS (their sample being composed of 85 objects, with Hubble Types $T$ in the range 3$<$T$<$8.5). In Table 4 of that paper they report the following shares: 11$\pm$3% of objects are classified as Type I, 66$\pm$5% have a Type II profile and 33$\pm$5% have a Type III profile.
The shares do not add up to 100% because objects with a mixed classification (i.e. II+III or III+II) are counted in the statistics of both Types. In our lowest redshift bin (z$\sim$0.3) the objects are classified as 25$\pm$6% of Type I, 59$\pm$9% of Type II and 15$\pm$5% of Type III. We see that the fraction of Type II objects is in good agreement with their results, while there is significant disagreement with regard to Types I and III. We assume that the fact that Type III is under-represented in our work, in comparison with theirs, is probably because Type III objects have their breaks at lower brightness levels than Type II’s, and so are harder to identify at intermediate redshifts.

Results for the “truncated” sample.
-----------------------------------

As stated in Section 2, the classification and characterization of the profiles have been performed in the band which best approximates the rest-frame $B$-band in each redshift bin. As a consequence, the contribution of luminous young stars to the retrieved surface brightness profiles is significant. In fact, in many objects the signature of star-forming “clumps” spread over the disks is evident in the images. This means that the reported “truncations” should more accurately be interpreted as related to, or at least influenced by, the extent of the “star formation” disk, a term we will use to refer to the part of the disk where most of the massive star formation is taking place. In contrast, the “truncations” found at longer wavelengths in other works (e.g. PT06) can be more unambiguously interpreted as an abrupt decrease in the density of stellar mass. Nonetheless, when comparing profile parameters of objects in the Local Universe (from results in PT06) and at intermediate redshift from this work, we do so based on profile characterizations performed in similar rest-frame bands: $g'$ for local objects and $B$-band for the higher redshift ones.

### [$R_{Br}$]{}- B-Luminosity relation.
Using the radial position of the truncation as a direct estimator of the size of the “star formation dominated” disk, we have explored the relation between [$R_{Br}$]{}and the $B$-band luminosity of the galaxies for the T-sample (“truncated” profiles). In Fig. \[figTrmag\] left we show [$R_{Br}$]{}in kpc against the rest-frame absolute $B$ magnitude [from @Barden05]. In the 4 panels we show this relation at the 3 redshift ranges we have explored, and also in the Local Universe, based on results from PT06 on SDSS galaxies at $z$$\sim$0 (we use the values of [$R_{Br}$]{}obtained in the $g'$-band for these objects). The observed distributions have been fitted to a line with a “robust” least absolute deviation fit, combined with an iterative “bootstrap method”, to get the most probable values of the slope and the y-intercept and their errors (the same procedure is used in Figs. \[figTrmass\] and \[figTrh1mag\] through \[figh1mass\]). At a given luminosity, [$R_{Br}$]{}evolves towards smaller values as redshift increases. Before exploring the meaning of this last assertion we want to clarify an important point with regard to the distribution of objects with luminosity at different redshifts. In the panels we see how this distribution varies amongst redshift bins. In the “low” redshift bin the objects are concentrated around (-20.0, -18.5) mag, while in the “mid” range the distribution is quite homogeneous between -22 $\leq$ [$M_{B}$]{}$\leq$ -18.5 mag. Quite opposite to what happens at lower redshifts, in the “high” redshift bin the objects cluster around (-20.0, -21.5) mag. These distributions are the result of a) probing a smaller volume in the lowest redshift bin and b) the incompleteness of the sample at the faint end of the luminosity range in the highest redshift bin. To avoid as much as possible the sampling effects just described, we explore the size evolution using as reference point [$M_{B}$]{}= -20 mag, which is well populated with galaxies at all redshifts. In Fig.
\[figTrmag\] b) we show the value of the aforementioned best fit lines at [$M_{B}$]{}= -20 mag, relative to the corresponding value in the $z$=0 sample (logarithmic y-axis), against the mean value of redshift in each bin. The errors shown are from the bootstrap method applied to the fitting of the relation. We see how the ratio decreases with redshift. Given that the points are aligned, we fit them with a straight line. According to this line, the [$R_{Br}$]{}of a galaxy with [$M_{B}$]{}= -20 mag has increased by a factor 2.6$\pm$0.3 between $z$=1 and $z$=0. The point at $z$=0.3 fits somewhat worse to the straight line than the others; nonetheless, the deviation is only 2.5$\sigma$. As we explained above, the ranges of [$M_{B}$]{}covered by the objects are not the same amongst different redshift bins, because of a lack of completeness inherent to the available data. Although our analysis method has been devised to minimize the negative consequences this fact could have on the results, it is desirable to test, within the possibilities granted by the available data, whether this method is really effective for the stated purpose. Along these lines, we have also explored the relation between [$R_{Br}$]{}and [$M_{B}$]{}when the same range of [$M_{B}$]{}is used to select the objects in all redshift bins. This range has been chosen as [-19.5$>M_{B}>$-21 mag]{}. When applying this restriction, the sample populations reduce to 15, 22, 75 and 41 objects in the z$\sim$0, “low”, “mid” and “high” redshift bins respectively. In comparison, the whole samples explored in Fig. 7 (and also in Figs. 8 through 14) contain 39, 39, 133 and 66 objects in the same redshift bins, i.e., they are significantly larger.
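The robust fitting procedure described above (a least-absolute-deviation fit with bootstrap errors) can be sketched as follows. This is a minimal illustration of the general technique, not the authors' actual code; the iteratively-reweighted implementation, function names and toy data are our own assumptions.

```python
import math
import random

def lad_fit(x, y, iters=50, eps=1e-8):
    """Least-absolute-deviation (L1) line fit via iteratively
    reweighted least squares; returns (intercept, slope)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        # weights ~ 1/|residual| turn the weighted L2 problem into an L1 one
        w = [1.0 / max(abs(yi - (a + b * xi)), eps) for xi, yi in zip(x, y)]
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        b = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
        a = (swy - b * swx) / sw
    return a, b

def bootstrap_errors(x, y, n_boot=200, seed=1):
    """Bootstrap standard errors of the LAD intercept and slope:
    refit on resampled-with-replacement data, take the scatter."""
    rng = random.Random(seed)
    n = len(x)
    fits = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        fits.append(lad_fit([x[i] for i in idx], [y[i] for i in idx]))
    def std(vals):
        m = sum(vals) / len(vals)
        return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return std([f[0] for f in fits]), std([f[1] for f in fits])

# Toy data: y = 1 + 2x with one strong outlier; the L1 fit resists it.
x = list(range(10))
y = [1.0 + 2.0 * xi for xi in x]
y[5] += 10.0
a, b = lad_fit(x, y)
a_err, b_err = bootstrap_errors(x, y)
```

The robustness to outliers is the reason for preferring an L1 fit over ordinary least squares when the intrinsic dispersion of the relation is large.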
The growth factor in [$R_{Br}$]{}between z$\sim$1 and z$\sim$0, deduced from the best fit lines to the [$R_{Br}$]{}-[$M_{B}$]{}relation at [$M_{B}$]{}=-20 mag, is 2.0$\pm$0.8 when restricting the range in [$M_{B}$]{}, which is smaller than the value given above, but in agreement within the error bar. This further supports the hypothesis that the reported change in [$R_{Br}$]{}at a fixed [$M_{B}$]{}across redshift is not due to differences in the luminosity range of the sampled objects.

### [$R_{Br}$]{}- Stellar Mass relation.

It is not entirely straightforward to interpret the size evolution of galaxies from the [$R_{Br}$]{}- [$M_{B}$]{}relation, studied above, for the following reason. The mass-luminosity relation, especially for shorter wavelength bands such as the $B$-band, has varied significantly between $z$=0 and $z$$\sim$1 [@Brinchmann00; @Bell03; @Dickinson03; @PerezGonzalez08]. Objects of the same mass were brighter in the $B$-band in the past, as their stellar populations were younger on average. As a consequence, even if there were no changes in the [$R_{Br}$]{}of objects, their established increase in $B$-band luminosity would make the relation vary such that objects of a same [$M_{B}$]{}would have smaller values of [$R_{Br}$]{}in the past. This is the trend reported in the preceding subsection. Fortunately there is a way to overcome the consequences of this luminosity evolution in order to test for a “real” change in the sizes of stellar disks with time (or, more properly, the size of the “star formation dominated” disk, as we are relying on rest-frame $B$-band data). This is to survey the [$R_{Br}$]{}- [$M_{\star}$]{}relation. It is known that the stellar mass content of galaxies has also increased since z$\sim$1, but this change has been more moderate in relative terms [$\lesssim$30%; @Rudnick03] than that in luminosity. Following the preceding discussion, we have tested the relation between size ([$R_{Br}$]{}) and stellar mass.
This is the strategy adopted in @Trujillo04 and @Trujillo06 to test for evolution of the effective radius of massive galaxies ([$M_{\star}$]{}$>$ $10^{10}$ [$M_{\sun}$]{}) at high redshifts. In Fig. \[figTrmass\] we show, in analogous fashion to Fig. \[figTrmag\] left, the values of [$R_{Br}$]{}for objects of the T-sample against the stellar mass [$M_{\star}$]{}, as reported in @Barden05. The $z$=0 data come again from @TP05 [their stellar masses were computed following the prescription of @Bell03]. In this case, the best fit line (obtained by the same method as in the [$R_{Br}$]{}- [$M_{B}$]{}relation) also shows a trend to lower values with redshift. While in this case the differences in the distribution of stellar masses are not as large as in the case of the luminosities, the slopes are slightly less “stable” than in that case, even though they should follow less pronounced changes between redshift ranges. This may be because the distribution of masses is not as broad relative to the distribution of [$R_{Br}$]{}as the distribution of [$M_{B}$]{}’s. Moreover, the stellar mass is a quantity derived from the luminosities, and this leads to an increase in the dispersion. The factors that go into producing the dispersion in the two relations ([$R_{Br}$]{}- [$M_{B}$]{}and [$R_{Br}$]{}- [$M_{\star}$]{}) are the following: a) inaccurate redshifts ($\delta$$z$/(1+$z$)$\sim$0.02), b) inaccurate estimates of [$R_{Br}$]{}($\sigma^{err}_{R}$ $\lesssim$ 25%), and c) intrinsic dispersion in the relation. Though the first two factors (especially the second) may play a role in explaining part of the dispersion, we do not rule out the third possibility; i.e.
it is not only possible, but almost to be expected, that objects of the same mass may have followed different evolutionary paths (depending on initial conditions and environment, for example), which have shaped their structure in different ways, and so produce “intrinsic” dispersions in the [$R_{Br}$]{}- [$M_{B}$]{}and [$R_{Br}$]{}- [$M_{\star}$]{}relations. In Fig. \[figTrmass\] right we present the fitted value of [$R_{Br}$]{}at $10^{10}$ [$M_{\sun}$]{}relative to the $z$=0 value, against the mean value of redshift in each bin. The errors are again derived from the bootstrap method applied when fitting the [$R_{Br}$]{}-[$M_{\star}$]{}relation. We see a weaker evolution with redshift than for the [$R_{Br}$]{}-[$M_{B}$]{}relation. It is also interesting to see how in this case the points are much better aligned than for that relation. From the linear fit to the points in Fig. \[figTrmass\] right we deduce an increase by a factor of 1.3$\pm$0.1 in [$R_{Br}$]{}for a given stellar mass between $z$=1 and $z$=0, i.e. in the last $\sim$8 Gyr using the standard parameters of our cosmological model. For comparison, in the same range of redshifts, and using the assumption of a maximum evolution in luminosity for their galaxies, @TP05 reported a more moderate growth in [$R_{Br}$]{}, by a factor of at least 1.25, which is in good agreement with our result. Another point worth noting is that individual galaxies also evolve in stellar mass. For this reason it is interesting to provide a rough estimate of how much [$R_{Br}$]{}could grow for a given “individual” galaxy. In the range of ages probed here it is claimed in the literature that galaxies have increased their stellar masses by $\sim$30% [@Rudnick03]. This means that a Type II object with [$M_{\star}$]{}= $10^{10}$ [$M_{\sun}$]{}at z=1 would have its [$R_{Br}$]{}larger by $\lesssim$50% today, i.e. by somewhat more than if its mass were unchanged.
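The $\lesssim$50% figure can be reproduced with a short worked estimate. Note that the slope $\alpha$ of the [$R_{Br}$]{}-[$M_{\star}$]{}relation used below is our own illustrative assumption, not a value fitted in this work:

```latex
\frac{R_{Br}(z{=}0)}{R_{Br}(z{=}1)}
\;\simeq\;
\underbrace{1.3}_{\text{growth at fixed mass}}
\times
\underbrace{(1.3)^{\alpha}}_{\text{mass growth of }\sim 30\%}
\;\approx\; 1.3 \times 1.1 \;\approx\; 1.4\text{--}1.5
\qquad (\text{taking, e.g., } \alpha \approx 0.4),
```

i.e. the extra growth contributed by moving the galaxy along the relation as its mass increases is of order 10%, consistent with the quoted $\lesssim$50% total.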
We have also explored the relation between [$R_{Br}$]{}and [$M_{\star}$]{}when the same range of [$M_{\star}$]{}is imposed to select the objects in all redshift bins, in an analogous way to what we did in subsection 5.2.1 for the [$R_{Br}$]{}-[$M_{B}$]{}relation, and for the same reasons. The chosen range of stellar masses is [$5\cdot10^{9}<M_{\star}<5\cdot10^{10}$ [$M_{\sun}$]{}]{}. With this restriction the populations reduce to 30, 11, 59 and 40 galaxies within the z$\sim$0, and “low”, “mid” and “high” redshift bins, respectively. As a result, and for these stellar mass-restricted samples, the growth factor in [$R_{Br}$]{}between z$\sim$1 and z$\sim$0 is 1.4$\pm$0.2, a minor difference with the value obtained for the unrestricted samples. This difference between “restricted” and “unrestricted” values is smaller than that found for the [$R_{Br}$]{}-[$M_{B}$]{}relation because the distributions of [$M_{\star}$]{}are more similar between the redshift bins than the distributions in [$M_{B}$]{}. Finally, this test strengthens the significance of the result found for the growth in the [$R_{Br}$]{}-[$M_{\star}$]{}relation, as it persists when only objects within the same stellar mass range are taken into account through redshift.

### Surface Brightness at the Break Evolution.

We have also explored the distribution of [$\mu_{Br}$]{}in our sample of truncated profiles. In Fig. \[figMuz\] (left) we show a histogram of the [$\mu_{Br}$]{}distributions of the T-sample in the 3 redshift bins under study. The [$\mu_{Br}$]{}have been corrected for the cosmological dimming effect (I $\propto$ $(1+z)^{-4}$). This means we are showing the distribution of [$\mu_{Br}$]{}as it would be measured from a rest-frame observational standpoint for every object. In the same panel we also represent the median of the distribution of [$\mu_{Br}$]{}in the $g'$ band reported in PT06 for $z$$\sim$0 galaxies. We use this band as it is the closest to our rest-frame $B$-band.
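The dimming correction just described amounts to a fixed offset in magnitudes, and differences in surface brightness convert to intensity ratios through the standard definition $\Delta m = -2.5\log_{10}(I_1/I_2)$. A quick check of these conversions (our own; the function names are ours):

```python
import math

def mag_diff_to_intensity_ratio(dmag):
    """Convert a surface-brightness difference in mag/arcsec^2
    to the corresponding intensity ratio."""
    return 10.0 ** (dmag / 2.5)

def dimming_correction(z):
    """Cosmological (1+z)^4 surface-brightness dimming in magnitudes:
    mu_rest = mu_observed - 10*log10(1+z)."""
    return 10.0 * math.log10(1.0 + z)

# e.g. a 3.3 mag/arcsec^2 difference corresponds to a factor ~20.9
# in intensity, and a 3.5 mag difference to a factor ~25.1
r1 = mag_diff_to_intensity_ratio(3.3)
r2 = mag_diff_to_intensity_ratio(3.5)
# at z = 1 the dimming correction is ~3.0 mag
dim = dimming_correction(1.0)
```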
We see a clear evolution in the median values of the distributions, in the sense that the “break” in the profiles happens at a brightness level which is 3.3$\pm$0.2 [mag/$arcsec^{2}$]{}brighter at $z$$\sim$1 than at $z$$\sim$0. This is a strong evolution, by a factor of 20.9$\pm$4.2 in intensity. As we are measuring the surface brightness profiles in the rest-frame $B$-band, which is significantly affected by the contribution of young stars, we argue this change may be related to the well known cosmological evolution in the global SFR between $z$$\sim$1 and $z$$\sim$0. Medians and standard deviations for the distributions represented in Fig. \[figMuz\] are given in Table \[tblfigMuz\]. If we apply here restrictions both in [$M_{B}$]{}([-19.5$>M_{B}>$-21 mag]{}) and [$M_{\star}$]{}([$5\cdot10^{9}<M_{\star}<5\cdot10^{10}$ [$M_{\sun}$]{}]{}) to select the objects in all redshift bins, the results do not change perceptibly either, as in previous cases. The difference in median values of [$\mu_{Br}$]{}between z$\sim$1 and z$\sim$0 is -3.5$\pm$0.3 [mag/$arcsec^{2}$]{}, only slightly larger than for the unrestricted samples. This corresponds to a decrease in intensity by a factor 25.1$\pm$8.0. In this case the samples reduce to 11, 11, 39, and 19 galaxies in the redshift bins termed “local”, “low”, “mid” and “high”.

### [$R_{Br}$]{}/ [$h_{1}$]{}Evolution.

We also present results on [$R_{Br}$]{}, relative to the scale length of the inner exponential, [$h_{1}$]{}, in our truncated objects (T-sample). Fig. \[figTrh1z\] shows histograms of the distribution of this parameter in the 3 explored redshift bins. Also represented is the median value of the ratio for the local sample in PT06, [$R_{Br}$]{}/[$h_{1}$]{}(z$\sim$0) = 2.0 (measured in the $g'$-band). The statistical parameters of the distributions can be found in Table \[tblfigTrh1z\]. The most striking feature is how low the values of the ratio [$R_{Br}$]{}/[$h_{1}$]{}we have measured are, compared to local values.
The mean, median and standard deviation of the distributions of values in the different redshift bins are given in Table \[tblfigTrh1z\]. The median values are also shown as vertical lines in Fig.\[figTrh1z\]. @Perez04 found a median/mean value of 1.8 for [$R_{Br}$]{}/[$h_{1}$]{}at z$\sim$1 (over 6 objects), which is also smaller than the local value, but still larger than those we find. It is probable that the disparity in the sizes of the samples is the cause of this difference. With regard to the difference between the values of [$R_{Br}$]{}/[$h_{1}$]{}she obtained and those observed in the Local Universe, she attributed it mainly to two biases: 1) the detection limit on surface brightness would have prevented her from identifying the breaks with larger values of [$R_{Br}$]{}/[$h_{1}$]{}; and 2) the effects of dust. The first reason is evident, but the latter may need clarification. She assumes dust absorption was more important in late-type galaxies at intermediate redshifts, and especially in the inner parts of the galaxies. This would be perceived as the inner parts of disks having their profiles “flattened”, i.e. with larger [$h_{1}$]{}. This is what would make the quotient [$R_{Br}$]{}/[$h_{1}$]{}smaller at higher redshift, relative to the Local Universe. Again in Fig. \[figTrh1z\], it is also interesting how the distributions of the ratio [$R_{Br}$]{}/[$h_{1}$]{}are quite similar amongst the sub-samples between z$\sim$0.3 and z$\sim$1. It is important to note that the [$R_{Br}$]{}/[$h_{1}$]{}evolution presented here is measured in relation to the observed (uncorrected for dust) [$R_{Br}$]{}/[$h_{1}$]{}of the local galaxies; consequently, if the dust opacity were not to change with redshift, the observed evolution presented here would reflect an intrinsic evolution in the profile morphology of the objects. However, it is likely that the opacity of the galaxies changes with redshift.
At a fixed inclination, bulge-to-total flux ratio, and rest-frame wavelength, the degree of attenuation and the increase in the observed scale length due to dust can be parameterized by the change in the central face-on optical depth. The optical depth is a very uncertain quantity (even in the nearby Universe), and this makes a detailed evaluation of the effect of dust beyond the scope of this work. Consequently, we have not made any attempt to correct our results for the effect of opacity. Nevertheless, in order to provide a crude estimate of how a significant increase in opacity could affect our results, we have performed the following exercise: let us assume a mean inclination of 30 degrees and an increase in the total central face-on optical depth in the $B$-band from 4 (present-day galaxies) to 8 (high-z galaxies). This change implies a transition from an intermediate to a moderately optically thick case. In this case, for a disk-like galaxy observed in the rest-frame $B$-band, the attenuation increases by 0.25 mag [@Tuffs04, their Fig. 3 and Table 4] and the scale length increases by 12% [@Mollenhoff06]. If we account for these numbers, the galaxies in our high-z sample would be intrinsically brighter by 25% and their scale lengths intrinsically smaller by 12%. In this sense, if we assume that the break position is not affected by the dust content, the observed (uncorrected for dust) [$R_{Br}$]{}/[$h_{1}$]{}evolution presented in this paper would be an upper limit to the actual evolution. It is important to stress, however, that a $\sim$15% bias in [$R_{Br}$]{}/[$h_{1}$]{}due to dust is far from explaining the amount of evolution observed here ([$R_{Br}$]{}/[$h_{1}$]{}increasing by $\sim$50% between z$\sim$1 and z$\sim$0, as shown in Fig. 10 and Table 3).
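The $\sim$25% and $\sim$15% figures above follow from simple conversions; a quick check of the arithmetic (our own, rounding as in the text):

```python
# 0.25 mag of extra attenuation corresponds to an intensity factor of
# 10**(0.25/2.5) ~ 1.26, i.e. the galaxies would be intrinsically
# brighter by roughly 25%
flux_factor = 10.0 ** (0.25 / 2.5)

# if dust inflates the observed scale length so that the intrinsic h1
# is smaller by 12%, then R_Br/h1 (at fixed R_Br) is biased low by
# 1/(1 - 0.12) ~ 1.14, i.e. the ~15% bias quoted above
ratio_bias = 1.0 / (1.0 - 0.12)
```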
Finally, there is no clear evidence that the mean optical depth in dust was in fact so much higher in previous epochs, and if the opacity were smaller in the past, then the situation would be reversed, with our current estimate of the size evolution being a lower limit. It could be argued that the low values of [$R_{Br}$]{}/[$h_{1}$]{}found at intermediate redshifts could be related to the presence of stellar bars, as objects which present them tend to have lower values of this ratio. In fact, in Pohlen & Trujillo (2006) it was reported that in galaxies with bars (Type II.o-OLR) [$R_{Br}$]{}/[$h_{1}$]{}, as measured in the $r'$-band, is around 1.7, compared to 2.5 in galaxies without bars (Type II-CT). However, using the largest up-to-date sample of face-on galaxies (0.2$<$z$<$0.84) from the COSMOS 2-square degree field, @Sheth08 find that the fraction of barred spiral galaxies declines with redshift. For galaxies with [$M_{\star}$]{}$>$$10^{10}$[$M_{\sun}$]{}the fraction of bars drops from 65% in the local universe to 20% at z$\sim$0.84. It must also be taken into account that Type II.o-OLR objects are included in our “local” sample: due to insufficient angular resolution, it is not possible to determine in the higher redshift samples, with the accuracy attainable for the local objects, whether the “truncations” are of this Type (II.o-OLR) or not. These facts together imply that the systematic decline in the [$R_{Br}$]{}/[$h_{1}$]{}ratio we see with redshift is unlikely to be due to an increase in the bar fraction. Another point which is interesting to discuss with regard to this problem is related to the apparent similarity of many of the high-z galaxies under study in this work with the irregular galaxies in the Local Universe. Moreover, the values of [$R_{Br}$]{}/[$h_{1}$]{}and [$h_{1}$]{}are similar to those derived in @HunterElmegreen06 for a sample of irregular galaxies.
However, it is unlikely that present-day irregular galaxies are the final stage of the galaxies in our sample: our high-z galaxies are much more massive than these local irregular galaxies. An interesting feature shared by these two families of galaxies, however, is a large value of their specific star formation rate, sSFR [see for example the comparison between the sSFR of $10^{8}$[$M_{\sun}$]{}galaxies at low redshift and the $10^{10}$ [$M_{\sun}$]{}galaxies at z$\sim$0.85 in Fig. 2 of @Bauer05], and the fact that their star formation is well spread over the galaxy disk. This is probably the main reason for their large inner scale lengths. With the purpose of shedding some light on this puzzling phenomenon, we have also probed how [$R_{Br}$]{}/[$h_{1}$]{}relates to global properties of the galaxies. In Fig. \[figTrh1mag\] (left) we present this ratio against [$M_{B}$]{}for the T-sample, in an analogous way to Fig. \[figTrmag\]. Again, we have divided the objects into 3 redshift bins, and the local data are from PT06 ($g'$-band results). In the low redshift bin (0.1$<z\lesssim$0.5) our derived relation depends less on luminosity than that reported by PT06. In the “mid” redshift bin, though, the two relations appear parallel. In the “high” range the relation reverses, in the sense that [$R_{Br}$]{}/[$h_{1}$]{}decreases slightly with the $B$-band luminosity of the galaxies, in contrast to the trend in the Local Universe. This feature has not been previously reported and we must try to explain it. First, in the “high” redshift bin there is a lack of low luminosity galaxies, which might make the slope of the best fit line tilt downwards. But, if this were a real phenomenon (and we will give more evidence below supporting this), and not an observational effect, what would it mean?
Assuming an exponential profile, the ratio [$R_{Br}$]{}/ [$h_{1}$]{}is proportional to the difference between the surface brightness of the inner exponential measured at the break and at the center: [$R_{Br}$]{}/[$h_{1}$]{}$\propto$ ( [$\mu_{Br}$]{}- $\mu$(r=0) ). So what we see is that, at z$\sim$1, the more luminous objects had a profile which changed less in surface brightness, in absolute terms, from the center to the break, than the less luminous objects (i.e. the surface brightness profile is flatter for the more luminous galaxies). Flat profiles, at these rest-frame wavelengths, imply that star formation was taking place more homogeneously across the whole galaxy disk (a good example is UDF3372, shown in Fig. \[figU2G\]). In the right panel of Fig. \[figTrh1mag\] we can see the ratio [$R_{Br}$]{}/[$h_{1}$]{}relative to the local value (PT06), at [$M_{B}$]{}=-20 mag, for every redshift bin, taken from the linear fits given in the left panel. We see that there is little evolution in the ratio from $z$$\sim$0.3 to z$\sim$1. The ratio [$R_{Br}$]{}/[$h_{1}$]{}varies between 0.62$\pm$0.04 and 0.66$\pm$0.11 of the local value at these redshifts, although the error bars are too large to derive any firm conclusion. The points in the panel give the impression that there is a “gap” between the values at z$\sim$0 and those at intermediate redshifts, perhaps raising concerns about some kind of difference between the samples, other than genuine evolutionary changes in this structural parameter, or about variations in the methods employed to study the samples. We must say first that, if we take into account the error bars, the significance of this “gap” is significantly reduced. We also want to stress that the analysis methodology employed here is analogous to that used in studying the local objects. But we will come back to this point below, where the [$R_{Br}$]{}/[$h_{1}$]{}-[$M_{\star}$]{}relation is broached.
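The proportionality between [$R_{Br}$]{}/[$h_{1}$]{}and [$\mu_{Br}$]{}$-$ $\mu$(r=0) invoked at the start of this discussion is the standard exponential-disk relation; writing it out explicitly:

```latex
I(r) = I_{0}\, e^{-r/h_{1}}
\;\;\Longrightarrow\;\;
\mu(r) = \mu_{0} + \frac{2.5}{\ln 10}\,\frac{r}{h_{1}}
       = \mu_{0} + 1.086\,\frac{r}{h_{1}},
```

so that, evaluated at the break radius,

```latex
\mu_{Br} - \mu(r{=}0) = 1.086\,\frac{R_{Br}}{h_{1}} .
```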
We have also tested how [$R_{Br}$]{}/[$h_{1}$]{}for the T-sample relates to the stellar mass of the galaxies, as shown in Fig. \[figTrh1mass\]. In the left panel, we see the given ratio against the stellar mass for the different redshift bins (local data are from PT06, as obtained in the $g'$-band). Both panels are analogous to those in Fig. \[figTrh1mag\], but substituting [$M_{\star}$]{}for [$M_{B}$]{}. In this case, the relation between [$R_{Br}$]{}/[$h_{1}$]{}and [$M_{\star}$]{}in the “low” redshift bin is steeper than the relation reported by PT06 at $z$$\sim$0. Note that the local sample covers a smaller range in stellar mass, which could somehow bias the local [$R_{Br}$]{}/[$h_{1}$]{}determination. In the mid redshift bin, the two relations are almost parallel, and in the high redshift range, the slope changes from positive to negative, as happened for [$R_{Br}$]{}/[$h_{1}$]{}vs. [$M_{B}$]{}in the same range of redshifts. It is important to note that the dispersions are quite large, and so these slopes must be taken with caution. But it is interesting to note again an anticorrelation between [$R_{Br}$]{}/[$h_{1}$]{}and [$M_{\star}$]{}at $z$$\sim$1. As we said above for the more luminous objects, it seems that more massive galaxies at high redshift have an inner disk whose surface brightness decreases less between the central part and the break, compared to the less massive ones. In the right panel of Figure \[figTrh1mass\] we show the values of the best fits to the distributions of [$R_{Br}$]{}/[$h_{1}$]{}at [$M_{\star}$]{}= $10^{10}$ [$M_{\sun}$]{}, against redshift. This time the decrease in [$R_{Br}$]{}/[$h_{1}$]{}with redshift is progressive, and from the linear fit we find that the ratio has increased by a factor 1.6$\pm$0.3 between $z$$\sim$1 and $z$=0. A point of particular interest is that the points in the right panel of Fig. \[figTrh1mass\] appear better aligned than in the right panel of Fig. \[figTrh1mag\].
This could mean that the apparent discontinuity in the [$R_{Br}$]{}/[$h_{1}$]{}-[$M_{B}$]{}relation commented on above may be, at least in part, caused by the significant evolution in $B$-band luminosity in the same redshift range. But we would not like to push this interpretation, as the error bars are large enough to be compatible with a more progressive evolution of [$R_{Br}$]{}/[$h_{1}$]{}at a given [$M_{B}$]{}with redshift. We have also tested, for the [$R_{Br}$]{}/[$h_{1}$]{}parameter, its relations with [$M_{B}$]{}and [$M_{\star}$]{}when the corresponding ranges used to select the objects are fixed, as in previous subsections. In the case of a fixed [$M_{B}$]{}range (selection range: [-19.5$>M_{B}>$-21 mag]{}), the ratio [$R_{Br}$]{}/[$h_{1}$]{}at intermediate redshifts varies between 0.50$\pm$0.08 and 0.59$\pm$0.17 of the local value. This is a slightly larger difference with the local values than for the unrestricted samples, but it does not perceptibly affect our discussion of this parameter. For the relation between [$R_{Br}$]{}/[$h_{1}$]{}and [$M_{\star}$]{}(selection range: [$5\cdot10^{9}<M_{\star}<5\cdot10^{10}$ [$M_{\sun}$]{}]{}), the result remains essentially unchanged with respect to the unrestricted case: that is, from the linear fit there is an increase in the ratio by a factor 1.5$\pm$0.3 between $z$$\sim$1 and $z$=0.

### Scale length of the disk inner to the Break: [$h_{1}$]{}.

In a further attempt towards clarifying why we find a distribution of [$R_{Br}$]{}/[$h_{1}$]{}with lower mean and median values at higher redshift than in the Local Universe, we have also probed the relations between [$h_{1}$]{}and [$M_{B}$]{}and [$M_{\star}$]{}for the “truncated” galaxies under study. In Fig. \[figh1mag\] we present [$h_{1}$]{}as a function of [$M_{B}$]{}for galaxies of the T-sample, again for different redshift bins, as in previous similar figures (\[figTrmag\] and \[figTrh1mag\]). In the left panel we have also included the relation between “size” (i.e.
scale length derived from the effective radius as explained below) and absolute magnitude found by @Shen03 for local, Sérsic $n<$2.5 galaxies in the SDSS, represented by their best fit curve (dash-dotted line). @Shen03 published the relation between the Sérsic effective radius of the galaxies (i.e. the radius that encloses half of the light of the Sérsic model that best fits the distribution of light of the object), as measured in the $r'$ band, and absolute magnitude (see Fig. 6 in their paper). Our profiles are retrieved in approximately the rest-frame $B$-band ($\sim$ $g'$ band), but we would not expect dramatic differences in the size estimates of disks between the two bands (either from scale lengths or truncation radii; see e.g. the values of these parameters in both bands in the Table of PT06). Assuming an $n$=1 Sérsic profile (exponential profile), we converted the effective radii ($R_{S,eff}$) to scale lengths of the exponential profile using the expression $h$ = $R_{S,eff}$ / 1.676. As their cut-off for classification as a late-type galaxy was as broad as $n<$2.5 (the same as ours), most of their objects do not obey the exponential law, and we should term this $h$ the equivalent scale length. We still show their curve for reference. First, we see how the values of [$h_{1}$]{}in the PT06 sample fall above the @Shen03 curve ([$h_{1}$]{}values from PT06 were obtained in the $g'$-band). This result is expected, since fitting the whole profile of a truncated galaxy will produce smaller values of the scale length than fitting only the inner part. Our results are divided amongst the other 3 sub-panels as a function of redshift, and we see how the best fit lines to the distributions have much steeper slopes than for the local sample. Our galaxies cover a wider range in [$h_{1}$]{}relative to local ones, reaching larger limits, especially for the most luminous objects.
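The conversion factor 1.676 used above is the $n$=1 Sérsic half-light scale factor, i.e. the root of the half-light condition for an exponential profile. A quick numerical check (our own, by bisection; the exact root is $\approx$1.678, which the paper rounds to 1.676):

```python
import math

def enclosed_light_fraction(x):
    """Fraction of the total light of an exponential disk,
    I(r) = I0*exp(-r/h), enclosed within radius r = x*h."""
    return 1.0 - (1.0 + x) * math.exp(-x)

# Solve enclosed_light_fraction(x) = 0.5 by bisection: x = R_eff / h.
lo, hi = 1.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if enclosed_light_fraction(mid) < 0.5:
        lo = mid
    else:
        hi = mid
b1 = 0.5 * (lo + hi)   # ~1.678
```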
In the right panel we see the [$h_{1}$]{}values from the best fits to the distributions at [$M_{B}$]{}=-20 mag, as a function of redshift. It can be seen that there is a moderate decrease in [$h_{1}$]{}with redshift. In Fig. \[figh1mass\] (left) we show the relation between [$h_{1}$]{}and the stellar mass for the galaxies of the T-sample, at different redshifts. The local data, as before, are from PT06 (results obtained in the $g'$-band). The dash-dotted line represents the relation between the equivalent scale length, $h$ = $R_{S,eff}$ / 1.676, and stellar mass given in @Shen03 (taken from their Fig. 10). That is a best fit to the relation they find for late-type ($n<$2.5) galaxies in the SDSS, in the $z'$ band. Again, we would not expect dramatic changes in the size parameters of disks between the $B$ and $z'$ bands, and this is the best suited result for comparison they give. It is evident that, as in the [$h_{1}$]{}- [$M_{B}$]{}relation, the values of [$h_{1}$]{}fall systematically above the @Shen03 relation. The reason for this is the same as that given previously: the inner scale lengths in “truncated” galaxies are systematically larger than the scale length of the whole disk. We can see how our distributions of [$h_{1}$]{}are slightly above (reaching higher values than) those for the local sample, and their best fit lines have steeper slopes than in the lowest redshift range. In the right panel of the same Figure we see the best fit value to the [$h_{1}$]{}distributions at [$M_{\star}$]{}= $10^{10}$[$M_{\sun}$]{}, relative to the local value, against redshift. The values of [$h_{1}$]{}at intermediate redshifts are larger than the local ones by a factor that goes from 1.29$\pm$0.12 to 1.20$\pm$0.11 between $z\sim$0.3 and $z\sim$0.9. This overall slight increase in [$h_{1}$]{}, combined with the decrease of [$R_{Br}$]{}, is responsible for the reported decrease in [$R_{Br}$]{}/[$h_{1}$]{}at higher redshift.
The decrease by a factor 1.3 in [$R_{Br}$]{}(at a fixed stellar mass), times the increase in [$h_{1}$]{}by $\sim$1.25 (also at a given stellar mass), gives 1.6, in agreement with the reported decrease in [$R_{Br}$]{}/[$h_{1}$]{}between $z\sim$0 and $z\sim$1. From the lack of evolution of [$h_{1}$]{}at intermediate redshifts seen in the right panel of Fig. 14, and the previous discussions on the [$R_{Br}$]{}- [$M_{\star}$]{}and [$R_{Br}$]{}/[$h_{1}$]{}relations, we can also say something else. The disks have become fainter (in rest-frame $B$-band), while keeping their slopes ($\propto$1/[$h_{1}$]{}) roughly constant or slightly increasing, and at the same time increasing their [$R_{Br}$]{}, as the Universe has grown older (in the surveyed range of redshifts). We have also probed these relations under restrictions in the selected [$M_{B}$]{}([-19.5$>M_{B}>$-21 mag]{}) and [$M_{\star}$]{}([$5\cdot10^{9}<M_{\star}<5\cdot10^{10}$ [$M_{\sun}$]{}]{}), depending on whether [$h_{1}$]{}is related to luminosity or stellar mass respectively. On one hand, for the [$h_{1}$]{}- [$M_{B}$]{}relation, this varies by a factor 1.02$\pm$0.2 between z$\sim$1 and z$\sim$0; i.e. no evolution in the surveyed range of redshifts is retrieved (in the “unrestricted” case this factor is 1.4$\pm$0.5). On the other hand, for the [$h_{1}$]{}- [$M_{\star}$]{}relation, the result is the same as in the case when no restrictions in [$M_{\star}$]{}are applied. That is, from the linear fit we obtain a decrease in [$h_{1}$]{}by a factor 0.8$\pm$0.1 between z$\sim$1 and z$\sim$0 when the mass is fixed. Again, we see that using the same ranges of luminosities and stellar masses in every redshift bin does not significantly change the results. ### Robustness of the reported evolutionary changes. The previous sub-sections have shown that both the break position [$R_{Br}$]{}and the surface brightness at the break radius [$\mu_{Br}$]{}have evolved with cosmic time. 
It could be argued, however, that part of this evolution could be due to selection effects, since, as shown in Section 4, very faint breaks can be missed if they were present in the galaxies analysed. We have checked whether this is the case by exploring how our surface brightness limit could affect the observed evolution. According to Sec. 4, the profiles are reliable down to $\mu$$\sim$26 [mag/$arcsec^{2}$]{}. Being conservative, we estimate that we should be able to identify breaks with high confidence down to [$\mu_{Br}$]{}$_{,limit}$$\sim$25.5 [mag/$arcsec^{2}$]{}. Using this number as the limiting value, we can estimate the rest-frame (i.e. cosmological dimming corrected) surface brightness at the break below which the number of objects classified as Type II would start to decline due to a selection effect and not as a result of real evolution. These limits are 24.4 [mag/$arcsec^{2}$]{}($z\sim$0.3), 23.3 [mag/$arcsec^{2}$]{}($z\sim$0.65) and 22.6 [mag/$arcsec^{2}$]{}($z\sim$0.95). As we can see in Fig. \[figMuz\], these numbers are far away (by more than a magnitude in all cases) from the position of the peak in the observed galaxy surface brightness break distribution. This reinforces the idea that most of the observed evolution is real and not caused by this effect. Another test we have run is to estimate the maximum [$R_{Br}$]{}that could be measured at a given absolute magnitude according to the limiting surface brightness proposed above. At the highest redshift bin, $z\sim$1, the “distance modulus” is 43 mag ($k$ correction included, derived from the direct comparison of [$z_{850}$]{}magnitudes to [$M_{B}$]{}). Again, we use a surface brightness limit of 25.5 [mag/$arcsec^{2}$]{}. We consider a truncated exponential galaxy with a typical scale length of $h$ = 0.5 arcsecs. 
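The three limiting values quoted above follow from removing the $(1+z)^4$ cosmological surface brightness dimming from the observed limit, i.e. $\mu_{rest} = \mu_{obs} - 10\log_{10}(1+z)$. A minimal sketch (ours, for illustration only) reproduces them:

```python
import math

MU_OBS_LIMIT = 25.5  # mag/arcsec^2, conservative break-detection limit

def restframe_limit(z, mu_obs=MU_OBS_LIMIT):
    """Rest-frame surface brightness limit after removing the (1+z)^4
    cosmological dimming: mu_rest = mu_obs - 2.5*log10((1+z)**4)."""
    return mu_obs - 10.0 * math.log10(1.0 + z)

for z in (0.3, 0.65, 0.95):
    print(z, round(restframe_limit(z), 1))
# -> 24.4, 23.3 and 22.6 mag/arcsec^2, the values quoted in the text
```

Note this accounts only for cosmological dimming; the $k$ correction enters separately through the distance modulus discussed next.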
With these values, the largest [$R_{Br}$]{}that could be measured would be [$R_{Br}$]{}= 11.1 kpc for galaxies with [$M_{B}$]{}=-20 mag and [$R_{Br}$]{}=18.5 kpc for galaxies with [$M_{B}$]{}= -22 mag. As can be seen, these values are again far away from the observational data distribution, indicating that most (if not all) of the observed evolution is real and not caused by our limiting detected surface brightness. Conclusions {#sec6} =========== In this work we have presented an analysis of surface brightness profiles for a sample of 505 galaxies in the GOODS-South field, making use of HST-ACS imaging from @Giavalisco04. These galaxies are all classified as disk galaxies, based on the selection of those objects with Sérsic indices $n<$2.5 [Sérsic profile fitting is from @Barden05]. We have also used publicly available data on redshift estimates, absolute rest-frame $B$-band magnitudes and stellar masses from @Wolf01 [@Wolf03], @Barden05. We have classified the profiles using the presence of “breaks” in the exponential profiles of the disks, and the ratio of inner and outer scale lengths ([$h_{1}$]{}/[$h_{2}$]{}), into Types I (no apparent break), II (“downbending break”; [$h_{1}$]{}/[$h_{2}$]{}$>1$) and III (“upbending break”; [$h_{1}$]{}/[$h_{2}$]{}$<$1). This characterization has been performed in bands which, within the explored range of redshifts, track the rest-frame $B$-band. We have performed simulations on the classification and profile characterization of artificial galaxies, analogous to those performed on real galaxies. With regard to their classification, the worst results are for Type I objects, 50% of which we fail to classify correctly at the faintest magnitudes ([$z_{850}$]{}$>$22.5 mag AB), while for Types II and III the success rates are much higher ($\gtrsim$70-80%). We have also compared our results on classification and the [$R_{Br}$]{}estimate to those presented in @TP05 for a set of 32 galaxies common to our sample and theirs. 
Their results are based on the *HST*-UDF observations, which are deeper than the GOODS images. Both sets of classifications match in $\sim$70% of cases. Particularly, in the case of Type II galaxies there were only 2 cases out of 19 ($\sim$10%) that were classified differently in our work due to insufficient depth in the images. When we compare the [$R_{Br}$]{}and [$\mu_{Br}$]{}estimated in the two studies for Type II objects in common, we find no clear bias in any sense. The dispersions are $\sigma_{R}^{err}$=22% and $\sigma_{\mu}^{err}$ = 43%, which can be used as estimates of the errors in these structural parameters. In this work we put special emphasis on the study of “truncated” galaxies, i.e. those with Type II profile, with the “break” taking place in the outer part of the disk. This subsample, which we call the T-sample, is composed of 238 objects, an order of magnitude larger than any sample in previously published work at intermediate redshifts. We have studied the relation between the radius at which the “break” takes place, [$R_{Br}$]{}, and absolute $B$-band magnitude and total stellar mass for these objects. We find a clear evolution in [$R_{Br}$]{}with redshift, galaxies with the same values of luminosity/stellar mass having shorter [$R_{Br}$]{}’s than local galaxies [using as reference the results in @PT06]. We measure an increase by a factor 1.3$\pm$0.1 in [$R_{Br}$]{}between $z\sim$1 and $z\sim$0 at fixed [$M_{\star}$]{}= $10^{10}$ [$M_{\sun}$]{}, and by a factor 2.6$\pm$0.3 at fixed [$M_{B}$]{}= -20 mag, in the same range of redshifts. At the same time, there is also clear evidence for a decrease in the surface brightness level [$\mu_{Br}$]{}at which the “break” takes place on the profile in the last $\sim$8Gyr. We find that [$\mu_{Br}$]{}(in the rest-frame $B$-band) has decreased by 3.3$\pm$0.2 mag between $z\sim$1 and nowadays. This is equivalent to a decrease in surface intensity by a factor 20.9$\pm$4.2. 
Another point of interest is how [$R_{Br}$]{}relates to the scale length [$h_{1}$]{}of the disk inside the “break”. We show results on the ratio [$R_{Br}$]{}/[$h_{1}$]{}at different redshift ranges up to z$\sim$1. The median values of this ratio are significantly smaller than those found in the Local Universe (1.4$\pm$0.6 at $z\sim$1 versus 2.0$\pm$0.7 at $z\sim$0). We also find that in the highest redshift bin (0.8$<z\leq$1.1) [$R_{Br}$]{}/[$h_{1}$]{}tends to decrease moderately with [$M_{B}$]{}and [$M_{\star}$]{}, in contrast to what happens in the Local Universe and the other redshift bins. This means that, at the epoch corresponding to $z\sim$1, more massive objects had somewhat smaller difference in brightness between the central part of the disk and the break radius, than less luminous / massive objects (implying that their surface brightness profiles are flatter). Finally, we have also probed the relation between [$h_{1}$]{}and $B$-band luminosity and stellar mass. We see how the distribution of values of [$h_{1}$]{}is broader, reaching higher values, at intermediate redshifts than in the Local Universe. The mean values of [$h_{1}$]{}seem to be somewhat larger than local ones, by a factor 1.3 at most, but there is almost no evolution in them (within our statistics) between $z\sim$0.3 and $z\sim$1. We have also tested the previous relations when the samples are selected within the same ranges of [$M_{B}$]{}and/or [$M_{\star}$]{}in every redshift bin. The reported figures do not change significantly under these restrictions, and this further supports the robustness of the results. We conclude by remarking that our results are consistent with the following picture of disk evolution. 
We find that in the lapse of time that goes from $z\sim$1 to the present epoch ($\sim$8 Gyr), the radii of the “break” in truncated disks, as measured in $\sim$$B$-band, have increased noticeably, while the intensities (on the profile) at which the break takes place have substantially decreased. We interpret the [$R_{Br}$]{}, in this particular case, as a measure of the size of the disk where most of the massive star formation is taking place, as the $B$-band is more influenced by the emission from these young stars than redder bands. At a given stellar mass, the scale lengths of the disk in the part inner to the “break” were on average somewhat larger in the past, and have remained more or less constant until recently. This phenomenon could be related to the spatial distribution of star formation, which seems to be rather spread out over the disks in the images. So disk galaxies had profiles with a flatter brightness distribution in the inner part of the disk, which has grown in extension, while becoming fainter and “steeper” over time. This is consistent with at least some versions of the inside-out formation scenario for disks. We are grateful to Marco Barden for kindly providing us with the GEMS morphological analysis catalog. We acknowledge the COMBO-17 collaboration (especially Christian Wolf) for the public provision of a unique database upon which this study is based. We also thank Michael Pohlen for permitting us to use the local sample data for comparison. We also acknowledge the GOODS team for providing such a valuable and easily accessible database from which we could extract our results. We also thank the anonymous referee for an insightful and fruitful review of the manuscript, from which its quality has benefited greatly. This work is based on observations made with the NASA/ESA *Hubble Space Telescope*, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. 
Partial support has been provided by projects AYA2004-08251-C02-01 and AYA2007-67625-C02-01 of the Spanish Ministry of Education and Science, and P3/86 of the Instituto de Astrofísica de Canarias. [*Facilities:*]{} Testing the Reliability of the Characterization of Profiles {#appendix} ============================================================ In this Appendix we explain the work done on the profile characterization of synthetic galaxies, as relevant to “breaks”. We have generated 300 2D models of galaxies using GALFIT [@Peng02], which are simplified emulations of real galaxies observed in the range of redshifts we explored. The GALFIT software allows the user to build a model as the sum of a series of analytic 2D components. For each component the user can choose the 2D geometry of the isophotes, as a generalized ellipse (i.e. with “regular” appearance, but which can also adopt a “disky” or “boxy” appearance, depending on the “c” parameter), and, besides, also the law that governs the radial profile of intensity. The user can choose amongst different laws: exponential, Sérsic 1/n profiles, Gaussian and Moffat profiles, and more. As we are interested in the mid to outer parts of disks, we used only exponential behavior, without introducing bulges or other features. It is possible to reproduce a “broken” exponential disk as the composition of 2 concentric exponential disks, with the same isophote geometry, cancelling each disk in the region internal or external to the radial position of the break. Although we produce these “broken disks” as the combination of two exponential components, we then define these models as containing a single component; i.e. a disk with a “break” in its profile. All the models synthesized have only one component, with a geometry of the isophotes invariant with radius. We chose an elliptical geometry with c=0, i.e. a common ellipse (neither “disky” nor “boxy”). 
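The construction just described — two concentric exponentials, each cancelled on one side of the break, treated as a single “broken” component — can be sketched in one dimension as follows. This is our own illustrative code, not part of the pipeline; GALFIT itself operates on 2D images.

```python
import math

def broken_exponential(r, i0, h1, h2, r_br):
    """Radial intensity of a broken exponential disk: inner scale length h1
    out to the break radius r_br, outer scale length h2 beyond it.  The outer
    piece is anchored at the break intensity so the profile is continuous."""
    if r <= r_br:
        return i0 * math.exp(-r / h1)
    i_br = i0 * math.exp(-r_br / h1)          # intensity at the break
    return i_br * math.exp(-(r - r_br) / h2)

# Type II ("downbending") example, h1/h2 = 2.5 > 1: steeper beyond the break.
profile = [broken_exponential(r, 100.0, 2.0, 0.8, 5.0) for r in (0.0, 2.5, 5.0, 7.5)]
```

With $h_1/h_2>1$ this represents a Type II profile, and with $h_1/h_2<1$ a Type III one, matching the classification used throughout the paper.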
The axis ratio q for each model was randomly chosen in the range 0.5$<$q$<$1, to mimic variations in the orientation of the disks. Three kinds of models, each corresponding to a Type of profile (I, II and III), were generated. For Type I galaxies, the model follows a single exponential law. The other 2 kinds of models also have a single component, but they follow an exponential law with a slope that changes from [$h_{1}$]{}to [$h_{2}$]{}at some radius, the break radius ([$R_{Br}$]{}), using the technique described above. Depending on the ratio [$h_{1}$]{}/ [$h_{2}$]{}, with this kind of model we represent either a Type II ([$h_{1}$]{}/ [$h_{2}$]{}$>$ 1) or a Type III ([$h_{1}$]{}/ [$h_{2}$]{}$<$ 1) galaxy. We generated 300 models, 100 of each of these 3 kinds (Types I, II and III). The ranges of [$h_{1}$]{}, [$h_{2}$]{}, [$R_{Br}$]{}, [$\mu_{Br}$]{}and total magnitudes from which the parameters used to build each type of model were drawn are shown in Table \[tblSim\]. The ranges bracket the whole space in which the parameters vary in our sample of real galaxies, at all redshifts. Some further restrictions are applied to discard certain combinations of parameters, which would produce models that are either wildly unrealistic or undetectable by SExtractor: a) the ratio [$R_{Br}$]{}/ [$h_{1}$]{}is limited to be $<$ 6 ; b) the magnitude in the [$z_{850}$]{}band is within 21$\leq$[$z_{850}$]{}$\leq$24 mag ; and c) the central intensity must be above 3 $\sigma_{sky}$ of the simulated image. The 300 created objects were placed in a synthetic, noisy [$z_{850}$]{}band image with the same pixel scale, zero point and noise level as those of the GOODS data. Crowding effects were not simulated, as we have discarded in our analysis those real galaxies which are too near to other objects. SExtractor was run on the synthetic image, using the same detection parameters as those we used for real data, to produce a photometric catalog and segmentation map. 
These were then used to extract the radial profiles of the objects, and we analysed them with the same software we used for studying the real objects. Following the same criteria we used for real galaxies, one of us visually classified each model as being of Type I, II or III. If he judged there was any break in the profile, he pinpointed its radial and surface brightness coordinates in the [$\mu$-r]{}profile. The characterization test was “blind”, in the sense that the classifier did not know a priori the parameters that defined the models he was facing, and all models were “shuffled”, so as to avoid bias. The results of this simulation and the constraints they imply on the reliability of our results on analysis of real objects have been explained in section \[sec3\]. Barden, M. et al. 2005, , 635, 959 Barton, I. J., & Thompson, L. A. 1997, , 114, 655 Bauer, A. E., Drory, N., Hill, G. J., & Feulner, G. 2005, , 621L, 89 Beckwith, S. V. W. et al. 2006, , 132, 1729 Bell, E., McIntosh, D., Katz, N., & Weinberg, M. 2003, , 149, 289 Bertin, E., & Arnouts, S. 1996, , 117, 393 Bournaud, F., Elmegreen, B. G., Elmegreen, D. M. 2007, , 670, 237 Brinchmann, J. & Ellis, R. 2000, , 536, L77 Bland-Hawthorn, J., Vlajić, M., Freeman, K. C., & Draine, B. T. 2005, , 629, 239 Borch, A. 2004, Ph.D. thesis, Univ. Heidelberg de Grijs, R., Kregel, M., & Wesson, K. H. 2001, , 324, 1074 Debattista, V. P., Mayer, L., Carollo, C. M., Moore, B. Wadsley, J., & Quinn, T. 2006, , 645, 209 de Jong, R. S. et al. 2007, , 667, L49 de Vaucouleurs, G. 1959, Handb. Phys., 53, 311 Dickinson, M., Papovich, C., Ferguson, H. C. & Budávari, T. 2003, , 587, 25 Elmegreen, B. G. & Hunter, D. A. 2006, , 636, 712 Elmegreen, B. G. & Parravano, A. 1994, , 435, L121 Erwin, P., Beckman, J. E., & Pohlen, M. 2005, , 626, 81 Erwin, P., Pohlen, M., Gutiérrez, L., & Beckman, J. E. 2007, arXiv: 0712.1473 Erwin, P., Pohlen, M., & Beckman J. E. 2008, , 135, 20 Foyle, K., Courteau, S. & Thacker, R. 
2008, arXiv: 0803.2761v1 Freeman, K. C. 1970, , 160, 811 Giavalisco, M. et al. 2004, , 600L, 93 Gil de Paz, A. et al. 2005, , 627, L29 Hunter, D. A., & Elmegreen, B. G. 2006, , 162, 49 Kennicutt, R. C. 1989, , 344, 685 Matthews, L. D., & Gallagher, J. S. 1997, , 114, 1899 McIntosh, D. et al. 2005, , 632, 191 Möllenhoff, C., Popescu, C. C., & Tuffs, R. J. 2006, , 456, 941 Patterson, F. S. 1940, Harvard Coll. Obs. Bul., 914, 9 Peng, C. Y., Ho, L. C., Impey, C. D., & Rix, H.-W. 2002, , 124, 266 Pérez, I. 2004, , 427, L17 Pérez-González, P. et al. 2008, , 675, 234 Pohlen, M., Dettmar, R. J., Lütticke, R., & Aronica, G. 2002, , 392, 807 Pohlen, M., Beckman, J. E., Hüttemeister, S., Knapen, J. H., Erwin, P., & Dettmar, R.-J. 2004, in Penetrating Bars through Masks of Cosmic Dust: The Hubble Tuning Fork Strikes a New Note, ed. D. L. Block, I. Puerari, K. C. Freeman, R. Groess, & E. K. Block (Dordrecht: Springer), 731 Pohlen, M., & Trujillo, I. 2006, , 454, 759 Ravindranath, S. et al. 2004, , 604, L9 Rix, H.-W. et al. 2004, , 152, 163 Roškar, R., Debattista, V. P., Stinson, G. S., Quinn, T. R., Kaufmann, T., & Wadsley, J. 2008, , 675L, 65 Rudnick, G. et al. 2003, , 599, 847 Schaye, J. 2004, , 609, 667 Sérsic, J. L. 1968, Atlas de Galaxias Australes (Córdoba: Obs. Astron.) Shen, S., Mo, H. J., White, S. D. M., Blanton, M. R., Kauffmann, G., Voges, W., Brinkmann, J., & Csabai, I. 2003, , 343, 978 Sheth, K. et al. 2008, , 675, 1141 Thilker, D. A. et al. 2005, , 619, L79 Trujillo, I. et al. 2004, , 604, 521 Trujillo, I. & Pohlen, M. 2005, , 630, L17 Trujillo, I. et al. 2006, , 650, 18 Tuffs, R. J., Popescu, C. C., Völk, H. J., Kylafis, N. D., & Dopita, M. A. 2004, , 419, 821 van den Bosch, F. C. 2001, , 327, 1334 van der Kruit, P. C. 1979, van der Kruit, P. C. & Searle, L. 1981a, , 95, 105 van der Kruit, P. C. & Searle, L. 1981b, , 95, 106 van der Kruit, P. C. 1987, , 173, 59 Weiner, B. J., Williams, T. B., van Gorkom, J. H., & Sellwood, J. A. 2001, , 546, 916 Wolf, C. et al. 
2001, , 365, 681 Wolf, C., Meisenheimer, K., Rix, H.-W., Borch, A., Dye, S., & Kleinheinrich, M. 2003, , 401, 73 Wolf, C. et al. 2004, , 421, 913

[r|rr]{} & median & $\sigma$\
z$\sim$0 & 23.8 & 0.8\
$0.1 < z \leq 0.5$ & 22.3 & 0.98\
$0.5 < z \leq 0.8$ & 21.5 & 0.81\
$0.8 < z \leq 1.1$ & 20.6 & 0.79\

[r|rrr]{} & mean & median & $\sigma$\
z$\sim$0 & 2.12 & 2.00 & 0.73\
$0.1 < z \leq 0.5$ & 1.52 & 1.32 & 0.71\
$0.5 < z \leq 0.8$ & 1.45 & 1.38 & 0.73\
$0.8 < z \leq 1.1$ & 1.35 & 1.35 & 0.60\

[c|rrrrr]{} & [$h_{1}$]{}& [$h_{2}$]{}& [$R_{Br}$]{}& [$\mu_{Br}$]{}& Total Flux\
Type (Number) & arcsec & arcsec & arcsec & [mag/$arcsec^{2}$]{}& magnitudes\
I (100) & (0.1”, 0.7”) & & & & (21, 24)\
II (100) & (0.1”, 1.7”) & (0.1”, 0.5”) & (0.2”, 1.7”) & (22, 26) & (21, 24)\
III (100) & (0.1”, 0.4”) & (0.1”, 0.7”) & (0.2”, 1.2”) & (23, 26) & (21, 24)\

[r|rrrrrr]{} & I & II & III & Raw Total & Discarded & Net Total\
$0.1 < z \leq 0.5$ & 17 (25$\pm$6%) & 40 (59$\pm$9%) & 10 (15$\pm$5%) & 81 & 14 & 67\
$0.5 < z \leq 0.8$ & 77 (33$\pm$2%) & 135 (58$\pm$5%) & 22 (9$\pm$2%) & 273 & 39 & 234\
$0.8 < z \leq 1.1$ & 52 (39$\pm$5%) & 67 (50$\pm$6%) & 15 (11$\pm$3%) & 151 & 17 & 134\
Sum & 146 & 242 & 47 & 505 & 70 & 435\

[^1]: <http://www.stsci.edu/ftp/science/goods/>

[^2]: This catalog was obtained by detecting sources in the [$z_{850}$]{}band which had at least 16 contiguous pixels (0.014 $arcsec^{2}$) at an isophotal level of 0.6 sky $\sigma$ (25.35 [mag/$arcsec^{2}$]{}) or higher. These photometry parameters are the same as employed by the GOODS team for producing their catalogs. Nonetheless, we wanted to have segmentation maps for each object, as produced by SExtractor, but the GOODS team did not release these files, and so we had to do our own, though equivalent, photometry.

[^3]: <http://iraf.noao.edu>

[^4]: <http://hea-www.harvard.edu/RD/ds9/>

[^5]: <http://www.python.org>
--- abstract: 'A *Howe curve* is a curve of genus $4$ obtained as the fiber product of two genus-$1$ double covers of $\bbP^1$. In this paper, we present a simple algorithm for testing isomorphism of Howe curves, and we propose two main algorithms for finding and enumerating these curves: One involves solving multivariate systems coming from Cartier–Manin matrices, while the other uses Richelot isogenies of curves of genus $2$. Comparing the two algorithms by implementation and by complexity analyses, we conclude that the latter enumerates curves more efficiently. Using these algorithms, we show that there exist superspecial curves of genus $4$ in characteristic $p$ for every prime $p$ with $7 < p < 20000$.' address: - 'Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8656, Japan' - 'Graduate School of Environment and Information Sciences, Yokohama National University, 79-7, Tokiwadai, Hodogaya-ku, Yokohama 240-8501, Japan' - 'Unaffiliated mathematician, San Diego, CA 92104' author: - Momonari Kudo - Shushi Harashita - 'Everett W. Howe' bibliography: - 'ANTS.bib' date: 29 July 2020 title: | Algorithms to enumerate\ superspecial Howe curves of genus 4 ---
--- abstract: 'Given an element in a finite-dimensional real vector space, $V$, that is a nonnegative linear combination of basis vectors for some basis $B$, we compute the probability that it is furthermore a nonnegative linear combination of basis vectors for a second basis, $A$. We then apply this general result to combinatorially compute the probability that a symmetric function is Schur-positive ([recovering the recent result of Bergeron–Patrias–Reiner]{}), $e$-positive or $h$-positive. Similarly we compute the probability that a quasisymmetric function is quasisymmetric Schur-positive or fundamental-positive. In every case we conclude that the probability tends to zero as the degree of a function tends to infinity.' address: - ' Laboratoire de Combinatoire et d’Informatique Mathématique, Université du Québec à Montréal, Montréal QC H3C 3P8, Canada' - ' Department of Mathematics, University of British Columbia, Vancouver BC V6T 1Z2, Canada' author: - Rebecca Patrias - Stephanie van Willigenburg title: The probability of positivity in symmetric and quasisymmetric functions --- Introduction {#sec:intro} ============ The subject of when a symmetric function is Schur-positive, that is, a nonnegative linear combination of Schur functions, is an active area of research. If a homogeneous symmetric function of degree $n$ is Schur-positive, then it is the image of some representation of the symmetric group ${\mathfrak{S}}_n$ under the Frobenius characteristic map. Furthermore, if it is a polynomial, then it is the character of a polynomial representation of the general linear group $GL(n,{\mathbb{C}})$. Consequently, much effort has been devoted to determining when the difference of two symmetric functions is Schur-positive, for example [@BBR06; @FFLP05; @KWvW08; @Kir04; @lpp; @McN08; @McvW09b; @Oko97]. While this question is still wide open in full generality, there exist well-known examples of Schur-positive functions. 
These include the product of two Schur functions, skew Schur functions, and the chromatic symmetric function of the incomparability graph of $(3+1)$-free posets [@Gasharov], the latter of which are further conjectured to be $e$-positive, that is, a nonnegative linear combination of elementary symmetric functions [@StanleyStembridge]. One other well-known example that is conjectured to be Schur-positive is the bigraded Frobenius characteristic of the space of diagonal harmonics [@HHLRU], which was recently proved to be fundamental-positive, namely, a nonnegative linear combination of fundamental quasisymmetric functions [@CarlssonMellit]. This result is better known as the proof of the shuffle conjecture. Quasisymmetric functions, a natural generalization of symmetric functions, are further related to positivity via representation theory since the 1-dimensional representations of the 0-Hecke algebra map to fundamental quasisymmetric functions under the quasisymmetric characteristic map [@DKLT]. There also exist 0-Hecke modules whose quasisymmetric characteristic map images are quasisymmetric Schur functions [@0Hecke]. Additionally, if a quasisymmetric function is both symmetric and a nonnegative linear combination of quasisymmetric Schur functions, then it is Schur-positive [@SSQSS]. While nonnegative linear combinations of quasisymmetric functions are not as extensively studied, some progress has been made in this direction, for example [@AlexSulz; @BLvW; @SSQSS; @LamP; @McN14], and this area is ripe for study. This paper is structured as follows. 
[In Theorem \[thm:genthm\], we calculate the probability that an element of a vector space that is a nonnegative linear combination of basis elements is also a nonnegative linear combination of the elements of a second basis, where the bases satisfy certain conditions.]{} In Section \[sec:probsym\], we then apply this theorem and compute the probability that a symmetric function is Schur-positive or $e$-positive in Corollaries \[cor:smprob\], \[cor:ehsprob\], \[cor:emprob\]. We show that these probabilities tend to 0 as the degree of the function tends to infinity in Corollaries \[cor:sm0\], \[cor:ehs0\], \[cor:em0\]. We then apply Theorem \[thm:genthm\] again in Section \[sec:probqsym\] to compute the probability that a quasisymmetric function is quasisymmetric Schur-positive or fundamental-positive in Corollaries \[cor:SMprob\], \[cor:SFprob\], \[cor:FMprob\], and similarly show these probabilities tend to 0 in Corollaries \[cor:SM0\], \[cor:SF0\], \[cor:FM0\]. The probability of vector positivity {#sec:probvec} ==================================== Let $V$ be a finite-dimensional real vector space with bases $A=\{A_0,\ldots,A_d\}$ and $B=\{B_0,\ldots,B_d\}$, and suppose further that $$A_j=\sum_{i\leq j} a_i^{(j)}B_i,$$ where $a_j^{(j)}=1$ and $a_i^{(j)}\geq 0$. In particular, note that $A_0=B_0$. We say that $f\in V$ is *$A$-positive* (respectively, *$B$-positive*) if $f$ is a nonnegative linear combination of $\{A_0,\ldots,A_d\}$ (respectively, $\{B_0,\ldots,B_d\}$). We would like to answer the following question: What is the probability that if $f\in V$ is $B$-positive, then it is furthermore $A$-positive? We denote this probability by ${\mathbb{P}}(A_i {\;|\;}B_i)$ and note that any $A$-positive $f\in V$ will also necessarily be $B$-positive. 
In order to calculate ${\mathbb{P}}(A_i {\;|\;}B_i)$, observe that any $B$-positive $f\in V$ can be written as $$f=\sum_{i=0}^d b_iB_i,$$ where each $b_i\geq 0$, and the set of all $B$-positive elements of $V$ forms a cone $${B^+_{\text{cone}}}=\left\{\sum_{i=0}^d b_iB_i {\;|\;}b_i\in\mathbb{R}_{\geq0}\right\}.$$ Inside the cone ${B^+_{\text{cone}}}$ is the cone of $A$-positive elements of $V$ $${A^+_{\text{cone}}}=\left\{\sum_{i=0}^db_iB_i {\;|\;}b_i\in\mathbb{R}_{\geq0}\text{ and the expression is $A$-positive}\right\}.$$ We define ${\mathbb{P}}(A_i {\;|\;}B_i)$ to be the ratio of the volume of the slice of $A^+_{\text{cone}}$ defined by $${A^+_{\text{slice}}}=\left\{\sum_{i=0}^d b_iB_i {\;|\;}b_i\in\mathbb{R}_{\geq0}\text{, the expression is $A$-positive, and }\sum_{i=0}^db_i=1\right\}$$ to the volume of the slice of $B^+_{\text{cone}}$ $${B^+_{\text{slice}}}=\left\{\sum_{i=0}^d b_iB_i {\;|\;}b_i\in\mathbb{R}_{\geq0}\text{ and }\sum_{i=0}^db_i=1\right\}.$$ We could, equivalently, replace “1” in both definitions with any positive real number and obtain the same ratio. Note that this probability will depend on the choice of bases $\{A_0,\ldots,A_d\}$ and $\{B_0,\ldots,B_d\}$; however, each application of the following theorem (see Corollaries \[cor:smprob\], \[cor:ehsprob\], \[cor:emprob\], \[cor:SMprob\], \[cor:SFprob\], and \[cor:FMprob\]) comes with a natural choice of bases, and the asymptotics we explore (see Corollaries \[cor:sm0\], \[cor:ehs0\], \[cor:em0\], \[cor:SM0\], \[cor:SF0\], and \[cor:FM0\]) do not depend on this choice. The existence of the general statement below was suggested by F. Bergeron and its proof inspired by a conversation with V. Reiner about Schur-positivity. \[thm:genthm\] Let $\{A_0,\ldots,A_d\}$ and $\{B_0,\ldots,B_d\}$ be bases of a finite-dimensional real vector space $V$ such that $$A_j=\sum_{i\leq j} a_i^{(j)}B_i,$$ where $a_j^{(j)}=1$ and $a_i^{(j)}\geq 0$, so in particular, $A_0=B_0$. 
Then $${\mathbb{P}}(A_i{\;|\;}B_i)=\prod_{j=0}^d\left(\sum_{i=0}^j a_i^{(j)}\right)^{-1}.$$ Consider $${B^+_{\text{slice}}}=\left\{\sum_{i=0}^d b_iB_i {\;|\;}b_i\in\mathbb{R}_{\geq0}\text{ and }\sum_{i=0}^db_i=1\right\}$$ and the corresponding slice of ${A^+_{\text{cone}}}$ $${A^+_{\text{slice}}}=\left\{\sum_{i=0}^d b_iB_i {\;|\;}b_i\in\mathbb{R}_{\geq0}\text{, the expression is $A$-positive, and }\sum_{i=0}^db_i=1\right\}.$$ Note that ${B^+_{\text{slice}}}$ is the simplex determined by vertices $B_0,\ldots,B_d$. Define vectors $v_1,\ldots,v_d$ by $v_i=B_i-B_0$ for $1\leq i\leq d$. Then the volume of ${B^+_{\text{slice}}}$ is by definition $$\frac{1}{d!}\lvert \det(v_1,\ldots,v_d)\rvert.$$ The simplex ${A^+_{\text{slice}}}$ is determined by vertices $\left\{\left(\sum_i a_i^{(j)}\right)^{-1}A_j\right\}_{0\leq j\leq d}$. To find its volume, we first define vectors $w_1,\ldots,w_d$ by $$w_j=\left(\sum_i a_i^{(j)}\right)^{-1}A_j-A_0.$$ We then see that $$\begin{aligned} w_j&=\frac{1}{\sum_ia_i^{(j)}}A_j-A_0\\ &= \frac{1}{\sum_ia_i^{(j)}}\left(B_j+a^{(j)}_{j-1}B_{j-1}+\cdots+a^{(j)}_0B_0\right)-B_0\\ &=\frac{1}{\sum_ia_i^{(j)}}\left(B_j+a^{(j)}_{j-1}B_{j-1}+\cdots+a^{(j)}_0B_0\right)-\frac{1}{\sum_ia_i^{(j)}}\left(B_0+a^{(j)}_{j-1}B_0+\cdots+a_0^{(j)}B_0\right)\\ &= \frac{1}{\sum_ia_i^{(j)}}\left(v_j+a^{(j)}_{j-1}v_{j-1}+\cdots+a^{(j)}_1v_1\right).\end{aligned}$$ Thus the volume of ${A^+_{\text{slice}}}$, namely the simplex determined by vertices $\left\{\frac{1}{\sum_ia_i^{(j)}}A_j\right\}$, is $$\begin{aligned} \frac{1}{d!}\lvert\det(w_1,\ldots,w_d)\rvert&=\frac{1}{d!}\left\lvert\det\left(\frac{1}{\sum_i a_i^{(1)}}v_1,\frac{1}{\sum_ia^{(2)}_i}(v_2+a^{(2)}_1v_1),\ldots,\frac{1}{\sum_ia_i^{(d)}}(v_d+\cdots+a^{(d)}_1v_1)\right)\right\rvert\\ &=\frac{1}{d!}\prod_{j}\frac{1}{\sum_ia_i^{(j)}} \left\lvert\det(v_1,\ldots,v_d)\right\rvert.\end{aligned}$$ The result now follows, since by definition we have that $${\mathbb{P}}(A_i {\;|\;}B_i) =\frac{\mbox{volume of } 
{A^+_{\text{slice}}}}{\mbox{volume of } {B^+_{\text{slice}}}}.$$ Probabilities of symmetric function positivity {#sec:probsym} ============================================== Before we define the various symmetric functions that will be of interest to us, we need to recall some combinatorial concepts. A *partition* $\lambda = (\lambda _1, \ldots , \lambda _k)$ of $n$, denoted by $\lambda \vdash n$, is a list of positive integers whose *parts* $\lambda _i$ satisfy $\lambda _1 \geq \cdots \geq \lambda _k$ and $\sum _{i=1}^k \lambda _i = n$. If $\lambda _{m+1} = \cdots = \lambda _{m+j} = i$, then we often abbreviate this run to $i^j$. There exist two total orders on partitions of $n$, which will be useful to us. The first of these is *lexicographic order*, which states that given partitions $\lambda = (\lambda _1, \ldots , \lambda _k)$ and $\mu = (\mu _1, \ldots , \mu _\ell)$ we say that $\mu$ is lexicographically smaller than $\lambda$, denoted by $\mu {<_{\mathit{lex}}}\lambda$, if $\mu \neq \lambda$ and the first $i$ for which $\mu _i \neq \lambda _i$ satisfies $\mu _i < \lambda _i$. The second is the closely related *reverse lexicographic order*, where we say that $\mu$ is reverse lexicographically smaller than $\lambda$, denoted by $\mu {<_{\mathit{revlex}}}\lambda$ if and only if $\mu {>_{\mathit{lex}}}\lambda$. \[ex:lex\] The partitions of 4 in lexicographic order are $$(1^4){<_{\mathit{lex}}}(2,1^2){<_{\mathit{lex}}}(2^2){<_{\mathit{lex}}}(3,1) {<_{\mathit{lex}}}(4).$$ Given a partition $\lambda = (\lambda _1, \ldots , \lambda _k)$ and commuting variables $\{x_1, x_2, \ldots \}$, we define the *monomial symmetric function* $m_\lambda$ to be $$m_\lambda = \sum x_{i_1} ^{\lambda _1} \cdots x_{i_k} ^{\lambda _k}$$where the sum is over all $k$-tuples $(i_1, \ldots , i_k)$ of distinct indices that yield distinct monomials. 
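Theorem \[thm:genthm\] can be sanity-checked numerically. The sketch below (ours, a hypothetical two-dimensional instance with $A_0=B_0$ and $A_1 = B_1 + aB_0$) compares the closed-form probability with a direct count over the simplex $b_0+b_1=1$:

```python
# Theorem genthm predicts P(A|B) = prod_j (sum_i a_i^(j))^(-1) = 1/(1*(1+a)).
a = 2.0
column_sums = [1.0, 1.0 + a]           # sums of a_i^(j) for j = 0, 1
prob_formula = 1.0
for s in column_sums:
    prob_formula /= s

# Direct check: f = b0*B0 + b1*B1 is A-positive iff writing f = c0*A0 + c1*A1
# gives c1 = b1 and c0 = b0 - a*b1 >= 0, i.e. b1 <= 1/(1+a) on the simplex.
n = 200000
hits = sum(1 for k in range(n) if (1.0 - k / n) - a * (k / n) >= 0)
prob_direct = hits / n

assert abs(prob_formula - prob_direct) < 1e-2   # both ~ 1/3 for a = 2
```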
\[ex:m\] We see that $m_{(2,1)} = x_1^2 x_2 + x_2^2 x_1 + x_1^2 x_3 + x_3^2 x_1 + \cdots .$ The set of all monomial symmetric functions forms a basis for the graded algebra of symmetric functions $${\ensuremath{\operatorname{Sym}}}= \bigoplus _{n\geq 0} {\ensuremath{\operatorname{Sym}}}^n \subseteq {\mathbb{R}}[[x_1, x_2, \ldots ]]$$where ${\ensuremath{\operatorname{Sym}}}^0 = {\operatorname{span}}\{1\}$ and ${\ensuremath{\operatorname{Sym}}}^n = {\operatorname{span}}\{m_\lambda {\;|\;}\lambda \vdash n\}$ for $n\geq 1$. Hence each graded piece ${\ensuremath{\operatorname{Sym}}}^n$ for $n\geq 1$ is a finite-dimensional real vector space with basis $ \{m_\lambda {\;|\;}\lambda \vdash n\}$. For our second required basis we need Young diagrams and Young tableaux. Given a partition $\lambda = (\lambda _1, \ldots , \lambda _k)\vdash n$, we call the array of $n$ left-justified boxes with $\lambda _i$ boxes in row $i$ from the top, for $1\leq i \leq k$, the *Young diagram* of $\lambda$, also denoted by $\lambda$. Given a Young diagram we say that $T$ is a *semistandard Young tableau (SSYT)* of *shape* $\lambda$ if the boxes of $\lambda$ are filled with positive integers such that 1. the entries in each row weakly increase when read from left to right, 2. the entries in each column strictly increase when read from top to bottom. Two SSYTs of shape $(2,1)$ can be seen below in Example \[ex:sasm\]. Given an SSYT $T$ we define the *content* of $T$, denoted by ${\mathrm{content}}(T)$, to be the list of nonnegative integers $${\mathrm{content}}(T) = (c_1, \ldots , c_{{\mathit{max}}})$$where $c_i$ is the number of times that $i$ appears in $T$ and ${\mathit{max}}$ is the largest integer appearing in $T$. We say that an SSYT is of *partition content* if $c_1\geq \cdots \geq c_{{\mathit{max}}}>0$. 
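The two SSYT conditions are finite checks, so for small shapes all SSYTs with bounded entries can be listed exhaustively. The sketch below is our own illustration (not from the paper; all function names are ours); it enumerates SSYTs of shape $(2,1)$ with entries at most 3 and records their contents.

```python
from itertools import product

def ssyts(shape, max_entry):
    """All SSYTs of the given partition shape with entries in {1, ..., max_entry}."""
    cells = [(r, c) for r, width in enumerate(shape) for c in range(width)]
    found = []
    for vals in product(range(1, max_entry + 1), repeat=len(cells)):
        t = dict(zip(cells, vals))
        # rows weakly increase left to right; columns strictly increase top to bottom
        rows_ok = all(t[(r, c)] <= t[(r, c + 1)] for (r, c) in t if (r, c + 1) in t)
        cols_ok = all(t[(r, c)] < t[(r + 1, c)] for (r, c) in t if (r + 1, c) in t)
        if rows_ok and cols_ok:
            found.append(t)
    return found

def content(t):
    """content(T) = (c_1, ..., c_max) as defined above."""
    m = max(t.values())
    counts = [0] * m
    for v in t.values():
        counts[v - 1] += 1
    return tuple(counts)

tableaux = ssyts((2, 1), 3)
print(len(tableaux))                                        # 8
print(sum(1 for t in tableaux if content(t) == (1, 1, 1)))  # 2
```

The second count, 2, is the pair of SSYTs of shape $(2,1)$ and content $(1,1,1)$ referred to above.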
With this in mind, if $\lambda$ and $\mu$ are partitions, then we define the *Schur function* $s_\lambda$ to be $$\label{eq:sasm}s_\lambda = m_\lambda + \sum _{\mu{<_{\mathit{lex}}}\lambda} K_{\lambda\mu} m_\mu$$where $K_{\lambda\mu}$ is the number of SSYTs, $T$, of shape $\lambda$ and ${\mathrm{content}}(T)=\mu$. \[ex:sasm\] We see $s_{(2,1)} = m_{(2,1)} + 2 m_{(1,1,1)}$ from the following two SSYTs arising from the nonleading term. $$\begin{matrix}1&2\\3\end{matrix}\qquad \begin{matrix}1&3\\2\end{matrix}$$ We now define the *$i$-th complete homogeneous symmetric function* to be $$h_i=s_{(i)}$$and if $\lambda = (\lambda _1, \ldots , \lambda _k)$ is a partition then we define the *complete homogeneous symmetric function* $h_\lambda$ to be $$h_\lambda = h_{\lambda _1}\cdots h_{\lambda _k} = s_{(\lambda _1)}\cdots s_{(\lambda _k)}.$$Similarly, we define the *$i$-th elementary symmetric function* to be $$e_i=s_{(1^i)}$$and if $\lambda = (\lambda _1, \ldots , \lambda _k)$ is a partition then we define the *elementary symmetric function* $e_\lambda$ to be $$e_\lambda = e_{\lambda _1}\cdots e_{\lambda _k} = s_{(1^{\lambda _1})}\cdots s_{(1^{\lambda _k})}.$$ We have that $\{m_\lambda {\;|\;}\lambda \vdash n\}$, $\{s_\lambda {\;|\;}\lambda \vdash n\}$, $ \{h_\lambda {\;|\;}\lambda \vdash n\}$ and $\{e_\lambda {\;|\;}\lambda \vdash n\}$ are all bases of
${\ensuremath{\operatorname{Sym}}}^n$ for $n\geq1$. Additionally, if $f\in {\ensuremath{\operatorname{Sym}}}$ is a nonnegative linear combination of elements in these bases, then using the vernacular we say that $f$ is, respectively, *monomial-, Schur-, $h$-* or *$e$-positive*. Also, in the following results, we use the notation ${\mathbb{P}}_n(\cdot{\;|\;}\cdot )$ to denote that the probability is being calculated in ${\ensuremath{\operatorname{Sym}}}^n$ for $n\geq 1$. Considering its importance, our first result shows how rare it is for a monomial-positive symmetric function to furthermore be Schur-positive. This statement was previously determined by Bergeron–Patrias–Reiner using a proof method similar to that of Theorem \[thm:genthm\]; the statement is given without proof in [@Patrias]. \[cor:smprob\][@Patrias] Let $\mathcal{K}_\lambda$ denote the number of SSYTs of shape $\lambda$ and partition content. Then $${\mathbb{P}}_n(s_\lambda{\;|\;}m_\lambda)=\prod_{\lambda\vdash n}(\mathcal{K}_\lambda)^{-1}.$$ The result follows from Theorem \[thm:genthm\] by first setting $A_0=s_{(1^n)}=m_{(1^n)}=B_0$, and ordering the basis elements in increasing order by taking their indices in lexicographic order. Then use Equation \[eq:sasm\] along with $K_{\lambda\mu}=0$ if $\lambda {<_{\mathit{lex}}}\mu$ and $K_{\lambda\lambda}=1$ [@EC2 Proposition 7.10.5]. \[ex:smprob\] For $n=3$ we have that $\mathcal{K}_{(3)}=3$, $\mathcal{K}_{(2,1)}=3$ and $\mathcal{K}_{(1,1,1)}=1$ from the following SSYTs.
$$\begin{matrix}1&1&1\end{matrix}\quad \begin{matrix}1&1&2\end{matrix}\quad \begin{matrix}1&2&3\end{matrix} \qquad \begin{matrix}1&1\\2\end{matrix} \quad \begin{matrix}1&2\\3\end{matrix}\quad \begin{matrix}1&3\\2\end{matrix}\qquad \begin{matrix}1\\2\\3\end{matrix}$$ Hence, $${\mathbb{P}}_3 (s_\lambda {\;|\;}m_\lambda)= \left( \frac{1}{3} \right)\left( \frac{1}{3} \right)\left( \frac{1}{1} \right) = \frac{1}{9}.$$ \[cor:sm0\] We have that $$\lim_{n\to\infty}{\mathbb{P}}_n(s_\lambda{\;|\;}m_\lambda)=0.$$ Let $\lambda=(\lambda_1,\ldots,\lambda_k)$ be a partition of $n$, and consider the following two fillings of $\lambda$. For the first, fill the boxes in the top row with $1,\ldots,\lambda_1$ from left to right, the second row with $\lambda_1+1,\ldots,\lambda_1+\lambda_2$, etc. For the second, fill the boxes in row $i$ from the top with $i$ for $1\leq i \leq k$. For $\lambda\neq (1^n)$, these fillings are distinct, and thus $\mathcal{K}_\lambda\geq 2$. It follows that $$0\leq \prod_{\lambda\vdash n}(\mathcal{K}_\lambda)^{-1}\leq \frac{1}{2^{p(n)-1}},$$ where $p(n)$ denotes the number of partitions of $n$, and hence $$0\leq \lim_{n\to\infty}\prod_{\lambda\vdash n}(\mathcal{K}_\lambda)^{-1}\leq \lim_{n\to\infty}\frac{1}{2^{p(n)-1}}=0.$$ \[cor:ehsprob\] Let $\mathcal{E}_\lambda$ be the number of SSYTs with content $\lambda$.
Then $${\mathbb{P}}_n(e_\lambda {\;|\;}s_\lambda)={\mathbb{P}}_n(h_\lambda{\;|\;}s_\lambda)=\prod_{\lambda\vdash n}(\mathcal{E}_\lambda)^{-1}.$$ By [@EC2 Proposition 7.10.5 and Corollary 7.12.4] we have that $$\label{eq:hass}h_\lambda = s_\lambda + \sum _{\mu{>_{\mathit{lex}}}\lambda} K_{\mu\lambda} s_\mu.$$The result for ${\mathbb{P}}_n(h_\lambda{\;|\;}s_\lambda)$ now follows from Equation \[eq:hass\] and Theorem \[thm:genthm\], along with $K_{\mu\lambda}=0$ if $\mu{<_{\mathit{lex}}}\lambda$ and $K_{\lambda\lambda}=1$ [@EC2 Proposition 7.10.5], by setting $A_0=h_n=s_{(n)}=B_0$ and by ordering the basis elements in increasing order by taking their indices in reverse lexicographic order. The result for ${\mathbb{P}}_n(e_\lambda {\;|\;}s_\lambda)$ now follows from applying to Equation \[eq:hass\] the involution $\omega$, which acts as a bijection from Schur-positive functions that are $h$-positive to Schur-positive functions that are $e$-positive, and which satisfies $$\omega(h_\lambda)=e_\lambda \mbox{ and } \omega(s_\lambda)=s_{\lambda '}$$where $\lambda ' $ is the transpose of $\lambda$, that is, the partition whose parts are obtained from $\lambda$ with maximum part ${\mathit{max}}(\lambda)$ by letting $\lambda _i '$ be the number of parts of $\lambda$ that are $\geq i$, for $1\leq i \leq {\mathit{max}}(\lambda)$. \[ex:ehsprob\] For $n=3$ we have that $\mathcal{E}_{(3)}=1$, $\mathcal{E}_{(2,1)}=2$ and $\mathcal{E}_{(1,1,1)}=4$ from the following SSYTs.
$$\begin{matrix}1&1&1\end{matrix}\qquad \begin{matrix}1&1&2\end{matrix}\qquad \begin{matrix}1&1\\2\end{matrix} \qquad \begin{matrix}1&2&3\end{matrix} \qquad \begin{matrix}1&2\\3\end{matrix}\qquad \begin{matrix}1&3\\2\end{matrix}\qquad \begin{matrix}1\\2\\3\end{matrix}$$ Hence, $${\mathbb{P}}_3 (e_\lambda {\;|\;}s_\lambda)= {\mathbb{P}}_3 (h_\lambda {\;|\;}s_\lambda)=\left( \frac{1}{1} \right)\left( \frac{1}{2} \right)\left( \frac{1}{4} \right) = \frac{1}{8}.$$ \[cor:ehs0\] We have that $$\lim_{n\to\infty}{\mathbb{P}}_n(e_\lambda{\;|\;}s_\lambda)=\lim_{n\to\infty}{\mathbb{P}}_n(h_\lambda{\;|\;}s_\lambda)=0.$$ As in the proof of Corollary \[cor:sm0\], the result will follow from showing that $\mathcal{E}_\lambda\geq 2$ for all $\lambda\neq (n)$. Indeed, first consider the tableau $T$ of shape $\lambda=(\lambda_1,\ldots,\lambda_k)$ with the boxes in row $i$ from the top filled with $i$ for $1\leq i \leq k$. Second, consider the tableau of shape $(\lambda_1+1,\lambda_2,\ldots,\lambda_{k-1},\lambda_k-1)$ obtained from $T$ by moving the rightmost box filled with $k$ from row $k$ to row 1. These are distinct for all $\lambda\neq (n)$, hence $\mathcal{E}_\lambda\geq 2$ for all $\lambda\neq (n)$. One can form a square matrix with the $K_{\lambda\mu}$ (known as the *Kostka numbers*), where $\lambda$ and $\mu$ vary over all partitions of $n$, and rows and columns are ordered in lexicographic order. Then $\mathcal{K}_\lambda$ and $\mathcal{E}_\lambda$ may be interpreted as a row sum and as a column sum of this matrix, respectively. Since elementary symmetric functions are Schur-positive, and Schur-positive functions are in turn monomial-positive, it is natural to compute the following. \[cor:emprob\] Let $\mathcal{M}_\lambda$ be the number of (0,1)-matrices with row sum $\lambda$ and column sum a partition.
Then $${\mathbb{P}}_n(e_\lambda{\;|\;}m_\lambda)=\prod_{\lambda\vdash n}(\mathcal{M}_\lambda)^{-1}.$$ Let $A_0=e_n=m_{(1^n)}=B_0$. Order the basis elements of $A$ in increasing order by taking their indices in reverse lexicographic order. Order the basis elements of $B$ in increasing order by taking the transpose, as in the proof of Corollary \[cor:ehsprob\], of their indices in reverse lexicographic order. By [@EC2 Proposition 7.4.1 and Theorem 7.4.4] we have that $$e_\lambda={m_{\lambda '}} + \sum_{\mu '{<_{\mathit{revlex}}}\lambda}M_{\lambda\mu}m_\mu,$$ where $\mu '$ is the transpose of $\mu$ as in the proof of Corollary \[cor:ehsprob\], and $M_{\lambda\mu}$ is the number of (0,1)-matrices whose row sums give the parts of $\lambda$ and whose column sums give the parts of $\mu$. The result now follows from Theorem \[thm:genthm\] along with $M_{\lambda\mu}=0$ if $\lambda {<_{\mathit{lex}}}\mu '$ and $M_{\lambda\lambda '}=1$ [@EC2 Theorem 7.4.4]. \[ex:emprob\] For $n=3$ we have that $\mathcal{M}_{(3)}=1$, $\mathcal{M}_{(2,1)}=4$ and $\mathcal{M}_{(1,1,1)}=10$ from the six $3\times 3$ permutation matrices, the matrix $\begin{pmatrix}1&1\\1&0\end{pmatrix}$ and the following four matrices and their transposes. $$\begin{pmatrix}1&1&1\end{pmatrix}\quad \begin{pmatrix}1&1&0\\0&0&1\end{pmatrix}\quad \begin{pmatrix}1&0&1\\0&1&0\end{pmatrix}\quad \begin{pmatrix}0&1&1\\1&0&0\end{pmatrix}$$ Hence, $${\mathbb{P}}_3 (e_\lambda {\;|\;}m_\lambda)= \left( \frac{1}{1} \right)\left( \frac{1}{4} \right)\left( \frac{1}{10} \right) = \frac{1}{40}.$$ \[cor:em0\] We have that $$\lim_{n\to\infty}{\mathbb{P}}_n(e_\lambda{\;|\;}m_\lambda)=0.$$ As in the proof of Corollary \[cor:sm0\], the result will follow if we show that $\mathcal{M}_\lambda\geq 2$ for all $\lambda=(\lambda_1,\ldots,\lambda_k)\neq (n)$.
Consider the matrix where the first $\lambda_1$ columns have a 1 in row 1 and 0’s everywhere else, the next $\lambda_2$ columns have a 1 in row 2 and 0’s everywhere else, the next $\lambda_3$ columns have a 1 in row 3 and 0’s everywhere else, etc. We obtain a second valid (0,1)-matrix by swapping column $\lambda_1$ with column $\lambda_1+1$. Probabilities of quasisymmetric function positivity {#sec:probqsym} =================================================== We now turn our attention to quasisymmetric functions, and again begin by recalling pertinent combinatorial concepts. A *composition* $\alpha = (\alpha _1, \ldots , \alpha _k)$ of $n$, denoted by $\alpha \vDash n$, is a list of positive integers whose *parts* $\alpha _i$ sum to $n$. Observe that every composition $\alpha$ determines a partition $\lambda(\alpha)$, which is obtained by reordering the parts of $\alpha$ into weakly decreasing order. Also recall the bijection between compositions of $n$ and subsets of $[n-1] = \{1, \ldots , n-1\}$. Namely, given $\alpha = (\alpha _1, \ldots , \alpha _k) \vDash n$, its corresponding set is ${\mathrm{set}}(\alpha) = \{\alpha _1, \alpha _1 +\alpha _2, \ldots , \alpha _1 + \cdots + \alpha _{k-1}\} \subseteq [n-1]$. Conversely, given $S=\{s_1, \ldots , s_{k-1}\} \subseteq [n-1]$ its corresponding composition is ${\mathrm{comp}}(S) = (s_1, s_2 - s_1, \ldots , n-s_{k-1}) \vDash n$. Lastly, the empty set is in bijection with $(n)$. We again use the abbreviation $i^j$ to mean $j$ consecutive parts equal to $i$, and extend the definition of lexicographic order from the previous section for partitions to encompass compositions. We then use this extension to define a total order on compositions of $n$. Given compositions $\alpha, \beta$ we say $\beta {\blacktriangleleft}\alpha$ if $\lambda (\beta) {<_{\mathit{lex}}}\lambda (\alpha)$ or $\lambda (\beta) = \lambda (\alpha)$ and $\beta {<_{\mathit{lex}}}\alpha$.
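The order ${\blacktriangleleft}$ just defined can be realized as a two-part sort key: compare $\lambda(\alpha)$ lexicographically first, then break ties by comparing $\alpha$ itself lexicographically. The following sketch is our own illustration (not from the paper; both function names are ours); it generates all compositions of $n$ via the subset bijection and sorts them.

```python
def btr_key(alpha):
    """Sort key realizing the order: (lambda(alpha), alpha), compared lexicographically."""
    return (tuple(sorted(alpha, reverse=True)), tuple(alpha))

def compositions(n):
    """All compositions of n, one per subset of [n-1]."""
    result = []
    for mask in range(2 ** (n - 1)):
        s = [i + 1 for i in range(n - 1) if mask >> i & 1]
        bounds = [0] + s + [n]
        result.append(tuple(bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)))
    return result

print(sorted(compositions(4), key=btr_key))
# [(1, 1, 1, 1), (1, 1, 2), (1, 2, 1), (2, 1, 1), (2, 2), (1, 3), (3, 1), (4,)]
```

The printed chain is exactly $(1^4){\blacktriangleleft}(1^2,2){\blacktriangleleft}(1,2,1){\blacktriangleleft}(2,1^2){\blacktriangleleft}(2^2){\blacktriangleleft}(1,3){\blacktriangleleft}(3,1){\blacktriangleleft}(4)$.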
\[ex:btr\] The compositions of 4 in ${\blacktriangleleft}$ order are $$(1^4){\blacktriangleleft}(1^2, 2) {\blacktriangleleft}(1,2,1) {\blacktriangleleft}(2, 1^2) {\blacktriangleleft}(2^2) {\blacktriangleleft}(1,3) {\blacktriangleleft}(3,1) {\blacktriangleleft}(4).$$ There is also a partial order on compositions of $n$, which will be useful later. Given compositions $\alpha, \beta$ we say that $\alpha$ is a *proper coarsening* of $\beta$ (or $\beta$ is a *proper refinement* of $\alpha$) denoted by $\beta {\prec}\alpha$ if we can obtain $\alpha$ by nontrivially adding together adjacent parts of $\beta$. For example, $(1,2,1){\prec}(1,3)$. Observe that $\beta {\prec}\alpha$ if and only if ${\mathrm{set}}(\alpha) \subset {\mathrm{set}}(\beta)$. Now, similar to the previous section, given a composition $\alpha = (\alpha _1 ,\ldots , \alpha _k)$ and commuting variables $\{ x_1, x_2, \ldots \}$ we define the *monomial quasisymmetric function* $M_\alpha$ to be $$M_\alpha = \sum _{i_1< \cdots <i_k} x_{i_1} ^{\alpha _1}\cdots x_{i_k} ^{\alpha _k}$$and the *fundamental quasisymmetric function* $F_\alpha$ to be $$F_\alpha = M_\alpha + \sum _{\beta {\prec}\alpha} M_\beta.$$ \[ex:mandf\] We compute $M_{(2,1)} = x_1^2x_2 + x_1^2x_3+\cdots$ and $F_{(2,1)}=M_{(2,1)}+M_{(1,1,1)}.$ The set of monomial quasisymmetric functions or the set of fundamental quasisymmetric functions forms a basis for the graded algebra of quasisymmetric functions $${\ensuremath{\operatorname{QSym}}}= \bigoplus _{n\geq 0} {\ensuremath{\operatorname{QSym}}}^n \subseteq {\mathbb{R}}[[x_1, x_2, \ldots ]]$$where ${\ensuremath{\operatorname{QSym}}}^0 = {\operatorname{span}}\{1\}$ and ${\ensuremath{\operatorname{QSym}}}^n = {\operatorname{span}}\{M_\alpha {\;|\;}\alpha \vDash n\}= {\operatorname{span}}\{F_\alpha {\;|\;}\alpha \vDash n\}$ for $n\geq 1$. 
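The subset bijection gives a direct way to expand $F_\alpha$ in the $M$ basis: the $M_\beta$ that appear are indexed exactly by the $\beta$ with ${\mathrm{set}}(\beta)\supseteq{\mathrm{set}}(\alpha)$. A sketch (our own illustration, not from the paper; the helper names are ours) recovering Example \[ex:mandf\]:

```python
def comp_to_set(alpha):
    """set(alpha), a subset of [n-1], from the bijection in the text."""
    s, total = set(), 0
    for part in alpha[:-1]:
        total += part
        s.add(total)
    return s

def set_to_comp(s, n):
    bounds = [0] + sorted(s) + [n]
    return tuple(bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1))

def refinements(alpha):
    """All beta refining (or equal to) alpha: the indices of the M's in F_alpha."""
    n = sum(alpha)
    base = comp_to_set(alpha)
    free = [i for i in range(1, n) if i not in base]
    result = []
    for mask in range(2 ** len(free)):
        extra = {free[i] for i in range(len(free)) if mask >> i & 1}
        result.append(set_to_comp(base | extra, n))
    return result

print(sorted(refinements((2, 1))))  # [(1, 1, 1), (2, 1)]
```

The output matches $F_{(2,1)}=M_{(2,1)}+M_{(1,1,1)}$.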
Hence each graded piece ${\ensuremath{\operatorname{QSym}}}^n$ for $n\geq 1$ is a finite-dimensional real vector space with basis $\{M_\alpha {\;|\;}\alpha \vDash n\}$ or $\{F_\alpha {\;|\;}\alpha \vDash n\}$. In order to define our third and final basis of ${\ensuremath{\operatorname{QSym}}}$ we need composition diagrams and composition tableaux. Given a composition $\alpha = (\alpha _1, \ldots , \alpha _k) \vDash n$, we call the array of $n$ left-justified boxes with $\alpha _i$ boxes in row $i$ from the top, for $1\leq i \leq k$, the *composition diagram* of $\alpha$, also denoted by $\alpha$. Given a composition diagram $\alpha \vDash n$, we say $\tau$ is a *semistandard composition tableau (SSCT)* of *shape* $\alpha$ if the boxes of $\alpha$ are filled with positive integers such that 1. the entries in each row weakly decrease when read from left to right, 2. the entries in the leftmost column strictly increase when read from top to bottom, 3. if we denote the box in $\tau$ that is in the $i$-th row from the top and $j$-th column from the left by $\tau(i,j)$, then if $i<j$ and $\tau(j,m)\leq \tau (i,m-1)$ then $\tau(i, m)$ exists and $\tau(j,m)< \tau(i,m)$. Furthermore, if each of the numbers $1, \ldots , n$ appears exactly once, then we say that $\tau$ is a *standard composition tableau (SCT)*. Intuitively we can think of the third condition as saying that if $a\leq b$ then $a<c$ in the following array of boxes.
$$\begin{matrix}b&c\\ &\vdots\\ &a\end{matrix}$$ Given an SSCT $\tau$ we define the *content* of $\tau$, denoted by ${\mathrm{content}}(\tau)$, to be the list of nonnegative integers $${\mathrm{content}}(\tau) = (c_1, \ldots , c_{{\mathit{max}}})$$where $c_i$ is the number of times that $i$ appears in $\tau$ and ${\mathit{max}}$ is the largest integer appearing in $\tau$. We say that an SSCT is of *composition content* if $c_i\neq 0$ for all $1\leq i \leq {\mathit{max}}$. Given an SCT $\tau$ of shape $\alpha \vDash n$, we define its *descent set* to be $${\mathrm{Des}}(\tau) = \{ i {\;|\;}i+1 \mbox{ is weakly right of } i\} \subseteq [n-1]$$and define its *descent composition* to be $${\mathrm{comp}}(\tau) = {\mathrm{comp}}({\mathrm{Des}}(\tau)) \vDash n.$$ We can now define our final basis both in terms of monomial and fundamental quasisymmetric functions, respectively. The first formula is [@QS Theorem 6.1] with [@QS Proposition 6.7] applied to it, and the second is [@QS Theorem 6.2] with [@QS Proposition 6.8] applied to it. If $\alpha$ and $\beta$ are compositions, then we define the *quasisymmetric Schur function* ${\ensuremath{\mathcal{S}}}_\alpha$ to be $$\label{eq:qsasm}{\ensuremath{\mathcal{S}}}_\alpha = M_\alpha + \sum _{\beta{\blacktriangleleft}\alpha} K_{\alpha\beta}^c M_\beta$$where $K_{\alpha\beta}^c$ is the number of SSCTs, $\tau$, of shape $\alpha$ and ${\mathrm{content}}(\tau)=\beta$. It is also given by $$\label{eq:qsasf}{\ensuremath{\mathcal{S}}}_\alpha = F_\alpha + \sum _{\beta{\blacktriangleleft}\alpha} d_{\alpha\beta} F_\beta$$where $d_{\alpha\beta}$ is the number of SCTs, $\tau$, of shape $\alpha$ and ${\mathrm{comp}}(\tau)=\beta$.
\[ex:qsasmf\] ${\ensuremath{\mathcal{S}}}_{(1,2)} = M_{(1,2)} + M_{(1,1,1)} = F_{(1,2)}$ from the following SSCT, which is also an SCT, arising from the nonleading term in the first equality. $$\begin{matrix}1\\3&2\end{matrix}$$ We have that, in addition to $\{M_\alpha {\;|\;}\alpha \vDash n\}$ and $\{F_\alpha {\;|\;}\alpha \vDash n\}$, $\{{\ensuremath{\mathcal{S}}}_\alpha {\;|\;}\alpha \vDash n\}$ is a basis of ${\ensuremath{\operatorname{QSym}}}^n$ for $n\geq 1$, and if $f\in {\ensuremath{\operatorname{QSym}}}$ is a nonnegative linear combination of such basis elements, then we refer to $f$ respectively as being *monomial quasisymmetric-, fundamental-* or *quasisymmetric Schur-positive*. We also use the notation ${\mathbb{P}}_n(\cdot{\;|\;}\cdot )$ to denote that the probability is being calculated in ${\ensuremath{\operatorname{QSym}}}^n$ for $n\geq 1$. Our first result is reminiscent of the probability that a monomial-positive symmetric function is furthermore Schur-positive in Corollary \[cor:smprob\]. \[cor:SMprob\] Let $\mathcal{K}_\alpha^c$ be the number of SSCTs of shape $\alpha$ and composition content. Then $${\mathbb{P}}_n({\ensuremath{\mathcal{S}}}_\alpha {\;|\;}M_\alpha)=\prod_{\alpha\vDash n}(\mathcal{K}_\alpha^c)^{-1}.$$ The result follows from Theorem \[thm:genthm\] by first setting $A_0={\ensuremath{\mathcal{S}}}_{(1^n)}=M_{(1^n)}=B_0$, and ordering the basis elements in increasing order by taking their indices in ${\blacktriangleleft}$ order. Then use Equation \[eq:qsasm\] along with $K_{\alpha\beta}^c=0$ if $\alpha {\blacktriangleleft}\beta$ and $K_{\alpha\alpha}^c = 1$ [@QS Proposition 6.7].
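Since the SSCT conditions are finite checks, the values $\mathcal{K}^c_\alpha$ for small $n$ can be verified exhaustively. The sketch below is our own illustration (not part of the paper; rows and columns are 0-indexed, so the indices in the triple rule are shifted accordingly, and all function names are ours).

```python
from itertools import product

def is_ssct(shape, t):
    """Check the three SSCT conditions for a filling t[(row, col)], 0-indexed."""
    # (1) entries in each row weakly decrease from left to right
    if any(t[(i, j)] < t[(i, j + 1)] for (i, j) in t if (i, j + 1) in t):
        return False
    # (2) entries in the leftmost column strictly increase from top to bottom
    first_col = [t[(i, 0)] for i in range(len(shape))]
    if any(a >= b for a, b in zip(first_col, first_col[1:])):
        return False
    # (3) the triple rule
    for i in range(len(shape)):
        for j in range(i + 1, len(shape)):
            for m in range(1, shape[j]):
                if (i, m - 1) in t and t[(j, m)] <= t[(i, m - 1)]:
                    if (i, m) not in t or t[(j, m)] >= t[(i, m)]:
                        return False
    return True

def K_c(shape):
    """Number of SSCTs of the given shape with composition content."""
    n = sum(shape)
    cells = [(i, j) for i, width in enumerate(shape) for j in range(width)]
    total = 0
    for vals in product(range(1, n + 1), repeat=n):
        t = dict(zip(cells, vals))
        counts = [0] * max(vals)
        for v in vals:
            counts[v - 1] += 1
        if 0 not in counts and is_ssct(shape, t):
            total += 1
    return total

print([K_c(a) for a in [(3,), (2, 1), (1, 2), (1, 1, 1)]])  # [4, 2, 2, 1]
```

The printed values agree with $\mathcal{K}^c_{(3)}=4$, $\mathcal{K}^c_{(2,1)}=2$, $\mathcal{K}^c_{(1,2)}=2$, $\mathcal{K}^c_{(1,1,1)}=1$.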
\[ex:SMprob\] For $n=3$ we have that $\mathcal{K}^c_{(3)}=4$, $\mathcal{K}^c_{(2,1)}=2$, $\mathcal{K}^c_{(1,2)}=2$ and $\mathcal{K}^c_{(1,1,1)}=1$ from the following SSCTs. $$\begin{matrix}1&1&1\end{matrix}\quad \begin{matrix}2&1&1\end{matrix}\quad \begin{matrix}2&2&1\end{matrix}\quad \begin{matrix}3&2&1\end{matrix} \qquad \begin{matrix}1&1\\2\end{matrix} \quad \begin{matrix}2&1\\3\end{matrix}\qquad \begin{matrix}1\\2&2\end{matrix} \quad \begin{matrix}1\\3&2\end{matrix}\qquad \begin{matrix}1\\2\\3\end{matrix}$$ Hence, $${\mathbb{P}}_3 ({\ensuremath{\mathcal{S}}}_\alpha {\;|\;}M_\alpha)= \left( \frac{1}{4} \right)\left( \frac{1}{2} \right)\left( \frac{1}{2} \right)\left( \frac{1}{1} \right) = \frac{1}{16}.$$ \[cor:SM0\] We have that $$\lim_{n\to\infty}{\mathbb{P}}_n(\mathcal{S}_\alpha{\;|\;}M_\alpha)=0.$$ Let $\alpha = (\alpha _1, \ldots , \alpha _k)$ be a composition of $n$ and consider the following two fillings of $\alpha$. For the first, fill the boxes of $\alpha$ such that the boxes in the bottom row contain $n, n-1, \ldots , n+1-\alpha _k$ from left to right, the next row up $n-\alpha _k, n-\alpha _k -1, \ldots, n+1-\alpha_k-\alpha_{k-1}$, etc. For the second, fill the boxes in row $i$ from the top with $i$, for $1\leq i \leq k$.
For $\alpha \neq (1^n)$ these fillings are distinct, and thus $\mathcal{K}_\alpha^c\geq 2$. Since the number of compositions of $n$ is $2^{n-1}$ it follows that $$0\leq \lim_{n\to\infty}\prod_{\alpha\vDash n}(\mathcal{K}_\alpha^c)^{-1} \leq \lim_{n\to\infty}\frac{1}{2^{2^{n-1}-1}}=0.$$ \[cor:SFprob\] Let $\mathcal{D}_\alpha$ be the number of SCTs of shape $\alpha$. Then $${\mathbb{P}}_n({\ensuremath{\mathcal{S}}}_\alpha{\;|\;}F_\alpha)=\prod_{\alpha\vDash n}(\mathcal{D}_\alpha)^{-1}.$$ The result follows from Theorem \[thm:genthm\] by first setting $A_0={\ensuremath{\mathcal{S}}}_{(1^n)}=F_{(1^n)}=B_0$ and ordering the basis elements in increasing order by taking their indices in ${\blacktriangleleft}$ order. Then use Equation \[eq:qsasf\] along with $d_{\alpha\beta}=0$ if $\alpha {\blacktriangleleft}\beta$ and $d_{\alpha\alpha} = 1$ [@QS Proposition 6.8]. \[ex:SFprob\] For $n=3$ we have that $\mathcal{D}_{(3)}=1$, $\mathcal{D}_{(2,1)}=1$, $\mathcal{D}_{(1,2)}=1$ and $\mathcal{D}_{(1,1,1)}=1$ from the following SCTs.
$$\begin{matrix}3&2&1\end{matrix} \qquad \begin{matrix}2&1\\3\end{matrix} \qquad \begin{matrix}1\\3&2\end{matrix}\qquad \begin{matrix}1\\2\\3\end{matrix}$$ Hence, $${\mathbb{P}}_3 ({\ensuremath{\mathcal{S}}}_\alpha {\;|\;}F_\alpha)= 1.$$ \[cor:SF0\] We have that $$\lim_{n\to\infty}{\mathbb{P}}_n(\mathcal{S}_\alpha{\;|\;}F_\alpha)=0.$$ By [@BvWmultiplicity Theorem 4.4], we know that $\mathcal{S}_\alpha=F_\alpha$ if and only if $\alpha=(m,1^{\epsilon_1},2,1^{\epsilon_2},\ldots ,2,1^{\epsilon_{k}})$, where $m\in\mathbb{N}_0 =\{0,1,2,\ldots \}$ ($m=0$ is understood to mean it does not appear in the composition), $k\in\mathbb{N}_0$, $\epsilon_i\in\mathbb{N}=\{1,2,\ldots \}$ for $i\in[k-1]$, and $\epsilon_k\in\mathbb{N}_0$.
Let $\mathcal{A}_n$ be the set of compositions of $n$ *not* in the set of compositions described above. Note that if $\alpha\in \mathcal{A}_n$ then $\mathcal{D}_\alpha \geq 2$. Also note that if $\alpha = (\alpha _1, \ldots , \alpha _k) \in \mathcal{A}_n$, then $(\alpha _1, \ldots , \alpha _k +1), (\alpha _1, \ldots , \alpha _k,1) \in \mathcal{A}_{n+1}$. Hence $2|\mathcal{A}_n | \leq |\mathcal{A}_{n+1}|$. Using this repeatedly, along with $| \mathcal{A}_4 |=2$ since $\mathcal{A}_4 = \{(1,3),(2,2)\}$, yields that for $n\geq 5$ $$2^{n-3}\leq | \mathcal{A}_n | .$$ Hence it follows that $$0\leq \lim_{n\to\infty}\prod_{\alpha\vDash n}(\mathcal{D}_\alpha)^{-1}\leq \lim_{n\to\infty}\frac{1}{2^{|\mathcal{A}_n|}}\leq \lim_{n\to\infty}\frac{1}{2^{2^{n-3}}}=0.$$ We end with the most succinct of our formulas, namely the probability that a quasisymmetric monomial-positive function is furthermore fundamental-positive. \[cor:FMprob\] $${\mathbb{P}}_n(F_\alpha{\;|\;}M_\alpha)=\frac{1}{2^{(n-1)2^{n-2}}}.$$ Recall that $$F_\alpha = M_\alpha + \sum _{\beta {\prec}\alpha} M_\beta,$$ and that $\beta\prec\alpha$ if and only if ${\mathrm{set}}(\alpha)\subset{\mathrm{set}}(\beta)$. Letting $A_0=F_{(1^n)}=M_{(1^n)}=B_0$ and ordering the basis elements in increasing order by taking their indices in ${\blacktriangleleft}$ order, Theorem \[thm:genthm\] gives that $${\mathbb{P}}_n(F_\alpha{\;|\;}M_\alpha)=\prod_{\alpha\vDash n}\left(\sum_{\beta\preceq\alpha}1\right)^{-1}.$$ Now $$\prod_{\alpha\vDash n}\left(\sum_{\beta\preceq\alpha}1\right) = \prod_{S\subseteq [n-1]}\left(\sum_{T\supseteq S}1\right)= \prod_{S\subseteq [n-1]}\left(2^{n-1-|S|}\right)=2^{\sum_{S\subseteq[n-1]}(n-1-|S|)},$$and $$\sum_{S\subseteq[n-1]}(n-1-|S|)=\sum_{T\subseteq[n-1]}|T|=\sum_{k=0}^{n-1}k\binom{n-1}{k}=(n-1)2^{n-2},$$ where the last equality is [@EC1 [Chapter 1 Exercise 2(b)]{}]. 
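These counts can be verified by brute force for small $n$; a quick Python sketch (our own illustrative code, not part of the paper; the membership test implements the characterization of [@BvWmultiplicity Theorem 4.4] quoted above):

```python
from itertools import product
from math import comb

def compositions(n):
    """All 2^(n-1) compositions of n, one per subset of cut points {1, ..., n-1}."""
    comps = []
    for cuts in product([0, 1], repeat=n - 1):
        comp, part = [], 1
        for c in cuts:
            if c:
                comp.append(part)
                part = 1
            else:
                part += 1
        comp.append(part)
        comps.append(tuple(comp))
    return comps

def equals_fundamental(alpha):
    """True iff alpha = (m, 1^e1, 2, 1^e2, ..., 2, 1^ek) as characterized above."""
    def tail_ok(t):
        # only 1s and 2s, not starting with 2, no two consecutive 2s
        return (all(p in (1, 2) for p in t) and (not t or t[0] == 1)
                and (2, 2) not in zip(t, t[1:]))
    # either the whole composition matches (m absent) or everything after m does
    return tail_ok(alpha) or tail_ok(alpha[1:])

A = {n: [al for al in compositions(n) if not equals_fundamental(al)] for n in range(2, 10)}

assert len(compositions(6)) == 2 ** 5                                   # 2^{n-1} compositions
assert A[3] == [] and sorted(A[4]) == [(1, 3), (2, 2)]                  # A_4 as claimed
assert all(2 * len(A[n]) <= len(A[n + 1]) for n in range(4, 9))         # doubling bound
assert all(2 ** (n - 3) <= len(A[n]) for n in range(5, 10))             # 2^{n-3} <= |A_n|
# exponent identity behind P_n(F_alpha | M_alpha):
assert all(sum(k * comb(n - 1, k) for k in range(n)) == (n - 1) * 2 ** (n - 2) for n in range(2, 12))
```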
\[ex:FM\] For $n=3$ we have that ${\mathbb{P}}_3 (F_\alpha{\;|\;}M_\alpha) = \frac{1}{2^4}=\frac{1}{16}.$ The following corollary follows from Corollary \[cor:FMprob\]. \[cor:FM0\] We have that $$\lim_{n\to\infty}{\mathbb{P}}_n(F_\alpha{\;|\;}M_\alpha)=0.$$ Acknowledgements {#sec:acknow .unnumbered} ================ The authors would like to thank François Bergeron, Vic Reiner, and Sébastien Labbé for conversations that led to fruitful research directions, and LaCIM where some of the research took place. The first author received support from the National Sciences and Engineering Research Council of Canada, CRM-ISM, and the Canada Research Chairs Program. The second author was supported in part by the National Sciences and Engineering Research Council of Canada, the Simons Foundation, and the Centre de Recherches Mathématiques, through the Simons-CRM scholar-in-residence program. [10]{} , [*Inequalities between [L]{}ittlewood-[R]{}ichardson coefficients*]{}, J. Combin. Theory Ser. A 113 (2006) 567–590. , [*Skew quasisymmetric [S]{}chur functions and noncommutative [S]{}chur functions*]{}, Adv. Math. 226 (2011) 4492–4532. , [*[L]{}ittlewood-[R]{}ichardson rules for symmetric skew quasisymmetric [S]{}chur functions*]{}, J. Combin. Theory Ser. A 137 (2016) 179–206. , [*Multiplicity free [S]{}chur, skew [S]{}chur, and quasisymmetric [S]{}chur functions*]{}, Ann. Comb. 17 (2013) 275–294. , *A proof of the shuffle conjecture*, J. Amer. Math. Soc. 31 (2018) 661–697. , [*Fonctions quasi-symétriques, fonctions symétriques non-commutatives, et algèbres de Hecke à $q = 0$*]{}, C. R. Math. Acad. Sci. Paris 322 (1996) 107–112. , [*Eigenvalues, singular values, and [L]{}ittlewood-[R]{}ichardson coefficients*]{}, Amer. J. Math. 127 (2005) 101–127. , [*Incomparability graphs of (3+1)-free posets are $s$-positive*]{}, Discrete Math. 157 (1996) 193–197. , *A combinatorial formula for the character of the diagonal coinvariants*, Duke Math. J. 126 (2005) 195–232. 
, [*[Quasisymmetric [S]{}chur functions]{}*]{}, J. Combin. Theory Ser. A 118 (2011) 463–490. , [*Schur positivity of skew [S]{}chur function differences and applications to ribbons and [S]{}chubert classes*]{}, J. Algebraic Combin. 28 (2008) 139–167. , [*An invitation to the generalized saturation conjecture*]{}, Publ. Res. Inst. Math. Sci. 40 (2004) 1147–1239. , [*[[S]{}chur positivity and [S]{}chur log-concavity]{}*]{}, Amer. J. Math. 129 (2007) 1611–1622. , Adv. in Appl. Math. 40 (2008) 271–294. , [*Necessary conditions for [S]{}chur-positivity*]{}, J. Algebraic Combin. 28 (2008) 495–507. , [*Positivity results on ribbon [S]{}chur function differences*]{}, European J. Combin. 30 (2009) 1352–1369. , [*Log-concavity of multiplicities with application to characters of [${\rm U}(\infty)$]{}*]{}, Adv. Math. 127 (1997) 258–282. , [*What is [S]{}chur positivity and how common is it?*]{}, arXiv:1809.04448v1. , *Enumerative Combinatorics. Volume 1*, Wadsworth and Brooks/Cole (1986). , [*On immanants of [J]{}acobi-[T]{}rudi matrices and permutations with restricted position*]{}, [J. Combin. Theory Ser. A]{} 62 (1993) 261–279. , [*Modules of the 0-[H]{}ecke algebra and quasisymmetric [S]{}chur functions*]{}, Adv. Math. 285 (2015) 1025–1065.
--- abstract: 'We consider $f(T)$ gravity for a spherically symmetric and static Weitzenbock spacetime, where the metric is projected onto the tangent space to the manifold through a set of non-diagonal tetrads. The matter content is coupled through the energy momentum tensor of an anisotropic fluid, generating various classes of new black hole and wormhole solutions. One of these classes is that of cold black holes. We also perform the reconstruction scheme of the algebraic function $f(T)$ for two cases where the radial pressure is proportional to $f(T)$ and its first derivative.' --- UFES 2012 [M. Hamani Daouda $^{(a)}$]{}[^1] , [Manuel E. Rodrigues $^{(a)}$]{}[^2] and [M. J. S. Houndjo $^{(b)(c)}$]{}[^3] \(a)  Universidade Federal do Espírito Santo\ Centro de Ciências Exatas - Departamento de Física\ Av. Fernando Ferrari s/n - Campus de Goiabeiras\ CEP29075-910 - Vitória/ES, Brazil\ (b)  Departamento de Engenharia e Ciências Naturais - CEUNES\ Universidade Federal do Espírito Santo\ CEP 29933-415 - São Mateus - ES, Brazil\  (c) Institut de Mathématiques et de Sciences Physiques (IMSP)\ 01 BP 613 Porto-Novo, Bénin\ Pacs numbers: 04.50. Kd, 04.70.Bw, 04.20. Jb Introduction ============ Special attention is currently devoted to the so-called Teleparallel Theory (TT). This is a geometric theory which possesses only torsion, without curvature, for characterising the gravitational interaction between matter fields. The TT, which is dynamically equivalent to General Relativity (GR) [@pereira], possesses the tetrads as fundamental fields, with which the Weitzenbock connection is generated [@weitzenbock]. The action of the theory is defined through a linear combination of contractions of the Weitzenbock connection. With the progress of measurements of the evolution of the universe (its expansion and acceleration, dark matter and dark energy), various proposals for modifying GR are being tested. 
In unification theories, at low-energy scales, there appear in the effective actions, besides the Ricci scalar, the terms $R^2$, $R^{\mu\nu}R_{\mu\nu}$ and $R^{\mu\nu\alpha\beta}R_{\mu\nu\alpha\beta}$, and a proposal of modified gravity that agrees with the cosmological and astrophysical data is the $f(R)$ theory [@odintsov; @capozziello]. The main problem one faces with this theory is that its equations of motion are of fourth order, making any analysis more complicated than in GR. Since GR possesses the TT as its analogue, the so-called $f(T)$ gravity, $T$ being the torsion scalar, has been conceived as the analogue of the generalization of GR, the $f(R)$ gravity. The $f(T)$ gravity is the generalization of the TT, as we shall see later. Note also that the $f(T)$ theory is free of curvature, i.e., it is defined from the Weitzenbock connection. However, it has been shown recently that this theory breaks the invariance under local Lorentz transformations [@barrow]. Another recent problem is that $f(T)$ gravity appears to depend on the chosen frame, i.e., it is not covariant [@barrow; @yapiskan]. The $f(T)$ gravity appears in cosmology as the source driving inflation [@fiorini]. It has also been used as a theory that reproduces the acceleration of the universe without the necessity of introducing dark energy [@ferraro; @ratbay; @eric]. The contributions of the cosmological perturbations of this theory have been studied [@dutta2]. In gravitation, the first black hole model to be analysed was the BTZ one [@fiorini2]. Recently, also in the framework of $f(T)$ gravity, several black hole and wormhole solutions with spherical symmetry have been found [@wang; @stephane; @stephane1]. Moreover, other analyses of various themes are being carried out in the context of $f(T)$ gravity [@x]. 
In this paper, we study solutions obtained by fixing the spherical symmetry and staticity of the metric, following the usual methods of GR [@florides]. Introducing an anisotropic matter content, we obtain new black hole and traversable wormhole solutions for $f(T)$ gravity, considering a set of non-diagonal tetrads. The paper is organized as follows. In Section $2$, we present a brief revision of the fundamental concepts of the Weitzenbock geometry, the action of $f(T)$ gravity and the equations of motion. In Section $3$, we fix the symmetries of the geometry and present the equations for the energy density and the radial and tangential pressures. Section $4$ is devoted to obtaining new solutions in $f(T)$ gravity. In Section $5$, we present a summary of the reconstruction method for the static case of the $f(T)$ theory and reconstruct two simple cases linked with the radial pressure. The conclusion and perspectives are presented in Section 6. The field equations from $f(T)$ theory ====================================== The mathematical structure of $f(T)$ gravity is based on the Weitzenbock geometry, and there exist some excellent works on it [@pereira; @pereira2; @pereira3]. Our convention and nomenclature are the following: Latin indices describe the elements of the tangent space to the manifold (spacetime), while Greek ones describe the elements of the spacetime. 
For a general spacetime metric, we can define the line element as $$dS^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}\; .$$ This metric can be projected onto the tangent space to the manifold, using the representation of the tetrad matrix, where the line element is $$\begin{aligned} dS^{2} &=&g_{\mu\nu}dx^{\mu}dx^{\nu}=\eta_{ij}\theta^{i}\theta^{j}\label{1}\; ,\\ dx^{\mu}& =&e_{i}^{\;\;\mu}\theta^{i}\; , \; \theta^{i}=e^{i}_{\;\;\mu}dx^{\mu}\label{2}\; ,\end{aligned}$$ where $\eta_{ij}=diag[1,-1,-1,-1]$ and $e_{i}^{\;\;\mu}e^{i}_{\;\;\nu}=\delta^{\mu}_{\nu}$ or $e_{i}^{\;\;\mu}e^{j}_{\;\;\mu}=\delta^{j}_{i}$. The square root of the metric determinant is given by $\sqrt{-g}=\det{\left[e^{i}_{\;\;\mu}\right]}=e$. Now, we describe the spacetime through the tetrad matrix and then define the Weitzenbock connection as $$\begin{aligned} \Gamma^{\alpha}_{\mu\nu}=e_{i}^{\;\;\alpha}\partial_{\nu}e^{i}_{\;\;\mu}=-e^{i}_{\;\;\mu}\partial_{\nu}e_{i}^{\;\;\alpha}\label{co}\; .\end{aligned}$$ With the connection (\[co\]), the spacetime possesses identically vanishing curvature; only the torsion and its related quantities characterize this geometry. 
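As an illustration of these relations, one can verify $e_{i}^{\;\;\mu}e^{i}_{\;\;\nu}=\delta^{\mu}_{\nu}$ and $e=\det[e^{i}_{\;\;\mu}]=\sqrt{-g}$ with sympy for a simple diagonal tetrad (a hypothetical example chosen only for illustration; the tetrad actually used in this paper appears below):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
a, b = sp.Function('a')(r), sp.Function('b')(r)

# hypothetical diagonal tetrad e^i_mu for ds^2 = e^a dt^2 - e^b dr^2 - r^2 dOmega^2
E = sp.diag(sp.exp(a / 2), sp.exp(b / 2), r, r * sp.sin(th))
eta = sp.diag(1, -1, -1, -1)

g = E.T * eta * E                                  # g_{mu nu} = eta_{ij} e^i_mu e^j_nu
assert g == sp.diag(sp.exp(a), -sp.exp(b), -r**2, -r**2 * sp.sin(th)**2)
assert sp.simplify(E.inv() * E) == sp.eye(4)       # e_i^mu e^i_nu = delta^mu_nu
assert sp.simplify(E.det()**2 + g.det()) == 0      # (det e)^2 = -det g, i.e. e = sqrt(-g)
```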
Due to the fact that the antisymmetric part of the connection does not vanish, we can define directly from the components of the connection, the torsion tensor, whose components are given by $$\begin{aligned} T^{\alpha}_{\;\;\mu\nu}&=&\Gamma^{\alpha}_{\nu\mu}-\Gamma^{\alpha}_{\mu\nu}=e_{i}^{\;\;\alpha}\left(\partial_{\mu} e^{i}_{\;\;\nu}-\partial_{\nu} e^{i}_{\;\;\mu}\right)\label{tor}\;.\end{aligned}$$ Through the torsion tensor, we can define two important tensors in this geometry: the contorsion and the tensor $S$, whose components can be written as $$\begin{aligned} K^{\mu\nu}_{\;\;\;\;\alpha}&=&-\frac{1}{2}\left(T^{\mu\nu}_{\;\;\;\;\alpha}-T^{\nu\mu}_{\;\;\;\;\alpha}-T_{\alpha}^{\;\;\mu\nu}\right)\label{cont}\; ,\\ S_{\alpha}^{\;\;\mu\nu}&=&\frac{1}{2}\left( K_{\;\;\;\;\alpha}^{\mu\nu}+\delta^{\mu}_{\alpha}T^{\beta\nu}_{\;\;\;\;\beta}-\delta^{\nu}_{\alpha}T^{\beta\mu}_{\;\;\;\;\beta}\right)\label{s}\;.\end{aligned}$$ We are now able to define easily the scalar that makes up the action of $f(T)$ gravity, i.e. the torsion scalar $T$. Through (\[tor\])-(\[s\]), we define the torsion scalar as $$\begin{aligned} T=T^{\alpha}_{\;\;\mu\nu}S^{\;\;\mu\nu}_{\alpha}\label{tore}\; .\end{aligned}$$ Following the same idea as in GR and $f(R)$ theory for the coupling of the geometry with the matter part, we define the action of the $f(T)$ gravity as $$\begin{aligned} S[e^{i}_{\mu},\Phi_{A}]=\int\; d^{4}x\;e\left[\frac{1}{16\pi}f(T)+\mathcal{L}_{Matter}\left(\Phi_{A}\right)\right]\label{action}\; ,\end{aligned}$$ where we used the units $G=c=1$ and the $\Phi_{A}$ are the matter fields. Considering the action (\[action\]) as a functional of the fields $e^{i}_{\mu}$ and $\Phi_{A}$, and vanishing the variation of the functional with respect to the field $e^{i}_{\nu}$, i.e. 
the principle of least action, one obtains the following equation of motion [@barrow] $$\begin{aligned} S^{\;\;\nu\rho}_{\mu}\partial_{\rho}Tf_{TT}+\left[e^{-1}e^{i}_{\mu}\partial_{\rho}\left(ee^{\;\;\alpha}_{i}S^{\;\;\nu\rho}_{\alpha}\right)+T^{\alpha}_{\;\;\lambda\mu}S^{\;\;\nu\lambda}_{\alpha}\right]f_{T}+\frac{1}{4}\delta^{\nu}_{\mu}f=4\pi\mathcal{T}^{\nu}_{\mu}\label{em}\; ,\end{aligned}$$ where $\mathcal{T}^{\nu}_{\mu}$ is the energy momentum tensor, $f_{T}=d f(T)/d T$ and $f_{TT}=d^{2} f(T)/dT^{2}$. If we consider $f(T)=a_{1}T+a_{0}$, the TT is recovered with a cosmological constant. We now introduce the matter content as being described by an anisotropic fluid, whose energy-momentum components are given by $$\begin{aligned} \mathcal{T}^{\,\nu}_{\mu}=\left(\rho+p_t\right)u_{\mu}u^{\nu}-p_t \delta^{\nu}_{\mu}+\left(p_r-p_t\right)v_{\mu}v^{\nu}\label{tme}\; ,\end{aligned}$$ where $u^{\mu}$ is the four-velocity, $v^{\mu}$ the unitary space-like vector in the radial direction, $\rho$ the energy density, $p_r$ the pressure in the direction of $v^{\mu}$ (radial pressure) and $p_t$ the pressure orthogonal to $v_\mu$ (tangential pressure). Since we are assuming an anisotropic spherically symmetric matter, one has $p_r \neq p_t$, such that their equality corresponds to an isotropic fluid sphere. In the next section, we will impose symmetries on the manifold in order to simplify the equations of motion and obtain solutions specific to these symmetries. Spherically symmetric geometry ============================== We consider from the beginning the tetrad matrix as the fundamental field of the $f(T)$ theory. 
Now, in the same way that the frames were constructed for the TT theory in [@maluf1] and [@maluf2], our tetrad ansatz is elaborated by fixing the degrees of freedom as follows: $e_{0}^{\;\;\mu}=u^{\mu}$, where $u^{\mu}$ is the four-velocity of an observer in free fall, and $e_{1}^{\;\;\mu},e_{2}^{\;\;\mu}$ and $e_{3}^{\;\;\mu}$ are oriented along the unitary vectors in the Cartesian directions $x$, $y$ and $z$, resulting in the matrix $$\begin{aligned} \left\{e^{i}_{\;\;\mu}\right\}= \left(\begin{array}{cccc} e^{a/2}&0&0&0\\ 0&e^{b/2}\sin\theta\cos\phi &r \cos\theta\cos\phi &-r \sin\theta\sin\phi\\ 0&e^{b/2}\sin\theta\sin\phi &r \cos\theta\sin\phi &r \sin\theta\cos\phi\\ 0& e^{b/2}\cos\theta & -r\sin\theta & 0 \end{array}\right)\label{tetra}\; ,\end{aligned}$$ for spherical and static symmetries. Using the relations (\[1\]) and (\[2\]), one can write the components of the metric, through the line element of spherically symmetric and static spacetimes as $$dS^{2}=e^{a(r)}dt^{2}-e^{b(r)}dr^{2}-r^{2}\left[d\theta^{2}+\sin^{2}\left(\theta\right)d\phi^{2}\right]\label{ele}\; .$$ This choice of tetrad matrix is not unique, since the only requirement, in order to leave the line element invariant under local Lorentz transformations, is to reproduce the form (\[1\]). Other choices have been performed with diagonal matrices, as in references [@stephane; @stephane1]. Using (\[tetra\]), one can obtain $e=\det{\left[e^{i}_{\;\;\mu}\right]}=e^{(a+b)/2}r^2 \sin\left(\theta\right)$, and with (\[co\])-(\[tore\]) we determine the torsion scalar in terms of $r$ $$\begin{aligned} T(r) &=& \frac{2e^{-b}}{r^2}\left(e^{b/2}-1\right)\left(e^{b/2}-1-ra^{\prime}\right)\label{te}\; ,\end{aligned}$$ where the prime ($^{\prime}$) denotes the derivative with respect to the radial coordinate $r$. 
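The expression (\[te\]) can be reproduced mechanically from the definitions (\[co\])-(\[tore\]): build the tetrad (\[tetra\]), form the torsion, contorsion and $S$ tensors, contract, and compare. A sketch with sympy (the profiles used in the final numerical spot check are arbitrary illustrative choices):

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
x = [t, r, th, ph]
a, b = sp.Function('a')(r), sp.Function('b')(r)
rng = range(4)

# tetrad e^i_mu of eq. (tetra): row = tangent-space index i, column = mu
E = sp.Matrix([
    [sp.exp(a / 2), 0, 0, 0],
    [0, sp.exp(b / 2) * sp.sin(th) * sp.cos(ph), r * sp.cos(th) * sp.cos(ph), -r * sp.sin(th) * sp.sin(ph)],
    [0, sp.exp(b / 2) * sp.sin(th) * sp.sin(ph), r * sp.cos(th) * sp.sin(ph),  r * sp.sin(th) * sp.cos(ph)],
    [0, sp.exp(b / 2) * sp.cos(th), -r * sp.sin(th), 0],
])
Ei = E.inv().applyfunc(sp.simplify)            # Ei[mu, i] = e_i^mu
eta = sp.diag(1, -1, -1, -1)
g = (E.T * eta * E).applyfunc(sp.trigsimp)     # g_{mu nu} = eta_{ij} e^i_mu e^j_nu
assert (g - sp.diag(sp.exp(a), -sp.exp(b), -r**2, -r**2 * sp.sin(th)**2)).applyfunc(sp.simplify) == sp.zeros(4)
gi = g.inv()

# torsion tensor T^al_{mu nu}, eq. (tor)
Tud = [[[sp.trigsimp(sum(Ei[al, i] * (sp.diff(E[i, nu], x[mu]) - sp.diff(E[i, mu], x[nu])) for i in rng))
         for nu in rng] for mu in rng] for al in rng]
# index gymnastics for the contorsion (cont) and the tensor S of eq. (s)
Tuud = [[[sum(gi[nu, s] * Tud[mu][s][al] for s in rng) for al in rng] for nu in rng] for mu in rng]
Tduu = [[[sum(g[al, c] * gi[mu, p] * gi[nu, s] * Tud[c][p][s] for c in rng for p in rng for s in rng)
          for nu in rng] for mu in rng] for al in rng]
K = [[[-(Tuud[mu][nu][al] - Tuud[nu][mu][al] - Tduu[al][mu][nu]) / 2 for al in rng] for nu in rng] for mu in rng]
V = [sum(Tuud[c][nu][c] for c in rng) for nu in rng]
S = [[[(K[mu][nu][al] + (V[nu] if mu == al else 0) - (V[mu] if nu == al else 0)) / 2
       for nu in rng] for mu in rng] for al in rng]
Tscalar = sum(Tud[al][mu][nu] * S[al][mu][nu] for al in rng for mu in rng for nu in rng)  # eq. (tore)

claimed = 2 * sp.exp(-b) / r**2 * (sp.exp(b / 2) - 1) * (sp.exp(b / 2) - 1 - r * sp.diff(a, r))  # eq. (te)

# numerical spot check with arbitrary illustrative profiles a(r), b(r)
sub = {a: sp.log(1 + r**2), b: sp.log(1 + 3 * r**2)}
pt = {r: sp.Rational(13, 10), th: sp.Rational(7, 10), ph: sp.Rational(2, 5)}
residual = abs(sp.N((Tscalar - claimed).subs(sub).doit().subs(pt)))
assert residual < 1e-12
```

A fully symbolic `sp.simplify(Tscalar - claimed)` also returns zero, but is considerably slower than the numerical spot check above.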
One can now re-write the equations of motion (\[em\]) for an anisotropic fluid as $$\begin{aligned} 4\pi\rho &=& \frac{f}{4}-\left( \frac{T}{2}-\frac{1}{r^2}+\frac{e^{-b}}{r^2}\left(1-rb^{\prime}\right)\right)\frac{f_T}{2}-\frac{e^{-b/2}}{r}\left(e^{-b/2}-1\right)\left(f_T\right)^{\prime}\,,\label{dens} \\ 4\pi p_{r} &=& \left(\frac{T}{2}+\frac{e^{-b}}{r^2}\left(1+ra^{\prime}\right)-\frac{1}{r^2}\right)\frac{f_T}{2}-\frac{f}{4}\label{presr}\;, \\ 4\pi p_{t} &=& \frac{e^{-b}}{2}\left(\frac{a^{\prime}}{2}+\frac{1}{r}-\frac{e^{b/2}}{r}\right)\left(f_T\right)^{\prime}+ \left[\frac{T}{2}+e^{-b}\left(\frac{a''}{2}+\left(\frac{a'}{4}+\frac{1}{2r}\right) (a^{\prime}-b^{\prime})\right)\right]\frac{f_T}{2}-\frac{f}{4}\label{prest}\;,\end{aligned}$$ where $p_{r}$ and $p_{t}$ are the radial and tangential pressures respectively. In the case of a set of diagonal tetrads, as in [@stephane; @stephane1], one gets an off-diagonal equation (the $\theta-r$ component) which imposes that the algebraic function $f(T)$ be linear in $T$, or that the torsion scalar be constant, with $f(T)$ left free. Now, in the present paper, with the choice of a set of non-diagonal tetrads, we do not get such a constraint equation; rather, $f(T)$ may assume an arbitrary functional form. In the next section, we will determine new solutions for the $f(T)$ theory by making some assumptions about the matter component $p_{r}(r)$. New solutions for a set of non-diagonal tetrads =============================================== In this section, we will study the two simplest cases for a set of non-diagonal tetrads. The first is when the radial pressure $p_r$, in (\[presr\]), is identically null. This has been done originally in GR by Florides [@florides] and used later by Boehmer et al [@boehmer1], and in $f(T)$ gravity, the same condition has been used for the cases of diagonal tetrads [@stephane; @stephane1]. We can classify these two cases: 1. 
Considering $p_{r}= 0$ in (\[presr\]), we get $$\begin{aligned} f(T)=2f_{T}(T)\left\{\frac{T}{2}+\frac{1}{r^2}\left[e^{-b}(1+ra^{\prime})-1\right]\right\}\,.\label{p0}\end{aligned}$$ We can examine three main cases here: 1. Considering $f(T)=\exp\left[a_1 T\right]=\sum\limits_{n=0}^{\infty}[\left(a_{1}T\right)^n/n!]$ in (\[p0\]), one obtains $$T(r)=\frac{1}{a_1}-\frac{2}{r^2}\left[e^{-b}(1+ra^{\prime})-1\right]\label{t1}\,.$$ Choosing the quasi-global coordinate $a(r)=-b(r)$ and equating (\[te\]) with (\[t1\]), one gets $$\begin{aligned} e^{a(r)}=e^{-b(r)}=\frac{-3+r(6a_1+r^2)+2\sqrt{3}\sqrt{3a_1^2 r^2+a_1r^4-3a_1 r}}{12a_1 r}\label{sol1}\;.\end{aligned}$$ This is a black hole solution whose horizon is given by $r_H=\sqrt[3]{3}$, for $a_1<0$. The torsion scalar (\[t1\]) is given by $$T(r)=\frac{-6\sqrt{3}a_1^2 r+3(2a_1+r^2)\sqrt{a_1 r (3a_1 r+r^3-3)}-\sqrt{3}a_1(4r^3-3)}{6a_1 r^2\sqrt{a_1 r (3a_1 r+r^3-3)}}\label{t2}\;.$$ 2. Considering $$T(r)=\frac{2}{r^2}\left[e^{-b}(1+ra^{\prime})-1\right]\label{t3}\,,$$ the equation (\[p0\]) yields $$f(T)=\sqrt{\frac{T}{T_0}}=\sum\limits_{n=0}^{\infty}\frac{(-1)^n(2n)!}{(1-2n)(n!)^2(4^n)}\left(\frac{T-T_0}{T_0}\right)^n\,.\label{f0}$$ When we impose $$\begin{aligned} e^{a(r)}=\left(1-\frac{r_H}{r}\right)^k\,,\label{cond2}\end{aligned}$$ with $k\geq 2$, equating (\[te\]) with (\[t3\]), we obtain $$\begin{aligned} e^{b(r)}=\left[\frac{2r+(k-2)r_H}{2(r-r_H)}\right]^2\,,\,T(r)=\frac{-2k^2r_{H}^2}{r^2[2r+(k-2)r_{H}]^2}\,.\label{b1}\end{aligned}$$ This is the first class of cold black hole solutions[^4] in $f(T)$ gravity. The degenerated event horizon of order $k$ is obtained at $r=r_H$. We can still look at the limit where $T\ll T_0$ in (\[f0\]), which, at the second order, leads to $$f(T)=\sqrt{\frac{T}{T_0}}\approx a_0+a_1 T+a_2 T^2\,,$$ where $a_0=3/8$, $a_1=3/(4T_0)$ and $a_2=-1/(8T_0^2)$. In this limit, the theory becomes the TT one plus a quadratic term in the torsion scalar $T$. 3. 
With the choice $$\begin{aligned} e^{b(r)}=\frac{r_0}{r}\label{cond3}\;,\end{aligned}$$ and equating (\[te\]) with (\[t3\]), one gets $$\begin{aligned} a(r)=-4\sqrt{\frac{r_0}{r}}-2\ln r\label{a1}\;.\end{aligned}$$ This solution is completely different from that obtained in [@stephane1], under the same coordinate condition, but for a set of diagonal tetrads. 2. Let us consider here the case where the radial pressure is simply a function of the radial coordinate $r$ and the algebraic function $f(T)$ is chosen as $$\begin{aligned} f(T)=a_2 T^2+a_1 T+a_0\label{f1}\;.\end{aligned}$$ In this case, substituting (\[f1\]) into (\[presr\]) and taking the coordinate condition [@stephane1] $$\begin{aligned} a(r)=\ln\left(\frac{r_0}{r}\right)\;,\label{cond4}\end{aligned}$$ we obtain $$\begin{aligned} T(r)=\frac{2}{r^2}\left[1\pm\sqrt{1+\frac{a_1}{2a_2}r^2+\frac{r^4}{4a_2}(a_0+16\pi p_{r}(r))}\right]\label{t2}\;.\end{aligned}$$ Equating (\[t2\]) with (\[te\]) and taking into account (\[cond4\]), we get the solution $$\begin{aligned} dS^2=\frac{r_0}{r}dt^2-\left[1-\frac{r^2}{2}T(r)\right]^{-2}dr^2-r^2d\Omega^2\label{sol3}\;,\end{aligned}$$ where $T(r)$ is given in (\[t2\]) and the condition $$1+\frac{a_1}{2a_2}r^2+\frac{r^4}{4a_2}[a_0+16\pi p_{r}(r)]\geq0\,,\label{pr0}$$ has to be satisfied for $T(r)$ to be a real function. This is a new class of traversable wormhole solutions. We can observe this by following the same process as in [@stephane]. The line element (\[ele\]) can be put in the form $$dS^{2}=e^{a(r)}dt^{2}-dl^{2}-r^{2}(l)d\Omega^{2}\label{elw}\;,$$ where $a(r)$ is called the redshift function; through the redefinition $\beta (r)=r\left[1-e^{-b(r)}\right]$, with $b(r)$ being the metric function given in (\[ele\]), $\beta(r)$ is called the shape function. 
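Several of the statements above lend themselves to symbolic verification: the GR limit $f(T)=T$ of (\[presr\]), the horizon of (\[sol1\]) at $r_H=\sqrt[3]{3}$ for $a_1<0$, the cold-black-hole pair (\[cond2\])-(\[b1\]) against (\[te\]), and the throat behaviour of the shape function obtained from (\[sol3\]). A sketch with sympy (the sample wormhole profile $T(r)=2r_1^2/r^4$ is an illustrative choice, not a result of the paper):

```python
import sympy as sp

r, r1, rH, k, a1 = sp.symbols('r r_1 r_H k a_1', positive=True)
a, b = sp.Function('a')(r), sp.Function('b')(r)

# GR limit: eq. (presr) with f(T) = T, using T(r) of eq. (te)
T_te = 2 * sp.exp(-b) / r**2 * (sp.exp(b / 2) - 1) * (sp.exp(b / 2) - 1 - r * sp.diff(a, r))
pr_fT = (T_te / 2 + sp.exp(-b) / r**2 * (1 + r * sp.diff(a, r)) - 1 / r**2) / 2 - T_te / 4
pr_gr = (sp.exp(-b) * (1 + r * sp.diff(a, r)) - 1) / (2 * r**2)   # 4*pi*p_r in GR
assert sp.simplify(pr_fT - pr_gr) == 0

# horizon of (sol1): writing a_1 = -a1 with a1 > 0, e^{a} vanishes at r = 3**(1/3)
ea = (-3 + r * (-6 * a1 + r**2)
      + 2 * sp.sqrt(3) * sp.sqrt(3 * a1**2 * r**2 - a1 * r**4 + 3 * a1 * r)) / (-12 * a1 * r)
assert sp.simplify(ea.subs(r, sp.cbrt(3))) == 0

# cold black hole: a = k ln(1 - rH/r) and e^{b/2} of (b1) reproduce T(r) of (b1) via (te)
aa = k * sp.log(1 - rH / r)
ebh = (2 * r + (k - 2) * rH) / (2 * (r - rH))                     # e^{b/2}
T_cold = 2 / (ebh**2 * r**2) * (ebh - 1) * (ebh - 1 - r * sp.diff(aa, r))
assert sp.simplify(T_cold + 2 * k**2 * rH**2 / (r**2 * (2 * r + (k - 2) * rH) ** 2)) == 0

# wormhole throat: shape function beta = r[1 - (1 - r^2 T/2)^2] read off from (sol3),
# for the illustrative profile T(r) = 2 r1^2 / r^4
Tw = 2 * r1**2 / r**4
beta = r * (1 - (1 - r**2 * Tw / 2) ** 2)
d2r = sp.simplify((beta - r * sp.diff(beta, r)) / (2 * r**2))     # d^2 r / dl^2
assert sp.simplify(beta.subs(r, r1) - r1) == 0                    # beta(r1) = r1
assert sp.simplify(d2r - 2 * r1**2 / r**3 * (1 - r1**2 / r**2)) == 0   # positive for r > r1
assert sp.simplify(sp.diff(beta, r).subs(r, r1) - 1) == 0         # beta'(r1) = 1 <= 1
```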
Therefore, the conditions of existence of a traversable wormhole are: a) the function $r(l)$ must possess a minimum value $r_{1}$ for $r$, which imposes ${d^2r}(l)/dl^{2}>0$; b) $\beta(r_{1})=r_{1}$; c) $a(r_{1})$ has a finite value; and finally d) $d\beta(r)/dr|_{r=r_{1}}\leqslant 1$. In this case, $\beta(r)=r[1-(1-r^2T(r)/2)^2]$. Hence, the conditions of a traversable wormhole are applied on the torsion scalar $T(r)$, in (\[t2\]). The condition a), which can be written as $d^2r/dl^2=[\beta (r)-r\beta^{\prime}(r)]/2r^2=-r[1-r^2T(r)/2][T(r)+rT^{\prime}(r)/2]>0$, leads to $[T(r)+rT^{\prime}(r)/2]<0$; the condition b), yields $T(r_1)=2/r_1^2$; the condition c) is always satisfied, as we can see through (\[sol3\]); and finally, the condition d) leads to $[1-r^2T(r)/2]\geq 2r^2[T(r)+rT^{\prime}(r)/2]$, which is always true when a) is satisfied. Therefore, we must choose the torsion scalar, which will be defined from the radial pressure $p_r(r)$, in (\[t2\]), such that it satisfies the conditions a)-d) and (\[pr0\]), for obtaining traversable wormhole solutions. Let us consider first a class of solutions coming from the following example, $T(r)=2r_1^{m-2}/r^m$, with $r_1>0$ and $m\geq 3$, obtained for $p_r (r)=[2a_1 r^{2m}r_1^{4}-a_0 r^{2(m+1)}r_1^4-8a_2r^{m}r_1^{m+2}+4a_2 r^{2}r_1^{2m}]/16\pi r_1^4 r^{2(m+1)}$, which satisfies (\[pr0\]). The condition a) is satisfied, since $d^2 r/dl^2=(m-2)(r_1^{m-2}/r^{m-1})[1-(r_1/r)^{m-2}]>0$; the conditions b) and c) are also directly satisfied; and d) is satisfied through a). Reconstruction in static f(T) theory ==================================== A method widely used in cosmology in order to obtain the algebraic form of the gravitational part of the action is the so-called reconstruction scheme. This method stems from the introduction of an auxiliary field for the reconstruction of the algebraic function of the main action, as in the case of $f(R)$ theory for example [@reconstruction1]. 
We can briefly present this method as follows. Considering the algebraic function $$f(T)=P\left(\varphi\right)T+Q\left(\varphi\right)\label{fr}\;,$$ the functional variation of the action (\[action\]), with respect to $\varphi$, is given by $$\frac{\delta S}{\delta \varphi}=\frac{e}{16\pi}\left[\frac{dP}{d \varphi}T+\frac{d Q}{d\varphi}\right]=0\label{econst}\;.$$ Solving this equation, we get $\varphi\equiv\varphi(T)$, then, $f(T)=P[\varphi(T)]T+Q[\varphi(T)]$. Hence, we have the following identities $$\begin{aligned} f_{T}(T)&=&P+\left(\frac{dP}{d\varphi}T+\frac{dQ}{d\varphi}\right)\frac{d\varphi}{dT}=P[\varphi(T)]\label{ftr}\;,\\ f_{TT}(T)&=&\frac{dP[\varphi(T)]}{dT}\label{fttr}\;.\end{aligned}$$ Having in hand the equations (\[fr\]), (\[ftr\]) and (\[fttr\]), and substituting into (\[dens\])-(\[prest\]), we get $$\begin{aligned} 4\pi\rho&=&-\frac{e^{-b/2}}{r}\left(e^{-b/2}-1\right)\frac{dP}{dT}+\frac{Q}{4}+\frac{P}{2r^2}\left[1-e^{-b}(1-rb^{\prime})\right]\;,\label{densr}\\ 4\pi p_{r}&=&\frac{P}{2r^2}\left[-1+e^{-b}(1+ra^{\prime})\right]-\frac{Q}{4}\label{presrr}\;,\\ 4\pi p_{t}&=& \frac{e^{-b}}{2}\left(\frac{a^{\prime}}{2}+\frac{1}{r}-\frac{e^{-b/2}}{r}\right)\frac{dP}{dT}+P\frac{e^{-b}}{4}\left[a^{\prime\prime}+\left(\frac{1}{r}+\frac{a^{\prime}}{2}\right)(a^{\prime}-b^{\prime})\right]-\frac{Q}{4}\label{prestr}\;.\end{aligned}$$ Since $\varphi$ is an arbitrary field, the reconstruction can be performed directly choosing $\varphi=r$. This method may be used for re-obtaining or reconstructing, when the inversion $r\equiv r(T)$ is possible, the $f(T)$ theory in the static case. Let us treat two cases here: 1. For the case where $f(T)$ is a linear algebraic function, one has the condition $P\in\Re$, due to (\[fttr\]). 
Considering now the condition (\[cond4\]), and that the radial pressure (\[presrr\]) obeys $$\begin{aligned} p_{r}=g_1 f(T)\label{presr2}\;,\end{aligned}$$ where $g_1\in\Re$, we get $$\begin{aligned} Q&=&-\left(\frac{2}{16\pi g_1 +1}\right)\frac{P}{r^2}-\left(\frac{16\pi g_1}{16\pi g_1+1}\right)PT\,,\\ f(T)&=&\left(\frac{1}{16\pi g_1+1}\right)PT-\left(\frac{2}{16\pi g_1+1}\right)\frac{P}{r^2}\;.\label{fr1}\end{aligned}$$ Differentiating (\[fr1\]) with respect to $T$, equating it to $P$ and integrating, we obtain the relation $$\begin{aligned} r(T)=\left[8\pi g_1(T_0-T)\right]^{-1/2}\;,\label{r1}\end{aligned}$$ where $T_0$ is an integration real constant. Substituting (\[r1\]) in (\[fr1\]), the algebraic function $f(T)$ is then constructed as $$\begin{aligned} f(T)=PT+\left(\frac{-16\pi g_1 PT_0}{16\pi g_1+1}\right)\;.\label{fr2}\end{aligned}$$ We see that $Q$ is also a constant, given by the second term of (\[fr2\]). Substituting (\[te\]) into (\[r1\]), one gets the following solution $$\begin{aligned} dS^{2}=\frac{r_0}{r}dt^2-\left(\frac{16\pi g_1+1}{16\pi g_1}-\frac{T_0}{2}r^2\right)^{-2}dr^2-r^2d\Omega^2\label{sol6}\;.\end{aligned}$$ This is a new traversable wormhole solution. Following the same process of the solution (\[sol3\]), we obtain $\beta(r)=r[1-(k-T_0 r^2/2)^2]$, with $k=(16\pi g_1+1)/16\pi g_1$, and then, the minimum of $r$ is given by $r_1=\sqrt{2k/T_0}$. The condition a), which in this case is given by $d^2 r/dl^2=-T_0 r(k-T_0 r^2/2)$, is satisfied for $T_0<0$ and $-1/16\pi <g_1<0$. The conditions b) and c) are directly satisfied, and d) is also satisfied for $\beta^{\prime}(r_1)=1$. 2. 
Taking the condition (\[cond4\]) and considering that the radial pressure obeys [@stephane1] $$\begin{aligned} 4\pi p_r=g_1f(T)+g_2f_T (T)\label{presr3}\;,\end{aligned}$$ where $g_1,g_2\in\Re$, we get from (\[presrr\]) that $$\begin{aligned} Q=-\frac{4P}{4g_1+1}\left(g_1 T+g_2+\frac{1}{2r^2}\right)\;,\;f(T)=\frac{4P}{4g_1+1}\left(\frac{T}{4}-g_2-\frac{1}{2r^2}\right)\label{fr3}\;.\end{aligned}$$ Differentiating (\[fr3\]) with respect to $T$ and equating it to $P$, we get $$\begin{aligned} g_1 P=\frac{dP}{dT}\left(\frac{T}{4}-g_2-\frac{1}{2r^2}\right)+\frac{P}{r^3}\frac{dr}{dT}\label{p1}\;.\end{aligned}$$ In order to integrate this equation, we consider first $P(r)=r^k$, then, (\[p1\]) becomes $$\begin{aligned} g_1 r=\left[\frac{k}{4}(T-4g_2)+\frac{1}{r^2}\left(1-\frac{k}{2}\right)\right]\frac{dr}{dT}\label{p2}\;,\end{aligned}$$ which is separable only for $k=2$, yielding $$\begin{aligned} r(T)=r_1\left(\frac{T-4g_2}{T_0-4g_2}\right)^{2g_1}\label{r3}\;,\end{aligned}$$ where $T_0$ and $r_1$ are integration real constants. Substituting (\[r3\]) into (\[fr3\]), the algebraic function $f(T)$ is then reconstructed as $$\begin{aligned} f(T)&=&\frac{1}{4g_1+1}\left[r_1^2\frac{(T-4g_2)^{4g_1+1}}{(T_0-4g_2)^{4g_1}}-2\right]\label{fr4}\;.\end{aligned}$$ This algebraic function $f(T)$ becomes a polynomial in particularly simple cases. One can illustrate the case in which $4g_1+1=2$, which leads to $$\begin{aligned} f(T)=a_1 T+\left(a_2 T^2+a_0\right)\label{fr5}\;,\end{aligned}$$ where $a_2=r_1^2/[2(T_0-4g_2)]\,,\,a_1=-4g_2 r_1^2/(T_0-4g_2)$ and $a_0=8g_2^2 r_1^2/(T_0-4g_2)-1$ (which reduces to $-1$ for $g_2=0$). Then, we see that our reconstruction scheme leads to a function which contains contributions of higher orders in the torsion scalar, added to the TT term; in this case of second and zero orders in (\[fr5\]). 
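The reconstruction steps above can be checked symbolically: a toy auxiliary-field example (with hypothetical choices $P(\varphi)=\varphi$ and $Q(\varphi)=-\varphi^2/2$, not taken from the text), the linear case (\[r1\])-(\[fr2\]), and the expansion of (\[fr4\]) for $4g_1+1=2$. A sketch with sympy:

```python
import sympy as sp

T, T0, phi, P0, g1, g2, r1 = sp.symbols('T T_0 varphi P g_1 g_2 r_1', positive=True)

# toy auxiliary field: P = phi, Q = -phi^2/2 (hypothetical illustrative choices)
P, Q = phi, -phi**2 / 2
phi_T = sp.solve(sp.diff(P, phi) * T + sp.diff(Q, phi), phi)[0]   # eq. (econst) gives phi = T
f_toy = (P * T + Q).subs(phi, phi_T)                              # f(T) = T^2/2
assert phi_T == T
assert sp.simplify(sp.diff(f_toy, T) - P.subs(phi, phi_T)) == 0   # f_T = P(phi(T)), eq. (ftr)

# case 1: r(T) of (r1) inserted in (fr1) gives (fr2) and satisfies f_T = P
rT2 = 1 / (8 * sp.pi * g1 * (T0 - T))                             # r(T)^2 from (r1)
f1 = (P0 * T - 2 * P0 / rT2) / (16 * sp.pi * g1 + 1)              # eq. (fr1)
assert sp.simplify(sp.diff(f1, T) - P0) == 0
assert sp.simplify(f1 - (P0 * T - 16 * sp.pi * g1 * P0 * T0 / (16 * sp.pi * g1 + 1))) == 0

# case 2 with 4 g_1 + 1 = 2: expand (fr4) and read off the coefficients of (fr5)
f2 = sp.Rational(1, 2) * (r1**2 * (T - 4 * g2) ** 2 / (T0 - 4 * g2) - 2)
c2, c1, c0 = sp.Poly(sp.expand(f2), T).all_coeffs()
assert sp.simplify(c2 - r1**2 / (2 * (T0 - 4 * g2))) == 0
assert sp.simplify(c1 + 4 * g2 * r1**2 / (T0 - 4 * g2)) == 0
assert sp.simplify(c0 - (8 * g2**2 * r1**2 / (T0 - 4 * g2) - 1)) == 0   # constant term is -1 only when g_2 = 0
```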
Conclusions =========== The particular choice of the representation of a spherically symmetric and static metric of the Weitzenbock spacetime can be made in an arbitrary way when we wish to maintain the invariance under local Lorentz transformations. When we choose a set of diagonal tetrads for a projection in the tangent space to the manifold, the imposition that the function $f(T)$ be linear in $T$, or that the torsion scalar be constant, is recovered. But, as described in this work, choosing a set of non-diagonal tetrads, we can work with an arbitrary algebraic function $f(T)$. Therewith, the second derivative of the function $f(T)$ appears in the contributions to the energy density and the tangential pressure in (\[dens\]) and (\[prest\]). We have shown some examples of new black hole and wormhole solutions. By imposing a vanishing radial pressure and the coordinate conditions $a(r)=-b(r)$ and (\[cond3\]), we illustrated an example of a black hole, in (\[sol1\]), of a cold black hole, in (\[b1\]), and a solution for comparison, under the same conditions, with the diagonal case, in (\[a1\]). We also imposed the radial pressure to be a function only of the radial coordinate, and $f(T)$ to be a polynomial function of second order in $T$. We then obtained, through the coordinate condition (\[cond4\]), a new class of traversable wormhole solutions in (\[sol3\]). Finally, we made a summary of the reconstruction scheme for our static case of $f(T)$ gravity. For the particular case of the algebraic function $f(T)$ being a linear function, we were able to reconstruct it, considering that it is proportional to the radial pressure. This case yields a new traversable wormhole solution. 
Taking into account the condition in which the radial pressure is a linear combination of the algebraic function $f(T)$ and its derivative, we reconstructed $f(T)$ as a polynomial function of the torsion scalar $T$, where we showed the particular case in which $f(T)$ gravity would be the TT plus some terms of second and zero orders in the torsion scalar $T$. M. H. Daouda thanks CNPq/TWAS for financial support. M. E. Rodrigues thanks UFES for the hospitality during the development of this work. M. J. S. Houndjo thanks CNPq/FAPES for financial support. [10]{} R. Weitzenbock, Noordhoff, Groningen; Chap. XIII, Sec 7 (1923). R. Aldrovandi and J. G. Pereira, An Introduction to Teleparallel Gravity, Instituto de Fisica Teorica, UNSEP, Sao Paulo (http://www.ift.unesp.br/gcg/tele.pdf). S. Nojiri and S. D. Odintsov, Phys. Rept. [**505**]{}, 59-144 (2011); S. Nojiri and S.D. Odintsov, ECONF C [**0602061**]{}:06, (2006); Int. J. Geom. Meth. Mod. Phys.[**4**]{}: 115-146, (2007). S. Capozziello and V. Faraoni, Beyond Einstein Gravity, A Survey of Gravitational Theories for Cosmology and Astrophysics, Series: Fundamental Theories of Physics, Vol. 170, Springer, New York (2011). Baojiu Li, T. P. Sotiriou, and J. D. Barrow, Phys. Rev. D [**83**]{}, 064035 (2011); Phys.Rev.D [**83**]{}:104030 (2011). Cemsinan Deliduman and Baris Yapiskan, Absence of Relativistic Stars in f(T) Gravity, arXiv:1103.2225v3 \[gr-qc\]. Rafael Ferraro and Franco Fiorini, Phys.Rev. D [**75**]{}, 084031 (2007). G. Bengochea and R. Ferraro, Phys.Rev. D [**79**]{}:124019 (2009). Ratbay Myrzakulov, Accelerating universe from F(T) gravities, arXiv:1006.1120v1 \[gr-qc\]. Eric V. Linder, Phys.Rev. D [**81**]{}:127301 (2010). Baojiu Li, Thomas P. Sotiriou, John D. Barrow, Phys.Rev.D [**83**]{}:104017 (2011); Shih-Hung Chen, J. B. Dent, S. Dutta and E. N. Saridakis, Phys.Rev.D [**83**]{}:023508 (2011). Rafael Ferraro and Franco Fiorini, Phys.Rev. D [**78**]{}:124019 (2008). Tower Wang, Phys.Rev. 
D [**84**]{}:024042 (2011). M. Hamani Daouda, Manuel E. Rodrigues and M. J. S. Houndjo, Eur.Phys.J. C [**71**]{}: 1817 (2011); arXiv:1108.2920v4 \[astro-ph.CO\]. M. Hamani Daouda, Manuel E. Rodrigues and M. J. S. Houndjo, Static Anisotropic Solutions in f(T) Theory, Eur. Phys. J. C [**72**]{} (2012) 1890; arXiv:1109.0528v3 \[physics.gen-ph\]. P. S. Florides, A new interior Schwarzschild solution, Proc. R. Soc. Lond. A [**337**]{}, 529-535 (1974). R. Aldrovandi, J. G. Pereira and K. H. Vu, Brazilian Journal of Physics, vol. [**34**]{}, no. 4A, December (2004). H. I. Arcos and J. G. Pereira, Int.J.Mod.Phys. D[**13**]{}: 2193-2240 (2004). Christian G. Boehmer and Francisco S. N. Lobo, Int.J.Mod.Phys. D [**17**]{}:897-910 (2008). S. Nojiri, S. D. Odintsov, Phys. Rev. D [**74**]{}: 086005 (2006). Jie Yang, Yun-Liang Li, Yuan Zhong and Yang Li, arXiv:1202.0129v1 \[hep-th\]; K. Karami and A. Abdolmaleki, arXiv:1201.2511v1 \[gr-qc\]; K. Atazadeh and F. Darabi, arXiv:1112.2824v1 \[physics.gen-ph\]; Hao Wei, Xiao-Jiao Guo and Long-Fei Wang, Phys.Lett.B [**707**]{}:298-304 (2012); K. Karami, A. Abdolmaleki, arXiv:1111.7269v1 \[gr-qc\]; P.A. Gonzalez, Emmanuel N. Saridakis and Yerko Vasquez, arXiv:1110.4024v1 \[gr-qc\]; S. Capozziello, V. F. Cardone, H. Farajollahi and A. Ravanpak, Phys.Rev.D [**84**]{}:043527 (2011); Rong-Xin Miao, Miao Li and Yan-Gang Miao, arXiv:1107.0515v3 \[hep-th\]; Xin-he Meng and Ying-bin Wang, Eur.Phys.J. C [**71**]{}: 1755 (2011); Hao Wei, Xiao-Peng Ma and Hao-Yu Qi, Phys.Lett.B [**703**]{}:74-80 (2011); Miao Li, Rong-Xin Miao and Yan-Gang Miao, JHEP [**1107**]{}:108 (2011); Surajit Chattopadhyay and Ujjal Debnath, Int.J.Mod.Phys.D [**20**]{}:1135-1152 (2011); Piyali Bagchi Khatua, Shuvendu Chakraborty and Ujjal Debnath, arXiv:1105.3393v1 \[physics.gen-ph\]; Yi-Fu Cai, Shih-Hung Chen, James B. Dent, Sourish Dutta and Emmanuel N. Saridakis, Class. Quantum Grav. [**28**]{}: 215011 (2011); Rong-Jia Yang, Europhys.Lett. 
[**93**]{}:60001 (2011); Christian G. Boehmer, Atifah Mussa and Nicola Tamanini, Class.Quant.Grav. [**28**]{}: 245020 (2011). J.W. Maluf and S.C. Ulhoa, Gen.Rel.Grav. 41 (2009) 1233-1247; arXiv:0810.1934 \[gr-qc\]. J.W. Maluf, F.F. Faria and S.C. Ulhoa, Class.Quant.Grav. [**24**]{}: 2743-2754 (2007); arXiv:0704.0986 \[gr-qc\]; J.W. Maluf, S. C. Ulhoa and J. F. da Rocha-Neto, Phys. Rev. D [**85**]{}: 044050 (2012). [^1]: E-mail address: [email protected] [^2]: E-mail address: [email protected] [^3]: E-mail address: [email protected] [^4]: Solutions that possess identically null Hawking temperature.
--- abstract: 'Homology of the circle with non-trivial local coefficients is trivial. From this well-known fact we deduce geometric corollaries concerning links of codimension two. In particular, the Murasugi-Tristram signatures are extended to invariants of links formed of arbitrary oriented closed codimension two submanifolds of an odd-dimensional sphere. The novelty is that the submanifolds are not assumed to be disjoint, but are transversal to each other, and the signatures are parametrized by points of the whole torus. Murasugi-Tristram inequalities and their generalizations are also extended to this setup.' address: 'Mathematics Department, Stony Brook University, Stony Brook NY 11794-3651, USA; Mathematical Institute, Fontanka 27, St. Petersburg, 191023, Russia.' author: - OLEG VIRO title: | Twisted acyclicity of a circle\ and link signatures --- Introduction {#s1} ============ The goal of this paper is to simplify and generalize a part of classical link theory based on various signatures of links (defined by Trotter [@Trot], Murasugi [@Mura1], [@Mura2], Tristram [@Trist], Levine [@Levine1], [@Levine2], Smolinsky [@Smolinsky], Florens [@Florens1] and Cimasoni and Florens [@CimaFlor]). This part is known for its relations to the topology of 4-dimensional manifolds, see [@Trist], [@Viro1], [@Viro2], [@Gilmer], [@KaufTayl], and for applications in the topology of real algebraic curves [@Orevkov1], [@Orevkov2] and [@Florens1]. Similarity of the signatures to the new invariants [@Rasm], [@OzsSz1], which were defined in the new frameworks of link homology theories and had spectacular applications [@Rasm], [@Livingst], [@Shum] to problems on classical link cobordisms, gives a new reason to revisit the old theory. There are two ways to introduce the signatures: the original 3-dimensional one, via the Seifert surface and Seifert form, and the 4-dimensional one, via the intersection forms of cyclic coverings of the 4-ball branched over surfaces. 
I believe this paper clearly demonstrates the advantages of the latter, 4-dimensional approach, which provides more conceptual definitions that easily work in situations hardly accessible to the Seifert form approach. In the generalization considered here the classical links are replaced by collections of oriented submanifolds of codimension two, transversal to each other. Technically the work is based on a systematic use of twisted homology and the intersection forms in the twisted homology. Only the simplest kind of twisted homology is used, the one with coefficients in $\C$; see the Appendix. Twisted acyclicity of a circle {#s1.2} ------------------------------ A key property of twisted homology, which makes the whole story possible, is the following well-known fact, which I call [twisted acyclicity of a circle]{}: [*Twisted homology of a circle with coefficients in $\C$ and non-trivial monodromy vanishes.*]{} This implies that the twisted homology of this kind completely ignores parts of the space formed by circles along which the monodromy of the coefficient system is non-trivial (for a precise and detailed formulation see Section \[sT.2\]). How the acyclicity works {#s1.3} ------------------------ In particular, twisted acyclicity of a circle implies that the complement of a tubular neighborhood of a link looks like a closed manifold, because the boundary, being fibered into circles, is invisible to the twisted homology. Moreover, the same holds true for a collection of pairwise transversal generically immersed closed manifolds of codimension 2 in an arbitrary closed manifold, provided the monodromy around each manifold is non-trivial. The twisted homology does not feel the intersection of the submanifolds as a singularity. The complement of a cobordism between such immersed links looks (again, from the point of view of twisted homology) like a compact cobordism between closed manifolds. 
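Behind the italicized statement is a one-line computation (a worked example added here for illustration; it is not part of the original text). Give $S^1$ the cell structure with one $0$-cell and one $1$-cell, and let $\C_\zeta$ be the local coefficient system with monodromy $\zeta\in\C^\times$. The twisted cellular chain complex is

```latex
0 \longrightarrow \C \xrightarrow{\ \zeta-1\ } \C \longrightarrow 0 ,
```

so for $\zeta\ne1$ the boundary operator is an isomorphism and $H_0(S^1;\C_\zeta)=H_1(S^1;\C_\zeta)=0$, while for $\zeta=1$ one recovers the ordinary homology of the circle.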
This, together with classical results about signatures of manifolds and relations between twisted homology and homology with constant coefficients, allows us to deal with a link of codimension two as if it were a single closed manifold. Organization of the paper {#s1.4} ------------------------- I cannot assume that twisted homology is well known to the reader, and so I review the material related to it. Of course, the material on non-twisted homology is not reviewed. The review is limited to a very special twisted homology, the one with complex coefficients. More general twisted homology is not needed here. The review is postponed to the appendices. The reader somewhat familiar with twisted homology may visit the appendices when needed. The experts are invited to look through the appendices, too. We begin in Section \[s2\] with a detailed exposition restricted to the classical links. Section \[s3\] is devoted to the higher dimensional generalization, including motivation for our choice of the objects. Section \[s4\] is devoted to [*span inequalities*]{}, that is, restrictions on homology of submanifolds of the ball, which span a given link contained in the boundary of the ball. Section \[s5\] is devoted to [*slice inequalities*]{}, which are restrictions on homology of a link with given transversal intersection with a sphere of codimension one. In the classical dimension {#s2} ========================== Classical knots and links. {#s2.1} -------------------------- Recall that a [classical knot]{} is a smooth simple closed curve in the 3-sphere $S^3$. This is how one usually defines classical knots. However, it is not the curve per se that is really considered in the classical knot theory, but rather its placement in $S^3$. Classical knots incarnate the idea of knottedness: both the curve and $S^3$ are topologically standard, but the position of the curve in $S^3$ may be arbitrarily complicated topologically. 
Therefore a classical knot is rather a pair $(S^3,K)$, where $K$ is a smooth submanifold of $S^3$ diffeomorphic to $S^1$. A [classical link]{} is a pair $(S^3,L)$, where $L$ is a smooth closed one-dimensional submanifold of $S^3$. If $L$ is connected, then this is a knot. Twisted homology of a classical link exterior {#s2.2} --------------------------------------------- An [exterior]{} of a classical link $(S^3,L)$ is the complement of an open tubular neighborhood of $L$. This is a compact 3-manifold with boundary. The boundary is the boundary of the tubular neighborhood of $L$. Hence, this is the total space of a locally trivial fibration over $L$ with fiber $S^1$. An exterior $X(L)$ is a deformation retract of the complement $S^3\sminus L$. It is a convenient replacement for $S^3\sminus L$: $\Int X(L)$ is homeomorphic to $S^3\sminus L$, but $X(L)$ is a compact manifold and has a nice boundary. If $L$ consists of $m$ connected components, $L=K_1\cup\dots\cup K_m$, then by the Alexander duality $H_0(X(L))=\Z$, $H_1(X(L))=\Z^m$, $H_2(X(L))=\Z^{m-1}$ and $H_i(X(L))=0$ for $i\ne 0,1,2$. The group $H_1(X(L))$ is dual to $H_1(L)$ with respect to the Alexander linking pairing $H_1(L)\times H_1(X(L))\to\Z$. Hence a basis of $H_1(L)$ defines a dual basis in $H_1(X(L))$. An orientation of $L$ determines a basis $[K_1]$, …, $[K_m]$ of $H_1(L)$, and the dual basis of $H_1(X(L))$, which is realized by meridians $M_1$, …, $M_m$ positively linked to $K_1$, …, $K_m$, respectively. (The meridians are fibers of a tubular fibration $\p X(L)\to L$ over points chosen on the corresponding components.) Therefore, if $L$ is oriented, then a local coefficient system on $X(L)$ with fiber $\C$ is defined by an $m$-tuple of complex numbers $(\Gz_1,\dots,\Gz_m)$, the images under the monodromy homomorphism $H_1(X(L))\to\C^\times$ of the generators $[M_1]$, …, $[M_m]$ of $H_1(X(L))$. 
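For instance (a worked example added for illustration), for the Hopf link $L=K_1\cup K_2$ the exterior $X(L)$ is diffeomorphic to $T^2\times I$, and the formulas above with $m=2$ give

```latex
H_0(X(L))=\Z,\qquad H_1(X(L))=\Z[M_1]\oplus\Z[M_2]=\Z^2,\qquad H_2(X(L))=\Z .
```

A local coefficient system with fiber $\C$ on this exterior is then determined by the pair $(\Gz_1,\Gz_2)=(\mu[M_1],\mu[M_2])\in(\C^\times)^2$.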
Thus for an oriented classical link $L$ consisting of $m$ connected components, local coefficient systems on $X(L)$ with fiber $\C$ are parametrized by $(\C^\times)^m$. Link signatures {#s2.3} --------------- Let $L=K_1\cup\dots\cup K_m\subset S^3$ be a classical link, $\Gz_i\in\C$, $|\Gz_i|=1$, $\Gz=(\Gz_1,\dots,\Gz_m)\in (S^1)^m$ and let $\mu:H_1(S^3\sminus L)\to\C^\times$ take a meridian of $K_i$ positively linked with $K_i$ to $\Gz_i$. Let $F_1,\dots,F_m\subset D^4$ be smooth oriented surfaces transversal to each other with $\p F_i=F_i\cap\p D^4=K_i$. Extend the tubular neighborhood of $L$ involved in the construction of $X(L)$ to a collection of tubular neighborhoods $N_1$, …, $N_m$ of $F_1$, …, $F_m$, respectively. Without loss of generality we may choose $N_i$ in such a way that they intersect each other in the simplest way. Namely, each connected component $B$ of $N_i\cap N_j$ contains only one point of $F_i\cap F_j$ and no point of the other $F_k$, and consists of entire fibers of $N_i$ and $N_j$, so that the fibers define a structure of a bi-disk $D^2\times D^2$ on $B$. To achieve this, one has to make the fibers of the tubular fibration $N_i\to F_i$ at each intersection point of $F_i$ and $F_j$ coincide with a disk in $F_j$ and then diminish all $N_i$ appropriately. Now let us extend $X(L)$ to $X(F)=D^4\sminus\cup_{i=1}^m\Int N_i$. This is a compact 4-manifold. Its boundary contains $X(L)$; the rest of it is a union of pieces of the boundaries of $N_i$ with $i=1,\dots, m$. These pieces are fibered over the corresponding pieces of $F_i$ with fiber $S^1$. By the Alexander duality, the orientation of $F_i$ gives rise to a homomorphism $H_1(X(F))\to\Z$ that maps a homology class to its linking number with $F_i$. These homomorphisms altogether determine a homomorphism $H_1(X(F))\to \Z^m$. 
For any $\Gz=(\Gz_1,\dots,\Gz_m)$, the composition of this homomorphism with the homomorphism $$\Z^m\to\C^\times: (n_1,\dots,n_m)\mapsto\Gz_1^{n_1}\cdots\Gz_m^{n_m}$$ is a homomorphism $H_1(X(F))\to\C^\times$ extending $\mu$. If each $F_i$ has no closed connected components, then this extension is unique. Let us denote it by $\overline\mu$. According to \[sT.4.6\], in $H_2(X(F);\C_{\overline\mu})$ there is a Hermitian intersection form. Denote its signature by $\Gs_{\Gz}(L)$. $\Gs_{\Gz}(L)$ does not depend on $F_1,\dots,F_m$. Any $F'_i$ with $\p F'_i=F'_i\cap\p D^4=K_i$ is cobordant to $F_i$. The cobordisms $W_i\subset D^4\times I$ can be made pairwise transversal. They define a cobordism $D^4\times I\sminus\cup_i\Int N(W_i)$ between $X(F)$ and $X(F')$. By Theorem \[VanSign\], $$\Gs_\Gz(\p D^4\times I\sminus\cup_i\Int N(W_i))=0.$$ The manifold $\p D^4\times I\sminus\cup_i\Int N(W_i)$ is the union of $X(F)$, $-X(F')$ and a [*homologically negligible*]{} part $\p (N(\cup_i\Int W_i))$, the boundary of a regular neighborhood of the cobordism $\cup_iW_i$ between $\cup_iF_i$ and $\cup_iF'_i$. By Theorem \[AddOfSign\], $$\Gs_\Gz(\p D^4\times I\sminus\cup_i\Int N(W_i))=\Gs_\Gz(D^4\sminus\cup_iF_i)-\Gs_\Gz(D^4\sminus\cup_iF'_i).$$ Hence, $\Gs_\Gz(D^4\sminus\cup_iF_i)=\Gs_\Gz(D^4\sminus\cup_iF'_i)$. Colored links {#s2.4} ------------- In the definition of the signature $\Gs_{\Gz}(L)$ above one needs to enumerate the components $K_i$ of $L$ to associate to each of them the corresponding component $\Gz_i$ of $\Gz$, but there is no need to require connectedness of each $K_i$. This leads to a notion of colored link. An [*$m$-colored link*]{} $L$ is an oriented link in $S^3$ together with a map (called [*coloring*]{}) assigning to each connected component of $L$ a color in $\{1,\dots, m\}$. The sublink $L_i$ is constituted by the components of $L$ with color $i$ for $i=1,\dots, m$. 
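Once a basis of $H_2(X(F);\C_{\overline\mu})$ is chosen and the intersection form is presented by a Hermitian matrix, extracting the signature is elementary linear algebra. The sketch below (plain Python, added for illustration and not part of the paper; the helper name `hermitian_signature` is mine) shows only this last numerical step, assuming all leading principal minors are nonzero and using Jacobi's theorem on the signs of successive pivots; producing the matrix itself requires the topology described above.

```python
def hermitian_signature(a):
    """Signature p - q of a Hermitian matrix `a` (a list of rows of complex
    numbers) all of whose leading principal minors D_1, ..., D_n are nonzero.
    By Jacobi's theorem the k-th diagonal entry of a diagonalizing congruent
    form has the sign of D_k / D_{k-1}."""
    n = len(a)

    def minor_det(k):
        # Determinant of the leading k x k block, by Gaussian elimination
        # with partial pivoting (row swaps flip the sign of the determinant).
        m = [[complex(a[i][j]) for j in range(k)] for i in range(k)]
        det = 1 + 0j
        for c in range(k):
            p = max(range(c, k), key=lambda r: abs(m[r][c]))
            if abs(m[p][c]) < 1e-12:
                raise ValueError("vanishing leading principal minor")
            if p != c:
                m[c], m[p] = m[p], m[c]
                det = -det
            det *= m[c][c]
            for r in range(c + 1, k):
                f = m[r][c] / m[c][c]
                for j in range(c, k):
                    m[r][j] -= f * m[c][j]
        return det.real  # leading minors of a Hermitian matrix are real

    sig, prev = 0, 1.0
    for k in range(1, n + 1):
        d = minor_det(k)
        sig += 1 if d / prev > 0 else -1
        prev = d
    return sig
```

For example, `[[2, 1], [1, -1]]` has one positive and one negative eigenvalue, so its signature is $0$, while the Hermitian matrix `[[2, 1j], [-1j, 2]]` is positive definite and has signature $2$.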
For an $m$-colored link $L=L_1\cup\dots\cup L_m$ and $\Gz=(\Gz_1,\dots,\Gz_m)\in (S^1)^m$, the signature $\Gs_\Gz(L)$ is defined as above, but each component $K_j$ colored with color $i$ is associated to $\Gz_i$. Relations to other link signatures {#s2.5} ---------------------------------- If $\Gz_i=-1$ for all $i=1,\dots,m$, then the signature $\Gs_{\Gz}(L)$ coincides with the Murasugi signature $\xi(L)$ introduced in [@Mura2]. If all $\Gz_i$ are roots of unity of degree a power of a prime number, and all linking numbers $\lk(L_i,L_j)$ vanish, then $\Gs_{\Gz}(L)$ coincides with the signature defined by Florens [@Florens1]. In the most general case, $\Gs_{\Gz}(L)$ coincides with the signature defined for arbitrary $\Gz$ by Cimasoni and Florens [@CimaFlor] using a 3-dimensional approach, with a version of a Seifert surface, the $C$-complex. In higher dimensions {#s3} ==================== Apology for the generalization of higher dimensional links {#s3.1} ---------------------------------------------------------- There is a spectrum of objects considered as generalizations of classical knots and links. The closest generalizations of classical knots are pairs $(S^n,K)$, where $K$ is a smooth submanifold diffeomorphic to $S^{n-2}$. Then the requirements on $K$ are weakened. Say, one may require $K$ to be only homeomorphic to $S^{n-2}$, not diffeomorphic. Or just a homology sphere of dimension $n-2$. The codimension is important in order to keep any resemblance to classical knots. In the same spirit, for the closest higher-dimensional counterpart of classical links one takes a pair consisting of $S^n$ and a collection of its disjoint smooth submanifolds diffeomorphic to $S^{n-2}$. One allows the restrictions on the submanifolds to be weakened, up to arbitrary closed submanifolds. [I suggest allowing transversal intersections of the submanifolds.]{} Of course, the main excuse for this is that some results can be extended to this setup. Here are a couple of other reasons. 
First, in the classical dimension, it is easy for submanifolds to be disjoint. Generically, curves in the 3-sphere are disjoint. If they intersect, it is a miracle or, rather, has a special cause. Generic submanifolds of codimension two in a manifold of dimension $>3$ intersect. If they do [not]{} intersect, this is a miracle, or a consequence of a special cause. Second, classical links emerge naturally as links of singular points of complex algebraic curves in $\C^2$. Recall that for an algebraic curve $C\subset\C^2$ and a point $p\in C$, the pair $(\p B,\p B\cap C)$, where $B$ is a sufficiently small ball centered at $p$, is well-defined up to diffeomorphism, and it is called the [link of $C$ at $p$]{}. An obvious generalization of this definition to an algebraic hypersurface $C\subset\C^n$ gives rise to a pair $(S^{2n-1},K)$ with connected $K$. It cannot be a union of [*disjoint*]{} submanifolds of $S^{2n-1}$. It would not be difficult to extend the results of this paper to a more general setup. For example, one can replace the ambient sphere with a homology sphere, or an even more general manifold. However, one should stop somewhere. The author prefers this early point, because the level of generality accepted here suffices for demonstrating the new opportunities opened by a systematic use of twisted homology. On the other hand, further generalizations can make formulations more cumbersome. Colored links {#s3.2} ------------- By an [*$m$-colored link of dimension*]{} $n$ we shall mean a collection of $m$ oriented smooth closed $n$-dimensional submanifolds $L_1$, …, $L_m$ of the sphere $S^{n+2}$ such that any sub-collection has transversal intersection. The latter means that for any $x\in L_{i_1}\cap\dots\cap L_{i_k}$ the tangent spaces $T_xL_{i_1}$, …, $T_xL_{i_k}$ are transverse, that is, $\dim(T_xL_{i_1}\cap\dots\cap T_xL_{i_k})=n+2-2k$. 
Generic configurations of submanifolds {#s3.3} -------------------------------------- More generally, an $m$-colored configuration of transversal submanifolds in a smooth manifold $M$ is a family of $m$ smooth submanifolds $L_1$, …, $L_m$ of $M$ such that any sub-collection has transversal intersection. If $M$ has a boundary, the submanifolds are assumed to be transversal to the boundary, as well as the intersection of any sub-collection. Furthermore, assume that $\p M\cap L_i=\p L_i$ for any $i=1,\dots,m$. As above, in Section \[s2.3\], for any $m$-colored configuration $L$ of transversal submanifolds $L_1$, …, $L_m$ in $M$ one can find a collection of their tubular neighborhoods $N_1$, …, $N_m$ which agree with each other in the sense that for any sub-collection $L_{i_1}$, …, $L_{i_\nu}$ the intersection of the corresponding neighborhoods $N_{i_1}\cap\dots\cap N_{i_\nu}$ is a neighborhood of the intersection $L_{i_1}\cap\dots\cap L_{i_\nu}$ fibered over this intersection with the corresponding poly-disk fiber. Denote the complement $M\sminus\cup_{i=1}^m\Int N_i$ by $X(L)$ and call it an [exterior]{} of $L$. This is a smooth manifold with a system of corners on the boundary. The differential type of the exterior does not depend on the choice of neighborhoods. Moreover, one can eliminate from the definition both the choice of neighborhoods and their deletion. Instead, one can make a sort of real blowing up of $M$ along $L_1$, …, $L_m$. However, for the purposes of this paper it is easier to stay with the choices. Link signatures {#s3.4} --------------- Let $L=L_1\cup\dots\cup L_m$ be an $m$-colored link of dimension $2n-1$ in $S^{2n+1}$. As is well known (see, e.g., [@Levine1]), for each oriented closed codimension 2 submanifold $K$ of $S^{2n+1}$ there exists an oriented smooth compact submanifold $F$ of $D^{2n+2}$ such that $\p F=K$. 
Choose for each $L_i$ such a submanifold of $D^{2n+2}$, denote it by $F_i$, and make all the $F_i$ transversal to each other by small perturbations. As a union of $m$-colored transversal submanifolds of $D^{2n+2}$, $F= F_1\cup \dots\cup F_m$ has an exterior $X(F)$. By the Alexander duality, $H^1(X(F);\C^\times)$ is naturally isomorphic to $H_{2n}(F,L;\C^\times)$. Let $\Gz=(\Gz_1,\dots,\Gz_m)\in(S^1)^m$. Take $\sum_{i=1}^m\Gz_i[F_i]\in H_{2n}(F,L;\C^\times)$ and denote by $\mu$ the Alexander dual cohomology class considered as a homomorphism $H_1(X(F))\to\C^\times$. Denote by $\C_\mu$ the local coefficient system on $X(F)$ corresponding to $\mu$. According to \[sT.4.6\], in $H_{n+1}(X(F);\C_\mu)$ there is an intersection form, which is Hermitian if $n$ is odd, and skew-Hermitian if $n$ is even. Denote its signature by $\Gs_\Gz(L)$. $\Gs_{\Gz}(L)$ does not depend on $F_1,\dots,F_m$. Any $F'_i$ with $\p F'_i=F'_i\cap\p D^{2n+2}=L_i$ is cobordant to $F_i$. The cobordisms $W_i\subset D^{2n+2}\times I$ can be made pairwise transversal to form an $m$-colored configuration $W$ of transversal submanifolds of $D^{2n+2}\times I$. They define a cobordism $X(W)$ between $X(F)$ and $X(F')$. By Theorem \[VanSign\], $$\Gs_\Gz(\p X(W))=0.$$ The manifold $\p X(W)=\p D^{2n+2}\times I\sminus\cup_i\Int N(W_i)$ is the union of $X(F)$, $-X(F')$ and a [*homologically negligible*]{} part $\p (N(\cup_i\Int W_i))$, the boundary of a regular neighborhood of the cobordism $\cup_iW_i$ between $F$ and $F'$. By Theorem \[AddOfSign\], $$\Gs_\Gz(\p X(W))=\Gs_\Gz(X(F))-\Gs_\Gz(X(F')).$$ Hence, $\Gs_\Gz(X(F))=\Gs_\Gz(X(F'))$. Span inequalities {#s4} ================= Let $L=L_1\cup\dots\cup L_m$ be an $m$-colored link of dimension $2n-1$ in $S^{2n+1}$. Let $F=F_1\cup\dots\cup F_m$ be an $m$-colored configuration of transversal oriented compact $2n$-dimensional submanifolds of $D^{2n+2}$ with $\p F_i=F_i\cap\p D^{2n+2}=L_i$. 
In this section we consider restrictions on homological characteristics of $F$ in terms of invariants of $L$. History {#s4.1} ------- The first restrictions of this sort were found by Murasugi [@Mura1] and Tristram [@Trist] for classical (1-colored) links. To $m$-colored classical links and pairwise disjoint surfaces $F_i$ the Murasugi-Tristram inequalities were generalized by Florens [@Florens1]. A further generalization to $m$-colored classical links and intersecting $F_i$ was found by Cimasoni and Florens [@CimaFlor]. Higher dimensional generalizations for $1$-colored links were found by the author [@Viro2], [@Viro3]. No-nullity span inequalities {#s4.2} ---------------------------- The most general results in this direction are quite cumbersome. Therefore, let me start with weak but simple ones. Recall that $\Gs_\Gz(L)$ can be obtained from $F$: for an appropriate local coefficient system $\C_\mu$ on $X(F)$, this is the signature of a Hermitian intersection form defined in $H_{n+1}(X(F);\C_\mu)$. The absolute value of the signature of a Hermitian form cannot exceed the dimension of the underlying space. In particular, $$\label{ineq:S=<D} |\Gs_\Gz(L)|\le\dim_\C H_{n+1}(X(F);\C_\mu).$$ This can be considered as a restriction on a homological characteristic of $F$ in terms of invariants of $L$. However, $\dim_\C H_{n+1}(X(F);\C_\mu)$ is not a convenient characteristic of $F$. It can be estimated in terms of more convenient ones. Let $\Gz=(\Gz_1,\dots,\Gz_m)\in (S^1)^m$. Let $p_1,\dots,p_k\in\Z[t_1,t_1^{-1},\dots,t_m,t_m^{-1}]$ be generators of the ideal of relations satisfied by the complex numbers $\Gz_i$. Let $d$ be the greatest common divisor of the integers $p_1(1,\dots,1)$, …, $p_k(1,\dots,1)$, if at least one of these integers does not vanish, and zero otherwise. Cf. \[sT.3.6\]. 
Let $$P=\begin{cases} \Z/p\Z, &\text{ if }d>1\text{ and }p\text{ is a prime divisor of }d\\ \Q, &\text{ if }d=0 \end{cases}$$ By \[EstTwHom\], $$\dim_\C H_{n+1}(X(F);\C_\mu)\le \dim_{P} H_{n+1}(X(F);P).$$ The advantage of passing to homology with non-twisted coefficients is that we can use the Alexander duality: $$\begin{gathered} H_{n+1}(X(F);P)=H_{n+1}(D^{2n+2}\sminus F;P)\\= H^{n+1}(D^{2n+2},\p D^{2n+2}\cup F;P)\\= H^{n}(\p D^{2n+2}\cup F;P)= H^n(F,L;P).\end{gathered}$$ Hence, $$|\Gs_\Gz(L)|\le \dim_{P}H_n(F,L;P).$$ General span inequalities {#s4.3} ------------------------- The inequality can be improved. Indeed, the manifold $X(F)$ has a non-empty boundary. Therefore, its intersection form may be degenerate and the right hand side of (\[ineq:S=<D\]) may be replaced by a smaller quantity, the rank of the form. The rank is known to be the rank of the homomorphism $H_{n+1}(X(F);\C_\mu)\to H_{n+1}(X(F),\p X(F);\C_\mu)$. Let us estimate this rank. \[Lemma1Th7\] For any exact sequence $\dots\overset{\rho_{k+1}}\to C_k\overset{\rho_k}\to C_{k-1}\overset{\rho_{k-1}}\to\dots$ of vector spaces and any integers $n$ and $r$ $$\label{eqL1Th7} \rnk(\rho_{n+1})+\rnk(\rho_{n-2r}) =\sum_{s=0}^{2r}(-1)^s\dim C_{n-s}$$ The Euler characteristic of the exact sequence $$0\to\Im\rho_{n+1}\hookrightarrow C_n\overset{\rho_{n}}\to C_{n-1}\to\dots\overset{\rho_{n-2r+1}}\to C_{n-2r}\to\Im\rho_{n-2r}\to0$$ is the difference between the left and right hand sides of (\[eqL1Th7\]). On the other hand, it vanishes, as the Euler characteristic of an exact sequence. \[Lemma2Th7\] Let $X$ be a topological space, $A$ its subspace, $\xi$ a local coefficient system on $X$ with fiber $\C$. 
Then for any natural $n$ and $r\le\frac{n}2$ $$\begin{gathered} \rnk(H_{n+1}(X;\xi)\to H_{n+1}(X,A;\xi))+ \rnk(H_{n-2r}(X;\xi)\to H_{n-2r}(X,A;\xi))\\ =\sum_{s=0}^{2r}(-1)^sb_{n+1-s}(X,A) -\sum_{s=0}^{2r}(-1)^sb_{n-s}(A) +\sum_{s=0}^{2r}(-1)^sb_{n-s}(X)\end{gathered}$$ where $b_k(*)=\dim_\C H_k(*;\xi)$. Apply Lemma \[Lemma1Th7\] to the homology sequence of the pair $(X,A)$ with coefficients in $\xi$. \[Th7\] For any integer $r$ with $0\le r\le\frac{n}2$\ $$\begin{gathered} \label{ineq:Span} |\Gs_{\Gz}(L)|+\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n-s}(S^{2n+1}\sminus L;\C_\Gz) \\ \le \sum_{s=0}^{2r}(-1)^s\dim H_{n+1+s}(F,L;P) +\sum_{s=0}^{2r}(-1)^s\dim H_{n+s}(F;P)\end{gathered}$$ $$\begin{gathered} \label{ineq:Span2} |\Gs_{\Gz}(L)|+\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1+s}(S^{2n+1}\sminus L;\C_\Gz) \\ \le \sum_{s=0}^{2r}(-1)^s\dim H_{n-s}(F,L;P) +\sum_{s=0}^{2r}(-1)^s\dim H_{n-s-1}(F;P)\end{gathered}$$ where $\Gz$ and $P$ are as in Section \[s4.2\]. As mentioned above, $$\label{eq1PfTh7} |\Gs_\Gz(L)|\le \rnk(H_{n+1}(X(F);\C_\mu)\to H_{n+1}(X(F),\p X(F);\C_\mu)).$$ By Lemma \[Lemma2Th7\], $$\begin{gathered} \label{eq2PfTh7} \rnk(H_{n+1}(X(F);\C_\mu)\to H_{n+1}(X(F),\p X(F);\C_\mu))\\ \le \sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1-s}(X(F),X(L);\C_\Gz) -\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n-s}(X(L);\C_\Gz)\\ +\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n-s}(X(F);\C_\Gz).\end{gathered}$$ Summing up these inequalities and moving one of the sums from the right hand side to the left, we obtain: $$\begin{gathered} \label{eq3PfTh7} |\Gs_\Gz(L)|+\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n-s}(X(L);\C_\Gz)\\ \le \sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1-s}(X(F),X(L);\C_\Gz) +\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n-s}(X(F);\C_\Gz).\end{gathered}$$ The left hand side of (\[eq3PfTh7\]) coincides with the left hand side of (\[ineq:Span\]). 
The right hand side of (\[eq3PfTh7\]) can be estimated using Theorem \[EstTwHom\]: $$\begin{gathered} \label{eq4PfTh7} \sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1-s}(X(F),X(L);\C_\Gz) +\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n-s}(X(F);\C_\Gz)\\ \le\sum_{s=0}^{2r}(-1)^s\dim_P H_{n+1-s}(X(F),X(L);P) +\sum_{s=0}^{2r}(-1)^s\dim_P H_{n-s}(X(F);P).\end{gathered}$$ Further, $$H_{n+1-s}(X(F),X(L);P)=H_{n+1-s}(D^{2n+2}\sminus F,S^{2n+1}\sminus L;P).$$ By the Alexander duality, $$H_{n+1-s}(D^{2n+2}\sminus F,S^{2n+1}\sminus L;P) = H^{n+1+s}(D^{2n+2},F;P).$$ By exactness of the pair sequence, $H^{n+1+s}(D^{2n+2},F;P)=H^{n+s}(F;P)$. Similarly, $$\begin{gathered} H_{n-s}(X(F);P)=H_{n-s}(D^{2n+2}\sminus F;P)\\ =H^{n+2+s}(D^{2n+2},F\cup S^{2n+1};P)\\ =H^{n+1+s}(S^{2n+1}\cup F;P)=H^{n+1+s}(F,L;P)\end{gathered}$$ The last equality in this sequence holds true if $n+1+s<2n+1$, that is, $s<n$. Since $P$ is a field, $$\begin{aligned} \dim_P H^{n+s}(F;P)=&\dim_P H_{n+s}(F;P),\label{eq5PfTh7} \\ \dim_P H^{n+1+s}(F,L;P)=&\dim_P H_{n+1+s}(F,L;P)\label{eq6PfTh7}.\end{aligned}$$ Combining formulas (\[eq3PfTh7\]) and (\[eq4PfTh7\]) with the calculations above and equalities (\[eq5PfTh7\]) and (\[eq6PfTh7\]), we obtain the first desired inequalities (\[ineq:Span\]). The inequalities (\[ineq:Span2\]) are proved similarly. Namely, by Lemma \[Lemma2Th7\] $$\begin{gathered} \label{eq7PfTh7} \rnk(H_{n+1}(X(F);\C_\mu)\to H_{n+1}(X(F),\p X(F);\C_\mu))\\ \le \sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+2+s}(X(F),X(L);\C_\Gz) -\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1+s}(X(L);\C_\Gz)\\ +\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1+s}(X(F);\C_\Gz).\end{gathered}$$ Summing up inequalities (\[eq1PfTh7\]) and (\[eq7PfTh7\]) and moving one of the sums from the right hand side to the left, we obtain: $$\begin{gathered} \label{eq8PfTh7} |\Gs_\Gz(L)|+\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1+s}(X(L);\C_\Gz)\\ \le \sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+2+s}(X(F),X(L);\C_\Gz) +\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1+s}(X(F);\C_\Gz).\end{gathered}$$ After this the same estimates and transformations as in the proof of (\[ineq:Span\]) give rise to (\[ineq:Span2\]). 
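The counting in Lemma \[Lemma1Th7\] is easy to check on explicit exact sequences. The sketch below (plain Python, added for illustration only and not part of the paper) verifies the identity for the exact sequence $0\to\C\to\C^3\to\C^3\to\C\to0$, with maps chosen so that the image of each map equals the kernel of the next, taking $n=2$ and $r=1$.

```python
def rank(m, eps=1e-9):
    """Rank of a matrix, given as a list of rows, by Gaussian elimination."""
    m = [list(map(float, row)) for row in m]
    rk, rows, cols = 0, len(m), len(m[0]) if m else 0
    for c in range(cols):
        piv = next((r for r in range(rk, rows) if abs(m[r][c]) > eps), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(rows):
            if r != rk:
                f = m[r][c] / m[rk][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[rk])]
        rk += 1
    return rk

# Exact sequence 0 -> C_3 --rho3--> C_2 --rho2--> C_1 --rho1--> C_0 -> 0
# with C_3 = C, C_2 = C_1 = C^3, C_0 = C:
# im(rho3) = span(e1) = ker(rho2), im(rho2) = span(e1, e2) = ker(rho1).
rho3 = [[1], [0], [0]]                      # C -> C^3
rho2 = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]    # C^3 -> C^3, kernel = span(e1)
rho1 = [[0, 0, 1]]                          # C^3 -> C, kernel = span(e1, e2)

dims = {0: 1, 1: 3, 2: 3, 3: 1}             # dim C_k
rnk = {3: rank(rho3), 2: rank(rho2), 1: rank(rho1), 0: 0}  # rho_0 is the zero map

# Lemma: rnk(rho_{n+1}) + rnk(rho_{n-2r}) = sum_{s=0}^{2r} (-1)^s dim C_{n-s}
n, r = 2, 1
lhs = rnk[n + 1] + rnk[n - 2 * r]
rhs = sum((-1) ** s * dims[n - s] for s in range(2 * r + 1))
assert lhs == rhs
```

Here both sides equal $1 = \dim C_2 - \dim C_1 + \dim C_0 = 3 - 3 + 1$, in agreement with the lemma.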
Nullities {#s4.4} --------- The sum in the left hand side of the inequalities (\[ineq:Span\]) is an invariant of the link $L$. Its special case for classical links with $r=0$ is known as the $\Gz$-nullity, and it appeared in the Murasugi-Tristram inequalities and their generalizations. Denote $\sum_{s=0}^{2r}(-1)^s\dim H_{n-s}(S^{2n+1}\sminus L;\C_\mu)$ by $n^r_{\Gz}(L)$ and call it the [$r$th $\Gz$-nullity of $L$]{}. By the Poincaré duality (see \[sT.4.3\]), $H_{n-s}(S^{2n+1}\sminus L;\C_\mu)$ is isomorphic to $H^{n+1+s}(S^{2n+1}\sminus L;\C_\mu)$. The latter vector space is dual to $H_{n+1+s}(S^{2n+1}\sminus L;\C_{\mu^{-1}})$ and anti-isomorphic to $H_{n+1+s}(S^{2n+1}\sminus L;\C_{\mu})$, see \[sT.4.5\]. Therefore, $$\label{null} n^r_{\Gz}(L)=\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1+s}(S^{2n+1}\sminus L;\C_\mu)$$ and $n^r_\Gz(L)=n^r_{\overline{\Gz}}(L)$. This sum is a part of the left hand side of (\[ineq:Span2\]). Now we can rewrite Theorem \[Th7\] as follows: \[Th8\] For any integer $r$ with $0\le 2r\le n$ $$\begin{gathered} \label{null2} |\Gs_{\Gz}(L)|+n^r_\Gz(L) \\ \le \sum_{s=0}^{2r}(-1)^s\dim H_{n+s+1}(F,L;P) +\sum_{s=0}^{2r}(-1)^s\dim H_{n+s}(F;P)\end{gathered}$$ $$\begin{gathered} \label{null3} |\Gs_{\Gz}(L)|+n^r_\Gz(L) \\ \le \sum_{s=0}^{2r}(-1)^s\dim H_{n-s}(F,L;P) +\sum_{s=0}^{2r}(-1)^s\dim H_{n-s-1}(F;P)\end{gathered}$$ If the $F_i$ are pairwise disjoint, then the right hand sides of (\[null2\]) and (\[null3\]) are equal due to the Poincaré-Lefschetz duality for $F$, but we do not assume that $F=\cup F_i$ is a manifold, and therefore the inequalities (\[null2\]) and (\[null3\]) are not equivalent and we have to keep both of them. Slice inequalities {#s5} ================== Again, as in the preceding section, let $L_1,\dots, L_m\subset S^{2n+1}$ be smooth oriented submanifolds, transversal to each other, constituting an $m$-colored link $L=L_1\cup\dots\cup L_m$ of dimension $2n-1$. Let $\GL_i\subset S^{2n+2}$ be oriented closed smooth submanifolds transversal to each other and to $S^{2n+1}$, with $\GL_i\cap S^{2n+1}=L_i$. 
In this section we consider restrictions on homological characteristics of $\GL=\cup_{i=1}^m\GL_i$ in terms of invariants of the link $L$. Of course, some results of this kind can be deduced from the results of the preceding section, but an independent consideration gives better results. No-nullity slice inequalities {#s5.1} ----------------------------- The most general results in this direction are quite cumbersome. Therefore, let me start with weak but simple ones. We will use the same algebraic objects as in the preceding section. In particular, $\Gz=(\Gz_1,\dots,\Gz_m)\in (S^1)^m$, and $p_1,\dots,p_k\in\Z[t_1,t_1^{-1},\dots,t_m,t_m^{-1}]$ are generators of the ideal of relations satisfied by the complex numbers $\Gz_i$. The integer $d$ is the greatest common divisor of the integers $p_1(1,\dots,1)$, …, $p_k(1,\dots,1)$, if at least one of them does not vanish, and $d=0$ otherwise. Cf. \[s4.2\] and \[sT.3.6\]. Finally, $$P=\begin{cases} \Z/p\Z, &\text{ if }d>1\text{ and }p\text{ is a prime divisor of }d\\ \Q, &\text{ if }d=0 \end{cases}$$ Let $\mu:H_1(S^{2n+1}\sminus L)\to\C^\times$ be the homomorphism which maps the meridian of $L_i$ to $\Gz_i$. The local coefficient system $\C_\mu$ on $S^{2n+1}\sminus L$ defined by $\mu$ extends to $S^{2n+2}\sminus\GL$. We will denote the extension by the same symbol $\C_\mu$. The sphere $S^{2n+1}$ bounds in $S^{2n+2}$ two balls, the hemi-spheres $S^{2n+2}_+$ and $S^{2n+2}_-$, such that $\p S^{2n+2}_+=S^{2n+1}$ and $\p S^{2n+2}_-=-S^{2n+1}$ with the orientations inherited from the standard orientation of $S^{2n+2}$. In $H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)$ there is a (Hermitian or skew-Hermitian) intersection form. Its signature is zero by Theorem \[VanSign\], because $\GL$ bounds a configuration of pairwise transversal submanifolds $\GD=\GD_1\cup\dots\cup\GD_m$ in $D^{2n+3}$ and $\C_\mu$ extends over $D^{2n+3}\sminus\GD$. 
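In the simplest case the coefficient field $P$ is computed at once (a worked example added for illustration). For $\Gz=(-1,\dots,-1)$ one may take the generators of the ideal of relations to be $p_i=t_i+1$, $i=1,\dots,m$, so

```latex
p_i(1,\dots,1)=2,\qquad d=\gcd(2,\dots,2)=2,\qquad P=\Z/2\Z ,
```

which is the case corresponding to the Murasugi signature (cf. Section \[s2.5\]), with the homological bounds then read off from $\Z/2$-homology.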
\[slice-small-Th\] Under the assumption above, $$\label{ineq:Slice-easy} 2|\Gs_\Gz(L)|\le\dim_P H_n(\GL;P).$$ The intersection form on $H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)$ restricted to the images of $H_{n+1}(S^{2n+2}_+\sminus\GL;\C_\mu)$ and $H_{n+1}(S^{2n+2}_-\sminus\GL;\C_\mu)$ has signatures $\Gs_\Gz(L)$ and $-\Gs_\Gz(L)$, respectively. Therefore the dimension of each of the images is at least $|\Gs_\Gz(L)|$. The images are obviously orthogonal to each other with respect to the intersection form, because their elements can be realized by cycles lying in disjoint open hemi-spheres. Hence $$2|\Gs_\Gz(L)|\le\dim_\C H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu).$$ On the other hand, by Theorem \[EstTwHom\], $$\dim_\C H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)\le \dim_P H_{n+1}(S^{2n+2}\sminus\GL;P)=\dim_P H_n(\GL;P).$$ Combining these two inequalities, we obtain the desired one. ### General slice inequalities {#s5.2} \[Th-Slice\] Under the assumptions above $$\begin{gathered} \label{ineq:slice} 2|\Gs_\Gz(L)| +2n^r_\Gz(L)\\ \le \sum_{s=0}^{2r}(-1)^s\dim_P H_{n-s}(\GL\sminus L;P) +\sum_{s=-2r+1}^{2r-1}(-1)^s\dim_P H_{n-s}(\GL;P)\end{gathered}$$ \[Lemma1Th\] Let $j$ be the inclusion $S^{2n+1}\sminus L\to S^{2n+2}\sminus\GL$. Then $$\begin{gathered} \label{eq0PfTh} 2|\Gs_\Gz(L)| + 2\rnk(j_*:H_{n+1}(S^{2n+1}\sminus L;\C_\mu)\to H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu))\\ \le \dim_\C H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)\end{gathered}$$ Denote by $i^\pm$ the inclusion $S^{2n+2}_\pm\sminus\GL\to S^{2n+2}\sminus\GL$.
Observe that the space $H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)$ has a natural filtration: $$\begin{gathered} \label{eq1PfTh} j_*H_{n+1}(S^{2n+1}\sminus L;\C_\mu)\\ \subset i^+_*H_{n+1}(S^{2n+2}_+\sminus\GL;\C_\mu)+ i^-_*H_{n+1}(S^{2n+2}_-\sminus\GL;\C_\mu)\\ \subset H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)\end{gathered}$$ The inclusion homomorphism $$j_*:H_{n+1}(S^{2n+1}\sminus L;\C_\mu)\to H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)$$ and the boundary homomorphism $$\p:H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)\to H_{n}(S^{2n+1}\sminus L;\C_\mu)$$ of the Mayer-Vietoris sequence of the triad $(S^{2n+2}\sminus\GL; S^{2n+2}_+\sminus\GL,S^{2n+2}_-\sminus\GL)$ are dual to each other with respect to the intersection forms: $$j_*(a)\circ b=a\circ\p(b)\ \text{ for any }a\in H_{n+1}(S^{2n+1}\sminus L;\C_\mu)\text{ and }b\in H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu).$$ Since the intersection forms are non-singular, it follows that $\rnk j_*=\rnk \p$. By exactness of the Mayer-Vietoris sequence, the rank of $\p$ is the dimension of the top quotient of the filtration \[eq1PfTh\], while the rank of $j_*$ is the dimension of the smallest term $j_*H_{n+1}(S^{2n+1}\sminus L;\C_\mu)$ of this filtration. The middle term of the filtration contains the subspaces $i^+_*H_{n+1}(S^{2n+2}_+\sminus\GL;\C_\mu)$ and $i^-_*H_{n+1}(S^{2n+2}_-\sminus\GL;\C_\mu)$. Their intersection is the smallest term, which is orthogonal to both of the subspaces. Therefore the dimension of the quotient of the middle term of the filtration by the smallest term is at least $2|\Gs_\Gz(L)|$. The dimension of the whole space $H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)$ is the sum of the dimensions of the factors. We showed above that the top and the lowest factors have the same dimension, equal to $\rnk j_*$, and that the dimension of the middle factor is at least $2|\Gs_\Gz(L)|$.
\[Lemma1Th8\] For any exact sequence $\dots\overset{\rho_{k+1}}\to C_k\overset{\rho_k}\to C_{k-1}\overset{\rho_{k-1}}\to\dots$ of vector spaces and any integers $n$ and $t$ $$\label{eqL1Th8} \rnk(\rho_{n})-\rnk(\rho_{n+2t}) =\sum_{s=0}^{2t-1}(-1)^s\dim C_{n+s}$$ The Euler characteristic of the exact sequence $$0\to\Im\rho_{n+2t}\hookrightarrow C_{n+2t-1}\overset{\rho_{n+2t-1}}\to C_{n+2t-2}\to\dots\overset{\rho_{n+1}}\to C_{n}\to\Im\rho_{n}\to0$$ is $\rnk(\rho_{n})-\sum_{s=0}^{2t-1}(-1)^s\dim C_{n+s}-\rnk(\rho_{n+2t})$, that is, the difference between the left and right hand sides of \[eqL1Th8\]. On the other hand, it vanishes, as the Euler characteristic of an exact sequence. \[Lemma2Th8\] Let $X$ be a topological space, $A$ its subspace, $\xi$ a local coefficient system on $X$ with fiber $\C$. Then for any natural $n$ and integer $r$ $$\begin{gathered} \rnk(H_{n+1}(A;\xi)\to H_{n+1}(X;\xi))- \rnk(H_{n+2+2r}(X;\xi)\to H_{n+2+2r}(X,A;\xi))\\ = \sum_{s=0}^{2r}(-1)^sb_{n+1+s}(A) -\sum_{s=0}^{2r}(-1)^sb_{n+2+s}(X,A) +\sum_{s=0}^{2r-1}(-1)^sb_{n+2+s}(X)\end{gathered}$$ where $b_k(*)=\dim_\C H_k(*;\xi)$. Apply Lemma \[Lemma1Th8\] to the homology sequence of the pair $(X,A)$ with coefficients in $\xi$.
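Lemma \[Lemma1Th8\] is a purely linear-algebraic statement, so it can be checked on a concrete exact sequence. The sketch below (plain Python; the exact sequence $0\to\Q\to\Q^3\to\Q^3\to\Q\to0$ and its matrices are an illustrative choice of mine, not taken from the text) verifies the rank identity for several values of $n$ and $t$.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q, by exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Exact sequence 0 -> C_3 -> C_2 -> C_1 -> C_0 -> 0 with
# C_3 = Q, C_2 = C_1 = Q^3, C_0 = Q and rho_k : C_k -> C_{k-1}.
dims = {0: 1, 1: 3, 2: 3, 3: 1}
rho = {
    1: [[1, 1, 1]],                            # surjection C_1 -> C_0
    2: [[1, -1, 0], [0, 1, -1], [-1, 0, 1]],   # rank 2, kernel spanned by (1,1,1)
    3: [[1], [1], [1]],                        # injection C_3 -> C_2
}

def rnk(k):
    # rho_k is the zero map whenever its source or target is the zero space
    return rank(rho[k]) if k in rho else 0

def dim(k):
    return dims.get(k, 0)

# rnk(rho_n) - rnk(rho_{n+2t}) = sum_{s=0}^{2t-1} (-1)^s dim C_{n+s}
for n in range(0, 4):
    for t in range(1, 3):
        lhs = rnk(n) - rnk(n + 2 * t)
        rhs = sum((-1) ** s * dim(n + s) for s in range(2 * t))
        assert lhs == rhs, (n, t, lhs, rhs)
```

Exactness of the sequence is essential here; for a non-exact complex the two sides differ.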
\[Lemma2Th\] For any integer $r$ with $0\le r\le\frac{n}2$ $$\begin{gathered} \label{ineq:Slice} 2|\Gs_\Gz(L)|+2n^r_\Gz(L)\\ \le 2\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+2+s}(S^{2n+2}\sminus\GL,S^{2n+1}\sminus L;\C_\mu)\\ +\sum_{s=-2r+1}^{2r-1}(-1)^s\dim_\C H_{n+1+s}(S^{2n+2}\sminus\GL;\C_\mu)\end{gathered}$$ By Lemma \[Lemma2Th8\] applied to the pair $(S^{2n+2}\sminus\GL,S^{2n+1}\sminus L)$, we obtain $$\begin{gathered} \rnk(j_*:H_{n+1}(S^{2n+1}\sminus L;\C_\mu) \to H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu))\\ \ge \sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1+s}(S^{2n+1}\sminus L;\C_\mu)\\ -\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+2+s}(S^{2n+2}\sminus\GL,S^{2n+1}\sminus L;\C_\mu)\\ +\sum_{s=0}^{2r-1}(-1)^s\dim_\C H_{n+2+s}(S^{2n+2}\sminus\GL;\C_\mu)\end{gathered}$$ From this inequality and inequality \[eq0PfTh\] we obtain $$\begin{gathered} \label{eq2PfTh} 2|\Gs_\Gz(L)|+2n^r_\Gz(L)\\ \le 2\sum_{s=0}^{2r}(-1)^s\dim_\C H_{n+1+s}(S^{2n+2}\sminus\GL,S^{2n+1}\sminus L;\C_\mu)\\ -2\sum_{s=0}^{2r-1}(-1)^s\dim_\C H_{n+s+2}(S^{2n+2}\sminus\GL;\C_\mu)\\ +\dim_\C H_{n+1}(S^{2n+2}\sminus\GL;\C_\mu)\end{gathered}$$ From this and the Alexander duality (which states that $H_{n+1+s}(S^{2n+2}\sminus\GL;\C_\mu)$ is isomorphic to $H_{n+1-s}(S^{2n+2}\sminus\GL;\C_{\mu})$) the desired inequality follows. \[Lemma3Th\] $$\begin{gathered} \sum_{s=0}^{2r}(-1)^s \dim_\C H_{n+1+s}(S^{2n+2}\sminus\GL,S^{2n+1}\sminus L;\C_\mu)\\ \le \sum_{s=0}^{2r}(-1)^s\dim_P H_{n-s}(\GL\sminus L;P)\end{gathered}$$ By Theorem \[EstTwHom\] $$\begin{gathered} \sum_{s=0}^{2r}(-1)^s \dim_\C H_{n+1+s}(S^{2n+2}\sminus\GL,S^{2n+1}\sminus L;\C_\mu)\\ \le \sum_{s=0}^{2r}(-1)^s \dim_P H_{n+1+s}(S^{2n+2}\sminus\GL,S^{2n+1}\sminus L;P).\end{gathered}$$ By Poincaré duality (cf. \[sT.4.3\]), $H_{n+1+s}(S^{2n+2}\sminus\GL,S^{2n+1}\sminus L;P)$ is isomorphic to $H^{n+1-s}(S^{2n+2}\sminus S^{2n+1},\GL\sminus L;P)$. The latter is isomorphic to $H^{n-s}(\GL\sminus L;P)$.
By the universal coefficients formula, $H^{n-s}(\GL\sminus L;P)$ is isomorphic to $H_{n-s}(\GL\sminus L;P)$. \[Lemma4Th\] $$\begin{gathered} \sum_{s=-2r+1}^{2r-1}(-1)^s\dim_\C H_{n+1+s}(S^{2n+2}\sminus\GL;\C_\mu)\\ \le \sum_{s=-2r+1}^{2r-1}(-1)^s\dim_P H_{n-s}(\GL;P)\end{gathered}$$ By Theorem \[EstTwHom\] $$\begin{gathered} \sum_{s=-2r+1}^{2r-1}(-1)^s\dim_\C H_{n+1+s}(S^{2n+2}\sminus\GL;\C_\mu)\\ \le \sum_{s=-2r+1}^{2r-1}(-1)^s\dim_P H_{n+1+s}(S^{2n+2}\sminus\GL;P).\end{gathered}$$ By Poincaré duality, $H_{n+1+s}(S^{2n+2}\sminus\GL;P)$ is isomorphic to $H^{n+1-s}(S^{2n+2},\GL;P)$. From the sequence of the pair $(S^{2n+2},\GL)$ it follows that $H^{n+1-s}(S^{2n+2},\GL;P)$ is isomorphic to $H^{n-s}(\GL;P)$. By the universal coefficients formula, $H^{n-s}(\GL;P)$ is isomorphic to $H_{n-s}(\GL;P)$. [**Proof of Theorem \[Th-Slice\].**]{} Sum up the inequalities of the last three Lemmas. Twisted homology ================ Twisted coefficients and chains {#sT.1} ------------------------------- ### Local coefficient system {#sT.1.1} Let $X$ be a topological space, and $\xi$ be a $\C$-bundle over $X$ with a fixed flat connection. Here by a [connection]{} we mean operations of [parallel transport]{}: for any path $s$ in $X$ connecting points $x$ and $y$ the parallel transport $T_s$ is an isomorphism from the fiber $\C_x$ over $x$ to the fiber $\C_y$ over $y$, such that the parallel transport along a product of paths equals the composition of the parallel transports along the factors. In formula: $T_{uv}=T_v\circ T_u$. A connection is flat if the parallel transport isomorphism does not change when the path is replaced by a homotopic path. A flat connection in a bundle $\xi$ over a simply connected $X$ gives a trivialization of $\xi$. Another name for $\xi$ is a [local coefficient system]{} with fiber $\C$.
### Monodromy representation {#sT.1.2} Recall that for a path-connected locally contractible $X$ (and in more general situations, which will not be of interest here) a local coefficient system is determined by its [monodromy representation]{} $\pi_1(X,x_0)\to\C^\times$, where $\C^\times=\C\sminus0$ is the multiplicative group of $\C$. The monodromy representation assigns to $\Gs\in\pi_1(X,x_0)$ a complex number $\Gz$ such that the parallel transport isomorphism along a loop which represents $\Gs$ is multiplication by $\Gz$. Since $\C^\times$ is commutative, a homomorphism $\pi_1(X,x_0)\to\C^\times$ factors through the abelianization $\pi_1(X,x_0)\to H_1(X)$. Thus a local coefficient system with fiber $\C$ is defined also by a homology version $\mu:H_1(X)\to\C^\times$ of the monodromy representation, which can also be considered as a cohomology class belonging to $H^1(X;\C^\times)$. The local coefficient system defined by a monodromy representation $\mu:H_1(X)\to\C^\times$ is denoted by $\C^{\mu}$. Sometimes instead of $\mu$ we will write the data which define $\mu$, for example the images under $\mu$ of generators of $H_1(X)$ selected in a special way. ### Twisted singular chains {#sT.1.3} The homology groups $H_n(X;\xi)$ of $X$ with coefficients in $\xi$ are a classical invariant studied in algebraic topology. They are an immediate generalization of $H_n(X;\C)$ and hence are quite often ignored in textbooks on homology theory, so I recall the singular version of the definition. Recall that a singular $p$-dimensional chain of $X$ with coefficients in $\C$ is a formal finite linear combination of singular simplices $f_i:T^p\to X$ with complex coefficients. A singular chain of $X$ with coefficients in $\xi$ is also a formal finite linear combination of singular simplices, but each singular simplex $f_i:T^p\to X$ appears in it with a coefficient taken from the fiber $\C_{f_i(c)}$ of $\xi$ over $f_i(c)$, where $c$ is the barycenter of $T^p$. Of course, all the fibers of $\xi$ are isomorphic to $\C$.
So, a chain with coefficients in $\xi$ can be identified with a chain with coefficients in $\C$, provided the isomorphisms $\C_{f_i(c)}\to\C$ are selected. But they are not. All singular $p$-chains of $X$ with coefficients in $\xi$ form a complex vector space $C_p(X;\xi)$. The boundary of such a chain is defined by the usual formula, but one needs to bring the coefficient from the fiber over $f_i(c)$ to the fibers over $f_i(c_i)$, where $c_i$ is the barycenter of the $i$th face of $T^p$. For this, one may use translation along the composition with $f_i$ of any path connecting $c$ to $c_i$ in $T^p$: since $T^p$ is simply connected and the connection of $\xi$ is flat, the result does not depend on the path. These chains and boundary operators form a complex. Its homology is called [homology with coefficients in]{} $\xi$ and denoted by $H_p(X;\xi)$. Homology with coefficients in the local coefficient system corresponding to the trivial monodromy representation $1:H_1(X)\to\C^\times$ coincides with homology with coefficients in $\C$. ### Twisted cellular chains {#sT.1.4} It is possible to calculate the homology with coefficients in a local coefficient system using a cellular decomposition. Namely, a $p$-dimensional cellular chain of a cw-complex $X$ with coefficients in a local coefficient system $\xi$ is a formal finite linear combination of $p$-dimensional cells in which the coefficient at a cell belongs to the fiber over a point of the cell. It does not matter which point this is, because fibers over different points in a cell are identified via parallel transport along paths in the cell: any two points in a cell can be connected in the cell by a path unique up to homotopy. In order to describe the boundary operator, let me define the [incidence number]{} $(z\Gs_x:\tau)_y\in\C_y$ where $\Gs$ is a $p$-cell, $\tau$ is a $(p-1)$-cell, $z\in\C_x$, $x\in\Gs$, $y\in\tau$.
The boundary operator is then defined by the incidence numbers: $$\p(z\Gs)=\sum_\tau(z\Gs_x:\tau)_y\tau.$$ Let $f:D^p\to X$ be a characteristic map for $\Gs$. Assume that a point $y$ in a $(p-1)$-cell $\tau$ is a regular value for $f$. This means that $y$ has a neighborhood $U$ in $\tau$ such that $f^{-1}(U)\subset S^{p-1}\subset D^p$ is the union of finitely many balls mapped by $f$ homeomorphically onto $U$. Connect $f^{-1}(x)\in D^p$ with all the points of $f^{-1}(y)$ by straight paths. Compositions of these paths with $f$ are paths $s_1,\dots,s_N$ connecting $x$ with $y$. Then put $$(z\Gs:\tau)_y=\sum_{i=1}^N\Ge_iT_{s_i}(z)$$ where $T_{s_i}$ is a parallel transport operator and $\Ge_i=+1$ or $-1$ according to whether $f$ preserves or reverses the orientation on the $i$th ball out of the $N$ balls constituting $f^{-1}(U)$. Twisted acyclicity {#sT.2} ------------------ ### Acyclicity of circle {#sT.2.1} According to one of the most fundamental properties of homology, the dimension of $H_0(X;\C)$ is equal to the number of path-connected components of $X$. In particular, $H_0(X;\C)$ does not vanish, unless $X$ is empty. This is not the case for twisted homology. A crucial example is the circle $S^1$. Let $\mu:H_1(S^1)\to\C^\times$ map the generator $1\in\Z=H_1(S^1)$ to $\Gz\in\C^\times$. [Twisted acyclicity of circle.]{}\[1.A\] $H_*(S^1;\C^\mu)=0$, iff $\Gz\ne 1$. The simplest cw-decomposition of $S^1$ consists of two cells, one-dimensional $\Gs_1$ and zero-dimensional $\Gs_0$. One can easily see that $\p\Gs_1=(\Gz-1)\Gs_0$. Hence $\p:C_1(S^1;\C^\mu)\to C_0(S^1;\C^\mu)$ is an isomorphism, iff $\Gz\ne1$. ### Vanishing of twisted homology {#sT.2.2} \[1.B\] Let $X$ be a path-connected space and $\mu:H_1(S^1\times X)\to\C^\times$ be a homomorphism. Denote by $\Gz$ the image under $\mu$ of the homology class realized by a fiber $S^1\times \text{point}$. Then $H_*(S^1\times X;\C^\mu)=0$, if $\Gz\ne1$.
Since $H_1(S^1\times X)=H_1(S^1)\times H_1(X)$, the homomorphism $\mu$ can be presented as the product of homomorphisms $\mu_1:H_1(S^1)\to\C^\times$ and $\mu_2:H_1(X)\to\C^\times$ which can be obtained as compositions of $\mu$ with the inclusion homomorphisms. Thus $\C^\mu=\C^{\mu_1}\otimes\C^{\mu_2}$, and we can apply the Künneth formula $$H_n(S^1\times X;\C^\mu)=\sum_{p=0}^n H_p(S^1;\C^{\mu_1})\otimes H_{n-p}(X;\C^{\mu_2})$$ and refer to Theorem \[1.A\]. \[1.C\] Let $B$ be a path-connected space, $p:X\to B$ a locally trivial fibration with fiber $S^1$. Let $\mu:H_1(X)\to\C^\times$ be a homomorphism. Denote by $\Gz$ the image under $\mu$ of the homology class realized by a fiber of $p$. Then $H_*(X;\C^\mu)=0$, if $\Gz\ne1$. It follows from Theorem \[1.A\] via the spectral sequence of the fibration $p$. Estimates of twisted homology {#sT.3} ----------------------------- ### Equalities underlying the Morse inequalities {#sT.3.1} \[EqUnderlMI\] For a complex $C:\dots\to C_i\overset{\p_i}\to C_{i-1}\to\dots$ of finite dimensional vector spaces over a field $F$ $$\begin{gathered} \label{dimH} \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_FH_s(C)=\\ \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_FC_s-\rnk\p_{r}-\rnk\p_{2n+r+1}.\end{gathered}$$ First, prove \[dimH\] for $n=0$. Since $H_s(C)=\Ker\p_s/\Im\p_{s+1}$, we have $\dim_FH_s(C)=\dim_F\Ker\p_s-\dim_F\Im\p_{s+1}$. Further, $\dim_F\Im\p_{s+1}=\rnk\p_{s+1}$, and $\dim_F\Ker\p_s=\dim_FC_s-\rnk\p_s$. It follows that $$\label{dimHs} \dim_FH_s(C)=\dim_FC_s-\rnk\p_s-\rnk\p_{s+1}$$ This is a special case of \[dimH\] with $n=0$, $r=s$. The general case follows from it: make alternating summation of \[dimHs\] for $s=r,\dots,2n+r$. ### Algebraic Morse type inequalities {#sT.3.2} \[AlgLem\] Let $P$ and $Q$ be fields, $R$ be a subring of $Q$ and let $h:R\to P$ be a ring homomorphism. Let $C: \dots\to C_p\to C_{p-1}\to\dots\to C_1\to C_0$ be a complex of free finitely generated $R$-modules.
Then for any $n$ and $r$ $$\sum_{s=r}^{2n+r}(-1)^{s-r}\dim_QH_s(C\otimes_RQ)\le \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_PH_s(C\otimes_hP)$$ Thus, the greater the ranks of the differentials, the smaller $$\sum_{s=r}^{2n+r}(-1)^{s-r}\dim_FH_s(C).$$ Choose free bases in the modules $C_i$. Let $M_i$ be the matrix representing $\p_i:C_i\to C_{i-1}$ in these bases. The same matrix represents the differential $\p^Q_i$ of $C\otimes_RQ$. The matrix obtained from $M_i$ by replacing the entries with their images under $h$ represents the differential $\p^P_i$ of $C\otimes_hP$. The minors of the latter matrix are the images under $h$ of the minors of the former one. Consequently, $\rnk\p^Q_i\ge\rnk\p^P_i$. By Lemma \[EqUnderlMI\] $$\begin{gathered} \label{eqQ} \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_QH_s(C\otimes_RQ)= \\ \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_QC_s\otimes_RQ-\rnk\p^Q_{r}-\rnk\p^Q_{r+2n+1}\end{gathered}$$ and $$\begin{gathered} \label{eqP} \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_PH_s(C\otimes_hP)= \\ \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_PC_s\otimes_hP-\rnk\p^P_{r}-\rnk\p^P_{r+2n+1}\end{gathered}$$ Compare the right hand sides of these equalities. The dimensions $\dim_PC_s\otimes_hP$, $\dim_QC_s\otimes_RQ$ are equal to the rank of the free $R$-module $C_s$. Since, as was shown above, $\rnk\p^Q_i\ge\rnk\p^P_i$, the right hand side of \[eqQ\] is not greater than the right hand side of \[eqP\]. Probably the simplest application of Lemma \[AlgLem\] gives the well-known upper estimate of the Betti numbers with rational coefficients by the Betti numbers with coefficients in a finite field. It also follows from the universal coefficients formula. ### Application to twisted homology {#sT.3.3} \[EstTwHom\] Let $X$ be a finite cw-complex, and $\mu:H_1(X)\to\C^\times$ be a homomorphism.
If $\Im\mu\subset\C^\times$ generates a subring $R$ of $\C$ and there is a ring homomorphism $h:R\to P$, where $P$ is a field, such that $h\mu(H_1(X))=1$, then we can apply Lemma \[AlgLem\] and get an upper estimate for the dimensions of twisted homology groups in terms of the dimensions of non-twisted ones. $$\label{twHomEst} \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_\C H_s(X;\C^\mu)\le \sum_{s=r}^{2n+r}(-1)^{s-r}\dim_PH_s(X;P)$$ Here are several situations in which the assumptions of this theorem are fulfilled. ### Estimates by untwisted $\Z/p\Z$ Betti numbers {#sT.3.4} Let $H_1(X)$ be generated by $g$ and $\zeta=\mu(g)$ be an algebraic number. Assume that $q$ is the minimal integer polynomial with relatively prime coefficients which annihilates $\zeta$. Assume also that $q(1)$ is divisible by a prime number $p$. Then for $R$ we can take $\Z[\zeta]\subset\C$, for $P$ the field $\Z/p\Z$, and for $h$ the ring homomorphism $\Z[\zeta]\to\Z/p\Z$ mapping $\zeta\mapsto1$. Here is a more general situation: Let $H_1(X)$ be generated by $g_1$,…$g_k$, and $\zeta_i=\mu(g_i)$ be an algebraic number for each $i$. Assume that $p_i$ is the minimal integer polynomial with relatively prime coefficients which annihilates $\zeta_i$. Assume also that the greatest common divisor of $p_1(1)$,…, $p_k(1)$ is divisible by a prime number $p$. Then for $R$ we can take $\Z[\zeta_1,\dots,\zeta_k]\subset\C$, for $P$ the field $\Z/p\Z$, and for $h$ the ring homomorphism $\Z[\zeta_1,\dots,\zeta_k]\to\Z/p\Z$ mapping $\zeta_i\mapsto1$ for all $i$. ### Estimates by rational Betti numbers {#sT.3.5} Let $H_1(X)$ be generated by $g$ and $\zeta=\mu(g)$ be transcendental. Then for $R$ we can take the ring $\Z[\zeta,\zeta^{-1}]$, for $Q$ the field $\Q(\zeta)$, for $P$ the field $\Q$, and for $h$ the ring homomorphism $\Z[\zeta,\zeta^{-1}]\to\Q$ which maps $\zeta$ to 1. ### The most general estimates {#sT.3.6} Let $H_1(X)$ be generated by $g_1,\dots,g_m$ and $\Gz_i=\mu(g_i)$.
Laurent polynomials with integer coefficients which annihilate $\Gz_1,\dots,\Gz_m$ form an ideal in the ring $\Z[t_1,t_1^{-1},\dots,t_m,t_m^{-1}]$. Let $p_1,\dots,p_k$ be generators of this ideal. Let $d$ be the greatest common divisor of the integers $p_1(1,\dots,1)$, …, $p_k(1,\dots,1)$, if at least one of them is not 0. Otherwise, let $d=0$. In other words, consider the specialization homomorphism $$S:\Z[t_1,t_1^{-1},\dots,t_m,t_m^{-1}]\to \C: t_i\mapsto\Gz_i.$$ Let $K$ be the kernel of $S$, and let $d$ be the generator of the ideal which is the image of $K$ under the homomorphism $$\Z[t_1,t_1^{-1},\dots,t_m,t_m^{-1}]\to\Z : t_i\mapsto 1.$$ Then for $R$ we can take the ring $\Z[\Gz_1,\Gz_1^{-1},\dots,\Gz_m,\Gz_m^{-1}]$. For $Q$ we can take the quotient field of $R$, but since $R$ and its quotient field are contained in $\C$, let us take $Q=\C$. If $d>1$, then we can take for $P$ the field $\Z/p\Z$ with any prime $p$ which divides $d$. If $d=0$, then let $P=\Q$. The case $d=1$ is the most unfortunate: then our technique does not give any non-trivial estimate. For $d>1$ or $d=0$ we have the inequality \[twHomEst\]. Twisted duality {#sT.4} --------------- ### Cochains and cohomology {#sT.4.1} Cochain groups $C^p(X;\xi)$ (which are vector spaces over $\C$) and cohomology $H^p(X;\xi)$ are defined similarly: a $p$-cochain with coefficients in $\xi$ is a function assigning to a singular simplex $f:T^p\to X$ an element of $\C_{f(c)}$, the fiber of $\xi$ over $f(c)$. This can be interpreted as the chain complex of the local coefficient system $\Hom(\C,\xi)$ whose fiber over $x\in X$ is $\Hom_\C(\C,\C_x)$. More generally, for any local coefficient systems $\xi$ and $\eta$ on $X$ with fiber $\C$ there is a local coefficient system $\Hom(\xi,\eta)$ constructed fiber-wise with the parallel transport defined naturally in terms of the parallel transports of $\xi$ and $\eta$.
If the monodromy representations of $\xi$ and $\eta$ are $\mu$ and $\nu$, respectively, then the monodromy representation of $\Hom(\xi,\eta)$ is $\mu^{-1}\nu:H_1(X)\to\C^\times:x\mapsto\mu^{-1}(x)\nu(x)$. Similarly, for any local coefficient systems $\xi$ and $\eta$ on $X$ with fiber $\C$ there is a local coefficient system $\xi\otimes \eta$. If $\mu,\nu: H_1(X)\to\C^\times$ are homomorphisms, then $\C^\mu\otimes\C^\nu$ is the local coefficient system $\C^{\mu\nu}$ corresponding to the homomorphism-product $\mu\nu:H_1(X)\to\C^\times:x\mapsto \mu(x)\nu(x)$. If $\nu=\mu^{-1}$ (that is $\mu(x)\nu(x)=1$ for any $x\in H_1(X)$), then $\C^\mu\otimes\C^\nu$ is the non-twisted coefficient system with fiber $\C$. In contradistinction to the non-twisted case, there is no way to calculate $H_n(X;\xi\otimes\eta)$ in terms of $H_*(X;\xi)$ and $H_*(X;\eta)$. Indeed, both $H_*(S^1;\C^\mu)$ and $H_*(S^1;\C^{\mu^{-1}})$ vanish, unless $\mu:H_1(S^1)\to\C^\times$ is trivial, but $H_0(S^1;\C^\mu\otimes\C^{\mu^{-1}})=H_0(S^1;\C)=\C$. ### Multiplications {#sT.4.2} The usual definitions of various cohomological and homological multiplications are easily generalized to twisted homology. For this one needs a bilinear pairing of the coefficient systems. (Recall that in the case of non-twisted coefficient systems a pairing of coefficient groups is also needed.) For local coefficient systems $\xi$, $\eta$ and $\zeta$ with fiber $\C$ on $X$, a pairing $\xi\oplus\eta\to\zeta$ is a fiber-wise map which is bilinear over each point of $X$. Given such a pairing, there are pairings $$\smallsmile:H^p(X;\xi)\times H^q(X;\eta)\to H^{p+q}(X;\zeta),$$ $$\smallfrown:H^{p+q}(X;\xi)\times H^q(X;\eta)\to H^{p}(X;\zeta),$$ etc. A pairing $\xi\oplus\eta\to\zeta$ of local coefficient systems can be factored through the universal pairing $\xi\oplus\eta\to\xi\otimes\eta$.
Since $\C^\mu\otimes\C^{\mu^{-1}}$ is a non-twisted coefficient system with fiber $\C$, this gives rise to a non-singular pairing $$C_p(X;\C^{\mu^{-1}})\otimes C^p(X;\C^\mu)\to \C$$ which induces a non-singular pairing $$\smallfrown: H_p(X;\C^{\mu^{-1}})\otimes H^p(X;\C^\mu)\to \C$$ Thus, the vector spaces $H_p(X;\C^{\mu^{-1}})$ and $H^p(X;\C^\mu)$ are dual. ### Poincaré duality {#sT.4.3} Let $X$ be an oriented connected compact manifold of dimension $n$. Then $H_n(X,\p X)$ is isomorphic to $\Z$ and the orientation is a choice of the isomorphism, or, equivalently, the choice of a generator of $H_n(X,\p X)$. We denote the generator by $[X]$. Let $\mu:H_1(X)\to\C^\times$ be a homomorphism. There are the Poincaré-Lefschetz duality isomorphisms $$[X]\smallfrown :H^p(X;\C^\mu)\to H_{n-p}(X,\p X;\C^{\mu}),$$ $$[X]\smallfrown :H^p(X,\p X;\C^\mu)\to H_{n-p}(X;\C^{\mu})$$ Similarly to the case of non-twisted coefficients, there are non-singular pairings: the cup-product pairing $$\smallsmile:H^p(X;\C^\mu)\times H^{n-p}(X,\p X;\C^{\mu^{-1}})\to H^n(X;\C)=\C$$ and the intersection pairing $$\label{bilin-ip} \circ:H_p(X;\C^\mu)\times H_{n-p}(X,\p X;\C^{\mu^{-1}})\to \C$$ However, the local coefficient systems of the homology or cohomology groups involved in a pairing are different, unless $\Im\mu\subset\{\pm1\}$. ### Conjugate local coefficient systems {#sT.4.4} Recall that for vector spaces $V$ and $W$ over $\C$ a map $f:V\to W$ is called semi-linear if $f(a+b)=f(a)+f(b)$ for any $a,b\in V$ and $f(za)=\overline zf(a)$ for $z\in\C$ and $a\in V$. This notion extends obviously to fiber-wise maps of complex vector bundles. If $\xi$ and $\eta$ are local coefficient systems of the type that we consider, then a fiber-wise semi-linear bijection $\xi\to\eta$ commuting with all the transport maps is called a [semi-linear equivalence]{} between $\xi$ and $\eta$.
For any local coefficient system $\xi$ with fiber $\C$ on $X$ there exists a unique local coefficient system on $X$ which is semi-linearly equivalent to $\xi$. It is denoted by $\overline\xi$ and called [conjugate]{} to $\xi$. If $\xi=\C^\mu$, then $\overline\xi$ is $\C^{\overline\mu}$, where $\overline\mu(x)=\overline{\mu(x)}$ for any $x\in H_1(X)$. ### Unitary local coefficient systems {#sT.4.5} A homomorphism $\mu:H_1(X)\to\C^\times$ is called [unitary]{} if $\Im\mu\subset S^1=U(1)=\{z\in\C\mid |z|=1\}$. In $S^1$ the inversion $z\mapsto z^{-1}$ coincides with the complex conjugation: if $|z|=1$, then $z^{-1}=\overline z$. Therefore if $\mu:H_1(X)\to\C^\times$ is unitary, then $\overline{\C^\mu}=\C^{\mu^{-1}}$ and there exists a [semi-linear ]{} equivalence $\C^\mu\to\C^{\mu^{-1}}$. This semi-linear equivalence induces a semi-linear equivalence $$H_{k}(X;\C^\mu)\to H_k(X;\C^{\mu^{-1}})$$ and similar semi-linear equivalences in cohomology and in relative homology and cohomology. Combining a semi-linear isomorphism $$H_{n-p}(X,\p X;\C^\mu)\to H_{n-p}(X,\p X;\C^{\mu^{-1}})$$ of this kind with the intersection pairing \[bilin-ip\] we get a [sesqui-linear ]{} pairing $$\label{ssqlin-ip} \circ:H_p(X;\C^\mu)\times H_{n-p}(X,\p X;\C^{\mu})\to \C$$ (Sesqui-linear means that it is linear in the first variable and semi-linear in the second one.) This pairing is non-singular, because the bilinear pairing \[bilin-ip\] is non-singular, and it differs from it by a semi-linear equivalence in the second variable. ### Intersection forms {#sT.4.6} Let $X$ be an oriented connected compact smooth manifold of even dimension $n=2k$ and $\mu:H_1(X)\to\C^\times$ be a unitary homomorphism. Combining the relativisation homomorphism $$H_{n-p}(X;\C^{\mu})\to H_{n-p}(X,\p X;\C^\mu)$$ with the pairing \[ssqlin-ip\] for $p=k$ defines the sesqui-linear form $$\label{ssqlin-if} \circ:H_k(X;\C^\mu)\times H_k(X;\C^\mu)\to\C$$ It is called the [intersection form]{} of $X$.
If $k$ is even, this form is [Hermitian]{}, that is $\Ga\circ\Gb=\overline{\Gb\circ\Ga}$. If $k$ is odd, it is [skew-Hermitian]{}, that is $\Ga\circ\Gb=-\overline{\Gb\circ\Ga}$. The difference between Hermitian and skew-Hermitian forms is not as deep as the difference between symmetric and skew-symmetric bilinear forms. Multiplication by $i=\sqrt{-1}$ turns a skew-Hermitian form into a Hermitian one, and the original form can be recovered. To recover it, just multiply the Hermitian form by $-i$. The intersection form may be singular. Its radical, that is, the orthogonal complement of the whole $H_k(X;\C^\mu)$, is the kernel of the relativisation homomorphism $H_k(X;\C^{\mu})\to H_k(X,\p X;\C^\mu)$. It can be described also as the image of the inclusion homomorphism $$H_k(\p X;\C^{\mu\inc_*})\to H_k(X;\C^\mu),$$ where $\inc_*$ is the inclusion homomorphism $H_1(\p X)\to H_1(X)$. ### Twisted signatures and nullities {#sT.4.7} As is well known, for any Hermitian form on a finite-dimensional space $V$ there exists an orthogonal basis in which the form is represented by a diagonal matrix. The diagonal entries of the matrix are real. The number of zero diagonal entries is called the [nullity]{}, and the difference between the number of positive and negative entries is called the [signature]{} of the form. These numbers do not depend on the basis. For a skew-Hermitian form, by nullity and signature one means the nullity and signature of the Hermitian form obtained by multiplication of the skew-Hermitian form by $i$. For a compact oriented $2k$-manifold $X$ and a unitary homomorphism $\mu:H_1(X)\to\C^\times$ the signature and nullity of the intersection form $$\circ:H_k(X;\C^\mu)\times H_k(X;\C^\mu)\to\C$$ are denoted by $\Gs_\mu(X)$ and $n_\mu(X)$, respectively, and called the [twisted]{} signature and nullity of $X$.
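When the intersection form is given by an explicit matrix, the signature and nullity can be read off from its eigenvalues. A minimal sketch (Python with NumPy; the matrices are illustrative and not taken from the text), reducing the skew-Hermitian case to the Hermitian one by multiplying by $i$ as above:

```python
import numpy as np

def signature_and_nullity(H, tol=1e-9):
    """Signature and nullity of the Hermitian form with matrix H."""
    eig = np.linalg.eigvalsh(H)            # real eigenvalues of a Hermitian matrix
    pos = int((eig > tol).sum())
    neg = int((eig < -tol).sum())
    return pos - neg, len(eig) - pos - neg

# A Hermitian matrix with eigenvalues 3 and 1: signature 2, nullity 0.
assert signature_and_nullity(np.array([[2, 1], [1, 2]])) == (2, 0)

# A degenerate Hermitian form: one positive, one negative, one zero eigenvalue.
assert signature_and_nullity(np.diag([1.0, -1.0, 0.0])) == (0, 1)

# A skew-Hermitian matrix S: pass to the Hermitian matrix iS first.
S = np.array([[0, 1], [-1, 0]])
assert signature_and_nullity(1j * S) == (0, 0)
```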
The classical theorems about the signatures of the symmetric intersection forms of oriented compact $4k$-manifolds are easily generalized to twisted signatures: [Additivity of Signature.]{}\[AddOfSign\] Let $X$ be an oriented compact manifold of even dimension. If $A$ and $B$ are its compact submanifolds of the same dimension such that $A\cup B=X$, $\Int A\cap\Int B=\varnothing$ and $\p(A\cap B)=\varnothing$, then for any $\mu:H_1(X)\to\C^\times$ $$\Gs_\mu(X)=\Gs_{\mu\inc_*}(A)+\Gs_{\mu\inc_*}(B)$$ where $\inc$ denotes an appropriate inclusion. [Signature of Boundary.]{}\[VanSign\] Let $X$ be an oriented compact manifold of odd dimension. Then $\Gs_{\mu\inc_*}(\p X)=0$ for any $\mu:H_1(X)\to\C^\times$.
--- abstract: | The octupole strengths of three nuclei: the $\beta$-stable nucleus $^{208}_{82}Pb_{126}$, the neutron skin nucleus $^{60}_{20}Ca_{40}$ and the neutron drip line nucleus $^{28}_{8}O_{20}$ are studied using the self-consistent Hartree-Fock calculation with the random phase approximation. The collective properties of low-lying excitations are analyzed in terms of particle-vibration coupling. The results show that the collective excitations coexist with decoupled strong continuum strength near the threshold in the lowest isoscalar states in $^{60}_{20}Ca_{40}$ and $^{28}_{8}O_{20}$. For all three nuclei, both the low-lying isoscalar states and the giant isoscalar resonance carry isovector strength. The ratio B(IV)/B(IS) is checked and it is found that, for $^{208}_{82}Pb_{126}$, the ratio is equal to $(\frac{N-Z}{A})^2$ to good accuracy, while for $^{60}_{20}Ca_{40}$ and $^{28}_{8}O_{20}$, the ratios are much larger than $(\frac{N-Z}{A})^2$. The study shows that the enhancement of the ratio is due to the excess neutrons with small binding energies in $^{60}_{20}Ca_{40}$ and $^{28}_{8}O_{20}$.\ [*PACS*]{}: 21.10.Re, 21.60.Ev, 21.60.Jz, 27.30.+t\ [*Keywords*]{}: Neutron drip line nuclei; Collective excitations; Particle-vibration coupling; transition current; transition density address: - 'Department of Physics, Tsinghua University, Beijing 100084, P.R. China' - 'Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100080, P.R. China' - 'China Institute of Atomic Energy, P.O. Box 275, Beijing 102431, P.R. China' - 'Center of Theoretical Physics, National Laboratory of Heavy Ion Accelerator, Lanzhou 730000, P.R. China' author: - 'X.R. Zhou' - 'E.G. Zhao' - 'B.G. Dong' - 'X.Z. Zhang' - 'G.L.
Long' title: 'Collective Properties of Low-lying Octupole Excitations in $^{208}_{82}Pb_{126}$, $^{60}_{20}Ca_{40}$ and $^{28}_{8}O_{20}$' --- Introduction ============ Various exotic properties are expected for nuclei far from $\beta$-stability. The collective properties of neutron drip line nuclei are especially interesting, because neutrons with small binding energies show a unique response to external fields. In Refs. [@work1; @work2; @Ham96-2; @work3; @Ham97-2; @work4; @work5], monopole, dipole and quadrupole isoscalar (IS) and isovector (IV) giant resonances in stable and drip line, particularly neutron drip line, nuclei were studied by using the self-consistent Hartree-Fock (HF) calculation plus the random phase approximation (RPA) with a Skyrme interaction, and both the IS and IV correlations were taken into account simultaneously. It was found that for both $\beta$-stable and drip line nuclei the giant resonances can be well described by the collective model [@collective1; @collective2]. For neutron drip line nuclei, however, there is an appreciable amount of low-lying strength just above threshold, and these low-lying strengths are nearly pure neutron unperturbed excitations. For example, in Ref. [@work3], the quadrupole strength of the neutron drip line nucleus $^{28}_{8}O_{20}$ was analyzed and it was pointed out that there exists a so-called threshold strength, which is not of collective character and comes from the excitations of excess neutrons with small binding energies. In a recent publication [@Ca60], the low-lying octupole excitation of the neutron skin nucleus $_{20}^{60}Ca_{40}$ was studied and it was pointed out that the low-lying ($\triangle N=1$) IS octupole states appear as collective excitations and are shifted down to a very low energy region, due to the disappearance of the N=50 magic number. Low-lying octupole excitations usually consist of transitions from occupied states to bound states, resonance states and nonresonance states.
In this paper, we make a detailed study of the low-lying ($\triangle N=1$) octupole excitations of the $\beta$-stable nucleus $_{82}^{208}Pb_{126}$, the neutron skin nucleus $_{20}^{60}Ca_{40}$ and the drip line nucleus $_8^{28}O_{20}$. It can be seen that in all these nuclei, the low-lying octupole states near threshold appear as a coexistence of collective excitations and decoupled strong continuum strength. The properties of these low-lying states can be understood from the point of view of particle-vibration coupling. This paper is organized as follows. The theoretical formalism of the HF plus RPA calculation is described in section 2. Numerical results and discussions are presented in section 3. A summary and conclusions are given in section 4. Formalism ========= The unperturbed strength function is defined by [@Ham99] $$\begin{aligned} S_{0} &\equiv &\sum_{i}\mid \langle i|Q^{\lambda }|0\rangle \mid ^{2}\delta (E-E_{i}) \nonumber \\ &=&\frac{1}{\pi }Im \ Tr(Q^{\lambda \dag }G_{0}(E)Q^{\lambda }) \label{S0}\end{aligned}$$ while the RPA strength function is given by $$\begin{aligned} S &\equiv &\sum_{n}\mid \langle n|Q^{\lambda }|0\rangle\mid ^{2}\delta (E-E_{n}) \nonumber \\ &=&\frac{1}{\pi }Im \ Tr(Q^{\lambda \dag }G_{RPA}(E)Q^{\lambda }), \label{S}\end{aligned}$$ where $G_{0}$ is the noninteracting p-h Green function, and $G_{RPA}(E)$ is the RPA response function including the effect of the coupling to the continuum, $$G_{RPA}=G_{0}+G_{0}{v}_{ph}G_{RPA} =(1-G_{0}{v}_{ph})^{-1}G_{0}. \label{GRPA}$$ In Eqs.
(\[S0\]) and (\[S\]), $Q^{\lambda }$ represents one-body operators and is written as $$Q_{\mu }^{\lambda =3,\tau=0} =\sum_{i}r_{i}^{3}Y_{3\mu} (\hat{r} _{i}),\ \ \ \ \ \ \ \ \ \ \mbox{for isoscalar octupole strength,} \label{Q0}$$ $$Q_{\mu }^{\lambda =3,\tau =1}=\sum_{i} \tau_{z}(i)r_{i}^{3}Y_{3\mu }(\hat{r} _{i}),\ \ \ \ \mbox{for isovector octupole strength.} \label{Q1}$$ The transition density for an excited state $|n\rangle$ is defined by $$\delta\rho_{n0}(\vec{r})\equiv \langle n|\sum_{i}\delta(\vec{r}-\vec{r}_{i})|0 \rangle \label{den},$$ which can be obtained from the RPA response function as $$\delta\rho_{n0}(\vec{r})=\alpha \int Im [G_{RPA}(\vec{r},{\vec{r}}^{\prime};E_{res})] Q^{\lambda}(\vec{r}^{\prime})d\vec{r}^{\prime}, \label{den2}$$ where the normalization factor $\alpha$ is determined from the transition strength $S(\lambda)$ by $$\alpha=\frac{1}{\pi \sqrt{S(\lambda)}}. \label{norm}$$ The radial transition density is defined by $$\delta\rho_{n0}(\vec{r})\equiv\delta\rho_{\lambda}(r)Y^{\ast}_{\lambda \mu}(\hat{r}). \label{denr}$$ The transition current is $$J_{n0}(\vec{r})=\langle n|\sum_{i} \frac{\hbar}{2mi} \{\delta (\vec{r}-\vec{r}_i) \overrightarrow{\bigtriangledown}_{i}-\overleftarrow{\bigtriangledown} _{i}\delta (\vec{r}-\vec{r}_i) \}|0\rangle, \label{J}$$ which can be expanded in the complete set of the vector spherical harmonics, $$J_{n0}(\vec{r})=(-i)\sum_{l=\lambda \pm 1}J_{\lambda l}\overrightarrow {Y} ^{\ast}_{\lambda l, \mu }(\hat{r}). \label{Jexpand}$$ The radial current component $J_{\lambda l}$ is defined as $$\begin{aligned} J_{\lambda l} &=&{i}\int \overrightarrow{Y}_{\lambda l,\mu} (\hat{r})J_{n0}(r)d\hat{r} \nonumber \\ &=&\langle n\mid \sum_i\frac \hbar {2m}\{\delta (r-r_i)[Y_l^{*}(\hat{r}_i)\times ( \overrightarrow{\bigtriangledown }_i^{+}-\overleftarrow{\bigtriangledown } _i^{+})]_{\lambda \mu }\}\mid 0\rangle.
\label{Jr}\end{aligned}$$ In the Bohr-Tassie model, the transition density [@collective1; @collective2] is $$\delta\rho_{\lambda \tau}(r)\propto r^{\lambda-1}\frac{d\rho_{0}(r)}{dr},\ \ \ \ \ \ \mbox{for $\lambda>0$}, \label{Tasden}$$ and the radial current components [@collm] are $$J_{\lambda l}\propto \left \{ \begin{array}{l} r^{\lambda-1}\rho _{0}(r), \mbox{\ \ \ \ \ for $l=\lambda -1$}, \\ 0, \mbox{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \hspace{0.08cm} for $l=\lambda +1$}. \label{TassJ} \end{array} \right.$$ The Tassie transition density (\[Tasden\]) is normalized according to the following relationship $$S(\lambda)=|\int \delta\rho_{\lambda \tau}(r) r^{(\lambda +2)} dr|^2,$$ where $S(\lambda)$ is the transition strength of the RPA state. This means that the normalized Tassie transition density should give the same strength as that of the RPA state. In the present work, the properties of low-lying octupole excitations are studied from the viewpoint of particle-vibration coupling. For octupole excitation, we use the radial dependence of the particle-vibration coupling $$V_{\mbox{pv}}(r)\sim r^2\frac{dU(r)}{dr}, \label{pv1}$$ or $$V_{\mbox{pv}}(r) \sim r^{2} \frac{d \rho_{0}}{dr}, \label{pv2}$$ where $U(r)$ is the HF potential and $\rho _0(r)$ is the ground state density. Eq. (\[pv1\]) has been successfully used for the coupling of particles to shape oscillations in Ref. [@collective1]. The sign of the ratio $$\frac{<p|V_{\mbox{pv}}(r)|h>}{<p|r^3|h>}=\frac{\int \delta \rho _{ph}(r)V_{\mbox{pv}}(r)r^2dr}{\int \delta \rho _{ph}(r)r^5dr} \label{ratio}$$ determines the influence of particle-vibration coupling on the strength of the unperturbed $p-h$ excitations [@coupling]. The magnitude of the ratio is a measure of how strongly the unperturbed strength of this $p-h$ excitation is modified by performing the RPA calculation. If the ratio is equal to zero, the unperturbed strength of this $p-h$ transition will remain unchanged by the RPA correlation.
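As an aside, once $G_{0}$ and $v_{ph}$ are represented as matrices on a discretized radial grid, Eq. (\[GRPA\]) reduces to a single linear solve. The Python sketch below illustrates only this linear-algebra step; the matrices are random placeholders, not output of an actual Skyrme-HF calculation.

```python
import numpy as np

# Schematic version of Eq. (GRPA): G_RPA = (1 - G0 v_ph)^{-1} G0.
# G0 stands in for the discretized noninteracting p-h Green function
# and v_ph for the residual p-h interaction (both placeholders here).
n = 4
rng = np.random.default_rng(0)
G0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v_ph = 0.1 * np.eye(n)

G_rpa = np.linalg.solve(np.eye(n) - G0 @ v_ph, G0)

# The solution obeys the Dyson form G_RPA = G0 + G0 v_ph G_RPA exactly;
# the strength function would then follow as (1/pi) Im Tr(Q^dag G_RPA Q).
assert np.allclose(G_rpa, G0 + G0 @ v_ph @ G_rpa)
```

The same one-line solve applies at each energy $E$ when the response is scanned over an energy grid.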
Results and discussions ======================= We first perform the HF calculation with the SkM$^{*}$ interaction and then use the RPA with the same interaction, including simultaneously both the IS and the IV correlations. It is solved in coordinate space with the Green’s function method, taking the continuum into account exactly. The calculated unperturbed strengths of $_{82}^{208}Pb_{126}$, $_{20}^{60}Ca_{40}$ and $_8^{28}O_{20}$ are shown in Fig. \[Fig.1\]. In the harmonic oscillator model, the low-lying octupole strength ($\triangle N=1$) is approximately equal to the high-lying ($\triangle N=3$) strength in the large $N$ limit, and the low-lying and high-lying octupole strengths approximately exhaust $25\%$ and $75\%$ of the energy weighted sum rule, respectively. In real $\beta$-stable nuclei $N$ is finite and the low-lying strength is less than that predicted by the harmonic oscillator model in the large $N$ limit. From Fig. \[Fig.1\](a) it can be seen that for $_{82}^{208}Pb_{126}$, the unperturbed low-lying ($\triangle N=1$) strength is centered at about 8 MeV and spread over an energy range of the same order. It exhausts approximately $40\%$ of the total strength. The high-lying ($\triangle N=3$) strength is centered at about 24 MeV and exhausts approximately $60\%$ of the total strength. For $_{20}^{60}Ca_{40}$, the unperturbed strength (see Fig. \[Fig.1\](b)) spreads over an energy range from 2 MeV to 15 MeV and exhausts about $60\%$ of the total strength. For $_8^{28}O_{20}$ (see Fig. \[Fig.1\](c)), except for a few high-lying proton excitations, nearly all of the octupole strength lies within the low energy region. For $_{20}^{60}Ca_{40}$ and $_8^{28}O_{20}$, a large amount of unperturbed neutron octupole strength is shifted down to a very low energy region.
This downward shifting of the octupole strengths is attributed to the disappearance of the magic number N=50 for $_{20}^{60}Ca_{40}$, as shown in Fig. \[Fig.2\](b), and the disappearance of the magic numbers N=20 and N=28 for $_8^{28}O_{20}$, as shown in Fig. \[Fig.2\](a). For all three nuclei, the calculated energy weighted sum rules are equal to the classical sum rules to good accuracy. In Fig. \[Fig.2\], one-particle energies are given at fixed neutron number, (a) N=20 and (b) N=40, as a function of proton number. It is easy to see in Fig. \[Fig.2\](a) that, near the neutron drip line, the magic number N=20 disappears, which agrees with the experimental observation [@n20], and a new magic number N=16 appears, which is consistent with the conclusion of Ref. [@n16]. Similar phenomena appear in Fig. \[Fig.2\](b): near the neutron drip line, the N=50 magic number disappears. Fig. \[Fig.3\] gives the IS and IV RPA octupole strengths for $_{82}^{208}Pb_{126}$, $_{20}^{60}Ca_{40}$ and $_8^{28}O_{20}$. For the $\beta$-stable nucleus $_{82}^{208}Pb_{126}$ (Fig. \[Fig.3\](a)), we see a strong IS peak below threshold at Ex=3.48 MeV with strength $B(\lambda =3,IS:3^{-}\rightarrow 0^{+})=5.68\times 10^5fm^6$, which corresponds to $B(E3:3^{-}\rightarrow 0^{+})=(\frac{Ze}A)^2 \times 5.68\times 10^5fm^6=0.89\times 10^5e^2fm^6$. These values are comparable to the experimental value $B(E3:3^{-}\rightarrow 0^{+})=1.0\times 10^5e^2fm^6$ at an energy of 2.61 MeV. The IS giant resonance is at about 20.5 MeV and the IV giant resonances are mainly distributed in the energy region from 25 MeV to 35 MeV. In Fig. \[Fig.4\] and Fig. \[Fig.5\] we show, for the case of $_{82}^{208}Pb_{126}$, the transition densities and radial current components of the IS excitation at Ex=3.48 MeV, the IS giant resonance at Ex=20.5 MeV, and the IV giant resonance at Ex=34.2 MeV, together with the predictions of the Bohr-Tassie model, respectively.
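A quick arithmetic check of the conversion quoted above for $_{82}^{208}Pb_{126}$ (the factor $(Z/A)^2$, with $Z=82$ and $A=208$, applied to the calculated IS strength):

```python
# Check of B(E3) = (Z/A)^2 * B(IS) for 208Pb, with the calculated
# IS strength B(IS) = 5.68e5 fm^6 at Ex = 3.48 MeV.
Z, A = 82, 208
B_IS = 5.68e5                 # fm^6
B_E3 = (Z / A) ** 2 * B_IS    # e^2 fm^6

# ~0.88e5 e^2 fm^6, matching the quoted 0.89e5 up to rounding and
# comparable to the measured 1.0e5 at 2.61 MeV.
assert 0.85e5 < B_E3 < 0.92e5
```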
From these figures, it can be seen that the IS and IV octupole excitations are surface modes and they are well described by the collective model. For the strong collective IS state at 3.48 MeV in Fig. 4(a) and Fig. 5(a), there are some differences between the two models at small $r$, but in the surface region, with $r$ at about 7 $fm$, this excitation can be well described by the Bohr-Tassie model. In neutron-excess nuclei, the IS mode (shape vibration) gives rise to an IV moment which is proportional to $(N-Z)$, so an IS mode carries IV strength, and the ratio of IV strength/IS strength is expected to be $(\frac{N-Z}{A})^2$, as pointed out in Refs. [@work3; @Cat97; @Sagawa]. We calculated this ratio for the collective IS mode at Ex=3.48 MeV and the giant IS resonance around Ex=20.5 MeV in $_{82}^{208}Pb_{126}$. Both modes give the ratio $(\frac{126-82}{208})^2$ to good accuracy. For $_{20}^{60}Ca_{40}$ (Fig. \[Fig.3\](b)), the main IS strengths are shifted down to the low energy region. Below 5 MeV there are 4 strong IS peaks and one of them is below threshold (Ex=1.91 MeV). From the transition densities and radial current components in Fig. \[Fig.6\], the IS collective state at Ex=1.91 MeV can be described by the Bohr-Tassie model. The transition densities and radial current components of the other three IS peaks below 5 MeV are shown in Fig. \[Fig.7\] and Fig. \[Fig.8\], respectively. From Fig. \[Fig.7\] it can be seen that these three IS peaks are surface modes and both neutrons and protons contribute. This indicates that they are collective. In the central region of the nucleus there are differences between the transition densities of the Bohr-Tassie model and those of the RPA calculation. But in the surface region the results of the RPA calculation are similar to those of the collective model. From Fig. \[Fig.8\] we see that for the currents, the Bohr-Tassie model results differ substantially from those of the RPA, even in the surface region.
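For reference, the collective-model baseline $(\frac{N-Z}{A})^2$ is easy to tabulate for the three nuclei studied here (a trivial side calculation, not part of the original analysis):

```python
# Collective-model baseline for the IV/IS strength ratio, ((N-Z)/A)^2.
baselines = {}
for name, N, Z in [("208Pb", 126, 82), ("60Ca", 40, 20), ("28O", 20, 8)]:
    A = N + Z
    baselines[name] = ((N - Z) / A) ** 2

# 208Pb: ~0.045; 60Ca: ~0.111; 28O: ~0.184. The low-lying IS peaks in
# 60Ca and 28O carry IV/IS ratios a few times larger than these values.
assert abs(baselines["208Pb"] - (44 / 208) ** 2) < 1e-12
assert abs(baselines["60Ca"] - 1 / 9) < 1e-9
```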
For example, in our RPA calculation the small component $j_{3,4}(r)$ is comparable to the large component $j_{3,2}(r)$, but the Bohr-Tassie model gives $j_{3,4}(r)=0$. This difference will be explained from the viewpoint of particle-vibration coupling later. The ratios of IV strength/IS strength for these 4 strong IS peaks (below 5 MeV) in $^{60}_{20}Ca_{40}$ are also calculated. We find, however, that unlike the case of the $\beta$-stable nucleus $_{82}^{208}Pb_{126}$, the calculated ratios are larger than $(\frac{N-Z}{A})^2$ by a factor of 2 to 4. This result is closely related to the neutron orbits with small binding energies, which push down the unperturbed octupole strengths to a very low energy region. For $_8^{28}O_{20}$ (see Fig. \[Fig.3\](c)), the high-lying IS giant resonance nearly disappears and almost all the IS octupole strength is shifted down to low energy. The transition densities and radial current components for the lowest three peaks are shown in Fig. \[Fig.9\] and Fig. \[Fig.10\], respectively. From Fig. \[Fig.9\] we see that the three peaks are surface modes and both protons and neutrons contribute. In Fig. \[Fig.1\] and Fig. \[Fig.3\] we notice that the IS strengths of these peaks are larger than the unperturbed ones, which shows that they are collective. The Bohr-Tassie model can approximately describe the transition densities in the surface region. The Bohr-Tassie model gives quite different current properties for these IS peaks in Fig. \[Fig.10\]. The reason will be analyzed later. We also calculated the ratios of IV strength/IS strength for these three IS peaks. The calculated values are a factor of 3 to 5 larger than $(\frac{N-Z}{A})^2$ in $^{28}_{8}O_{20}$. Similar to $^{60}_{20}Ca_{40}$, this is also related to the least bound neutrons. Next we try to understand the collective properties of these low-lying IS excitations in $^{60}_{20}Ca_{40}$ and $^{28}_{8}O_{20}$ based on particle-vibration coupling. We have checked that the signs of the ratio Eq.
(\[ratio\]) in our cases are always positive, so we only consider their absolute values here. Fig. \[Fig.11\] shows a few low-lying neutron unperturbed octupole strengths in $^{60}_{20}Ca_{40}$ for the radial operators $r^3$, $r^2 \frac{dU(r)}{dr}$ and $r^2\frac{d\rho _0(r)}{dr}$, and Fig. \[Fig.12\] shows the corresponding quantities for $_8^{28}O_{20}$. Here $U(r)$ is the neutron radial HF potential and $\rho _0(r)$ is the ground state density. From Fig. \[Fig.11\] we see that for the transition from a bound state to a nonresonance state, $1f_{\frac 52 }\rightarrow 3s_{\frac 12}$, there is a pronounced peak in the octupole strength function for the radial operator $r^{3}$, but almost no peak for the other two radial operators $r^2\frac{dU(r)}{dr}$ and $r^2\frac{d\rho _0(r)}{dr}$. Just like for the transitions between bound states, $1f_{\frac 52}\rightarrow 1g_{\frac 92}$ and $2p_{\frac 32}\rightarrow 1g_{\frac 92}$, there are strong peaks of octupole strength for all three radial operators in the corresponding energy region for the transitions from bound states to resonance states, $1f_{\frac 5 2}\rightarrow 2d_{\frac 5 2}$ and $2p_{\frac 1 2}\rightarrow 2d_{\frac 5 2}$. This means that the ratios of Eq. (\[ratio\]) for transitions to nonresonance states are much smaller than those for the transitions to resonance states and to bound states. In the RPA calculation the unperturbed octupole strength of the radial operator $r^3$ for the transition from the bound state to the nonresonance state $1f_{\frac 52 }\rightarrow 3s_{\frac 12}$ will hardly be affected, but the strengths of the radial operator $r^3$ for the transitions from bound states to resonance states and to bound states will be strongly absorbed into the collective excitations.
Because the unperturbed octupole strength of the transition from the bound state to the nonresonance state $1f_{\frac 52}\rightarrow 3s_{\frac 12}$ for the radial operator $r^3$ is mainly distributed within the energy region from 3.0 MeV to 4.5 MeV, the collective excitations coexist with the decoupled strong continuum strength near the threshold in these three lowest IS peaks. That is why the Bohr-Tassie model can only describe the transition density of the RPA calculation well in the surface region for the low-lying excitations in $^{60}_{20}Ca_{40}$, but cannot describe the calculated transition current well. From Fig. \[Fig.12\] we see similar results for $^{28}_{8}O_{20}$. For the transition from a bound state to a nonresonance state, $1d_{\frac 32}\rightarrow 2p_{\frac 32}$, the ratio in Eq. (\[ratio\]) is much smaller than those for the transitions from bound states to resonance states, $1d_{3/2}\rightarrow 1f_{7/2}$ and $2s_{1/2}\rightarrow 1f_{7/2}$. In these three lowest IS peaks, the collective excitations coexist with the decoupled strong continuum strength near the threshold. That is the reason why the Bohr-Tassie model can only approximately describe the calculated transition density in the surface region for the three lowest IS peaks in $_8^{28}O_{20}$, but cannot describe the transition current of our calculation well. Summary and Conclusions ======================= The octupole vibrations of the $\beta$-stable nucleus $_{82}^{208}Pb_{126}$, the neutron skin nucleus $_{20}^{60}Ca_{40}$ and the drip line nucleus $_8^{28}O_{20}$ are studied. It is found that the lowest IS excitations below threshold for the nuclei $_{82}^{208}Pb_{126}$ and $_{20}^{60}Ca_{40}$, and the IS and IV giant resonances of the $\beta$-stable nucleus $_{82}^{208}Pb_{126}$, can be well described by the collective model, at least in the surface region.
For the neutron skin nucleus $_{20}^{60}Ca_{40}$ and the neutron drip line nucleus $_8^{28}O_{20}$, the low-lying unperturbed neutron octupole strength ($\triangle N=1$) contains transitions from bound states to bound states, resonance states, and nonresonance states. When the RPA correlation is taken into account, the strengths of the transitions to nonresonance states are nearly unaffected, while the strengths of the other transitions are strongly absorbed into collective states. So the collective excitations coexist with the decoupled strong continuum strength near the threshold in the lowest IS states. We also find that, for the $\beta$-stable nucleus $_{82}^{208}Pb_{126}$, both the low-lying IS states and the giant IS resonances carry an IV component and the ratios of IV strength/IS strength are equal to $(\frac{N-Z}{A})^2$ to good accuracy, but for the neutron skin nucleus $_{20}^{60}Ca_{40}$ and the neutron drip line nucleus $_8^{28}O_{20}$, these ratios for a few of the lowest strong IS excitations are much larger than $(\frac{N-Z}{A})^2$. These results are closely related to the small binding energies of the neutron orbits in these nuclei. The octupole transitions from these orbits are mainly distributed in the low energy region, so the contribution to the low-lying ($\triangle N=1$) octupole states from neutrons is much larger than that from protons. This work is supported by the National Natural Science Foundation of China under contract 10047001 and the Major State Basic Research Development Program under contract No. G200077400. We are grateful to I. Hamamoto and H. Sagawa for providing us with the continuum RPA program. [99]{} I. Hamamoto, H. Sagawa and X. Z. Zhang, Phys. Rev. C 53 (1996) 765. I. Hamamoto and H. Sagawa, Phys. Rev. C 53 (1996) R1492. I. Hamamoto and H. Sagawa, Phys. Rev. C 54 (1996) 2369. I. Hamamoto, H. Sagawa and X. Z. Zhang, Phys. Rev. C 55 (1997) 2361; J. Phys. G 24 (1998) 1417. I. Hamamoto, H. Sagawa and X. Z. Zhang, Nucl. Phys.
A 626 (1997) 669. I. Hamamoto, H. Sagawa and X. Z. Zhang, Phys. Rev. C 56 (1997) 3121. I. Hamamoto, H. Sagawa and X. Z. Zhang, Phys. Rev. C 57 (1998) R1064. A. Bohr and B. R. Mottelson, Nuclear Structure, Vol. II (Benjamin, New York, 1975). L. T. Tassie, Australian J. Phys. 9 (1956) 407. F. Catara, E. G. Lanza, M. A. Nagarajan, and A. Vitturi, Nucl. Phys. A 614 (1997) 86. I. Hamamoto, H. Sagawa and X. Z. Zhang, Phys. Rev. C 64 (2001) 024313. I. Hamamoto, H. Sagawa and X. Z. Zhang, Nucl. Phys. A 648 (1999) 203. T. Suzuki and D. J. Rowe, Nucl. Phys. A 286 (1977) 307. I. Hamamoto and X. Z. Zhang, Phys. Rev. C 58 (1998) 3388. D. Guillemaud-Mueller [*et al.*]{}, Nucl. Phys. A 426 (1984) 37; T. Motobayashi [*et al.*]{}, Phys. Lett. B 346 (1995) 9. A. Ozawa, T. Kobayashi, Y. Suzuki, K. Yashida and I. Tanihata, Phys. Rev. Lett. 84 (2000) 5439. H. Sagawa, Phys. Rev. C 64 (2002) 064314. ![Unperturbed octupole strengths of (a) $_{82}^{208}Pb_{126}$, (b) $_{20}^{60}Ca_{40}$ and (c) $_8^{28}O_{20}$.[]{data-label="Fig.1"}](FIG1.eps){width="2.2in"} ![One-particle energies vary with proton number for (a) neutron number $N=20$ and (b) neutron number $N=40$.[]{data-label="Fig.2"}](FIG2.eps){width="2.2in"} ![Isoscalar and isovector octupole strengths of (a) $_{82}^{208}Pb_{126}$, (b) $_{20}^{60}Ca_{40}$ and (c) $_8^{28}O_{20}$.[]{data-label="Fig.3"}](FIG3.eps){width="2.2in"} ![(right) Radial transition current components of low-lying and high-lying IS and IV octupole modes of $_{82}^{208}Pb_{126}$.[]{data-label="Fig.5"}](FIG4.EPS "fig:"){width="2.2in"} ![(right) Radial transition current components of low-lying and high-lying IS and IV octupole modes of $_{82}^{208}Pb_{126}$.[]{data-label="Fig.5"}](FIG5.eps "fig:"){width="2.2in"} ![Radial transition densities and current components of low-lying IS octupole modes of $_{20}^{60}Ca_{40}$.[]{data-label="Fig.6"}](FIG6.eps){width="2.2in"} ![(right) Radial current components of low-lying IS octupole modes of
$_{20}^{60}Ca_{40}$.[]{data-label="Fig.8"}](FIG7.eps "fig:"){width="2.2in"} ![(right) Radial current components of low-lying IS octupole modes of $_{20}^{60}Ca_{40}$.[]{data-label="Fig.8"}](FIG8.eps "fig:"){width="2.2in"} ![(right) Radial current components of low-lying IS octupole modes of $_8^{28}O_{20}$.[]{data-label="Fig.10"}](FIG9.eps "fig:"){width="2.2in"} ![(right) Radial current components of low-lying IS octupole modes of $_8^{28}O_{20}$.[]{data-label="Fig.10"}](FIG10.eps "fig:"){width="2.2in"} ![A few low-lying unperturbed octupole strengths in $_{20}^{60}Ca_{40}$ for the radial operators (a) $r^{3}$, (b) $r^2\frac{dU(r)}{dr}$ and (c) $r^2\frac{d\rho _0(r)}{dr}$, where $U(r)$ is the neutron radial HF potential and $\rho_{0} (r)$ is the ground state density.[]{data-label="Fig.11"}](FIG11.EPS){width="2.2in"} ![A few low-lying unperturbed octupole strengths in $_{8}^{28}O_{20}$ for the radial operators (a) $r^{3}$, (b) $r^2\frac{dU(r)}{dr}$ and (c) $r^2\frac{d\rho _0(r)}{dr}$, where $U(r)$ is the neutron radial HF potential and $\rho_{0} (r)$ is the ground state density.[]{data-label="Fig.12"}](FIG12.EPS){width="2.2in"}
--- abstract: 'In concentrated electrolytes with asymmetric or irregular ions, such as ionic liquids and solvent-in-salt electrolytes, ion association is more complicated than simple ion pairing. Large branched aggregates can form in significant amounts even at moderate salt concentrations. When the extent of ion association reaches a certain threshold, a percolating ionic gel network can form spontaneously. Gelation is a phenomenon that is well known in polymer physics, but it is practically unstudied in concentrated electrolytes; despite this, the ion-pairing description is often applied to these systems for the sake of simplicity. In this work, drawing strongly from established theories in polymer physics, we develop a simple thermodynamic model of reversible ionic aggregation and gelation in concentrated electrolytes, accounting for the competition between ion solvation and ion association. Our model predicts the populations of ionic clusters of different sizes as a function of salt concentration; it captures the onset of ionic gelation and also the post-gel partitioning of ions into the gel. We discuss the applicability of our model, as well as the implications of its predictions for thermodynamic, transport, and rheological properties.' author: - Michael McEldrew - 'Zachary A. H. Goodwin' - Sheng Bi - 'Martin Z. Bazant' - 'Alexei A.
Kornyshev' bibliography: - 'gel\_paper.bib' title: 'Theory of Ion Aggregation and Gelation in Super-Concentrated Electrolytes' ---

  -------------------------- -----------------------------------------------------------------
  $N_{lmsq}$                 Number of $lmsq$ clusters
  $N^{gel}_i$                Number of species $i$ in gel
  $f_i$                      Functionality of species $i$
  $v_i$                      Volume of species $i$
  $\xi_i$                    Scaled volume of species $i$
  $V$                        Total volume of mixture
  $\Omega$                   Number of lattice sites
  $c_{lmsq}$                 Dimensionless concentration of an $lmsq$ cluster
  $c^{gel}_{i}$              Dimensionless concentration of species $i$ in gel
  $c_{tot}$                  Total dimensionless concentration of clusters
  $\phi_i$                   Total volume fraction of species $i$
  $\phi_{\pm}$               Volume fraction of salt
  $\phi^{sol}_i$             Volume fraction of species $i$ in sol
  $\phi^{gel}_i$             Volume fraction of species $i$ in gel
  $\phi_{lmsq}$              Volume fraction of an $lmsq$ cluster
  $\psi_{i}$                 Concentration of association sites of species $i$
  $\beta$                    Inverse thermal energy
  $\Delta F$                 Free energy
  $\Delta_{lmsq}$            Free energy of formation of a rank $lmsq$ cluster
  $\Delta_{lmsq}^{comb}$     Combinatorial free energy of formation of a rank $lmsq$ cluster
  $\Delta_{lmsq}^{bond}$     Bonding free energy of formation of an $lmsq$ cluster
  $\Delta_{lmsq}^{conf}$     Configurational free energy of formation of an $lmsq$ cluster
  $\Delta_{lmsq}^{el}$       Electrostatic free energy of formation of an $lmsq$ cluster
  $\Delta^{gel}_{i}$         Free energy change of species $i$ associating to the gel
  $\mu_{lmsq}$               Chemical potential of an $lmsq$ cluster
  $\mu^{gel}_{i}$            Chemical potential of species $i$ in the gel
  $K_{lmsq}$                 Equilibrium constant
  $W_{lmsq}$                 Combinatorial enumeration
  $\Delta u_{ij}$            Association free energy
  $S_{lmsq}$                 Configurational entropy of a cluster
  $Z$                        Coordination number of lattice
  $\Lambda_{ij}$             Association constant between $i$ and $j$
  $\tilde{\Lambda}$          Association ratio
  $p_{ij}$                   Association probabilities
  $p^{sol}_{ij}$             Association probabilities in the sol
  $\zeta$                    Number of anion-cation associations
  $\Gamma$                   Number of cation-solvent associations
  $\Xi$                      Number of anion-solvent associations
  $\alpha$                   Branching coefficient
  $\bar{n}_w$                Weight average of ionic aggregation
  $\alpha_{lm}$              Fraction of ions in $lm$ clusters
  $\mathcal{K}$              Cluster distribution constant
  $w^{gel}_i$                Fraction of species in the gel
  $w^{sol}_i$                Fraction of species in the sol
  $G_e$                      Equilibrium shear modulus
  $R$                        Gas constant
  $c$                        Molar concentration of salt
  -------------------------- -----------------------------------------------------------------

  : List of Variables \[tab:my\_label\]

Introduction ============ For most dilute electrolytes with high permittivity solvents, it is reasonable to assume that the salt is perfectly dissociated, as confirmed by classical experiments [@harned1959physical]. However, for moderately concentrated systems or dilute solutions with low permittivity solvents, incomplete dissociation of ions can be substantial [@IBL]. Bjerrum popularized the concept of ion pairing, which was able to account for some deviations of experimental results from theoretical predictions [@bjerrum1926k]. In the Bjerrum theory of ion pairing, an ion pair is formed when the separation of oppositely charged ions is smaller than the length scale at which the Coulomb interaction is equivalent to the thermal energy (known as the Bjerrum length). Many theoretical studies have focused on extending or modifying Bjerrum’s treatment/definition of ion pairs, and we direct the readers to Ref.  for an extensive review on the topic. Only a small fraction of studies considered ion aggregates larger than just simple ion pairs [@kraus1933_1; @fuoss1933_4; @fuoss1933_9; @barthel2000application], but even those works apply only for moderate concentrations and model only simple ionic clusters.
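As a concrete aside, the Bjerrum length mentioned above is $l_B = e^2/(4\pi\varepsilon_0\varepsilon_r k_B T)$; evaluating it with standard constants gives the familiar value of roughly $0.7$ nm for water at room temperature ($\varepsilon_r \approx 78$):

```python
import math

# Bjerrum length: the separation at which the Coulomb energy of two
# unit charges equals the thermal energy k_B T.
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
kB = 1.380649e-23          # Boltzmann constant, J/K
T, eps_r = 298.15, 78.4    # room temperature, relative permittivity of water

l_B = e**2 / (4 * math.pi * eps0 * eps_r * kB * T)

assert 0.65e-9 < l_B < 0.75e-9   # ~0.71 nm
```

For a low-permittivity solvent the same formula gives a much larger $l_B$, which is why ion pairing is substantial there even at low salt concentration.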
In super-concentrated electrolytes, such as ionic liquids (ILs) or solvent-in-salt electrolytes (SiSEs), the picture is more complicated. With the recent explosion of interest in this regime for electrochemical applications [@Suo2013; @Sodeyama2014; @Suo2015; @Smith2015; @Yamada2016; @Wang2016; @Gambou-Bosca2016; @Sun2017; @suo2017water; @Dong2017; @Diederichsen2017; @yang2017a; @Yang2017; @wang2018hybrid; @leonard2018; @Yang2019; @Dou2019], a complete description of ion aggregation may be necessary for understanding the physicochemical, electrochemical, and thermodynamic properties of these concentrated mixtures. For ionic liquids, it has been useful to introduce the concept of free ions, without fully describing the nature of the associated species [@Chen2017; @goodwin2017underscreening]. These concepts have been applied to ILs to reproduce the temperature dependence of ionic conductivities [@feng2019free] and the differential capacitance [@Chen2017], although these simple pictures still cannot fully explain the so-called underscreening paradox in ILs [@Gebbie2013; @Gebbie2015; @Smith2016; @goodwin2017mean; @goodwin2017underscreening]. In SiSEs, as well as IL mixtures, there have been a multitude of molecular dynamics [@kim2014ion; @choi2014ion; @choi2015ion; @choi2017ion; @borodin2017liquid; @france2019; @yu2020asymmetric] and experimental [@borodin2017liquid; @lim2018; @lewis2020signatures] studies detailing complex ion association and hydration, often manifesting in highly asymmetric or even negative [@molinari2019transport; @molinari2019general] transference numbers. Although these molecular simulations and experimental studies provide valuable insight, this insight is often constrained to specific systems and is not readily transferable to new systems. For super-concentrated electrolytes, it would therefore be beneficial to have a theoretical description of ion aggregates of arbitrary size, but to our knowledge, such a theory has not been reported in the literature.
Hence, in this article, we will formulate a thermodynamic model of ionic association beyond a simple description of ion pairing (or even triple and quadruple ions). Ultimately, we want our model to capture a distribution of aggregate sizes and even the formation of arbitrarily large ionic aggregates. In building such a model, we draw inspiration from polymer physics. In the early 1940s, Flory [@flory1941molecular; @flory1942constitution] and Stockmayer [@stockmayer1943theory; @stockmayer1944theory] derived expressions describing the most likely distribution of polymer molecular weights in a mixture. These expressions only require knowledge of the probability of the polymerization reaction, as well as the *functionalities*, $f$, of the monomers. Functionalities refer to the number of bonds a monomer unit can make to extend the polymer. When $f=2$, long linear chains can be formed, but when $f>2$, the aggregates become branched and increasingly complex. Moreover, when $f>2$, Flory and Stockmayer were able to show that at a certain extent of reaction a percolating polymer network will be spontaneously formed in a process referred to as gelation. In the polymers community, this percolating network is referred to as a *gel*, while the remaining finite species in the mixture are referred to as the *sol*. The gelation phenomenon outlined by Flory and Stockmayer turned out to be analogous to the percolation problem on a Bethe lattice [@stauffer1994introduction]. The theories of Flory and Stockmayer were formulated to describe the largely irreversible covalent bond formation characteristic of condensation polymerization reactions, as opposed to the more reversible physical associations of ions. Starting in the late 1980s, Tanaka pioneered the theory of *thermoreversible* polymer association and gelation [@tanaka1989; @tanaka1990thermodynamic; @tanaka1994; @tanaka1995; @ishida1997; @tanaka1998; @tanaka1999; @tanaka2002].
In his work, Tanaka models the physical association between polymer strands within a thermodynamic framework that is able to capture the distribution of polymeric clusters, as well as the presence and breadth of gel networks. Of particular interest to us is the two-component case, in which Tanaka describes a mixture of two types of polymer strands that associate heterogeneously in an alternating fashion [@tanaka1998]. This is quite analogous to ion association in that ions will only associate to counterions. Thus, our theory of ion association and gelation in concentrated ionic systems will build upon that of Tanaka. This paper is split into two main sections: Theory and Discussion. The Theory section is split into five subsections. First, we describe the stoichiometric definitions of our mixture, as well as its free energy of mixing. Then, we minimize that free energy, yielding our pre-gel cluster distribution in terms of “free” species volume fractions. In the third theory section we introduce “association probabilities” that allow us to write the pre-gel cluster distributions in terms of experimentally accessible overall species volume fractions. In the fourth section, we describe the mechanism for gelation and derive the criterion for its onset. In the last theory section, we derive the post-gel relationships, yielding the post-gel cluster distribution and the gel/sol partitioning. We end the paper by discussing the applicability of our model and some of its implications for observable thermodynamic, transport, and rheological properties of the electrolyte solution, in particular those properties affected by the presence of ionic gel. At the start of this paper, we have a list of symbols in Tab. \[tab:my\_label\].
Theory
======

We consider a polydisperse mixture of $\sum_{lmsq}N_{lmsq}$ ionic clusters, each containing $l$ cations, $m$ anions, $s$ solvent molecules associated to cations, and $q$ solvent molecules associated to anions ($lmsq$ cluster), and (if present) an interpenetrating gel network containing $N_+^{gel}$ cations, $N_-^{gel}$ anions, and $N_0^{gel}$ solvent molecules. We model the cations to have a functionality (defined as the number of associations that the species can make) of $f_+$, and anions to have a functionality of $f_-$. This means that a cation (anion) is able to associate with $f_+$ ($f_-$) anions (cations) or solvent molecules. We also consider the ability of solvent molecules to coordinate to cations or anions with a functionality of 1. This means that we neglect the ability of solvent molecules to bridge ionic clusters through interactions with multiple ions, and thereby neglect the formation of any solvent-mediated clustering/gelation. This is obviously a simplification, justified by the assumption that clusters that are not ‘glued’ by direct ion–counter-ion interactions are more labile, and as such can be disregarded. A typical ion cluster consistent with our description is depicted in Fig. \[fig:ion\_cluster\]. ![A cartoon example of cation/anion/solvent clusters that may be found with a certain probability in a model concentrated electrolyte. In this case, we have drawn a cluster in which $f_+=4$, $f_-=3$, $l=4$, $m=3$, $s=7$, and $q=3$.[]{data-label="fig:ion_cluster"}](ion_cluster_pt.png){width="35.00000%"} Following Tanaka, we account for molecular volumes by using a lattice model. We designate a single lattice site to have the volume of a single solvent molecule, $v_0$. Thus the entire volume of the mixture, $V$, is divided into $\Omega = V/v_0$ lattice sites. Moreover, cations will occupy $\xi_+=v_+/v_0$ lattice sites, and anions will occupy $\xi_-=v_-/v_0$ lattice sites.
Furthermore, when a gel is formed, we distinguish between the volume fractions of gel (superscript $gel$) and sol (superscript $sol$). The sol and gel volume fractions together constitute the total volume fraction $\phi_j$ of a given species $j$: $$\begin{aligned} \phi_j=\phi_j^{sol}+\phi_j^{gel} \end{aligned}$$ in which the gel volume fraction is defined as $\phi_j^{gel}=\xi_j N_j^{gel}/\Omega$, with $N_j^{gel}$ the mole number of species $j$ in the gel. The subscript $j=+,-,0$ corresponds to cation, anion, and solvent, respectively. The sol volume fractions of cations, anions, and solvent molecules are defined, respectively, as $$\begin{aligned} \phi^{sol}_+=\sum_{lmsq} \xi_+l c_{lmsq}\end{aligned}$$ $$\begin{aligned} \phi^{sol}_-=\sum_{lmsq} \xi_- m c_{lmsq}\end{aligned}$$ $$\begin{aligned} \phi^{sol}_0=\sum_{lmsq} (s+q) c_{lmsq}\end{aligned}$$ where $c_{lmsq}=N_{lmsq}/\Omega$ is the dimensionless concentration of an $lmsq$ cluster (the number of $lmsq$ clusters per lattice site). Similarly, we define $\phi_\pm=\phi_++\phi_-$, which is the total volume fraction of the salt in solution. For simplicity the mixture is assumed to be incompressible, i.e. $$\begin{aligned} 1=\phi_\pm+\phi_0=\phi_++\phi_-+\phi_0 \label{eq:incomp}\end{aligned}$$ $\phi_+$ and $\phi_-$ are not independent owing to electroneutrality: $\phi_+/\xi_+=\phi_-/\xi_-$. The reduced volume of the mixture, $\Omega$, can also be expressed in terms of the mole number of each species/component due to the incompressibility constraint \[Eq. \] $$\begin{aligned} \Omega=\sum_{lmsq}(\xi_+l+\xi_-m+s+q)N_{lmsq}+\xi_+N^{gel}_++\xi_-N^{gel}_-+N^{gel}_0 \label{eq:O}\end{aligned}$$ This definition must be used when differentiating the free energy of the mixture. Another important quantity that will be used abundantly later in the paper is the dimensionless concentration of association sites (number of association sites per lattice site).
We denote this quantity by $\psi_j$ and define it as the following $$\begin{aligned} \psi_j=f_j \phi_j/\xi_j\end{aligned}$$ Thus, $\psi_j$ is the number of $j$ association sites per lattice site. Note that for solvent molecules $\psi_0=\phi_0$.

Free Energy
-----------

We use a Flory-Huggins-like free energy of mixing given in units of thermal energy, $\beta = 1/k_BT$, $$\begin{aligned} \beta \Delta F &= \sum_{lmsq} \left[N_{lmsq}\ln \left( \phi_{lmsq} \right)+N_{lmsq}\Delta_{lmsq}^{\theta}\right] \nonumber \\ &+ \sum_{lmsq} \left[N_{lmsq} \delta_{l,1}\delta_{m,0}(\ln \gamma^{DH}_++\Delta u_+^{born})+N_{lmsq}\delta_{m,1}\delta_{l,0}(\ln \gamma^{DH}_-+\Delta u_-^{born}) \right] \nonumber \\ &+ \Delta^{gel}_+ N^{gel}_+ + \Delta^{gel}_- N^{gel}_- + \Delta^{gel}_0 N^{gel}_0 \label{eq:F}\end{aligned}$$ where $\phi_{lmsq}=(\xi_+l+\xi_-m+s+q)N_{lmsq}/\Omega$ is the volume fraction of an $lmsq$ cluster, $\Delta^{\theta}_{lmsq}$ is the ideal free energy of formation of an $lmsq$ cluster from its unassociated constituents, $\gamma^{DH}_\pm$ is the Debye-Hückel ionic activity coefficient (defined later), $\Delta u^{Born}_\pm$ is the Born solvation free energy of an ion (defined later), $\delta_{i,j}$ is the Kronecker delta, and $\Delta^{gel}_i$ is the free energy change of species $i$ associating to the gel [@flory1942thermodynamics; @flory1953principles; @tanaka1989]. We should note that Flory-Huggins-type free energies typically contain regular solution interaction parameters between species in order to model phase separation, but we have omitted them here for the sake of simplicity. The free energy in Eq.  contains three essential pieces of physics: the entropy of mixing for a distribution of ion/solvent clusters and the gel, the association free energy corresponding to the formation of clusters or the gel, and finally the electrostatic non-idealities of free ions in solution.
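Before unpacking each term of the free energy, note that the stoichiometric bookkeeping defined above (electroneutrality, incompressibility, and the site densities $\psi_j$) is simple enough to sketch in a few lines. This is an illustrative fragment only; the function name and the example parameters are ours, not taken from any published implementation.

```python
def site_densities(phi_salt, xi_p, xi_m, f_p, f_m):
    """Split the total salt volume fraction phi_salt into phi_+ and phi_-
    via electroneutrality (phi_+/xi_+ = phi_-/xi_-), apply incompressibility
    for the solvent, and return the association-site densities
    psi_j = f_j * phi_j / xi_j (psi_0 = phi_0, since the solvent has f = 1)."""
    phi_p = phi_salt * xi_p / (xi_p + xi_m)
    phi_m = phi_salt * xi_m / (xi_p + xi_m)
    phi_0 = 1.0 - phi_salt  # incompressibility: phi_+ + phi_- + phi_0 = 1
    return (phi_p, phi_m, phi_0,
            f_p * phi_p / xi_p, f_m * phi_m / xi_m, phi_0)
```

For an asymmetric example (a small cation, $\xi_+=1$, $f_+=5$, and a large anion, $\xi_-=10$, $f_-=4$), electroneutrality fixes how a given salt volume fraction splits between $\phi_+$ and $\phi_-$.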
The entropy of mixing takes into account that species within specific clusters are not entropically independent; the individual clusters, however, are treated ideally. Additionally, $\phi_{lmsq}$ is constrained via the incompressibility condition \[Eqs.  & \]. In the second line of Eq. , we modify the chemical potential of unpaired or free ions by including terms to account for Debye-Hückel screening and the Born solvation free energy of free ions. Differentiating the free energy with respect to $N_{lmsq}$ yields the chemical potential of a cluster of rank $lmsq$ $$\begin{aligned} \beta \mu_{lmsq}&=\ln \phi_{lmsq} + 1 - (l+m+s+q) c_{tot}+\Delta^{\theta}_{lmsq} \nonumber \\ &+\delta_{l,1}\delta_{m,0}(\ln \gamma^{DH}_++\Delta u_+^{born})+\delta_{m,1}\delta_{l,0}(\ln \gamma^{DH}_-+\Delta u_-^{born}) \label{eq:muclust}\end{aligned}$$ where $c_{tot}=\sum_{lmsq}c_{lmsq}$ is the total reduced concentration. Note we have used Eq.  when differentiating the free energy. Additionally, we may define the chemical potential of species immersed in the gel $$\begin{aligned} \beta \mu_+^{gel}=\Delta^{gel}_+ - c_{tot}\end{aligned}$$ $$\begin{aligned} \beta \mu_-^{gel}=\Delta^{gel}_--c_{tot}\end{aligned}$$ $$\begin{aligned} \beta \mu_0^{gel}=\Delta^{gel}_0-c_{tot}.
\end{aligned}$$

Pre-gel Cluster Distribution
----------------------------

The distribution of clusters can be derived by enforcing a chemical equilibrium between all of the clusters and their bare constituents (unassociated components) $$\begin{aligned} l [\text{bare cation}]+m [\text{bare anion}]+(s+q) [\text{bare solvent}]\rightleftharpoons [lmsq\,\,\text{cluster}].\end{aligned}$$ Chemical equilibrium requires that the chemical potentials of free species and those in clusters are equivalent $$\begin{aligned} l \mu_{1000}+m \mu_{0100}+(s+q) \mu_{0010} = \mu_{lmsq} = l\mu^+_{lmsq} + m\mu^-_{lmsq} + (s+q)\mu^{0}_{lmsq} \label{eq:eqm}\end{aligned}$$ Note that we may refer to free solvent molecules with either the index 0001 or 0010. For simplicity we will use the index 0010 to refer to free solvent molecules for the remainder of the text. In Eq. , we have defined the chemical potential of a cation, anion or solvent molecule in an arbitrary cluster in the following manner $$\begin{aligned} \mu^+_{lmsq}=\frac{\partial \mu_{lmsq}}{\partial l}=\mu_{1000}\end{aligned}$$ $$\begin{aligned} \mu^-_{lmsq}=\frac{\partial \mu_{lmsq}}{\partial m}=\mu_{0100}\end{aligned}$$ $$\begin{aligned} \mu^{0}_{lmsq}=\frac{\partial \mu_{lmsq}}{\partial s}=\frac{\partial \mu_{lmsq}}{\partial q}=\mu_{0010}\end{aligned}$$ Solving Eq. 
for an arbitrary $lmsq$ cluster yields the following relation $$\begin{aligned} \phi_{lmsq}=K_{lmsq}\phi_{1000}^l \phi_{0100}^m \phi_{0010}^{s+q} \label{eq:cluster_dist}\end{aligned}$$ where $\phi_{1000}$, $\phi_{0100}$, and $\phi_{0010}$ are the bare species’ volume fractions of cations, anions, and solvent molecules, respectively; and $K_{lmsq}$ is the equilibrium constant, given by $$\begin{aligned} K_{lmsq}=\exp(l+m+s+q-1-\Delta^{\theta}_{lmsq}+\Delta_{lmsq}^{el})\end{aligned}$$ where $$\begin{aligned} \Delta_{lmsq}^{el}=l(\delta_{l,1}\delta_{m,0}-1) (\ln \gamma_+^{DH}+\Delta u^{Born}_+) +m(\delta_{m,1}\delta_{l,0}-1)(\ln \gamma_-^{DH}+\Delta u^{Born}_-). \label{eq:delel1}\end{aligned}$$ Thus, $\Delta_{lmsq}^{el}$ can be considered the electrostatic contribution to the free energy of formation of the cluster. It is convenient to employ the following definition: $$\begin{aligned} \Delta_{lmsq}=\Delta_{lmsq}^{\theta}+\Delta_{lmsq}^{el} \label{eq:delta2}\end{aligned}$$ where $\Delta_{lmsq}$ is now the free energy of formation of an $lmsq$ cluster accounting for the electrostatic non-idealities of free ions, which we will discuss in more detail below. Thus, the partitioning of the species into clusters of different sizes is strongly governed by $\Delta_{lmsq}$. As such, this is where much of the physics of the ion/solvent association will be included.
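As a concreteness check, the mass-action relation above can be evaluated directly. The sketch below is our own illustration; the function name and all numerical values are made up for demonstration.

```python
def phi_cluster(K_lmsq, phi_1000, phi_0100, phi_0010, l, m, s, q):
    """Mass-action relation: phi_lmsq = K_lmsq * phi_1000^l * phi_0100^m
    * phi_0010^(s+q), with phi_1000, phi_0100, phi_0010 the bare cation,
    anion, and solvent volume fractions."""
    return K_lmsq * phi_1000**l * phi_0100**m * phi_0010**(s + q)

# Made-up example: an ion pair carrying one cation-bound solvent molecule.
phi_1110 = phi_cluster(2.0, 0.1, 0.2, 0.5, l=1, m=1, s=1, q=0)
```

The physics enters entirely through $K_{lmsq}$, i.e. through $\Delta_{lmsq}$, whose contributions are dissected next.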
$\Delta_{lmsq}$ contains four contributions $$\begin{aligned} \Delta_{lmsq}=\Delta_{lmsq}^{comb}+\Delta_{lmsq}^{bond}+\Delta_{lmsq}^{conf}+\Delta_{lmsq}^{el} \label{eq:Delta}\end{aligned}$$ where $\Delta_{lmsq}^{comb}$ is the *combinatorial* (entropic) contribution, describing the multiplicity of clusters with the same number of constituents; $\Delta_{lmsq}^{bond}$ is the *bonding* contribution, describing the association enthalpy of the constituents in the cluster; $\Delta_{lmsq}^{conf}$ is the *configurational* contribution, describing the configurational entropy change upon forming a cluster from base constituents; and $\Delta_{lmsq}^{el}$ is the electrostatic contribution, accounting for the long-range electrostatic interactions of free ions in the electrolyte. Note that the first three contributions are the same as those included by Tanaka; the fourth contribution, $\Delta_{lmsq}^{el}$, is a necessary addition for modelling electrolytes due to the presence of free charges in solution. The entropy associated with the combinatorial enumeration, $W_{lmsq}$, of all of the possible ways a cluster with $l$ cations, $m$ anions, and $s+q$ solvent molecules can be formed is given by $$\begin{aligned} \Delta_{lmsq}^{comb}=-\ln \left(W_{lmsq}\right) \label{eq:dcomb}\end{aligned}$$ To derive $W_{lmsq}$ we use a two-step procedure. First, we enumerate the number of ways, $W_{lm}$, to construct a network containing $l$ cations and $m$ anions, which are associated together in an alternating fashion. This combinatorial problem is well known [@stockmayer1952molecular] $$\begin{aligned} W_{lm}=\frac{(f_+ l -l)!(f_- m -m)!}{l!m!(f_+l-l-m+1)!(f_-m-m-l+1)!}.\end{aligned}$$ In the second step, we enumerate the number of ways $s+q$ solvent molecules can be placed on the cation-anion cluster. We know that we may only place the $s$ solvent molecules on the remaining $f_+l-l-m+1$ open cation sites. Thus $s$ must be less than or equal to $f_+l-l-m+1$.
This enumeration is expressed via the binomial coefficient $$\begin{aligned} \mathcal{C}^{f_+l-l-m+1}_s=\frac{(f_+l-l-m+1)!}{s!(f_+l-l-m-s+1)!}.\end{aligned}$$ Similarly, we must place $q$ solvent molecules on the remaining $f_-m-m-l+1$ open anion sites, which can be enumerated via $$\begin{aligned} \mathcal{C}^{f_-m-m-l+1}_q=\frac{(f_-m-m-l+1)!}{q!(f_-m-m-l-q+1)!}.\end{aligned}$$ Thus, we have $$\begin{aligned} W_{lmsq}&=W_{lm}\mathcal{C}^{f_+l-l-m+1}_s\mathcal{C}^{f_-m-m-l+1}_q \nonumber \\ &=\frac{(f_+ l -l)!(f_- m-m)!}{l!m!s!q!(f_+l-l-m-s+1)!(f_-m-m-l-q+1)!}\end{aligned}$$ Next, the bonding contribution, $\Delta^{bond}_{lmsq}$, can be described simply via the association free energies: $\Delta u_{ij}$ between species $i$ and $j$, where $i \neq j$ and $\Delta u_{ij} = \Delta u_{ji}$. Recall that our model does not allow solvent molecules to form clusters among themselves. For this reason, if a cluster contains 0 cations and anions, the cluster will necessarily only contain a single solvent molecule, corresponding to a free solvent molecule. Clearly, a free solvent molecule does not form associations, and thus $\Delta^{bond}_{0010}=\Delta^{bond}_{0001}=0$. Overall, we can write $\Delta^{bond}_{lmsq}$ as $$\begin{aligned} \Delta^{bond}_{lmsq}=\left[(l+m-1)\Delta u_{+-}+s\Delta u_{+0}+q\Delta u_{-0}\right][1-\delta_{l,0}\delta_{m,0}(\delta_{q,0}\delta_{s,1}+\delta_{q,1}\delta_{s,0})] \label{eq:delbond1}\end{aligned}$$ where $\delta_{i,j}$ is the Kronecker delta function. For $l+m>0$, the association free energy for an $lmsq$ cluster is $$\begin{aligned} \Delta^{bond}_{lmsq}=(l+m-1)\Delta u_{+-} + s\Delta u_{+0} + q\Delta u_{-0} \label{eq:dbond}\end{aligned}$$ The coefficient $(l+m-1)$ in front of the cation-anion bond energy, $\Delta u_{+-}$, reflects the fact that a tree-like cluster of $l$ cations and $m$ anions is held together by exactly $l+m-1$ cation–anion associations.
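The combined multiplicity $W_{lmsq}$ is easy to tabulate numerically. The sketch below is our own illustration of the formula above; it returns 0 when the requested cluster would need more open sites than exist.

```python
from math import factorial

def W_lmsq(l, m, s, q, f_p, f_m):
    """Multiplicity of an lmsq cluster: the Stockmayer-type count W_lm
    combined with the binomial placement of s (q) solvent molecules on the
    open cation (anion) sites. f_p, f_m are the ion functionalities."""
    open_cation = f_p * l - l - m - s + 1  # open cation sites left over
    open_anion = f_m * m - m - l - q + 1   # open anion sites left over
    if open_cation < 0 or open_anion < 0:
        return 0.0  # sterically impossible cluster
    return (factorial(f_p * l - l) * factorial(f_m * m - m)
            / (factorial(l) * factorial(m) * factorial(s) * factorial(q)
               * factorial(open_cation) * factorial(open_anion)))
```

For instance, a bare ion pair ($l=m=1$, $s=q=0$) has a single configuration regardless of $f_+$ and $f_-$, while a cluster whose solvent count exceeds the available open sites has multiplicity 0.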
For the configurational contribution, $\Delta^{conf}_{lmsq}$, we use Flory’s lattice-theoretical expression for the entropy of disorientation [@flory1942thermodynamics; @flory1953principles]. Tanaka adapted and modified Flory’s expression for more complicated associating polymer mixtures in refs. [@tanaka1989; @matsuyama1990theory; @tanaka1999], through a procedure outlined by Flory, involving the subsequent placement of lattice-sized bits of molecules onto adjacent lattice sites. From this, we write the configurational entropy, $S_{lmsq}$, of an $lmsq$ cluster as $$\begin{aligned} S_{lmsq}=-\ln \left( \frac{(\xi_+l+\xi_-m+s+q)Z(Z-1)^{\xi_+l+\xi_-m+s+q-2}}{\exp \left( \xi_+l+\xi_-m+s+q -1\right)} \right)\end{aligned}$$ where $Z$ is the coordination number of the lattice. The configurational contribution to $\Delta_{lmsq}$ is then $$\begin{aligned} \Delta^{conf}_{lmsq}&=S_{lmsq}-lS_{1000}-mS_{0100}-(s+q)S_{0010} \nonumber \\ &=-\ln\left(\frac{(\xi_+l+\xi_-m+s+q)\left[(Z-1)^2/Ze\right]^{l+m+s+q-1}}{\xi_+^l\xi_-^m}\right) \label{eq:dconf}\end{aligned}$$ The last contribution to $\Delta_{lmsq}$ in Eq. , which Tanaka does not need to consider for his systems, is the electrostatic contribution, $\Delta^{el}_{lmsq}$, which we have already defined in Eq. . In determining it, we made the following simplifying assumption. In both the high and low salt concentration limits, the concentration of free ions will be small. Thus, we can describe the contribution to their free energy using simple Debye screening theory, as suggested by surface force data for ionic liquids [@gebbie2013ionic]. We neglect the contribution of charged clusters containing multiple ions, because their contribution to the ionic strength of the solution is expected to be small. However, we will take into account the effects of ionic clusters on the effective dielectric constant of the medium in which the free ions are dissolved.
Such an approximation is expected to work effectively as an interpolation between the two limiting cases of low and high salt concentration. Hence, the electrostatic screening will be characterized by the Debye screening length, $\lambda_D$, $$\begin{aligned} \lambda_D^2= \frac{\varepsilon \varepsilon_0 k_BT}{e^2 I},\end{aligned}$$ where $\varepsilon$ is the relative dielectric constant of the medium (affected by the degree of clustering), $\varepsilon_0$ is the vacuum permittivity, $e$ is the elementary charge, and $I$ is the ionic strength of the solution. In general, the ionic strength must take into account contributions from all the charged clusters: $$\begin{aligned} I = \frac{1}{2}\sum\limits_{lmsq}(l-m)^2c_{lmsq}/v_0 \label{eq:LI1}\end{aligned}$$ where $c_{lmsq}$ is the number of clusters of rank $lmsq$ per lattice site (dimensionless concentration). However, as previously mentioned, we will make the assumption that the free ions dominate the ionic strength, yielding the simplification $$\begin{aligned} I = \frac{1}{2}\sum\limits_{i=+,-}\alpha_i\phi_i/\xi_iv_0 \label{eq:LI2}\end{aligned}$$ where $\alpha_+$ and $\alpha_-$ are the fractions of free cations and anions, respectively. In general, $\alpha_+$, $\alpha_-$, and $\varepsilon$ will depend on the composition of the electrolyte, and must be determined self-consistently as we will describe later. The Debye-Hückel (DH) formula for the ionic activity coefficient \[appearing in Eq. \] is given by $$\begin{aligned} \ln \gamma_{\pm}^{DH}=-\frac{e^2}{8 \pi \varepsilon \varepsilon_0 k_B T\lambda_D}\left(\frac{1}{1+a_\pm/\lambda_D}\right)\end{aligned}$$ where $a_\pm$ is the radius of the free anion or cation [@debye1923theory]. Additionally, the salt concentration is expected to change the dielectric permittivity of the fluid, which has a strong effect on ionic activity [@vincze2010nonmonotonic], as first noted by Hückel [@huckel1925theorie].
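In SI units, the screening length and activity coefficient above can be sketched as follows. The constants are standard CODATA values; the inputs used in the usage note are illustrative numbers, not results from this work.

```python
from math import pi, sqrt

# CODATA constants (SI units).
KB = 1.380649e-23        # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
QE = 1.602176634e-19     # elementary charge, C

def debye_length(eps_r, T, ionic_strength):
    """lambda_D = sqrt(eps*eps0*kB*T / (e^2 * I)), with the ionic strength I
    expressed as a number density (m^-3), following the text's convention
    (I already carries the factor of 1/2)."""
    return sqrt(eps_r * EPS0 * KB * T / (QE**2 * ionic_strength))

def ln_gamma_dh(eps_r, T, lambda_d, a_ion):
    """Extended Debye-Hueckel activity coefficient for a free ion of
    radius a_ion (m)."""
    prefactor = QE**2 / (8 * pi * eps_r * EPS0 * KB * T)
    return -prefactor / lambda_d / (1 + a_ion / lambda_d)
```

With $\varepsilon=78.3$, $T=298\,$K, and an illustrative $I\approx 6\times 10^{23}\,$m$^{-3}$, this gives a screening length of order 10 nm and a small negative $\ln\gamma^{DH}_\pm$, as expected in the dilute limit.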
The free energy of free ions is expected to change according to the change in Born solvation energy, $\Delta u^{Born}_\pm$ \[also appearing in Eq. \], which is written as $$\begin{aligned} \Delta u^{Born}_\pm= \frac{e^2}{8 \pi \varepsilon_0 k_B T a_\pm}\left(\frac{1}{\varepsilon}-\frac{1}{\varepsilon_s}\right) \label{eq:delel}\end{aligned}$$ where $\varepsilon_s$ is the dielectric constant of the pure solvent [@born1920volumen]. Note that the solvation energy here is defined with a positive sign. Thus if $\varepsilon$ decreases, the chemical potential of free ions will increase, weakening the propensity for ions to be free. For simplicity, hereafter, we will assume the free ion radius, $a_\pm$, to be equal for anions and cations: $a_+=a_-=(v_0(\xi_++\xi_-)/2)^{1/3}$. In this way, the Debye-Hückel activities and Born solvation energies are made to be equivalent for anions and cations. The permittivity of the electrolyte is taken to change as a function of the electrolyte composition, through both the dielectric freezing of hydrating solvent molecules and the degree of ionic clustering. We employ the following phenomenological interpolation formula: $$\varepsilon=\varepsilon_s \alpha_0 (1-x)+ \varepsilon^{*}_s (1-\alpha_0) (1-x) +\varepsilon^{*}_\pm(1-\alpha_\pm)x, \label{eq:perm}$$ where $x$ is the mole fraction of salt, $\alpha_0$ is the fraction of free solvent, $\varepsilon^{*}_s$ is the dielectric constant contribution of bound solvent, and $\varepsilon^{*}_\pm$ is the dielectric constant contribution of bound ions. Thus, $\varepsilon$ changes from $\varepsilon_s$ in the dilute regime to $\varepsilon_\pm^*$ as the ions become more and more bound in ionic clusters. Furthermore, this phenomenological expression will capture dielectric decrement via the decreasing fraction of free solvent molecules.
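The interpolation formula above translates directly into code. In the sketch below (ours, for illustration), the default permittivity values are placeholders rather than fitted parameters.

```python
def permittivity(x, alpha_0, alpha_pm, eps_s=78.3, eps_s_bound=6.0,
                 eps_pm_bound=10.0):
    """Phenomenological mixing rule for the relative permittivity:
    eps = eps_s*alpha_0*(1-x) + eps_s^*(1-alpha_0)*(1-x)
          + eps_pm^*(1-alpha_pm)*x,
    where x is the salt mole fraction, alpha_0 the free-solvent fraction,
    and alpha_pm the free-ion fraction. eps_s_bound and eps_pm_bound are
    placeholder values for the bound-solvent and bound-ion contributions."""
    return (eps_s * alpha_0 * (1 - x)
            + eps_s_bound * (1 - alpha_0) * (1 - x)
            + eps_pm_bound * (1 - alpha_pm) * x)
```

In the dilute limit ($x \to 0$ with all solvent free) this returns $\varepsilon_s$, while in a fully bound pure-salt limit it tends to $\varepsilon^*_\pm$.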
However, this dielectric decrement will eventually level off as the free solvent disappears, in which case the dielectric constant would tend towards the lower value of a neat ionic liquid [@weingartner2006static]. It is typical, when modelling dielectric decrement across wide concentration ranges, to employ nonlinear, empirical models, as in Ref. , but Eq.  captures much of the same behavior with a more direct connection to the ion association and ion solvation modelled in this work. Thus, $\Delta_{lmsq}$ \[written in Eq. \] contains an electrostatic correction as a consequence of the chemical potential of free ions varying with electrolyte composition. The Debye-Hückel contribution stabilizes the free ions due to favorable electrostatic interactions with other free ions as concentration increases, resulting in a decreased affinity for ion association. However, the dielectric constant of the electrolyte decreases as a function of salt concentration, which increases the Born solvation free energy of the free ions, ultimately resulting in an increased affinity for ion association. These two effects (electrostatic interaction with the screening cloud and Born solvation) tend to counteract each other over a large majority of salt concentrations, and $\Delta^{el}_{lmsq}$ is roughly constant. However, when free ions are dilute (either at very low or very high salt fractions) the Debye-Hückel activities, and thus $\Delta^{el}_{lmsq}$, become strong functions of free ion concentration. Having defined each component of $\Delta_{lmsq}$, it is extremely useful to introduce the notion of the “association constant”, $\Lambda_{ij}$, for the association of species $i$ and $j$. The association constant characterizes the driving force or affinity (more precisely, the exponentiated driving force) for a specific type of association.
It is written as the following $$\begin{aligned} \Lambda_{+-}=\frac{(Z-1)^2}{Z}\gamma^{DH}_\pm \exp(-\Delta u_{+-}+\Delta u^{Born}_\pm)=\Lambda^{\theta}_{+-}\Lambda_{+-}^{el}\end{aligned}$$ where we have defined a non-electrostatic ionic association constant, $\Lambda_{+-}^{\theta}$, $$\begin{aligned} \Lambda^{\theta}_{+-}=\frac{(Z-1)^2}{Z}\exp(-\Delta u_{+-})\end{aligned}$$ and an electrostatic association factor, $\Lambda_{+-}^{el}$, $$\begin{aligned} \Lambda_{+-}^{el}=\gamma_\pm^{DH}\exp \left(\Delta u^{Born}_\pm\right)\end{aligned}$$ The ion-solvent association constant, $\Lambda_{\pm 0}$, contains only a non-electrostatic part: $$\begin{aligned} \Lambda_{\pm 0}=\frac{(Z-1)^2}{Z}\exp \left( -\Delta u_{\pm 0} \right)\end{aligned}$$ We then substitute each contribution of $\Delta_{lmsq}$ into Eq. . Due to the Kronecker delta functions in Eqs.  and , the distribution is most easily written separately for clusters with more than one ion, clusters containing a single ion, and clusters containing just solvent. First, for clusters containing more than one ion ($l+m>1$), we obtain the distribution $$\begin{aligned} c_{lmsq}=\frac{W_{lmsq}}{\Lambda^{\theta}_{+-}}\left(\psi_{1000}\Lambda_{+-} \right)^{l} \left(\psi_{0100}\Lambda_{+-} \right)^{m}(\phi_{0010}\Lambda_{+0})^s(\phi_{0010}\Lambda_{-0})^q \label{eq:dist}\end{aligned}$$ where $\psi_{1000} = f_+\phi_{1000}/\xi_+$ and $\psi_{0100} = f_-\phi_{0100}/\xi_-$ are the numbers of association sites per lattice site for bare cations and bare anions, respectively. For solvent-ion clusters containing only a single ion ($l+m=1$), the cluster may either contain a single cation: $$\begin{aligned} c_{10s0}=W_{10s0}\psi_{1000}(\phi_{0010}\Lambda_{+0})^s \label{eq:dist2}\end{aligned}$$ or a single anion: $$\begin{aligned} c_{010q}=W_{010q}\psi_{0100}(\phi_{0010}\Lambda_{-0})^q. \label{eq:dist3}\end{aligned}$$ Note that when a cluster does not contain cations, $s$ must be 0.
Similarly, if the cluster does not contain anions, $q$ must be 0. Finally, within this model, for clusters not containing ions, the only non-zero component of the distribution corresponds to free solvent molecules: $$\begin{aligned} c_{0010}=c_{0001}=\phi_{0010} \label{eq:dist4}\end{aligned}$$ Equations - give the thermodynamically consistent number distribution for clusters in the electrolyte mixture. It readily gives the volume fraction of a cluster of any size and makeup, if the volume fractions of the bare cations, anions, and solvent molecules are known. However, these bare species volume fractions are not experimentally accessible. Thus, we must write the volume fractions of the bare species in terms of the overall salt/solvent fractions, which are experimentally accessible.

Association Probabilities
-------------------------

Once again we follow Tanaka by introducing the association probabilities, $p_{ij}$. These probabilities are useful because we may write the bare species’ volume fractions in terms of them. Formally, $p_{ij}$ is defined as the fraction of association sites of species $i$ that are occupied with an association to species $j$. Recall that cations, anions, and solvent molecules are said to have $f_+$, $f_-$, and 1 association sites per molecule, respectively. This implies that generally $p_{ij} \neq p_{ji}$, unless the functionalities and concentrations of species $i$ and $j$ are equivalent, as we will show below. We may write the bare cation volume fraction as $$\phi_{1000}=\phi_+(1-p_{+-}-p_{+0})^{f_+} \label{eq:x}$$ The above equation arises because the probability that a given cation association site is ‘dangling’ (not participating in associations) is $1-p_{+-}-p_{+0}$; the probability that all $f_+$ sites are dangling is therefore $(1-p_{+-}-p_{+0})^{f_+}$.
Analogously, for the bare anions and solvent molecules we have $$\phi_{0100}=\phi_-(1-p_{-+}-p_{-0})^{f_-} \label{eq:y}$$ $$\phi_{0010}= \phi_0(1-p_{0+}-p_{0-}) \label{eq:z}$$ We may insert Eqs. - into Eq. , obtaining a cluster distribution in terms of overall species volume fractions and the association probabilities, $p_{ij}$. However, we now have six new variables, $p_{ij}$, which are unknown and a function of the overall species volume fractions. Thus, we need six equations to determine these six unknowns. We can obtain three equations straight away due to the conservation of each type of association. For cation-anion associations we have $$\begin{aligned} \psi_+ p_{+-}=\psi_- p_{-+}=\zeta \label{eq:sys1}\end{aligned}$$ where $\zeta$ is the number of cation-anion associations per lattice site. For cation-solvent associations we have $$\begin{aligned} \psi_+ p_{+0}=\phi_0 p_{0+}=\Gamma \label{eq:sys2}\end{aligned}$$ where $\Gamma$ is the number of cation-solvent associations per lattice site. Finally, for anion-solvent associations we have $$\begin{aligned} \psi_- p_{-0}=\phi_0 p_{0-}=\Xi \label{eq:sys3}\end{aligned}$$ where $\Xi$ is the number of anion-solvent associations per lattice site. We obtain the last three equations following Tanaka, by employing the law of mass action on the number of associations using the association constants $\Lambda_{+-}$, $\Lambda_{+0}$, and $\Lambda_{-0}$. For cation-anion associations we have $$\begin{aligned} \Lambda_{+-}\zeta=\frac{p_{+-}p_{-+}}{(1-p_{+-}-p_{+0})(1-p_{-+}-p_{-0})}. \label{eq:sys4}\end{aligned}$$ Similarly, for the cation-solvent associations we have $$\begin{aligned} \Lambda_{+0}\Gamma=\frac{p_{+0}p_{0+}}{(1-p_{+-}-p_{+0})(1-p_{0+}-p_{0-})}. \label{eq:sys5} \end{aligned}$$ Finally, for the anion-solvent associations we have $$\begin{aligned} \Lambda_{-0}\Xi=\frac{p_{-0}p_{0-}}{(1-p_{-+}-p_{-0})(1-p_{0+}-p_{0-})}.
\label{eq:sys6} \end{aligned}$$ Here $\Lambda_{+-}$, $\Lambda_{+0}$, and $\Lambda_{-0}$ are treated as equilibrium constants for the individual associations made. Recall that $\Lambda_{+-}$ contains both an electrostatic factor ($\Lambda_{+-}^{el}$) and a non-electrostatic factor ($\Lambda_{+-}^\theta$). The non-electrostatic factor is a constant, but the electrostatic factor is a function of the overall electrolyte composition ($\phi_\pm$), as well as the fractions of free ions ($\alpha_+,\alpha_-$) and solvent ($\alpha_0$), via the Debye length, $\lambda_D$, and relative permittivity, $\varepsilon$. Thus, if we want to model the electrostatic contribution to ion association, we must additionally write $\alpha_i$ in terms of the association probabilities, $p_{ij}$. For $\alpha_+$ we have $$\begin{aligned} \alpha_+=(1-p_{+-})^{f_+} \label{eq:freecat}\end{aligned}$$ and for $\alpha_-$ we have $$\begin{aligned} \alpha_-=(1-p_{-+})^{f_-} \label{eq:freean}\end{aligned}$$ Note that, for the fraction of ions contributing to the ionic strength, we only require that the ion is not associated to a counter-ion; free ions can be hydrated by solvent in any capacity. For the fraction of free solvent we simply have $$\begin{aligned} \alpha_0=1-p_{0+}-p_{0-}.\end{aligned}$$ Thus, Eqs. - provide six equations from which we may solve for each $p_{ij}$ in terms of the overall species volume fractions. Without making approximations we cannot obtain an analytical solution to this system, but we may nonetheless solve it numerically. A useful approximation based on assumptions of ion symmetry and “stickiness” permits an analytical solution for the association probabilities in terms of overall species volume fractions and is outlined in the Appendix. These association probabilities close the model, so that we may now obtain the full distribution of clusters as a function of the overall electrolyte composition. ![image](probability_curves_prg.pdf){width="\textwidth"} In Fig.
\[fig:prg\_prob\], we plot sample curves of the concentration dependence of these association probabilities. The parameters detailed in the caption of Fig. \[fig:prg\_prob\], which will be used for the majority of this paper, were chosen to be representative of salts used in typical water-in-salt electrolytes (WiSEs), such as lithium bis(trifluoromethanesulfonyl)imide (LiTFSI)[@Suo2015], sodium trifluoromethane sulfonate (NaOTF)[@suo2017water], or even potassium-containing analogues[@leonard2018]. Note that although these salts have extremely high solubility limits, they would likely precipitate from solution prior to reaching the pure salt limit ($x=1$). Nevertheless, our figures will extend to the pure salt limit, in order to explore the behavior of the model in this regime. Furthermore, for different sets of parameters that are more representative of an ionic liquid salt, for example, the pure salt limit would be extremely relevant. Thus, the parameters used in most of our examples represent a model water-in-salt electrolyte. As would be expected for a LiTFSI-water or NaOTF-water system, the cation–solvent association constant ($\Lambda_{+0}=500$) is considerably larger than the anion–solvent association constant ($\Lambda_{-0}=2$). The anion is also made to be much larger ($\xi_-=10$) than the cation ($\xi_+=1$). Additionally, the cation has a larger functionality $f_+=5$ than the anion ($f_-=4$), to emphasize further cation/anion asymmetry. The ion–counter-ion association probabilities, $p_{\pm\mp}$ (left panel in Fig. \[fig:prg\_prob\]), increase monotonically with salt volume fraction, and the difference between the solid and dotted blue curves in Fig. \[fig:prg\_prob\] comes from the difference in cation and anion functionality; for a given total number of cation–anion associations, a lower fraction of cation association sites will be occupied with associations to anions. The ion–solvent association probabilities, $p_{\pm0}$ (middle panel in Fig.
\[fig:prg\_prob\]), both decrease monotonically with increasing ion concentration. This is expected because there is less water available to associate to ions, and more associations with counter-ions at high salt volume fractions. Again, the solvent is more likely to associate to cations because the association constants considered here dictate that the solvent interacts more strongly with cations than with anions. The cation:anion asymmetry is manifested most clearly for the solvent–ion association probabilities, $p_{0\pm}$ (right panel in Fig. \[fig:prg\_prob\]). The solvent-cation association probability increases monotonically with salt volume fraction due to the increasing concentration of cations and thus cationic association sites. However, the same argument does not hold for the solvent-anion association probability, which displays non-monotonic behavior. Initially, $p_{0-}$ increases due to increasing anion concentration, but then decreases because the cations monopolize the solvent association at high ion concentrations. The reason for this is that cations have more favorable association with the solvent ($\Lambda_{+0}>\Lambda_{-0}$), as well as having more open sites to accept solvent associations ($f_+>f_-$). ![image](electrostatics.pdf){width="\textwidth"} Having solved for the association probabilities, we can compute the various quantities involved in the electrostatic portion of ion association. In Fig. \[fig:elec\], the Debye screening length, $\lambda_D$, relative dielectric constant, $\varepsilon$, and the electrostatic ion association factor, $\Lambda^{el}_{+-}$, are plotted as functions of salt volume fraction. Interestingly, we see that $\lambda_D$ displays non-monotonic behavior as a function of $\phi_\pm$, with some qualitative similarities to the non-monotonic screening lengths observed in refs. & . Although ion aggregation, as modelled here, likely plays a large role in the phenomena observed in refs.
& –since dubbed the “underscreening paradox"–the full explanation of the underscreening paradox would likely involve a more comprehensive structural description of the electrolyte and its double layer. The non-monotonicity in $\lambda_D$ is a direct result of the non-monotonicity of the ionic strength of the mixture. In this work, $\lambda_D$ was defined with an ionic strength that only accounts for “free" ions. At low salt concentrations ions remain largely unpaired; thus, increasing salt concentration leads to an increase in ionic strength. At high concentrations, increasing salt concentration actually decreases the ionic strength of the mixture, leading to an increase in $\lambda_D$. The behavior of $\lambda_D$ also leads directly to the non-monotonic behavior of $\Lambda^{el}_{+-}$. Sol/Gel Transition ------------------ ![A schematic illustrating the concept of the branching coefficient, $\alpha$, which is an essential quantity in determining the criterion for gelation, Eq. . Starting at the node labeled 1 (referring to a cation), we note that the cluster proceeds arbitrarily to the left. We then consider the probability ($\alpha$) of the cluster continuing to the right to the next cationic node (marked as 2). In order for the cluster to continue to the right, the cationic node marked 1 must associate with an anion (with probability $p_{+-}$) and then one of the $f_--1$ remaining anionic association sites must associate with another cation (with probability $p_{-+}$).[]{data-label="fig:thought"}](gel_cartoon.png){width="35.00000%"} Since the functionalities, $f_\pm$, of anions and cations are both greater than two, the clusters have the potential to become infinitely large if the probabilities $p_{+-}$ and $p_{-+}$ are large enough. The point at which this occurs (i.e. the gelation point) can be determined in the following manner with the help of Fig. \[fig:thought\].
Consider, for example, that we traverse along a specific branch of the cluster until we stop arbitrarily at a cation, labeled as ‘1’ in Fig. \[fig:thought\]. The cation contains $f_+-1$ sites in addition to the site that was traversed to arrive at the cation. In order for the cluster to proceed infinitely–thus forming a gel–one of the additional $f_+-1$ sites must, on average, continue the chain [@tanaka2011polymer]: $$\begin{aligned} (f_+-1)\alpha^*=1 \label{eq:gel_crit}\end{aligned}$$ where $\alpha$ (not to be confused with the fraction of free species, $\alpha_+,\, \alpha_-$, or $\alpha_0$) is known as the branching coefficient with a “$*$" denoting its critical value for gelation, and the factor of $f_+-1$ arises because there are $f_+-1$ additional branches on the cation capable of extending the cluster. The same criterion arises for mean-field percolation on a Bethe lattice with coordination number of $f_+$ [@stauffer1994introduction]. In our case, though, $\alpha$ refers to the probability that cation 1 continues to a subsequent cationic node (labeled as 2 in Fig. \[fig:thought\]) along any available branch, as depicted by the dotted arrows in Fig. \[fig:thought\]. In order to get from one cationic node to the next cationic node, we require that one of the cation sites associates with an anion with probability, $p_{+-}$, and that one of the $f_--1$ remaining anionic sites reacts with a cation with probability, $p_{-+}$. Thus, $$\begin{aligned} \alpha = p_{+-}(f_--1)p_{-+}\end{aligned}$$ The criterion for gelation is then $$\begin{aligned} (f_+-1)p^*_{+-}(f_--1)p^*_{-+}=1 \label{eq:gel_crit2}\end{aligned}$$ If this criterion is met, then we expect a macroscopic ionic gel network to spontaneously form and percolate through the electrolyte. Thus, if we know the probabilities, $p_{+-}$ and $p_{-+}$, as functions of concentration, then we may predict the critical concentration at which gelation will occur using Eq. .
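The branching coefficient and the gelation criterion above translate directly into code; a minimal sketch (the threshold comparison is ours, the formulas are Eqs. above):

```python
def branching_coefficient(p_pm: float, p_mp: float, f_minus: int) -> float:
    """Probability that a branch continues from one cationic node to the
    next: associate to an anion (p_+-), then one of the remaining f_- - 1
    anionic sites associates to another cation (p_-+)."""
    return p_pm * (f_minus - 1) * p_mp

def is_gelled(p_pm: float, p_mp: float, f_plus: int, f_minus: int) -> bool:
    """Gelation criterion: (f_+ - 1) p_+- (f_- - 1) p_-+ >= 1."""
    return (f_plus - 1) * branching_coefficient(p_pm, p_mp, f_minus) >= 1.0
```

For the functionalities used in the figures ($f_+=5$, $f_-=4$), the criterion gives a critical symmetric probability of $p^*=1/\sqrt{12}\approx0.29$.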
We can also see this criterion arise when analyzing the weight averaged degree of ionic aggregation, $\Bar{n}_w$ (the average size of the cluster of which an ion is a part), which is defined by the following formula: $$\begin{aligned} \Bar{n}_w&=\frac{\sum_{lmsq}(l+m)^2c_{lmsq}}{\sum_{lmsq}(l+m)c_{lmsq}}\end{aligned}$$ We can then plug in Eq. , and perform the sum over $s$ and $q$ by invoking the binomial theorem, obtaining $$\begin{aligned} \Bar{n}_w = \sum_{lm}(l+m)\alpha_{lm} \label{eq:nw1}\end{aligned}$$ where $\alpha_{lm}$ is the fraction of total ions in clusters containing $l$ cations and $m$ anions. For clusters containing more than one ion, $\alpha_{lm}$ is given by $$\begin{aligned} \alpha_{lm}=\frac{\Lambda^{el}_{+-}\mathcal{K}}{2}(l+m)W_{lm}\left(\frac{p_{-+}}{1-p_{-+}}(1-p_{+-})^{f_+-1}\right)^{l}\left(\frac{p_{+-}}{1-p_{+-}}(1-p_{-+})^{f_--1}\right)^{m} \label{eq:clust_frac}\end{aligned}$$ where $\mathcal{K}=f_+ (1-p_{+-})(1-p_{-+})/p_{-+}$ (analogously defined by Stockmayer in Ref. ). Note that Eq.  will not reduce to $\alpha_+$ or $\alpha_-$ for single ion clusters (free ions), because the cluster distribution is slightly modified for single ion clusters \[recall Eqs.  and \]. We can write the sum in Eq.  in a closed form with the help of Stockmayer (Ref. ), or with the methods developed within Ref. : $$\begin{aligned} \Bar{n}_w&=\Lambda^{el}_{+-}\left(1+\frac{p_{+-}p_{-+}\left((f_+-1)p_{+-}+(f_--1)p_{-+}+2\right)}{\left(\frac{p_{+-}}{f_-}+\frac{p_{-+}}{f_+}\right)\left(1-(f_+-1)(f_--1)p_{+-}p_{-+}\right)}\right) \nonumber \\ &+\frac{1}{2}(1-\Lambda_{+-}^{el})\left(\alpha_++\alpha_-\right) \label{eq:nw2}\end{aligned}$$ Note that Eq. $\eqref{eq:nw2}$ will reduce to Stockmayer’s result in Ref.  for $\Lambda_{+-}^{el}=1$. Interestingly, Eq.  predicts that $\Bar{n}_w$ diverges when $(f_+-1)p_{+-}(f_--1)p_{-+}=1$, which is the exact condition we previously derived for gelation.
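The closed form above is easy to evaluate numerically. A minimal sketch in the Stockmayer limit $\Lambda^{el}_{+-}=1$ (an assumption we make here so that the last term vanishes and the prefactor drops out):

```python
def n_w(p_pm: float, p_mp: float, f_p: int, f_m: int) -> float:
    """Weight-averaged degree of ionic aggregation, Eq. (nw2), evaluated
    in the Stockmayer limit Lambda_el = 1 (purely non-electrostatic
    pairing).  Diverges as (f_+ - 1)(f_- - 1) p_+- p_-+ -> 1."""
    num = p_pm * p_mp * ((f_p - 1) * p_pm + (f_m - 1) * p_mp + 2)
    den = (p_pm / f_m + p_mp / f_p) * (1 - (f_p - 1) * (f_m - 1) * p_pm * p_mp)
    return 1 + num / den
```

For $f_+=5$, $f_-=4$ the divergence sits at $p_{+-}=p_{-+}=1/\sqrt{12}\approx0.2887$, and $\bar n_w\to1$ as both probabilities vanish, as expected for a fully dissociated mixture.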
![The weight averaged degree of ion aggregation, $\bar{n}_w$ plotted against the volume fraction of salt, $\phi_\pm$, using Eq.  with probabilities for association restricted to the sol (excluding the gel). In the inset we plot (on a log-log scale) the weight averaged degree of ion aggregation, $\bar{n}_w$, against the deviation from the gel point, $|p_{+-}p_{-+}-p^*_{+-}p^*_{-+}|$, showing a critical exponent of -1. This curve was generated for $\xi_+=1$, $\xi_-=10$, $\Lambda_{+-}=50$, $\Lambda_{+0}=500$, $\Lambda_{-0}=2$, $f_+=5$, $f_-=4$, $v_0=25\,\text{\AA}^3$, $\varepsilon_s=80$, $\varepsilon^*_s=\varepsilon^*_\pm=10$.[]{data-label="fig:nwbar"}](nwbar.png){width="49.00000%"} As an example, we plot the weight averaged degree of aggregation as a function of concentration in Fig. \[fig:nwbar\] using Eq.  with the model parameters listed in the caption, corresponding to the aforementioned fictitious water-in-salt electrolyte. As can be seen, the weight average degree of aggregation diverges at the gelation point. In the inset of Fig. \[fig:nwbar\], we display a log-log plot of the weight average degree of aggregation as a function of deviation in $p_{+-}p_{-+}$ from the critical value, yielding a linear curve with a slope of -1. Thus, $\bar{n}_w$ diverges at the gel point with a critical exponent of -1. This type of behavior is expected when considering the direct analogy of our gelation model with percolation on a Bethe lattice. Interestingly, in Fig. \[fig:nwbar\], beyond the gel point, $\bar{n}_w$ rapidly decreases. This is because we are plotting the weight averaged degree of aggregation for species in the sol only, excluding the gel. After the gel forms the vast majority of ion associations are contributing to the gel, as opposed to finite clusters in the sol.
As we approach the no-solvent limit (ionic liquid/crystal limit), the degree of aggregation in the sol is essentially 1, implying that at large salt fractions, the electrolyte looks like a simple mixture of dilute free ions immersed in an ionic gel. Post-Gel Regime --------------- For salt concentrations beyond the critical concentration, we expect a gel to be present in the electrolyte containing an increasing fraction of the electrolyte’s ions. Thus, we must quantify the fraction of the species in the gel and in the sol. We employ Flory’s treatment of the post-gel regime in which the volume fraction of free species can be written equivalently in terms of overall association probabilities, $p_{ij}$, and association probabilities taking into account only the species residing in the sol, $p^{sol}_{ij}$. $$\begin{aligned} \phi_+(1-p_{+-}-p_{+0})^{f_+}=\phi_+^{sol}(1-p^{sol}_{+-}-p^{sol}_{+0})^{f_+} \label{eq:gel1}\end{aligned}$$ $$\begin{aligned} \phi_-(1-p_{-+}-p_{-0})^{f_-}=\phi_-^{sol}(1-p^{sol}_{-+}-p^{sol}_{-0})^{f_-} \label{eq:gel2}\end{aligned}$$ $$\begin{aligned} \phi_0(1-p_{0+}-p_{0-})=\phi_0^{sol}(1-p^{sol}_{0+}-p^{sol}_{0-}) \label{eq:gel3}\end{aligned}$$ where $\phi_i^{sol}$ is the volume fraction of species $i$ remaining in the sol. We may determine each of the three unknown $\phi_i^{sol}$ variables, as well as the six unknown sol association probabilities, $p_{ij}^{sol}$, using Eqs. – in addition to Eqs. –; however, in this case we use sol-specific quantities. Thus, we have nine equations and nine unknowns (six sol association probabilities and three sol species volume fractions). The fraction of species, $i$, in the gel is simply given by $$\begin{aligned} w_i^{gel}=1-\phi^{sol}_i/\phi_i\end{aligned}$$ Note that prior to the critical gel concentration, we have the trivial solution that $p_{ij}=p^{sol}_{ij}$ and $\phi_i=\phi^{sol}_i$, yielding a gel fraction of $w_i^{gel}=0$.
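Both the pre-gel system (six equations) and its sol-specific post-gel analogue (nine equations) have the same structure and can be handed to a standard root finder. A minimal sketch of the pre-gel case, under two simplifying assumptions of ours: the association constants are treated as composition-independent (the electrostatic factor is frozen), and $\psi_\pm=f_\pm\phi_\pm/\xi_\pm$ is taken as the number of association sites of each ion per lattice site:

```python
import numpy as np
from scipy.optimize import fsolve

def solve_association(phi_p, phi_m, phi_0, f_p, f_m, xi_p, xi_m,
                      L_pm, L_p0, L_m0):
    """Solve the six conservation + mass-action relations for
    p = (p_+-, p_+0, p_-+, p_-0, p_0+, p_0-).  Association constants
    are held fixed here (an approximation)."""
    psi_p = f_p * phi_p / xi_p   # assumed: cation sites per lattice site
    psi_m = f_m * phi_m / xi_m   # assumed: anion sites per lattice site

    def residuals(p):
        p_pm, p_p0, p_mp, p_m0, p_0p, p_0m = p
        zeta, Gam, Xi = psi_p * p_pm, psi_p * p_p0, psi_m * p_m0
        return [
            psi_p * p_pm - psi_m * p_mp,                            # conservation (+/-)
            psi_p * p_p0 - phi_0 * p_0p,                            # conservation (+/0)
            psi_m * p_m0 - phi_0 * p_0m,                            # conservation (-/0)
            L_pm * zeta - p_pm * p_mp
                / ((1 - p_pm - p_p0) * (1 - p_mp - p_m0)),          # mass action (+/-)
            L_p0 * Gam - p_p0 * p_0p
                / ((1 - p_pm - p_p0) * (1 - p_0p - p_0m)),          # mass action (+/0)
            L_m0 * Xi - p_m0 * p_0m
                / ((1 - p_mp - p_m0) * (1 - p_0p - p_0m)),          # mass action (-/0)
        ]

    p, info, ier, msg = fsolve(residuals, 0.2 * np.ones(6), full_output=True)
    return p, ier
```

The post-gel system is solved the same way, with the three Flory conditions appended and the sol volume fractions added to the unknown vector.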
However, beyond the gel point, there is a non-trivial solution yielding $w_i^{gel}>0$. As an example, we plot the “sol" association probabilities in Fig. \[fig:pog\_prob\] using the model parameters listed in the caption, corresponding to the aforementioned fictitious water-in-salt electrolyte. As expected, we observe distinct cusps in the “sol" association probabilities at the gel point. These cusps are the result of a bifurcation point for solutions to the equations. One solution branch belongs to the overall association probabilities that transition smoothly through the gel point, and the other solution branch belongs to the “sol" association probabilities that bifurcate from the gel point. Typically, beyond the gel point all of the sol association probabilities decrease, because the majority of associations are consumed by the gel. ![image](probability_curves_pog.pdf){width="\textwidth"} In the left panel of Fig. \[fig:cluster\_curves\], we plot the concentration dependence of various ion clusters of different sizes ($1\leq l+m\leq10$ and the ionic gel). As expected, we see that the fraction of free ions ($l+m=1$) decreases monotonically as a function of salt volume fraction due to the increasing ionic association probability. Interestingly, all other finite ion clusters behave non-monotonically with salt fraction. In general, ion clusters with $l+m\geq2$ first increase with salt concentration due to the increasing ion association probability. However, as salt concentration increases further, more and more associations are directed towards the formation of higher order clusters, and eventually the ionic gel. Fig. \[fig:cluster\_curves\] also defines three distinct “regimes" in the solution. In the low concentration regime ($0<\phi_\pm\leq 0.15$), free ions are the major ionic species in the electrolyte. For $0.2<\phi_\pm\leq0.3$, finite ion clusters dominate the electrolyte.
Finally, for high salt concentrations ($\phi_{\pm}>0.3$) the electrolyte is composed mainly of the ionic gel. ![image](cluster_curves_both.pdf){width="\textwidth"} In the right panel of Fig. \[fig:cluster\_curves\], we plot the same cluster fractions, but consider only the ions that remain in the sol. The curves are identical to those in the left plot of Fig. \[fig:cluster\_curves\] prior to the gel point. Beyond the gel point, the cluster fractions behave in a very peculiar manner. The fraction of free ions in the sol actually increases as a function of concentration. This is due to the fact that the ion association probabilities for ions in the sol actually decrease after the gel point. Thus, the sol looks more and more like a “dilute" electrolyte as we increase the overall salt concentration. For the parameters chosen in Fig. \[fig:cluster\_curves\], we see that nearly all of the ions in the sol are free as we approach the pure salt limit. Though this is actually a very small number of free ions overall, because the electrolyte is nearly all gel. For model parameters more akin to an ionic liquid salt, we might expect a much larger fraction of free ions in the pure salt limit [@feng2019free]. Figure \[fig:cluster\_curves\] informs us as to the probabilities of seeing clusters containing a given total number of ions. However, it does not tell us specifically how many anions or cations compose those clusters. The full bivariate probability distribution of clusters, $\alpha_{lm}$, defined in Eq. , is plotted for various salt fractions in Fig. \[fig:clust\_hist\]. We have chosen three different mole fractions of salt for plotting the distribution: $x=0.08$ (pre-gel), $x=0.17$ (near-gel), and $x = 0.44$ (post-gel). Both the pre-gel and near-gel distributions are skewed below the neutral cluster line (black dashed line), centered around the red solid line (denoting the most probable cluster of rank $l+m$).
This indicates that clustered ions have a slight tendency to be negatively charged, containing more anions than cations. This effect is expected when the functionalities for ions are different. In this case, because the cations have a larger functionality than anions, each cation can accept more ion associations than each anion. Thus, clusters will tend to contain more anions than cations. Additionally, the cluster distribution is pushed towards larger clusters as the mole fraction is increased from 0.08 to 0.17, due to the increasing ionic association probability. However, as the mole fraction is increased to 0.44 (well above the gel point) the distribution is both pushed towards smaller clusters than at $x=0.17$, as well as being skewed above the neutral cluster line, indicating that the finite clusters will on average be more likely to be positively charged. When the gel is formed, it absorbs many of the large negative clusters, and is overall negatively charged. Therefore, the sol will have a net positive charge, leading to a positively skewed cluster distribution. ![image](full_clust_dists.png){width="\textwidth"} We may probe the effect of solvent or salt type by tuning the different association constants, $\Lambda_{ij}$. If we assume that ion association sites are never empty (either occupied by solvent or counter-ions), and that the ions have equal functionality, we may use the “sticky symmetric ion approximation," which is outlined in the Appendix. If we operate within the sticky symmetric ion approximation, we are left with one primary variable to manipulate: $\tilde{\Lambda}=\Lambda_{+-}/(\Lambda_{+0}\Lambda_{-0})$. By varying $\tilde{\Lambda}$ we are tuning the “strength" of the electrolyte: weak electrolytes have $\tilde{\Lambda}\gg 1$ and strong electrolytes have $\tilde{\Lambda} \ll 1$.
![A pseudo-phase diagram of the most probable ionic “state" (either free, in a finite cluster, or in the gel) as a function of $\tilde{\Lambda}$ and $\phi_\pm$. The red dotted line denotes the critical gel boundary. The diagram was generated within the sticky symmetric ion approximation (see Appendix) for $\xi_+=\xi_-=5$, and $f_+=f_-=4$. []{data-label="fig:frac_maps"}](phase_diag.png){width="49.00000%"} In Fig. \[fig:frac\_maps\] we display a pseudo-phase diagram of the most probable ionic “state" (either free, in a finite cluster, or in the ionic gel) of an ion as a function of $\tilde{\Lambda}$ and $\phi_\pm$. Note that Fig. \[fig:frac\_maps\] is generated within the sticky symmetric ion approximation. As was noted in Fig. \[fig:cluster\_curves\](left), free ions dominate at low salt fractions and gel dominates at moderate-high salt fractions, with a narrow region of phase space where finite aggregates dominate. The critical gel boundary is denoted by the red dotted line, which generally resides within the finite aggregate region of the phase diagram, because along the gel boundary, the fraction of ions within the gel is infinitesimal. However, the gel tends to grow rapidly beyond the gel point by consuming the larger ion clusters. Thus, the gel dominates the mixture soon after crossing the gel boundary. For $\ln(\tilde{\Lambda}) > 0$, the ion-ion attraction is more favorable than the ion-solvent interaction, which results in the onset of gelation occurring at smaller salt fractions. Whereas for $\ln(\tilde{\Lambda}) < 0$, the favorable ion-solvent interaction tends to “pull" free ions out of finite aggregates and gel, which pushes out the onset of gelation to larger salt fractions. Discussion ========== Within the accuracy of our various assumptions, our developed model can be applied to the entire range of salt concentrations from dilute to pure IL.
In the dilute regime, our model recovers Debye-Hückel behavior [@debye1923theory]; this regime is not of much interest in terms of aggregation and gelation. Rather, the more interesting regime occurs for super-concentrated solvent-in-salt electrolytes (including IL solvent mixtures, hydrate melts, etc.) and ILs, which are highly relevant for battery or super-capacitor applications. Often SiSEs and ILs contain bulky or asymmetric ions that lead to high solubility or low melting points of the salts. Moreover, the ion aggregates that are formed in these systems tend to be irregular and disordered, which is quite consistent with the approximation of Cayley tree-like ion aggregates. Thus, the physics included in our model should be highly relevant for SiSEs and ILs in particular. More typical salts, such as NaCl for example, form aggregates that may be ordered and semi-crystalline, as opposed to the branched structures that are characteristic of Cayley trees. Ordered aggregates nucleate, phase separate, and induce the precipitation of crystalline salt, without forming a gel. In these types of systems, the physics of ion gelation would probably not be as relevant, and our description of ion aggregation would be somewhat flawed. Nonetheless, we expect that our model is well-equipped for capturing the ion association, solvation, and gelation in super-concentrated SiSEs and ILs. Thermodynamic Implications -------------------------- Our theory can also be used to predict some important thermodynamic quantities, such as the activity coefficients of species in the mixture. In Eq. , we wrote the chemical potential of a cluster of rank $lmsq$. The equilibrium condition \[Eq. \] implies that the chemical potentials of species in the cluster will be equal to those of their bare counterparts.
Thus, we may write the chemical potential of an ion or solvent molecule as simply the chemical potential of a bare ion or solvent molecule: $$\begin{aligned} \beta \Delta \mu_{+}=\beta \Delta \mu_{1000}=\ln \left( \phi_{1000} \gamma_+^{DH} \right) + \Delta u^{Born}_{+} + 1-c_{tot}\end{aligned}$$ $$\begin{aligned} \beta \Delta \mu_{-}=\beta \Delta \mu_{0100}=\ln \left( \phi_{0100} \gamma_-^{DH} \right) + \Delta u^{Born}_{-}+1-c_{tot}\end{aligned}$$ $$\begin{aligned} \beta \Delta \mu_{0}=\beta \Delta \mu_{0010}=\ln \phi_{0010}+1-c_{tot}\end{aligned}$$ We may derive ionic activity coefficients (with respect to a dilute solution reference state) by obtaining the excess part of the chemical potential. First, we must subtract off the ideal entropy of mixing terms ($\ln \phi_i$). Then, we must subtract off the excess part of the chemical potential of the bare ions in the dilute limit, obtaining $$\begin{aligned} \ln \gamma_\pm &= \beta \Delta \mu_\pm - \ln \left\{\phi_{+,-}(1-p^\circ_{\pm\mp}-p^\circ_{\pm0})^{f_\pm}\right\}\end{aligned}$$ where the “$\circ$" superscript denotes the association probabilities in the dilute limit (as salt concentration approaches 0), and $\phi_{+,-}$ denotes $\phi_+$ or $\phi_-$ (not to be confused with $\phi_\pm$, the volume fraction of salt). The limiting ionic association probabilities, $p^{\circ}_{\pm\mp}$, tend towards 0. However, the limiting ion-solvent association probabilities, $p^{\circ}_{\pm0}$, tend toward $\Lambda^{\theta}_{\pm0}/(\Lambda^{\theta}_{\pm0}+1)$. Thus, if $\Lambda^{\theta}_{\pm0}\gg1$, we would expect ions to be fully associated with water in the dilute limit.
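The limiting value quoted above follows from a short calculation (a sketch of ours; we assume the electrostatic factor drops out at infinite dilution, so $\Lambda_{+0}\to\Lambda^{\theta}_{+0}$). Eliminating $p_{0+}=\psi_+p_{+0}/\phi_0$ from the cation-solvent mass-action relation via the corresponding conservation relation, and letting $\phi_\pm\to0$ (so that $p_{+-},p_{0-}\to0$) and $\phi_0\to1$, gives $$\begin{aligned} \Lambda^{\theta}_{+0}\,\psi_+ p^{\circ}_{+0}=\frac{p^{\circ}_{+0}\,\psi_+ p^{\circ}_{+0}}{(1-p^{\circ}_{+0})\cdot 1} \quad\Longrightarrow\quad \Lambda^{\theta}_{+0}=\frac{p^{\circ}_{+0}}{1-p^{\circ}_{+0}} \quad\Longrightarrow\quad p^{\circ}_{+0}=\frac{\Lambda^{\theta}_{+0}}{\Lambda^{\theta}_{+0}+1},\end{aligned}$$ and analogously for $p^{\circ}_{-0}$.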
We can then write the ionic activity coefficient as $$\begin{aligned} \ln \gamma_\pm = \ln \gamma_\pm^{DH} +\Delta u_\pm^{Born} + f_\pm \ln \left\{(1-p_{\pm\mp}-p_{\pm0})(1+\Lambda^{\theta}_{\pm0})\right\} +1-c_{tot}\end{aligned}$$ Similarly, we may write the activity coefficient of solvent molecules as $$\begin{aligned} \ln \gamma_0 = \ln (1-p_{0+}-p_{0-}) +1-c_{tot}.\end{aligned}$$ It is also useful to define a mean ionic activity coefficient, $\bar{\gamma}_\pm=(\gamma_+\gamma_-)^{1/2}$, which is the more experimentally accessible quantity. ![The activity coefficients of the salt and solvent are plotted against the mole fraction of salt. Within the inset of the figure, we zoom in on the dilute region where the model recovers Debye-Hückel behavior for salt activity. These curves are generated for $\xi_+=1$, $\xi_-=10$, $\Lambda_{+-}=50$, $\Lambda_{+0}=500$, $\Lambda_{-0}=2$, $f_+=5$, $f_-=4$, $v_0=25\,\text{\AA}^3$, $\varepsilon_s=80$, $\varepsilon^*_s=\varepsilon^*_\pm=10$.[]{data-label="fig:loggam"}](log_gamma.pdf){width="49.00000%"} In Fig. \[fig:loggam\], we plot the mean ionic activity coefficient, as well as that of the solvent, as a function of the volume fraction of salt. A fairly general prediction of our model, which can be seen in Fig. \[fig:loggam\], is that the activity of the salt tends to rise extraordinarily as a function of concentration, while that of the solvent simultaneously decreases. The salt activity increases for two primary reasons. First, the magnitude of Born solvation energy of free ions decreases due to the decreasing dielectric constant of the electrolyte–free ions become more active in lower dielectric constant fluids. Second, ions become more paired with counter-ions as opposed to solvent, which is unfavorable entropically, as well as enthalpically for very strongly hydrating solvents.
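The association contribution to the ionic activity coefficient above is simple to compute on its own; a minimal sketch (the Debye-Hückel, Born, and lattice $1-c_{tot}$ terms are deliberately omitted here, so this is only one piece of $\ln\gamma_\pm$):

```python
import math

def ln_gamma_assoc(p_ix: float, p_i0: float, f_i: int,
                   L_theta_i0: float) -> float:
    """Association part of ln(gamma) for one ion species:
    f_i * ln{(1 - p_i,counter - p_i,solvent)(1 + Lambda_theta_i0)}.
    Vanishes in the dilute limit, where p_i,counter -> 0 and
    p_i,solvent -> Lambda_theta / (Lambda_theta + 1)."""
    return f_i * math.log((1 - p_ix - p_i0) * (1 + L_theta_i0))
```

As a sanity check, inserting the dilute-limit probabilities returns zero, consistent with the dilute reference state; ion pairing ($p_{\pm\mp}>0$) lowers this term.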
At the same time the solvent activity tends to decrease at high salt concentrations, due to the increasing fraction of solvent that is favorably incorporated within the hydration shell of ions. These trends in salt and solvent activity are interesting, because one of the primary reasons water-in-salt electrolytes (WiSEs), in particular, have garnered so much interest is their ability to form a passivating solid-electrolyte interface (SEI) at the negative electrode. This SEI layer suppresses the deleterious hydrogen evolution reaction that prevents the use of more dilute aqueous electrolytes. The SEI layer on an anode in contact with a WiSE would consist of reduction products involving the salt. By raising the activity of the salt and lowering the activity of the solvent, the reduction potential of the salt is increased, while that of the solvent is decreased. Thus, by increasing salt concentration, the affinity to form an SEI layer is expected to increase, and that to evolve hydrogen is expected to decrease, as observed experimentally [@Suo2015]. At some salt concentration, there must be a crossover, where it becomes more favorable to form an SEI layer than to evolve hydrogen. Because our model can capture the trends in activity for both ions and the solvent, it could potentially help predict when this crossover might occur, and how it might change for different electrolyte materials. Transport Implications ---------------------- Although our model does not include any dynamics, we can begin to speculate on how certain transport properties, such as conductivity or ion transference numbers, may be influenced by ion association in the super-concentrated regime.
For transport in multi-component, concentrated mixtures, it is often necessary to consider coupled diffusive fluxes [@de2013non; @krishna1997maxwell; @deen1998analysis], which are related to the vector of species chemical potential gradients through the Onsager linear-response tensor, or, after transformation to concentration gradients, the Stefan-Maxwell diffusivity tensor. This mathematical framework is the basis for concentrated solution theories of electrolyte transport [@newman2012electrochemical], which have been widely applied to batteries [@smith2017multiphase; @thomas2002advances] and fitted to experiments[@valoen2005transport; @nyman2008electrochemical; @lundgren2015electrochemical] and molecular simulations [@wheeler2004molecular]. The Stefan-Maxwell formulation has also been extended to charged electrolytes in double layers[@psaltis2011comparing; @balu2018role]. Even for moderately concentrated electrolytes, however, the diffusivity tensor and ionic activity coefficients are fitted to experimental data with little theoretical guidance, and complex many-body interactions with solvent at high concentration are neglected. Our statistical model could provide a detailed, microscopic basis to model coupled fluxes in superconcentrated electrolytes as originating from the presence of ionic clusters. Remarkably, as a result of the ion clustering predicted by our model, superconcentrated electrolytes may behave more like dilute electrolytes in that low concentrations of mobile charge carriers drift and diffuse with nearly independent fluxes. As such, for an associative mixture of ions Ref.  proposed a modified Nernst-Einstein equation for conductivity, $\sigma$, $$\begin{aligned} \sigma = \frac{e^2c_{salt}}{k_B T} \sum_{lm} (l-m)^{2} \alpha_{lm} D_{lm} \label{eq:sig1}\end{aligned}$$ where $D_{lm}$ is the diffusivity of a cluster of rank $lm$, and the factor of $(l-m)^2$ arises because $l-m$ is the valence charge of a cluster of rank $lm$.
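The cluster-resolved Nernst-Einstein sum above can be sketched directly; the cluster fractions and diffusivities passed in here are illustrative inputs, not model outputs:

```python
def conductivity_NE(alpha: dict, D: dict, c_salt: float, T: float,
                    e: float = 1.602176634e-19,
                    kB: float = 1.380649e-23) -> float:
    """Modified Nernst-Einstein conductivity, Eq. (sig1):
    sigma = e^2 c_salt / (kB T) * sum_{lm} (l - m)^2 alpha_lm D_lm.
    `alpha` and `D` map (l, m) -> cluster fraction and diffusivity."""
    return e**2 * c_salt / (kB * T) * sum(
        (l - m)**2 * a * D[(l, m)] for (l, m), a in alpha.items())
```

Note that neutral clusters ($l=m$) drop out of the sum entirely: however abundant, ion pairs and other charge-balanced aggregates carry no current in this picture.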
Our model is able to predict the cluster fractions, $\alpha_{lm}$, (as in Fig. \[fig:clust\_hist\]) for different electrolyte compositions and temperatures, which could be extremely helpful when designing more conductive electrolytes. However, the cluster diffusivities, $D_{lm}$, would still be unknown, though they would undoubtedly decrease with increasing cluster size. As detailed in Refs. , the contribution of clusters to the ionic current may be largely responsible for the very interesting observation of negative transference numbers for species in ionic liquid mixtures and solid-state electrolytes \[See Eq. (5) in Ref. \]. For binary liquid electrolytes, though, we would not expect such exotic observations in ion transference numbers. In the same vein, observations of negative Stefan-Maxwell diffusion coefficients[@kraaijeveld1993negative; @wesselingh1995exploring] have been reported for ion transport of concentrated electrolytes through membranes, which may be due to ion clustering. Although there are likely systems in which ion clusters play a large role in conducting ionic current, recent work in Ref.  found that free ions ($l+m=1$) are the major contributor to ionic current in neat ionic liquids. In that case, conductivity obeys an even simpler equation $$\begin{aligned} \sigma=\frac{e^2c_{salt}}{k_B T}\left(\alpha_+ D_+ + \alpha_- D_- \right) \label{eq:sig2}\end{aligned}$$ where $D_\pm$ is the diffusivity of the free cation or anion. The ability to use Eq.  instead of Eq.  depends on whether we can neglect the cluster contribution to the ionic strength of the electrolyte (Eq. vs. ). Our model allows us to predict the ionic strength, and decompose the respective contributions from free ions and clusters. In the left panel of Fig. \[fig:trans\], we plot the dimensionless ionic strength (non-dimensionalized by the overall salt concentration) using both Eq. and . The dashed line in Fig.
\[fig:trans\] represents the free ion contribution to the ionic strength, while the solid curve represents the total ionic strength. It is apparent that free ions dominate the ionic strength of the electrolyte no matter the salt concentration, at least for the model parameters given in caption of the figure. There is a small region where there is a perceptible contribution of ion clusters to the ionic strength, which corresponds to concentrations very close to the gel point of the electrolyte ($x=0.18$). Nonetheless, it appears as if Eq.  could suffice for modelling the conductivity of our fictional electrolyte. Within our model, the concentration of free ions can display nonlinear or even non-monotonic behavior as a function of overall salt concentration. At high concentrations adding more salt can actually decrease the number of free ions in solution. This can be seen in Fig. \[fig:trans\], where we have plotted the concentration of free ions as a function of the mole fraction of salt. Here, we have used the parameters listed in the figure caption to generate the curves, which are the same parameters that have been used for the majority of the paper. The non-monotonic concentration of free ions is likely largely responsible for the non-monotonic ionic conductivity that has been widely observed for concentrated electrolytes [@lobo1989handbook] or ionic liquid solvent mixtures [@stoppa2009conductivities; @li2007effect; @chaban2012acetonitrile]. Though we must note that $D_\pm$ is also expected to have a large role in the concentration dependence of ionic conductivity. ![image](transport.png){width="\textwidth"} One interesting aspect of this model is that, for asymmetrically associating ions, we obtain different fractions of free anions and cations, as seen in Fig. \[fig:trans\].
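This asymmetry can be illustrated with a small numerical check built from the free-ion fractions $\alpha_\pm=(1-p_{\pm\mp})^{f_\pm}$ and the association-site conservation relation. The specific numbers below are hypothetical, and we assume equal ion volume fractions and sizes so that conservation reduces to $f_+p_{+-}=f_-p_{-+}$:

```python
# Illustration with hypothetical numbers: equal ion volume fractions and
# sizes, so site conservation psi_+ p_+- = psi_- p_-+ reduces to
# f_+ p_+- = f_- p_-+.
f_p, f_m = 5, 4
p_pm = 0.20                     # cation -> anion association probability
p_mp = f_p * p_pm / f_m         # = 0.25 from site conservation
alpha_p = (1 - p_pm)**f_p       # fraction of free cations
alpha_m = (1 - p_mp)**f_m       # fraction of free anions
# Transference number assuming equal free-ion diffusivities (defined below
# in the text); the higher-functionality cation keeps more free ions.
t_plus = alpha_p / (alpha_p + alpha_m)
```

Even though the cation carries a *higher* association probability per site pool here ($f_+p_{+-}=f_-p_{-+}$), its larger functionality leaves a larger fraction of cations entirely unpaired, tilting the free-ion balance toward the cation.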
If the free anions and cations have equivalent diffusivities, then we can write the transference number as: $$\begin{aligned} t_\pm=\frac{\alpha_\pm}{\alpha_++\alpha_-} \label{eq:t}\end{aligned}$$ Thus, assuming free ions are the dominant charge carriers, our model would predict asymmetric transference numbers ($t_{\pm} \neq 0.5$) for salts whose ions do not have equivalent functionalities, as seen in the inset of Fig. \[fig:trans\]. In general, for binary mixtures of monovalent salts, the ion with more association sites will have a higher fraction of free ions than the ion with fewer association sites. The reason for this is quite subtle when examining the expressions for $\alpha_+$ and $\alpha_-$ \[Eqs.  & \]. Ultimately, when $f_+>f_-$, a fixed number of ion–counter-ion associations can be hosted by fewer cations than anions. Thus, more cations will be free than anions, and we would observe that $t_+>0.5$ and $t_-<0.5$. Rheological Implications ------------------------ Gel-forming electrolytes should display intriguing viscoelastic properties. In polymers, the presence of a gel is typically detected by probing the rheology of the mixture. At the gel point, the viscosity is expected to diverge and the equilibrium shear modulus is expected to become finite [@flory1953principles]. Because our gel is composed of reversible physical associations between ions, we do not expect the viscosity to formally diverge. Nonetheless, thermoreversible gels should display a finite shear modulus. Flory related the equilibrium shear modulus, $G_e$, to the fraction of gel in the mixture for tetrafunctional associating polymer strands [@flory1953principles]. This was later extended to any $f$-functional associating polymer strand by Nijenhuis.
This extension would be applicable to our case of ion gels if the ions have equal functionality, $f$: $$\begin{aligned} G_e=-2cRT\left( \frac{\ln w^{sol}_\pm}{1-w^{sol}_\pm}\cdot \frac{1-(w^{sol}_\pm)^{f/2}}{1-(w^{sol}_\pm)^{f/2-1}}\cdot \frac{f-2}{f}+1\right)(1-w^{sol}_\pm) \label{eq:Geq}\end{aligned}$$ where $c$ is the molar concentration of salt, and $R$ is the gas constant. Eq.  predicts, as expected, that $G_e$ will be zero prior to the formation of gel, and then increase with increasing gel fraction. If we again operate within the sticky symmetric ion approximation, then we can see how the equilibrium shear modulus is modulated by the electrolyte concentration (via $\phi_\pm$) and strength (via $\tilde{\Lambda}$). ![A contour map of the equilibrium shear modulus, $G_e$, as a function of $\tilde{\Lambda}$ and $\phi_\pm$. The red dotted line denotes the critical gel boundary. The white region denotes the pre-gel region, where the equilibrium shear modulus is exactly 0. The diagram was generated within the sticky symmetric ion approximation (see Appendix) for $\xi_+=\xi_-=5$, and $f_+=f_-=4$.[]{data-label="fig:eqmshear"}](eqm_shear.pdf){width="49.00000%"} In Fig. \[fig:eqmshear\], we display a contour map of the equilibrium shear modulus using Eq.  as a function of $\phi_\pm$ and $\log \tilde{\Lambda}$. The shear modulus is predictably zero (white region) when there is no ionic gel present in the electrolyte, and becomes finite beyond the gel point. Additionally, the shear modulus increases monotonically with increasing gel fraction. As such, it increases with concentration, but tends to decrease as the salt becomes more weakly associating ($\log \tilde{\Lambda}$ decreases). There is a subtlety to this statement, as can be seen from the non-monotonicity in the contours of $G_e$ at high salt concentrations and low $\tilde{\Lambda}$.
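Eq.  for $G_e$ is straightforward to evaluate numerically. The following sketch is our illustration only; the sol fraction, functionality, concentration and temperature values are made up, not taken from this work.

```python
import math

def equilibrium_shear_modulus(w_sol, f, c, T, R=8.314):
    """Equilibrium shear modulus G_e (Pa) from the f-functional
    generalization of Flory's gel modulus (Eq. for G_e in the text).

    w_sol: sol fraction w^{sol}_pm in (0, 1); w_sol -> 1 means no gel.
    f:     number of association sites per ion (f > 2).
    c:     molar salt concentration in mol/m^3.
    """
    term = (math.log(w_sol) / (1.0 - w_sol)
            * (1.0 - w_sol ** (f / 2)) / (1.0 - w_sol ** (f / 2 - 1))
            * (f - 2) / f + 1.0)
    return -2.0 * c * R * T * term * (1.0 - w_sol)

# The modulus grows as the gel fraction (1 - w_sol) grows:
for w in (0.9, 0.7, 0.5):
    print(w, equilibrium_shear_modulus(w, f=4, c=1000.0, T=298.0))
```

As a consistency check, the bracketed term tends to zero as $w^{sol}_\pm \to 1$, so the sketch reproduces the expected vanishing of $G_e$ in the absence of gel.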
For very strong electrolytes ($\log \tilde{\Lambda}<2$) and for a given volume fraction of salt beyond the gel boundary, increasing $\tilde{\Lambda}$ (increasing the affinity for ion association) actually decreases the gel fraction. This is quite counter-intuitive, because we would expect more gel when the affinity for ion association is stronger. However, within the gel, the model allows for intramolecular loops. For very weakly associating salts, we would expect the gel to contain minimal intramolecular loops. Increasing the affinity for ion association would induce more intramolecular loops, which would actually free up ions from the gel. The gel would simultaneously contain more ion–counter-ion associations with fewer ions. Thus, in this regime, increasing the affinity for ion association actually decreases the shear modulus. This subtlety should not obscure the result that when an ionic gel is present, the mixture may display viscoelastic properties. Interestingly, viscoelastic properties have indeed been observed experimentally for some common imidazolium-based ionic liquids [@makino2008viscoelastic]. In that work, the equilibrium shear modulus decreases as a function of temperature, which would be consistent with the melting and destruction of an ionic gel. There is limited literature on this topic, however. Furthermore, Ref.  does not actually attempt to compute a gel point. Perhaps the most reliable method for determining the exact gel point was introduced by Winter and Chambon [@winter1986analysis]. They determined the gel point to occur at the intersection of the dynamic loss and storage moduli in an oscillatory shear experiment. This could be a route to experimentally probe gelation in concentrated electrolytes. Conclusion ========== Here we have cast the mean-field theory of thermoreversible association and gelation from polymer physics into the context of electrolytes.
The presented theory allows complicated, branched ionic aggregates to be included in models of concentrated electrolytes. Previously, models of ionic association for concentrated electrolytes have typically included only ion pairs. However, these simple models break down when the system becomes sufficiently concentrated, which motivated the presented theory. More specifically, we developed a model for aggregation and gelation between cations, anions and solvent molecules, with alternating cation-anion aggregates/gel and solvent molecules decorating this “ionic backbone”. The theory can describe the composition of an electrolyte as a function of salt concentration and temperature, where different ionic states (free, aggregated, or gelled) dominate depending on the conditions. Higher salt concentrations favor the formation of a percolating gel, while smaller salt concentrations tend to have only free ions or small aggregates; between these extremes exists a narrow domain where finite aggregates dominate in the electrolyte. Note that the developed theory is best applied to electrolytes with “complicated” ions, such as ionic liquids and water-in-salt electrolytes, where crystalline solids cannot precipitate out. Moreover, since the model is a mean-field theory that neglects any loops in ionic clusters, it cannot describe the strongly correlated “spin glass” ordering recently discovered in simulations of ionic liquids, which transitions to long-range order in ionic crystals for “simple” ions [@levy2019spin]. Nevertheless, motivated by the success of mean-field theories from polymer physics, we expect that our model will have implications for the bulk thermodynamic, transport, and rheological properties of super-concentrated electrolytes, which can be probed experimentally and used to guide the design of these dense ionic fluids. It is possible to extend our approach to interfacial properties as well.
Specifically, it has already been shown that understanding the partitioning of ions[@goodwin2017mean; @goodwin2017underscreening; @avni2020charge] and solvent[@mceldrew2018] between free and bound states is extremely enlightening in modelling the electrical double layer (EDL) of ionic liquids and WiSEs. Our model provides a more detailed and generalized picture of the states of ions and solvent, which may be leveraged to develop more accurate and general models of the EDL. EDL properties will also influence electrokinetic phenomena and may help to explain many puzzling observations, such as flow reversals in concentrated electrolytes [@Bazant2009a; @storey2012effects]. As with polymers under confinement, it will also be interesting to extend our model to nanopores, where cluster sizes are influenced by geometrical constraints. All authors would like to acknowledge the Imperial College-MIT seed fund. MM and MXB acknowledge support from an Amar G. Bose Research Grant. ZG was supported through a studentship in the Centre for Doctoral Training on Theory and Simulation of Materials at Imperial College London funded by the EPSRC (EP/L015579/1) and from the Thomas Young Centre under grant number TYC-101. S.B. was also supported by the National Natural Science Foundation of China (51876072) and by the China Scholarship Council. A.K. would like to acknowledge the research grant by the Leverhulme Trust (RPG-2016-223). Sticky Symmetric Ions ===================== An analytical solution for the association probabilities is possible if we make three primary assumptions. First, we must assume the ions to have an equal number of association sites ($f_+=f_-=f$). Second, we assume that the electrostatic contribution to ion association is negligible. This implies that $\Lambda_{+-}^{el}=1$ or, equivalently, that $\Lambda_{+-}=\Lambda^{\theta}_{+-}$.
This approximation is motivated by the fact that $\Lambda_{+-}^{el}$ is mostly of order 1 (recall the right panel in Fig. \[fig:elec\]), and only changes significantly at very dilute free ion concentrations. For our final assumption, we require that the ions do not contain any open association sites: they are either associated to counter-ions or to solvent molecules. The cluster distribution is then limited to aggregates containing $fl-l-m+1$ solvent molecules attached to cations and $fm-m-l+1$ solvent molecules attached to anions. Thus, we have the cluster distribution $$\begin{aligned} c_{lm}=W_{lm}\Lambda_{+-}^{l+m-1}\Lambda_{+0}^{fl-l-m+1}\Lambda_{-0}^{fm-m-l+1}\psi_{100}^l\psi_{010}^m\phi_{001}^{(f-2)(l+m)+2} + \Phi_{001} \label{eq:distss}\end{aligned}$$ with a slightly modified correction for when $l=m=0$ $$\begin{aligned} \Phi_{001} = \phi_{001}\left[1-\phi_{001}/\tilde{\Lambda} \right]\delta_{l,0}\delta_{m,0}\end{aligned}$$ where $\tilde{\Lambda}=\Lambda_{+-}/\Lambda_{+0}\Lambda_{-0}$. We can rewrite Eq. in the following manner: $$\begin{aligned} c_{lm}=\frac{\mathcal{K}}{2}W_{lm}\left(\frac{p_{-+}}{1-p_{-+}}(1-p_{+-})^{f_+-1}\right)^{l}\left(\frac{p_{+-}}{1-p_{+-}}(1-p_{-+})^{f_--1}\right)^{m}\end{aligned}$$ with an identical definition of $\mathcal{K}$ as previously written in the main text. The association probabilities are governed by the following equations: $$\begin{aligned} p_{+-}=1-p_{+0}\end{aligned}$$ $$\begin{aligned} p_{-+}=1-p_{-0}\end{aligned}$$ $$\begin{aligned} p_{+-}=p_{-+}\end{aligned}$$ $$\begin{aligned} \psi_{+} p_{+0}=\phi_0 p_{0+}\end{aligned}$$ $$\begin{aligned} \psi_- p_{-0}=\phi_0 p_{0-}\end{aligned}$$ $$\begin{aligned} \frac{\tilde{\Lambda}}{\psi_{+}}=\frac{p_{+-}(1-p_{0+}-p_{0-})}{p_{0+}p_{0-}}\end{aligned}$$ where $\psi_{\pm}=f \phi_+/\xi_+=f \phi_-/\xi_-$ is the concentration of cationic or anionic association sites.
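The system above can also be solved numerically. Using the site balance $p_{0\pm}=\psi_\pm(1-p_{+-})/\phi_0$ to eliminate the solvent probabilities, the mass-action relation reduces to a single scalar equation for $p_{+-}$, which bisection solves robustly. The following sketch is our illustration; the $\tilde\Lambda$, $\psi_\pm$ and $\phi_0$ values are made up.

```python
def solve_sticky_symmetric(lam, psi, phi0, tol=1e-12):
    """Solve for p = p_{+-} = p_{-+} in the sticky symmetric ion
    approximation by bisection.

    Eliminating p_{0+} = p_{0-} = psi*(1-p)/phi0 from the mass-action
    relation lam/psi = p*(1 - 2*p0)/p0**2 gives the scalar residual
        g(p) = lam*psi*(1-p)**2 - p*phi0**2 + 2*psi*phi0*p*(1-p),
    with g(0) = lam*psi > 0 and g(1) = -phi0**2 < 0, so a root exists
    in (0, 1).
    """
    def g(p):
        return lam * psi * (1 - p) ** 2 - p * phi0 ** 2 + 2 * psi * phi0 * p * (1 - p)

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    p = 0.5 * (lo + hi)
    p0 = psi * (1 - p) / phi0          # p_{0+} = p_{0-}
    return p, 1 - p, p0                # p_{+-}, p_{+0}, p_{0+}

p_pm, p_p0, p_0p = solve_sticky_symmetric(lam=2.0, psi=0.3, phi0=0.4)
print(p_pm, p_p0, p_0p)
# Flory-Stockmayer gel criterion for f = 4: gelled if p_{+-} > 1/(f-1).
print("gelled:", p_pm > 1 / 3)
```

The returned probabilities satisfy the mass-action relation to solver precision, and comparing $p_{+-}$ with the gel criterion indicates whether the mixture is past the gel point.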
Solving this system of equations we obtain the solution: $$\begin{aligned} p_{+-}=p_{-+}=\frac{2 \psi_{\pm}-\phi_0+\sqrt{4 \tilde{\Lambda}\psi_{\pm}+(\phi_0-2\psi_\pm)^2}}{2 \psi_{\pm}+\phi_0+\sqrt{4 \tilde{\Lambda}\psi_{\pm}+(\phi_0-2\psi_\pm)^2}}\end{aligned}$$ $$\begin{aligned} p_{+0}=p_{-0}=\frac{2 \phi_0}{2 \psi_{\pm}+\phi_0+\sqrt{4 \tilde{\Lambda}\psi_{\pm}+(\phi_0-2\psi_\pm)^2}}\end{aligned}$$ $$\begin{aligned} p_{0+}=p_{0-}=\frac{2 \psi_{\pm}}{2 \psi_{\pm}+\phi_0+\sqrt{4 \tilde{\Lambda}\psi_{\pm}+(\phi_0-2\psi_\pm)^2}}\end{aligned}$$ The approximations we have made yield association probabilities that do not distinguish between anions and cations. We should note that taking the limit of this approximation for a salt volume fraction of 1 (ionic liquid/solid limit) yields the trivial solution $p_{\pm\mp}=1$. Thus, we have a fully connected alternating ion network, somewhat resembling an ionic crystal. There will then be no finite ion clusters and certainly no free ions that can conduct ionic current. This is actually consistent with the behavior we would expect for many salts, which do not conduct charge without solvent present to induce dissociation. Thus, ionic liquid salts would not be captured by this sticky symmetric ion approximation. The gelation criterion for systems within the sticky symmetric ion approximation is identical to that of the general theory, except that the symmetry of the ions allows for a simplified expression: $$\begin{aligned} p_{\pm\mp}=1/(f-1)\end{aligned}$$ The post-gel relations will be slightly different from those of the general theory.
We can write the fraction of free ions equivalently with overall probabilities and sol probabilities: $$\begin{aligned} \phi_+(1-p_{+-})^f=\phi^{sol}_+(1-p^{sol}_{+-})^f \end{aligned}$$ $$\begin{aligned} \phi_-(1-p_{-+})^f=\phi^{sol}_-(1-p^{sol}_{-+})^f \end{aligned}$$ Similarly, for free solvent molecules we have $$\begin{aligned} \phi_0(1-p_{0+}-p_{0-})=\phi^{sol}_0(1-p^{sol}_{0+}-p^{sol}_{0-})\end{aligned}$$ The sticky symmetric ion assumptions are also valid for the sol probabilities $$\begin{aligned} p^{sol}_{\pm0}=1-p^{sol}_{\pm\mp}\end{aligned}$$ $$\begin{aligned} \frac{\tilde{\Lambda}}{\psi^{sol}_{+}}=\frac{p^{sol}_{+-}(1-p^{sol}_{0+}-p^{sol}_{0-})}{p^{sol}_{0+}p^{sol}_{0-}}\end{aligned}$$ And finally we have the conservation of the associations made in the sol $$\begin{aligned} \psi^{sol}_{+} p^{sol}_{+0}=\phi^{sol}_0 p^{sol}_{0+}\end{aligned}$$ $$\begin{aligned} \psi^{sol}_- p^{sol}_{-0}=\phi^{sol}_0 p^{sol}_{0-}\end{aligned}$$ $$\begin{aligned} p^{sol}_{+-}=p^{sol}_{-+}\end{aligned}$$ Thus, we have 9 equations and 9 unknowns, exactly analogous to the general case. One thing to note is that the symmetry of the system implies that many of these equations will be redundant. For sticky symmetric ions, $p^{sol}_{+-}=p^{sol}_{-+}$, $p^{sol}_{+0}=p^{sol}_{-0}$, $p^{sol}_{0+}=p^{sol}_{0-}$, and $\phi^{sol}_{+}=\phi^{sol}_{-}$.
--- abstract: 'We show that the Gabor wave front set of a compactly supported distribution equals zero times the projection on the second variable of the classical wave front set.' address: 'Department of Mathematics, Linn[æ]{}us University, SE–351 95 Växjö, Sweden' author: - Patrik Wahlberg title: The Gabor wave front set of compactly supported distributions --- *Dedicated to Luigi Rodino on the occasion of his 70$^{th}$ birthday* Introduction {#PWsec:introduction} ============ The Gabor wave front set of tempered distributions was introduced by Hörmander 1991 [@PWHormander1]. The idea was to measure singularities of tempered distributions both in terms of smoothness and decay at infinity comprehensively. Using the short-time Fourier transform the Gabor wave front set can be described as the directions in phase space $T^* {\mathbf R^{d}}$ where a distribution $u \in {\mathscr{S}}'({\mathbf R^{d}})$ does not decay rapidly. The Gabor wave front set of $u \in {\mathscr{S}}'({\mathbf R^{d}})$ is empty exactly if $u \in {\mathscr{S}}({\mathbf R^{d}})$. The Gabor wave front set behaves differently than the classical $C^\infty$ wave front set, also introduced by Hörmander 1971 (see [@PWHormander0 Chapter 8]). It is for example translation invariant. The Gabor wave front set is adapted to the Shubin calculus of pseudodifferential operators [@PWShubin1] where symbols have isotropic behavior in phase space. With respect to this calculus, the corresponding notion of characteristic set, and the Gabor wave front set, pseudodifferential operators are microlocal and microelliptic, similar to pseudodifferential operators with Hörmander symbols in their natural context. The main result of this note concerns the Gabor wave front set of compactly supported distributions. 
We show (see Corollary \[PWcor:WFGcompact\]) that for $u \in {\mathscr{E}}'({\mathbf R^{d}})$ we have $$\label{PWeq:mainresult} {\mathrm{WF}}_G (u) = \{ 0 \} \times \pi_2 {\mathrm{WF}}(u)$$ where ${\mathrm{WF}}_G$ denotes the Gabor wave front set, ${\mathrm{WF}}$ denotes the classical wave front set, and $\pi_2$ is the projection on the covariable (second) phase space ${\mathbf R^{d}}$ coordinate. By [@PWHormander0 Theorem 8.1.3], $\pi_2 {\mathrm{WF}}(u) = \Sigma(u)$. The symbol $\Sigma(u)$ denotes the cone complement of the space of directions in ${\mathbf R^{d}}$ in an open conic neighborhood of which the Fourier transform of $u \in {\mathscr{E}}'({\mathbf R^{d}})$ decays rapidly. The equality thus describes the Gabor wave front set of $u \in {\mathscr{E}}'({\mathbf R^{d}})$ exactly, in terms of the known ingredients of ${\mathrm{WF}}(u)$: the space coordinate is zero and the frequency directions are exactly the “irregular” frequency directions of $u$. In the literature there are several concepts of global wave front sets apart from the Gabor wave front set. There is a parametrized version [@PWWahlberg1] and a Gelfand–Shilov version [@PWCarypis1] of the same idea. Melrose [@PWMelrose1] introduced the scattering wave front set, which was used by Coriasco and Maniccia [@PWCoriasco1] for propagation of singularities, cf. also [@PWCoriasco2]. Cappiello [@PWCappiello1] has studied the corresponding concept in a Gelfand–Shilov framework. The paper is organized as follows. Section \[PWsec:preliminaries\] contains notation and background, in Section \[PWsec:Gaborclassical\] we prove the main results, and finally in Section \[PWsec:propsing\] we discuss how the results from Section \[PWsec:Gaborclassical\] can be applied to propagation of singularities for certain Schrödinger type evolution equations. Preliminaries {#PWsec:preliminaries} ============= An open ball of radius $r>0$ and center at the origin is denoted $B_r$.
The unit sphere in ${\mathbf R^{d}}$ is denoted $S_{d-1}$. We write $f (x) \lesssim g (x)$ provided there exists $C>0$ such that $f (x) \leqslant C g(x)$ for all $x$ in the domain of $f$ and $g$. The Japanese bracket on ${\mathbf R^{d}}$ is defined by $\langle x \rangle = \sqrt{1+|x|^2}$. Peetre’s inequality is $$\label{PWeq:peetre} {\langle x+y\rangle}^{s}\lesssim {\langle x\rangle}^{s} {\langle y\rangle}^{|s|}, \quad s \in {\mathbf R}.$$ The Fourier transform on the Schwartz space ${\mathscr{S}}({\mathbf R^{d}})$ is normalized as $${\mathscr{F}}f(\xi) = \widehat f (\xi) = \int_{{\mathbf R^{d}}} f(x) e^{- i \langle x, \xi \rangle} d x, \quad \xi \in {\mathbf R^{d}}, \quad f \in {\mathscr{S}}({\mathbf R^{d}}),$$ where $\langle \cdot, \cdot \rangle$ is the inner product on ${\mathbf R^{d}}$, and extended to its dual, the tempered distributions ${\mathscr{S}}'({\mathbf R^{d}})$. The inner product $(\cdot,\cdot)$ on $L^2 ({\mathbf R^{d}}) \times L^2({\mathbf R^{d}})$ is conjugate linear in the second argument. We use this notation also for the conjugate linear action of ${\mathscr{S}}'({\mathbf R^{d}})$ on ${\mathscr{S}}({\mathbf R^{d}})$. Some notions of time-frequency analysis and pseudodifferential operators on ${\mathbf R^{d}}$ are recalled [@PWFolland1; @PWGrochenig1; @PWHormander0; @PWLerner1; @PWNicola1; @PWShubin1]. Let $u \in {\mathscr{S}}'({\mathbf R^{d}})$ and $\psi \in {\mathscr{S}}({\mathbf R^{d}}) \setminus \{ 0 \}$. The *short-time Fourier transform* (STFT) $V_\psi u $ of $u$ with respect to the window function $\psi$, is defined as $$V_\psi u :\ {\mathbf R^{2d}} \rightarrow {\mathbf C}, \quad z \mapsto V_\psi u(x,\xi) = (u, \Pi(z) \psi ),$$ where $\Pi(z)=M_\xi T_x$, $z=(x,\xi) \in {\mathbf R^{2d}}$ is the time-frequency shift composed of the translation operator $T_x \psi(y)=\psi(y-x)$ and the modulation operator $M_\xi \psi(y)=e^{i \langle y , \xi \rangle} \psi(y)$. 
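For orientation we record a standard example (our illustration, obtained by a completion of squares): if $\varphi_0(y) = e^{-|y|^2/2}$ then $$V_{\varphi_0} \varphi_0 (x,\xi) = \pi^{d/2} \, e^{-\frac{1}{4}(|x|^2 + |\xi|^2)} \, e^{-\frac{i}{2} \langle x, \xi \rangle}, \quad (x,\xi) \in {\mathbf R^{2d}},$$ so the modulus of the STFT of a Gaussian with respect to itself decays super-polynomially in every phase space direction.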
We have $V_\psi u \in C^\infty({\mathbf R^{2d}})$ and by [@PWGrochenig1 Theorem 11.2.3] there exists $m \geqslant 0$ such that $$\label{PWeq:STFTbound} |V_\psi u(z)| \lesssim {\langle z\rangle}^m, \quad z \in {\mathbf R^{2d}}.$$ The order of growth $m$ does not depend on $\psi \in {\mathscr{S}}({\mathbf R^{d}}) \setminus 0$. Let $\psi \in {\mathscr{S}}({\mathbf R^{d}})$ satisfy $\| \psi \|_{L^2}=1$. The Moyal identity $$(u,g)= (2\pi)^{-d} \int_{{\mathbf R^{2d}}} V_\psi u(z) \, \overline{V_\psi g(z)} \, d z, \quad g \in {\mathscr{S}}({\mathbf R^{d}}), \quad u \in {\mathscr{S}}'({\mathbf R^{d}}),$$ is sometimes written $$\label{PWeq:moyal} u = (2\pi)^{-d} \int_{{\mathbf R^{2d}}} V_\psi u(x,\xi) \, M_\xi T_x \psi \, d x \, d \xi, \quad u \in {\mathscr{S}}'({\mathbf R^{d}}),$$ with action understood to take place under the integral. In this form it is an inversion formula for the STFT. Let $a \in C^\infty ({\mathbf R^{2d}})$ and $m \in {\mathbf R}$. Then $a$ is a *Shubin symbol* of order $m$, denoted $a\in \Gamma^m$, if for all $\alpha,\beta \in {\mathbf N^{d}}$ there exists a constant $C_{\alpha\beta}>0$ such that $$\label{PWeq:shubinineq} |\partial_x^\alpha \partial_\xi^\beta a(x,\xi)| \leqslant C_{\alpha\beta}\langle (x,\xi)\rangle^{m-|\alpha|-|\beta|}, \quad x,\xi \in {\mathbf R^{d}}.$$ The Shubin symbols form a Fréchet space where the seminorms are given by the smallest possible constants in . For $a \in \Gamma^m$ a pseudodifferential operator in the Weyl quantization is defined by $$a^w(x,D) u(x) = (2\pi)^{-d} \int_{{\mathbf R^{2d}}} e^{i \langle x-y, \xi \rangle} a \left(\frac{x+y}{2},\xi \right) \, u(y) \, d y \, d \xi, \quad u \in {\mathscr{S}}({\mathbf R^{d}}),$$ when $m<-d$. The definition extends to general $m \in {\mathbf R}$ if the integral is viewed as an oscillatory integral. The operator $a^w(x,D)$ then acts continuously on ${\mathscr{S}}({\mathbf R^{d}})$ and extends by duality to a continuous operator on ${\mathscr{S}}'({\mathbf R^{d}})$. 
By Schwartz’s kernel theorem the Weyl quantization procedure may be extended to a weak formulation which yields operators $a^w(x,D):{\mathscr{S}}({\mathbf R^{d}}) \rightarrow {\mathscr{S}}'({\mathbf R^{d}})$, even if $a$ is only an element of ${\mathscr{S}}'({\mathbf R^{2d}})$. The phase space $T^* {\mathbf R^{d}}$ is a symplectic vector space equipped with the canonical symplectic form $$\sigma((x,\xi), (x',\xi')) = \langle x' , \xi \rangle - \langle x, \xi' \rangle, \quad (x,\xi), (x',\xi') \in T^* {\mathbf R^{d}}.$$ The real symplectic group ${\operatorname{Sp}}(d,{\mathbf R})$ is the set of matrices in ${\operatorname{GL}}(2d,{\mathbf R})$ that leaves $\sigma$ invariant. To each $\chi \in {\operatorname{Sp}}(d,{\mathbf R})$ is associated an operator $\mu(\chi)$ which is unitary on $L^2({\mathbf R^{d}})$, and determined up to a complex factor of modulus one, such that $$\mu(\chi)^{-1} a^w(x,D) \, \mu(\chi) = (a \circ \chi)^w(x,D), \quad a \in {\mathscr{S}}'({\mathbf R^{2d}})$$ (cf. [@PWFolland1; @PWHormander0]). The operators $\mu(\chi)$ are homeomorphisms on ${\mathscr{S}}$ and on ${\mathscr{S}}'$, and are called metaplectic operators. The metaplectic representation is the mapping ${\operatorname{Sp}}(d,{\mathbf R}) \ni \chi \mapsto \mu(\chi)$ which is a homomorphism modulo sign $$\mu(\chi_1) \mu(\chi_2) = \pm \mu(\chi_1 \chi_2), \quad \chi_1,\chi_2 \in {\operatorname{Sp}}(d,{\mathbf R}).$$ Two ways to overcome the sign ambiguity are to pass to a double-valued representation [@PWFolland1], or to a representation of the so called two-fold covering group of ${\operatorname{Sp}}(d,{\mathbf R})$. The latter group is called the metaplectic group ${\operatorname{Mp}}(d,{\mathbf R})$. The two-to-one projection $\pi: {\operatorname{Mp}}(d,{\mathbf R}) \rightarrow {\operatorname{Sp}}(d,{\mathbf R})$ is $\mu(\chi)\mapsto \chi$ whose kernel is $\{\pm 1\}$. 
The sign ambiguity may be fixed (hence it is possible to choose a section of $\pi$) along a continuous path ${\mathbf R}\ni t\mapsto \chi_t \in {\operatorname{Sp}}(d,{\mathbf R})$. This involves the so called Maslov factor [@PWLeray1]. The Gabor and the classical wave front sets {#PWsec:Gaborclassical} =========================================== First we define the Gabor wave front set ${\mathrm{WF}}_G$ introduced in [@PWHormander1] and further elaborated in [@PWRodino1]. \[PWdef:Gaborwavefront\] Let $\varphi \in {\mathscr{S}}({\mathbf R^{d}}) \setminus 0$, $u \in {\mathscr{S}}'({\mathbf R^{d}})$ and $z_0 \in T^* {\mathbf R^{d}} \setminus 0$. Then $z_0 \notin {\mathrm{WF}}_G(u)$ if there exists an open conic set $\Gamma \subseteq T^* {\mathbf R^{d}} \setminus 0$ such that $z_0 \in \Gamma$ and $$\label{PWeq:conedecay} \sup_{z \in \Gamma} {\langle z\rangle}^N | V_\varphi u(z)| < \infty, \quad N \geqslant 0.$$ This means that $V_\varphi u$ decays rapidly (super-polynomially) in $\Gamma$. The condition is independent of $\varphi \in {\mathscr{S}}({\mathbf R^{d}}) \setminus 0$, in the sense that super-polynomial decay will hold also for $V_\psi u$ if $\psi \in {\mathscr{S}}({\mathbf R^{d}}) \setminus 0$, in a possibly smaller cone containing $z_0$. The Gabor wave front set is a closed conic subset of $T^*{\mathbf R^{d}} \setminus 0$. By [@PWHormander1 Proposition 2.2] it is symplectically invariant in the sense of $$\label{PWeq:metaplecticWFG} {\mathrm{WF}}_G( \mu(\chi) u) = \chi {\mathrm{WF}}_G(u), \quad \chi \in {\operatorname{Sp}}(d, {\mathbf R}), \quad u \in {\mathscr{S}}'({\mathbf R^{d}}).$$ The Gabor wave front set is naturally connected to the definition of the $C^\infty$ wave front set [@PWHormander0 Chapter 8], often called just the wave front set and denoted ${\mathrm{WF}}$. 
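As a quick illustration of the definition (a standard example, included here for orientation), let $u = \delta_0$ and let $\varphi \in {\mathscr{S}}({\mathbf R^{d}}) \setminus 0$ satisfy $\varphi(0) \neq 0$. Then $$V_\varphi \delta_0(x,\xi) = (\delta_0, M_\xi T_x \varphi) = \overline{\varphi(-x)}, \quad (x,\xi) \in T^* {\mathbf R^{d}},$$ which decays rapidly in $x$ but is independent of $\xi$. The rapid decay required in the definition therefore fails in every open cone containing a point $(0,\xi_0)$ with $\xi_0 \neq 0$, while it holds in cones of the form $\{ |\xi| < C |x| \}$, so that ${\mathrm{WF}}_G(\delta_0) = \{ 0 \} \times ({\mathbf R^{d}} \setminus 0)$. This agrees with Corollary \[PWcor:WFGcompact\], since $\widehat{\delta_0} = 1$ gives $\Sigma(\delta_0) = {\mathbf R^{d}} \setminus 0$.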
A point in the phase space $(x_0,\xi_0) \in T^* {\mathbf R^{d}}$ such that $\xi_0 \neq 0$ satisfies $(x_0,\xi_0) \notin {\mathrm{WF}}(u)$ if there exists $\varphi \in C_c^\infty({\mathbf R^{d}})$ such that $\varphi(0) \neq 0$, an open conical set $\Gamma_2 \subseteq {\mathbf R^{d}} \setminus 0$ such that $\xi_0 \in \Gamma_2$, and $$\sup_{\xi \in \Gamma_2} {\langle \xi\rangle}^N | V_\varphi u(x_0,\xi)| < \infty, \quad N \geqslant 0.$$ The difference compared to ${\mathrm{WF}}_G(u)$ is that the $C^\infty$ wave front set ${\mathrm{WF}}(u)$ is defined in terms of super-polynomial decay in the frequency variable, for $x_0$ fixed, instead of super-polynomial decay in an open cone in the phase space $T^* {\mathbf R^{d}}$ containing the point of interest. Pseudodifferential operators with Shubin symbols are microlocal with respect to the Gabor wave front set. In fact we have by [@PWHormander1 Proposition 2.5] $${\mathrm{WF}}_G (a^w(x,D) u) \subseteq {\mathrm{WF}}_G (u)$$ provided $a \in \Gamma^m$ and $u \in {\mathscr{S}}'({\mathbf R^{d}})$. In the next result we relate the Gabor wave front set with the $C^\infty$ wave front set for a tempered distribution. We use the notation $\pi_2(x,\xi) = \xi$ for the projection $\pi_2: T^* {\mathbf R^{d}} \to {\mathbf R^{d}}$ onto the covariable. \[PWprop:WFGinclusion1\] If $u \in {\mathscr{S}}'({\mathbf R^{d}})$ then $$\{ 0 \} \times \pi_2 {\mathrm{WF}}(u) \subseteq {\mathrm{WF}}_G (u) .$$ Suppose $|\xi_0| = 1$ and $(0,\xi_0) \notin {\mathrm{WF}}_G(u)$. Let $\varphi \in C_c^\infty({\mathbf R^{d}})$ satisfy $\varphi(0) \neq 0$. There exists an open conic set $\Gamma \subseteq T^* {\mathbf R^{d}} \setminus 0$ such that $(0,\xi_0) \in \Gamma$ and $$\sup_{(x,\xi) \in \Gamma} \langle (x,\xi) \rangle^N |V_\varphi u(x,\xi)| < \infty, \quad N \geqslant 0.$$ We have to show that $(x_0,\xi_0) \notin {\mathrm{WF}}(u)$ for all $x_0 \in {\mathbf R^{d}}$. Let $x_0 \in {\mathbf R^{d}}$. 
Define for ${\varepsilon}>0$ the open conic set containing $\xi_0$ $$\Gamma_{2,{\varepsilon}} = \left\{ \xi \in {\mathbf R^{d}} \setminus 0: \, \left| \frac{\xi}{|\xi|} - \xi_0 \right| < {\varepsilon}\right\} \subseteq {\mathbf R^{d}} \setminus 0.$$ We claim that there exists ${\varepsilon}>0$ such that $$\label{PWconeinclusion1} (\{ x_0 \} \times \Gamma_{2,{\varepsilon}}) \setminus B_{1/{\varepsilon}} \subseteq \Gamma$$ holds. To prove this we assume for a contradiction that there exists $\xi_n \in \Gamma_{2,1/n}$ such that $|(x_0,\xi_n)| \geqslant n$ and $(x_0,\xi_n) \notin \Gamma$ for all $n \in {\mathbf N}$. Since $\Gamma$ is conic we have $|\xi_n|^{-1} (x_0,\xi_n) \notin \Gamma$. The sequence $\left( |\xi_n|^{-1} (x_0,\xi_n) \right)_{n} \subseteq {\mathbf R^{2d}}$ is bounded so we get for a subsequence $$\left( \frac{x_0}{|\xi_{n_j}|},\frac{\xi_{n_j}}{|\xi_{n_j}|} \right) \rightarrow (x,\xi), \quad j \rightarrow \infty,$$ where $(x,\xi) \notin \Gamma$ due to $\Gamma$ being open, and $|\xi|=1$. From $|\xi_n| \geqslant n - |x_0|$ we may conclude that $x=0$. From $\xi_n \in \Gamma_{2,1/n}$ for all $n \in {\mathbf N}$ we obtain $\xi=\xi_0$, which gives $(0,\xi_0) \notin \Gamma$. This is a contradiction which shows that must hold for some ${\varepsilon}>0$. Thus we have for $N \geqslant 0$ arbitrary $$\begin{aligned} \sup_{\xi \in \Gamma_{2,{\varepsilon}}, \ |\xi| \geqslant {\varepsilon}^{-1} + |x_0|} \langle \xi \rangle^N |V_\varphi u(x_0,\xi)| & \leqslant \sup_{\xi \in \Gamma_{2,{\varepsilon}}, \ |\xi| \geqslant {\varepsilon}^{-1} + |x_0|} \langle (x_0,\xi) \rangle^N |V_\varphi u(x_0,\xi)| \\ & \leqslant \sup_{(x,\xi) \in \Gamma} \langle (x,\xi) \rangle^N |V_\varphi u(x,\xi)| < \infty\end{aligned}$$ which shows that $(x_0,\xi_0) \notin {\mathrm{WF}}(u)$. 
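The decay dichotomy behind these results can also be seen numerically. The following sketch (our illustration, not part of the paper) approximates $V_\varphi u$ for the compactly supported $u = \chi_{[-1,1]}$ in dimension $d=1$ with a Gaussian window by a plain Riemann sum: the modulus is negligible for large $|x|$, while along the $\xi$-axis it decays only at a polynomial rate, reflecting ${\mathrm{WF}}_G(u) \subseteq \{0\} \times ({\mathbf R} \setminus 0)$.

```python
import cmath
import math

def stft_indicator(x, xi, n=4000):
    """Midpoint-rule approximation of V_phi u(x, xi) for u = chi_[-1,1]
    and Gaussian window phi(y) = exp(-y^2/2), i.e. of the integral
        int_{-1}^{1} exp(-(y-x)^2/2) exp(-i y xi) dy.
    """
    h = 2.0 / n
    total = 0.0 + 0.0j
    for k in range(n):
        y = -1.0 + (k + 0.5) * h
        total += math.exp(-0.5 * (y - x) ** 2) * cmath.exp(-1j * y * xi) * h
    return total

# Along the x-axis the STFT inherits the Gaussian decay of the window...
print(abs(stft_indicator(8.0, 0.0)))   # essentially zero
# ...but along the xi-axis it decays only like 1/|xi|, due to the jump
# discontinuities of u at y = +-1:
print(abs(stft_indicator(0.0, 8.0)))
```

The contrast between the two printed magnitudes is many orders of magnitude, in line with the statement that for compactly supported distributions all Gabor singularities sit over $x=0$.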
\[PWprop:WFGinclusion2\] If $u \in {\mathscr{E}}'({\mathbf R^{d}}) + {\mathscr{S}}({\mathbf R^{d}})$ then $$\label{PWeq:WFGinclusion1} {\mathrm{WF}}_G (u) \subseteq \{ 0 \} \times \pi_2 {\mathrm{WF}}(u).$$ We may assume $u \in {\mathscr{E}}'({\mathbf R^{d}})$. We start with the less precise inclusion $$\label{PWeq:subinclusion2} {\mathrm{WF}}_G (u) \subseteq \{ 0 \} \times {\mathbf R^{d}} \setminus 0.$$ Suppose $0 \neq (x_0,\xi_0) \notin \{ 0 \} \times {\mathbf R^{d}} \setminus 0$. Then $x_0 \neq 0$ so for some $C>0$ we have $(x_0,\xi_0) \in \Gamma = \{ (x,\xi) \in T^* {\mathbf R^{d}} \setminus 0: \, |\xi| < C |x| \} \subseteq T^* {\mathbf R^{d}} \setminus 0$ which is an open conic set. If we pick $\varphi \in C_c^\infty({\mathbf R^{d}})$ we have $V_\varphi u(x,\xi) = 0$ if $|x| \geqslant A$ for $A>0$ sufficiently large due to $u \in {\mathscr{E}}'({\mathbf R^{d}})$. Since we have the bound for some $m \geqslant 0$, we obtain for any $N \geqslant 0$ $$\label{eq:STFTestimate1} \begin{aligned} \sup_{(x,\xi) \in \Gamma} {\langle (x,\xi)\rangle}^N |V_\varphi u(x,\xi)| & = \sup_{(x,\xi) \in \Gamma, \ |x| \leqslant A} {\langle (x,\xi)\rangle}^N |V_\varphi u (x,\xi)| \\ & \lesssim \sup_{(x,\xi) \in \Gamma, \ |x| \leqslant A} {\langle x\rangle}^{N+m} < \infty. \end{aligned}$$ It follows that $(x_0,\xi_0) \notin {\mathrm{WF}}_G(u)$, which proves the inclusion . In order to show the sharper inclusion , suppose $0 \neq (x_0,\xi_0) \notin \{ 0 \} \times \pi_2 {\mathrm{WF}}(u)$. Then either $x_0 \neq 0$ or $\xi_0 \notin \pi_2 {\mathrm{WF}}(u)$. If $x_0 \neq 0$ then by we have $(x_0,\xi_0) \notin {\mathrm{WF}}_G(u)$. Therefore we may assume that $x_0=0$ and $\xi_0 \notin \pi_2 {\mathrm{WF}}(u)$, and our goal is to show $(0,\xi_0) \notin {\mathrm{WF}}_G(u)$, which will prove . By [@PWHormander0 Proposition 8.1.3] we have $\pi_2 {\mathrm{WF}}(u) = \Sigma(u)$, where $\Sigma(u) \subseteq {\mathbf R^{d}} \setminus 0$ is a closed conic set defined as follows. 
A point $\eta \in {\mathbf R^{d}} \setminus 0$ satisfies $\eta \notin \Sigma(u)$ if $\eta \in \Gamma_2$ where $\Gamma_2 \subseteq {\mathbf R^{d}} \setminus 0$ is open and conic, and $$\label{PWeq:frequencydecay1} \sup_{\xi \in \Gamma_2} {\langle \xi\rangle}^N |\widehat u(\xi)| < \infty, \quad N \geqslant 0.$$ Thus we have $\xi_0 \notin \Sigma(u)$, so there exists an open conic set $\Gamma_2 \subseteq {\mathbf R^{d}} \setminus 0$ such that $\xi_0 \in \Gamma_2$, and holds. Let $\Gamma_2' \subseteq {\mathbf R^{d}} \setminus 0$ be an open conic set such that $\xi_0 \in \Gamma_2'$ and $\overline{\Gamma_2' \cap S_{d-1}} \subseteq \Gamma_2$. We have $$V_\varphi u (x,\xi) = \widehat{u T_x \overline{\varphi}} (\xi) = (2 \pi)^{-d} \widehat{u} * \widehat{T_x \overline{\varphi}} (\xi)$$ which gives $$\label{PWeq:STFTconvolution1} |V_\varphi u (x,\xi)| \lesssim |\widehat{u}| * |g| (\xi), \quad x, \ \xi \in {\mathbf R^{d}},$$ where $g (\xi)= \widehat \varphi(-\xi) \in {\mathscr{S}}({\mathbf R^{d}})$. By the Paley–Wiener–Schwartz theorem we have for some $m \geqslant 0$ $$\label{PWeq:PWS} |\widehat{u} (\xi)| \lesssim {\langle \xi\rangle}^m, \quad \xi \in {\mathbf R^{d}}.$$ Define the open conic set $$\Gamma = \{ (x,\xi) \in T^* {\mathbf R^{d}} \setminus 0: \, |x| < |\xi|, \, \xi \in \Gamma_2' \}.$$ Then $(0,\xi_0) \in \Gamma$. 
To prove $(0,\xi_0) \notin {\mathrm{WF}}_G(u)$ it thus suffices to show $$\sup_{(x,\xi) \in \Gamma} {\langle (x,\xi)\rangle}^N |V_\varphi u (x,\xi)| < \infty, \quad N \geqslant 0.$$ In turn, these estimates will, by \eqref{PWeq:STFTconvolution1} and the bound ${\langle (x,\xi)\rangle} \lesssim {\langle \xi\rangle}$ valid on $\Gamma$, follow from the estimates $$\label{PWeq:FTconvolution1} \sup_{\xi \in \Gamma_2'} {\langle \xi\rangle}^N (|\widehat{u}| * |g|) (\xi) < \infty, \quad N \geqslant 0.$$ Let ${\varepsilon}>0$ and split the convolution integral as $$|\widehat{u}| * |g| (\xi) = \int_{{\langle \eta\rangle} \leqslant {\varepsilon}{\langle \xi\rangle} } |\widehat{u} (\xi-\eta)| \, |g(\eta)| \, d \eta + \int_{{\langle \eta\rangle} > {\varepsilon}{\langle \xi\rangle} } |\widehat{u} (\xi-\eta)| \, |g(\eta)| \, d \eta.$$ If $\xi \in \Gamma_2'$, $|\xi| \geqslant 1$ and ${\langle \eta\rangle} \leqslant {\varepsilon}{\langle \xi\rangle}$ for ${\varepsilon}>0$ sufficiently small, then $\xi-\eta \in \Gamma_2$. Using \eqref{PWeq:frequencydecay1} and Peetre's inequality we thus obtain if $\xi \in \Gamma_2'$ and $|\xi| \geqslant 1$ for any $N \geqslant 0$ $$\label{PWeq:integralestimate1} \begin{aligned} \int_{{\langle \eta\rangle} \leqslant {\varepsilon}{\langle \xi\rangle} } |\widehat{u} (\xi-\eta)| \, |g(\eta)| \, d \eta & \lesssim \int_{{\langle \eta\rangle} \leqslant {\varepsilon}{\langle \xi\rangle} } {\langle \xi-\eta\rangle}^{-N} \, |g(\eta)| \, d \eta \\ & \lesssim {\langle \xi\rangle}^{-N} \int_{{\mathbf R^{d}}} {\langle \eta\rangle}^{N} \, |g(\eta)| \, d \eta \\ & \lesssim {\langle \xi\rangle}^{-N} \end{aligned}$$ since $g \in {\mathscr{S}}({\mathbf R^{d}})$. The remaining integral we estimate using \eqref{PWeq:PWS}.
We thus have for any $\xi \in {\mathbf R^{d}}$ and any $N \geqslant 0$ $$\label{PWeq:integralestimate2} \begin{aligned} \int_{{\langle \eta\rangle} > {\varepsilon}{\langle \xi\rangle} } |\widehat{u} (\xi-\eta)| \, |g(\eta)| \, d \eta & \lesssim \int_{{\langle \eta\rangle} > {\varepsilon}{\langle \xi\rangle} } {\langle \xi-\eta\rangle}^{m} \, |g(\eta)| \, d \eta \\ & \lesssim {\langle \xi\rangle}^{-N} \int_{{\langle \eta\rangle} > {\varepsilon}{\langle \xi\rangle} } {\langle \xi\rangle}^{m+N} {\langle \eta\rangle}^{m} \, |g(\eta)| \, d \eta \\ & \lesssim {\langle \xi\rangle}^{-N} \int_{{\langle \eta\rangle} > {\varepsilon}{\langle \xi\rangle} } {\langle \eta\rangle}^{2 m+N} \, |g(\eta)| \, d \eta \\ & \lesssim {\langle \xi\rangle}^{-N}. \end{aligned}$$ Combining \eqref{PWeq:integralestimate1} and \eqref{PWeq:integralestimate2} shows that the estimates \eqref{PWeq:FTconvolution1} hold, which as earlier explained proves $(0,\xi_0) \notin {\mathrm{WF}}_G(u)$. This proves the inclusion \eqref{PWeq:WFGinclusion1}. \[PWcor:WFGcompact\] If $u \in {\mathscr{E}}'({\mathbf R^{d}}) + {\mathscr{S}}({\mathbf R^{d}})$ then $${\mathrm{WF}}_G (u) = \{ 0 \} \times \pi_2 {\mathrm{WF}}(u) = \{ 0 \} \times \Sigma(u).$$ The next result is a consequence of [@PWHormander1 Proposition 2.7]. We include a proof here in order to give a self-contained account, and also in order to show an alternative proof technique. \[PWprop:smooth\] If $u \in {\mathscr{S}}'({\mathbf R^{d}})$ and ${\mathrm{WF}}_G(u) \cap ( \{ 0 \} \times {\mathbf R^{d}}) = \emptyset$ then $u \in C^\infty({\mathbf R^{d}})$ and there exists $L \geqslant 0$ such that $$\label{PWeq:polynomialbound1} |\partial^\alpha u(x)| \leqslant C_\alpha {\langle x\rangle}^{L + |\alpha|}, \quad x \in {\mathbf R^{d}}, \quad \alpha \in {\mathbf N^{d}}, \quad C_\alpha>0.$$ From the assumptions it follows that for some $C>0$ we have $${\mathrm{WF}}_G(u) \subseteq \Gamma := \{ (x,\xi) \in T^* {\mathbf R^{d}} \setminus 0: \, |\xi| < C |x| \}.$$ Let $\psi \in {\mathscr{S}}({\mathbf R^{d}})$ satisfy $\| \psi \|_{L^2} = 1$.
We use the Moyal identity and show that the integral for $\partial^\alpha u$ is absolutely convergent for any $\alpha \in {\mathbf N^{d}}$. Thus we write formally $$\label{PWeq:STFTreconstruction} \partial^\alpha u (y) = (2\pi)^{-d} \sum_{\beta \leqslant \alpha} \binom{\alpha}{\beta} \int_{{\mathbf R^{2d}}} V_\psi u(x,\xi) \, (i\xi)^\beta e^{i \langle \xi,y \rangle} \partial^{\alpha-\beta} \psi(y-x) \, d x \, d \xi.$$ We split the integral into two parts. Since ${\mathbf R^{2d}} \setminus \Gamma \subseteq T^* {\mathbf R^{d}}$ is a closed conic set that does not intersect ${\mathrm{WF}}_G(u)$ we have for any $N \geqslant 0$ and any $k \geqslant 0$ $$\label{PWeq:integralpiece1} \begin{aligned} & \left| \int_{{\mathbf R^{2d}} \setminus \Gamma} V_\psi u(x,\xi) \, (i\xi)^\beta e^{i \langle \xi,y \rangle} \partial^{\alpha-\beta} \psi(y-x) \, d x \, d \xi \right| \\ & \leqslant \int_{{\mathbf R^{2d}} \setminus \Gamma} |V_\psi u(x,\xi)| \, {\langle \xi\rangle}^{|\alpha|} \, |\partial^{\alpha-\beta} \psi(y-x)| \, d x \, d \xi \\ & \lesssim \int_{{\mathbf R^{2d}} \setminus \Gamma} {\langle (x,\xi)\rangle}^{-N} \, {\langle \xi\rangle}^{|\alpha|} \, |\partial^{\alpha-\beta} \psi(y-x)| \, d x \, d \xi \\ & \lesssim \int_{{\mathbf R^{2d}} \setminus \Gamma} {\langle \xi\rangle}^{|\alpha|-N} \, {\langle x\rangle}^{N} \, {\langle y-x\rangle}^{-k} \, d x \, d \xi \\ & \lesssim {\langle y\rangle}^{k} \int_{{\mathbf R^{2d}}} {\langle \xi\rangle}^{|\alpha|-N} \, {\langle x\rangle}^{N-k} \, d x \, d \xi \\ & \lesssim {\langle y\rangle}^{k} \end{aligned}$$ provided $N > d + |\alpha|$ and $k > d + N$. For the remaining part of the integral we use the estimate $|V_\psi u(x,\xi)| \lesssim {\langle (x,\xi)\rangle}^{m}$ for some $m \geqslant 0$, which holds since $u \in {\mathscr{S}}'({\mathbf R^{d}})$.
Using $|\xi| < C |x|$ when $(x,\xi) \in \Gamma$, this gives for any $k \geqslant 0$ $$\label{PWeq:integralpiece2} \begin{aligned} & \left| \int_{\Gamma} V_\psi u(x,\xi) \, (i\xi)^\beta e^{i \langle \xi,y \rangle} \partial^{\alpha-\beta} \psi(y-x) \, d x \, d \xi \right| \\ & \lesssim \int_{|\xi| < C |x|} {\langle (x,\xi)\rangle}^m \, {\langle \xi\rangle}^{|\alpha|} \, |\partial^{\alpha-\beta} \psi(y-x)| \, d x \, d \xi \\ & \lesssim \int_{|\xi| < C |x|} {\langle \xi\rangle}^{-d-1} \, {\langle x\rangle}^{|\alpha|+m+d+1} \, |\partial^{\alpha-\beta} \psi(y-x)| \, d x \, d \xi \\ & \lesssim \int_{{\mathbf R^{2d}}} {\langle \xi\rangle}^{-d-1} \, {\langle x\rangle}^{|\alpha|+m+d+1} \, {\langle y-x\rangle}^{-k} \, d x \, d \xi \\ & \lesssim {\langle y\rangle}^{k} \int_{{\mathbf R^{2d}}} {\langle \xi\rangle}^{-d-1} \, {\langle x\rangle}^{|\alpha|+m+d+1-k} \, d x \, d \xi \\ & \lesssim {\langle y\rangle}^{k} \end{aligned}$$ provided $k > |\alpha|+m+2d+1$. Combining \eqref{PWeq:integralpiece1} and \eqref{PWeq:integralpiece2} shows, in view of \eqref{PWeq:STFTreconstruction}, that $u \in C^\infty({\mathbf R^{d}})$, and the estimate \eqref{PWeq:polynomialbound1} follows. Propagation of singularities for Schrödinger equations {#PWsec:propsing} ====================================================== In this section we briefly discuss the Cauchy initial value problem for a Schrödinger equation of the form $$\label{PWeq:schrodeq} \left\{ \begin{array}{rl} \partial_t u(t,x) + q^w(x,D) u (t,x) & = 0, \\ u(0,\cdot) & = u_0, \end{array} \right.$$ where $u_0 \in {\mathscr{S}}'({\mathbf R^{d}})$, $t \geqslant 0$ and $x \in {\mathbf R^{d}}$. The Weyl symbol of the Hamiltonian $q^w(x,D)$ is a quadratic form $$q(x,\xi) = \langle (x, \xi), Q (x, \xi) \rangle, \quad x, \, \xi \in {\mathbf R^{d}},$$ where $Q \in {\mathbf C^{2d \times 2d}}$ is a symmetric matrix with ${{\rm Re} \, }Q \geqslant 0$.
The *Hamilton map* $F$ corresponding to $q$ is defined by $$F = {\mathcal{J}}Q \in {\mathbf C^{2d \times 2d}}$$ where $${\mathcal{J}}= \left( \begin{array}{rr} 0 \ & I \\ -I \ & 0 \end{array} \right) \in {\mathbf R^{2d \times 2d}}$$ is the standard symplectic matrix. The equation \eqref{PWeq:schrodeq} is solved for $t \geqslant 0$ by $$u(t,x) = e^{-t q^w(x,D)} u_0(x)$$ where the propagator $e^{-t q^w(x,D)}$ is defined in terms of semigroup theory [@PWHormander2; @PWYosida1]. According to [@PWRodino2 Theorem 6.2] the Gabor wave front set propagates as stated in the following result. Let $q$ be a quadratic form on $T^* {\mathbf R^{d}}$ defined by a symmetric matrix $Q \in {\mathbf C^{2d \times 2d}}$, ${{\rm Re} \, }Q \geqslant 0$ and $F = {\mathcal{J}}Q$. Then for $u \in {\mathscr{S}}'({\mathbf R^{d}})$ and $t>0$ $${\mathrm{WF}}_G ( e^{-t q^w(x,D)} u ) \subseteq \left( \left( e^{2 t {{\rm Im} \, }F} ( {\mathrm{WF}}_G (u) \cap S ) \right) \cap S \right) \setminus 0$$ where the *singular space* is defined by $$\label{singspace} S=\Big(\bigcap_{j=0}^{2d-1} {\operatorname{Ker}}\big[{{\rm Re} \, }F({{\rm Im} \, }F)^j \big]\Big) \cap T^* {\mathbf R^{d}} \subseteq T^* {\mathbf R^{d}}.$$ Under the additional assumption on the Poisson bracket $\{q,\overline q\} = 0$, [@PWRodino2 Corollary 6.3] says that $S = {\operatorname{Ker}}({{\rm Re} \, }F )$ and hence $${\mathrm{WF}}_G ( e^{-t q^w(x,D)} u ) \subseteq \left( \left( e^{2 t {{\rm Im} \, }F} ( {\mathrm{WF}}_G (u) \cap {\operatorname{Ker}}({{\rm Re} \, }F ) ) \right) \cap {\operatorname{Ker}}({{\rm Re} \, }F ) \right) \setminus 0$$ for $u \in {\mathscr{S}}'({\mathbf R^{d}})$ and $t>0$. If we combine these results with Proposition \[PWprop:smooth\] we get the following consequence. Let $q$ be a quadratic form on $T^* {\mathbf R^{d}}$ defined by a symmetric matrix $Q \in {\mathbf C^{2d \times 2d}}$, ${{\rm Re} \, }Q \geqslant 0$ and $F = {\mathcal{J}}Q$.
If $$S \cap (\{0\} \times {\mathbf R^{d}})= \{ 0 \},$$ which if $\{q,\overline q\} = 0$ reads $${\operatorname{Ker}}({{\rm Re} \, }F ) \cap (\{0\} \times {\mathbf R^{d}})= \{ 0 \},$$ then for $u \in {\mathscr{S}}'({\mathbf R^{d}})$ and $t>0$ we have $e^{-t q^w(x,D)} u \in C^\infty({\mathbf R^{d}})$, and $$|\partial^\alpha e^{-t q^w(x,D)} u (x)| \lesssim {\langle x\rangle}^{L_t + |\alpha|}, \quad x \in {\mathbf R^{d}}, \quad \alpha \in {\mathbf N^{d}},$$ for some $L_t \geqslant 0$. Next we specialize to the Cauchy initial value problem for the harmonic oscillator Schrödinger equation $$\label{PWeq:harmonicoscillator} \left\{ \begin{array}{rl} \partial_t u(t,x) + i(|x|^2- \Delta_x) u(t,x) & = 0, \qquad t \geqslant 0, \quad x \in {\mathbf R^{d}}, \\ u(0,\cdot) & = u_0 \in {\mathscr{S}}'({\mathbf R^{d}}). \end{array} \right.$$ This problem is a particular case of the general problem \eqref{PWeq:schrodeq} with $Q = i I_{2d}$. When ${{\rm Re} \, }Q = 0$ the propagator is the unitary group $e^{- t q^w(x,D)} = \mu(e^{2 t {{\rm Im} \, }F})$, $t \in {\mathbf R}$, on $L^2({\mathbf R^{d}})$ [@PWRodino2], and $${\mathrm{WF}}_G ( e^{- t q^w(x,D)} u_0 ) = e^{2 t {{\rm Im} \, }F} {\mathrm{WF}}_G(u_0), \quad t \in {\mathbf R}, \quad u_0 \in {\mathscr{S}}'({\mathbf R^{d}}).$$ The propagation is exact and reversible in time. This result is a consequence of the metaplectic representation and the symplectic invariance of the Gabor wave front set. The quoted results [@PWRodino2 Theorem 6.2 and Corollary 6.3] are not needed. For the equation \eqref{PWeq:harmonicoscillator} we thus have periodic propagation of the Gabor wave front set: $$\label{PWeq:propagationharmosc} {\mathrm{WF}}_G(e^{- t q^w(x,D)} u_0) = e^{2 t {\mathcal{J}}} {\mathrm{WF}}_G(u_0) = \left( \begin{array}{cc} \cos(2t) I_d & \sin(2t) I_d \\ -\sin(2t) I_d & \cos(2t) I_d \\ \end{array} \right) {\mathrm{WF}}_G(u_0), \quad t \in {\mathbf R}$$ (cf. [@PWRodino2 Example 7.5]).
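The closed form of the propagation matrix in \eqref{PWeq:propagationharmosc} can be checked numerically. The sketch below is illustrative only (the truncated-series helper `expm_taylor` is ad hoc and not from the cited sources): it verifies that ${\mathcal{J}}^2 = -I_{2d}$, which forces $e^{2t{\mathcal{J}}} = \cos(2t)\, I_{2d} + \sin(2t)\, {\mathcal{J}}$, and that at $t = \pi/2$ the matrix reduces to $-I_{2d}$, the coordinate reflection.

```python
import numpy as np

d = 2  # number of spatial variables; the phase space T*R^d has dimension 2d

# The matrix J = [[0, I], [-I, 0]] generating the rotation e^{2tJ}.
I_d = np.eye(d)
Z = np.zeros((d, d))
J = np.block([[Z, I_d], [-I_d, Z]])

# J^2 = -I_{2d}, hence exp(2tJ) = cos(2t) I + sin(2t) J.
assert np.allclose(J @ J, -np.eye(2 * d))

def propagator_matrix(t):
    """The matrix e^{2tJ} governing propagation of WF_G for the
    harmonic oscillator, in closed form."""
    return np.cos(2 * t) * np.eye(2 * d) + np.sin(2 * t) * J

def expm_taylor(A, n_terms=60):
    """Ad hoc truncated Taylor series for the matrix exponential."""
    out = np.zeros_like(A)
    term = np.eye(A.shape[0])
    for k in range(n_terms):
        out = out + term
        term = term @ A / (k + 1)
    return out

# The closed form agrees with the series at a generic time.
t = 0.7
assert np.allclose(propagator_matrix(t), expm_taylor(2 * t * J))

# At t = pi/2 the matrix is -I_{2d}: singularities are reflected.
assert np.allclose(propagator_matrix(np.pi / 2), -np.eye(2 * d))
```

At $t = \pi/4$ the matrix equals ${\mathcal{J}}$ itself, consistent with the identification of the propagator with (a multiple of) the Fourier transform at those times.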
The following result gives a partial explanation, from the point of view of the Gabor wave front set, of Weinstein’s [@PWWeinstein1] and Zelditch’s [@PWZelditch1] results on the propagation of the $C^\infty$ wave front set for the harmonic oscillator. The result says that a compactly supported initial datum will give a solution that is smooth except at times in a lattice on the time axis. At the points of the lattice the propagator is the identity or coordinate reflection $f (x) \mapsto f(- x)$, which gives precise propagation of singularities by possible sign changes. Note that we do not allow potentials as general as those in [@PWWeinstein1; @PWZelditch1]. Consider the equation \eqref{PWeq:harmonicoscillator} and suppose $u_0 \in {\mathscr{E}}'({\mathbf R^{d}}) + {\mathscr{S}}({\mathbf R^{d}})$. If $t \notin (\pi/2) {\mathbf Z}$ then $e^{- t q^w(x,D)} u_0 \in C^\infty({\mathbf R^{d}})$ and for some $L_t \geqslant 0$ $$\label{PWeq:growthestimate1} |\partial^\alpha e^{-t q^w(x,D)} u_0 (x)| \lesssim {\langle x\rangle}^{L_t + |\alpha|}, \quad x \in {\mathbf R^{d}}, \quad \alpha \in {\mathbf N^{d}}.$$ If $t \in (\pi/2) {\mathbf Z}$ then $$\begin{aligned} {\mathrm{WF}}_G( e^{- t q^w(x,D)} u_0 ) & = (-1)^{2t/\pi} {\mathrm{WF}}_G( u_0 ), \label{PWpropWFG1} \\ {\mathrm{WF}}( e^{- t q^w(x,D)} u_0 ) & = (-1)^{2t/\pi} {\mathrm{WF}}( u_0 ). \label{PWpropWFG2}\end{aligned}$$ Corollary \[PWcor:WFGcompact\] implies ${\mathrm{WF}}_G (u_0) \subseteq \{ 0 \} \times ({\mathbf R^{d}} \setminus 0)$. Combined with \eqref{PWeq:propagationharmosc} this means that $${\mathrm{WF}}_G( e^{- t q^w(x,D)} u_0 ) \cap (\{0\} \times {\mathbf R^{d}}) = \emptyset$$ unless $t \in (\pi/2) {\mathbf Z}$. By Proposition \[PWprop:smooth\] we then have $e^{- t q^w(x,D)} u_0 \in C^\infty({\mathbf R^{d}})$ and the estimates \eqref{PWeq:growthestimate1} unless $t \in (\pi/2) {\mathbf Z}$. If $t=\pi k/2$ for $k \in {\mathbf Z}$ then $\cos(2t) = (-1)^k$ and $\sin(2t) = 0$. Thus \eqref{PWeq:propagationharmosc} yields the following conclusion.
If $t = \pi k$ for $k \in {\mathbf Z}$ then ${\mathrm{WF}}_G( e^{- t q^w(x,D)} u_0 ) = {\mathrm{WF}}_G( u_0 )$ whereas if $t = \pi (k+1/2)$ for some $k \in {\mathbf Z}$ then ${\mathrm{WF}}_G( e^{- t q^w(x,D)} u_0 ) = - {\mathrm{WF}}_G( u_0 )$. This proves \eqref{PWpropWFG1}. When $k = 2 t /\pi \in {\mathbf Z}$ we have $$e^{2 t {\mathcal{J}}} = \left( \begin{array}{cc} \cos(2t) I_d & \sin(2t) I_d \\ -\sin(2t) I_d & \cos(2t) I_d \\ \end{array} \right) = (-1)^k I_{2d}$$ and therefore the corresponding metaplectic operator is $\mu(e^{2 t {\mathcal{J}}}) f(x) = f( (-1)^k x)$. Thus the propagator $e^{- t q^w(x,D)} = \mu(e^{2 t {\mathcal{J}}})$ is the reflection operator $f(x) \mapsto f((-1)^k x)$ when $k = 2 t /\pi \in {\mathbf Z}$, which justifies ${\mathrm{WF}}( e^{- t q^w(x,D)} u_0 ) = (-1)^{2t/\pi} {\mathrm{WF}}( u_0 )$ when $2 t /\pi \in {\mathbf Z}$, that is, \eqref{PWpropWFG2}. If $t \in \pi(1 + 2 {\mathbf Z})/4$ then $e^{2 t {\mathcal{J}}} = \pm {\mathcal{J}}$ and consequently $$e^{- t q^w(x,D)} = \mu(e^{2 t {\mathcal{J}}}) = \mu(\pm {\mathcal{J}}) = \left\{ \begin{array}{l} (2 \pi)^{-d/2} {\mathscr{F}}\\ (2 \pi)^{d/2} {\mathscr{F}}^{-1} \end{array} \right.$$ see e.g. [@PWCarypis1]. When $t \in \pi(1 + 2 {\mathbf Z})/4$ the estimates \eqref{PWeq:growthestimate1} are thus a consequence of the Paley–Wiener–Schwartz theorem [@PWHormander0 Theorem 7.3.1]. When $t \notin \pi {\mathbf Z}/4$ the estimates \eqref{PWeq:growthestimate1} reveal that the solution satisfies estimates similar to those satisfied by elements of ${\mathscr{F}}{\mathscr{E}}' ({\mathbf R^{d}})$. [99.]{} M. Cappiello. Wave front set at infinity for tempered ultradistributions and hyperbolic equations. *Ann. Univ. Ferrara* **52** (2) (2006), 247–270. E. Carypis and P. Wahlberg. Propagation of exponential phase space singularities for Schrödinger equations with quadratic Hamiltonians. *J. Fourier Anal. Appl.* **23** (3) (2017), 530–571. S. Coriasco and L. Maniccia. Wave front set at infinity and hyperbolic linear operators with multiple characteristics. *Ann. Global Anal. Geom.* **24** (2003), 375–400. S. Coriasco, K.
Johansson, and J. Toft. Global wave-front sets of Banach, Fréchet and modulation space types, and pseudo-differential operators. *J. Differential Equations* **254** (8) (2013), 3228–3258. G. B. Folland. *Harmonic Analysis in Phase Space*. Princeton University Press, 1989. K. Gröchenig. *Foundations of Time-Frequency Analysis*. Birkhäuser, Boston, 2001. L. Hörmander. *The Analysis of Linear Partial Differential Operators*, Vol. I, III. Springer, Berlin, 1990. L. Hörmander. Quadratic hyperbolic operators. *Microlocal Analysis and Applications*, LNM vol. 1495, L. Cattabriga, L. Rodino (Eds.), pp. 118–160, 1991. L. Hörmander. Symplectic classification of quadratic forms, and general Mehler formulas. *Math. Z.* **219** (3) (1995), 413–449. J. Leray. *Lagrangian Analysis and Quantum Mechanics: A Mathematical Structure Related to Asymptotic Expansions and the Maslov Index*. The MIT Press, 1981. N. Lerner. *Metrics on the Phase Space and Non-Selfadjoint Pseudo-Differential Operators*. Birkhäuser, Basel, 2010. R. Melrose. *Geometric Scattering Theory*. Stanford Lectures. Cambridge University Press, Cambridge (1995). F. Nicola and L. Rodino. *Global Pseudo-Differential Calculus on Euclidean Spaces*. Birkhäuser, Basel, 2010. L. Rodino and P. Wahlberg. The Gabor wave front set. *Monatsh. Math.* **173** (4) (2014), 625–655. K. Pravda–Starov, L. Rodino and P. Wahlberg. Propagation of Gabor singularities for Schrödinger equations with quadratic Hamiltonians. *Math. Nachr.* **291** (1) (2018), 128–159. M. A. Shubin. *Pseudodifferential Operators and Spectral Theory*. Springer, 2001. P. Wahlberg. Propagation of polynomial phase space singularities for Schrödinger equations with quadratic Hamiltonians. *Mathematica Scandinavica* **122** (2018), 107–140. A. Weinstein. A symbol class for some Schrödinger equations on ${\mathbf R^{n}}$. *Amer. J. Math.* **107** (1) (1985), 1–21. K. Yosida. *Functional Analysis*. Classics in Mathematics, Springer-Verlag, Berlin Heidelberg, 1995. S.
Zelditch. Reconstruction of singularities for solutions of Schrödinger’s equation. *Commun. Math. Phys.* **90** (1983), 1–26.
--- author: - | Bas Lemmens\ Mathematics Institute, University of Warwick\ CV4 7AL Coventry, United Kingdom\ E-mail: [[email protected]]{}\ \ Cormac Walsh[^1]\ INRIA Saclay & CMAP, École Polytechnique,\ 91128 Palaiseau, France\ E-mail: [[email protected]]{} --- [Abstract.-]{} We show that the isometry group of a polyhedral Hilbert geometry coincides with its group of collineations (projectivities) if and only if the polyhedron is not an $n$-simplex with $n\geq 2$. Moreover, we determine the isometry group of the Hilbert geometry on the $n$-simplex for all $n\geq 2$, and find that it has the collineation group as an index-two subgroup. These results confirm, for the class of polyhedral Hilbert geometries, several conjectures posed by P. de la Harpe.\ [AMS Classification (2000):]{} 53C60, 22F50\ [Keywords.-]{} Hilbert metric, horofunction boundary, detour metric, isometry group, collineations, Busemann points Introduction {#sec:1} ============ In a letter to Klein, Hilbert remarked that every open bounded convex subset $X$ of $\mathbb{R}^n$ can be equipped with a metric $d_X\colon X\times X\to[0,\infty)$ defined by $$\label{eq:1.1} d_X(x,y)=\log\,[x',x,y,y'],$$ where $x',y'\in\partial X$, the points $x',x,y,y'$ are aligned in this order, and $$\label{eq:1.2} [x',x,y,y']=\frac{|x'y|\,|y'x|}{|x'x|\,|y'y|}$$ is the *cross-ratio*. This metric is called the *Hilbert metric* and $(X,d_X)$ is said to be the *Hilbert geometry* on $X$. As Hilbert noted [@Hil], if $X$ is an open $n$-dimensional ellipsoid, then $(X,d_X)$ is a model for the hyperbolic $n$-space. On the other hand, if $X$ is an open $n$-simplex, then $(X,d_X)$ is isometric to a normed space. 
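In dimension one the defining formula \eqref{eq:1.1} can be made completely explicit: for $X = (0,1)$ and $x \leq y$ the boundary points are $x' = 0$ and $y' = 1$, so $d_X(x,y) = \log\big(y(1-x)/(x(1-y))\big)$. The following numerical sketch (an illustration we add here, not taken from the sources cited) checks that this is exactly the logit coordinate change, so that $(0,1)$ with the Hilbert metric is isometric to the real line:

```python
import math

def hilbert_distance_interval(x, y):
    """Hilbert metric on X = (0,1) via the cross-ratio with
    boundary points x' = 0 and y' = 1 (after ordering x <= y)."""
    x, y = min(x, y), max(x, y)
    if x == y:
        return 0.0
    # [x', x, y, y'] = (|x'y| |y'x|) / (|x'x| |y'y|)
    return math.log((y * (1 - x)) / (x * (1 - y)))

def logit(t):
    return math.log(t / (1 - t))

a, b, c = 0.2, 0.5, 0.9
# d_X(a, b) = logit(b) - logit(a): the logit map is an isometry onto R.
assert math.isclose(hilbert_distance_interval(a, b), logit(b) - logit(a))
# Symmetry, and additivity along the segment (a geodesic in dimension one).
assert math.isclose(hilbert_distance_interval(a, c),
                    hilbert_distance_interval(c, a))
assert math.isclose(hilbert_distance_interval(a, c),
                    hilbert_distance_interval(a, b)
                    + hilbert_distance_interval(b, c))
```

This one-dimensional picture is the simplest instance of the normed-space behaviour of simplices.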
More precisely, let $V=\mathbb{R}^{n+1}/\sim$, where $x\sim y$ if $x=y +\alpha (1,1,\ldots,1)$ for some $\alpha \in\mathbb{R}$, and equip the vector space $V$ with the *variation norm*: $$\|x\|_{\mathrm{var}}= \max_i x_i - \min_j x_j.$$ It is known [@Nu1 Proposition 1.7] that if $X$ is an $n$-dimensional simplex, then $(X,d_X)$ is isometric to $(V,\|\cdot\|_{\mathrm{var}})$. Hilbert geometries display features of negative curvature and are of interest in metric geometry. The extent to which the shape of the domain $X$ affects the geometry of $(X,d_X)$ has been the subject of numerous studies, for example [@Be; @Bu; @CV; @FK; @KN; @So1; @So2; @Wa1]. The Hilbert metric also has striking applications in the spectral theory of (non-linear) operators on cones in a Banach space; see, for instance, [@Bi; @Bus; @LN; @Nu1; @Sa]. We study the group of isometries $\mathrm{Isom}(X)$ of the Hilbert geometry when the domain $X$ is a polyhedron in $\mathbb{R}^n$, in other words, when $X$ is the intersection of finitely many open half-spaces. For simplicity, we call such Hilbert geometries *polyhedral*. Natural isometries arise from collineations (projectivities) of $X$. Indeed, let $\mathbb{P}^n=\mathbb{R}^n\cup\mathbb{P}^{n-1}$ be the real $n$-dimensional projective space. Suppose that $X$ is contained in the open cell $\mathbb{R}^n$ inside $\mathbb{P}^n$, and let $\mathrm{Coll}(X)=\{h\in\mathrm{PGL}(n,\mathbb{R})\colon h(X)=X\}$ be the group of collineations that map $X$ onto itself. As every collineation preserves the cross-ratio, we have that $\mathrm{Coll}(X)\subseteq \mathrm{Isom}(X)$. In [@dlH], de la Harpe raised a number of questions concerning $\mathrm{Isom}(X)$ and its relation to $\mathrm{Coll}(X)$. In particular, he conjectured that $\mathrm{Isom}(X)$ is a Lie group, and that $\mathrm{Isom}(X)$ acts transitively on $X$ if and only if $\mathrm{Coll}(X)$ does. He also asked for which sets $X$ the groups $\mathrm{Isom}(X)$ and $\mathrm{Coll}(X)$ coincide. 
Of course, if the two groups are equal, then $\mathrm{Isom}(X)$ is a Lie group, since $\mathrm{Coll}(X)$ is a closed subgroup of $\mathrm{PGL}(n,\mathbb{R})$. De la Harpe [@dlH Proposition 3] proved that if the norm closure $\overline{X}$ of $X$ is strictly convex, then the groups are equal. He also determined $\mathrm{Isom}(X)$ when $X$ is an open $2$-simplex and showed that $\mathrm{Isom}(X)=\mathrm{Coll}(X)$ when $X$ is an open quadrilateral in the plane. Our main results are the following two theorems, which confirm de la Harpe’s conjectures for the class of polyhedral Hilbert geometries. \[thm:1.1\] If $(X,d_X)$ is a polyhedral Hilbert geometry, then $$\mathrm{Isom}(X)=\mathrm{Coll}(X)$$ if and only if $X$ is not an open $n$-simplex with $n\geq 2$. We also determine the isometry group in the case of the $n$-simplex. Let $\sigma_{n+1}$ be the group of coordinate permutations on $V$, let $\rho\colon V\to V$ be the isometry given by $\rho(x)=-x$ for $x\in V$, and identify the group of translations in $V$ with $\mathbb{R}^n$. \[thm:1.2\] If $X$ is an open $n$-simplex with $n\geq 2$, then $$\mathrm{Coll}(X) \cong \mathbb{R}^n\rtimes \sigma_{n+1} \quad\mbox{and}\quad \mathrm{Isom}(X) \cong \mathbb{R}^n\rtimes \Gamma_{n+1},$$ where $\Gamma_{n+1} = \sigma_{n+1}\times\langle\rho\rangle$. It is clear from this that the collineation group of the $n$-simplex ($n\geq 2$) is a subgroup of index two in the isometry group. Birkhoff’s version of the Hilbert metric {#sec:2} ======================================== In [@Bi] Birkhoff used the Hilbert metric to analyse the spectral properties of linear operators that leave a closed cone in a Banach space invariant, which led him to consider another version of the Hilbert metric. We shall use both versions in this paper. In Birkhoff’s setting, one considers an open cone $C\subseteq\mathbb{R}^{n+1}$, that is, $C$ is open and convex and $\lambda C\subseteq C$ for all $\lambda > 0$. 
If, in addition, $\overline{C}\cap (-\overline{C})=\{0\}$, then we call $C$ a *proper* open cone. An open cone $C$ induces a pre-order $\leq_C$ on $\mathbb{R}^{n+1}$ by $x\leq_C y$ if $y-x\in \overline{C}$. If $C$ is a proper open cone, then $\leq_C$ is also anti-symmetric and hence a partial ordering on $\mathbb{R}^{n+1}$. For $x\in C$ and $y\in\mathbb{R}^{n+1}$, define $$M(y/x;C) =\inf\{\lambda > 0\colon y\leq_C\lambda x\}.$$ Note that $M(y/x;C)$ is finite since $C$ is open. Also note that, by the Hahn-Banach separation theorem, $x\leq_C y$ if and only if $\langle\phi,x\rangle\leq \langle\phi,y\rangle$ for all $\phi\in C^*$, where $C^*=\{\phi\in\mathbb{R}^{n+1}\colon \mbox{$\langle\phi,x\rangle\geq 0$ for all $x\in \overline{C}$}\}$ is the *dual cone* of $\overline{C}$. Thus, $$\label{eq:2.2} M(y/x;C)=\sup_{\phi\in C^*\setminus\{0\}}\frac{\langle\phi,y\rangle}{\langle\phi,x\rangle} \mbox{\quad for all $x\in C$ and $y\in\mathbb{R}^{n+1}$.}$$ Birkhoff’s version of the Hilbert metric is called *Hilbert’s projective metric* on $C$ and is defined by $$d_C(x,y)=\log M(x/y;C) + \log M(y/x;C)\mbox{\quad for all }x,y\in C.$$ Note that $d_C(\alpha x,\beta y)=d_C(x,y)$ for all $\alpha,\beta>0$. It is known [@Nu1] that $d_C$ is a semi-metric on the rays in $C$, but in general not a metric, as $d_C(x,y) =0 $ does not imply $x=\alpha y$ for some $\alpha>0$. If, however, $C$ is a proper open cone, then $d_C$ is a genuine metric on the rays in $C$. To establish the connection with the Hilbert metric, we imagine $X$ as a subset of a hyperplane in $\mathbb{R}^{n+1}$ that does not contain the origin. Let $C_X$ be the cone generated by $X$ in $\mathbb{R}^{n+1}$. So, $$C_X=\{\lambda x\in\mathbb{R}^{n+1}\colon \lambda> 0\mbox{ and }x\in X\}$$ is a proper open cone in $\mathbb{R}^{n+1}$. Birkhoff [@Bi] proved that $d_C$ and $d_X$ coincide on $X$. In fact, $$\log M(x/y;C)=\log\frac{|y'x|}{|y'y|} \mbox{\quad and \quad} \log M(y/x;C)=\log\frac{|x'y|}{|x'x|}$$ for all $x,y\in X$. 
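For the standard example of the open positive orthant $C = (0,\infty)^{n+1}$ (used here purely as an assumed illustration), the pre-order $\leq_C$ is coordinatewise, so $M(y/x;C) = \max_i y_i/x_i$ and $d_C$ is explicitly computable. The sketch below checks the scale invariance $d_C(\alpha x,\beta y)=d_C(x,y)$ together with the identity $d_C(x,y) = \|\log x - \log y\|_{\mathrm{var}}$, which in coordinates is the isometry between the open simplex and $(V,\|\cdot\|_{\mathrm{var}})$ recalled in the introduction:

```python
import numpy as np

def M(y, x):
    """M(y/x; C) for the open positive orthant C: the least lambda
    with y <= lambda * x coordinatewise."""
    return np.max(y / x)

def hilbert_projective(x, y):
    """Hilbert's projective metric d_C(x,y) = log M(x/y) + log M(y/x)."""
    return np.log(M(x, y)) + np.log(M(y, x))

def variation_norm(v):
    return np.max(v) - np.min(v)

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=4)
y = rng.uniform(0.5, 2.0, size=4)

# d_C(a x, b y) = d_C(x, y) for all a, b > 0.
assert np.isclose(hilbert_projective(3.0 * x, 0.25 * y),
                  hilbert_projective(x, y))

# d_C(x, y) = || log x - log y ||_var on the positive orthant.
assert np.isclose(hilbert_projective(x, y),
                  variation_norm(np.log(x) - np.log(y)))
```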
We write $$\mathcal{F}_C(x,y) = \log M(x/y;C)$$ for all $y\in C$ and $x\in\mathbb{R}^{n+1}$, and $$\mathcal{RF}_C(x,y) = \log M(y/x;C)$$ for all $x\in C$ and $y\in\mathbb{R}^{n+1}$. The function $\mathcal{F}_C$ is called the *Funk metric* after P. Funk who used it in [@Funk]. It is easy to verify that $\mathcal{F}_C(x,z)\leq \mathcal{F}_C(x,y)+\mathcal{F}_C(y,z)$ and $\mathcal{F}_C(x,x) = 0$ for all $x,y,z\in C$, but $\mathcal{F}_C(x,y)$ is neither symmetric nor non-negative. We call $\mathcal{RF}_C$ the *reverse-Funk metric*. We have that $$d_C(x,y) = \mathcal{F}_C(x,y)+\mathcal{RF}_C(x,y) \mbox{\quad for all $x,y\in C$}.$$ We write $[0]_C$ to denote the subspace $\{x\in \mathbb{R}^{n+1}\colon \mbox{$x\in \overline{C}$ and $-x\in \overline{C}$}\}$. Clearly if $z\in [0]_C$, then $\langle\phi,z\rangle=0$ for all $\phi\in C^*$. From (\[eq:2.2\]) we deduce that if $z\in [0]_C$ and $x,y\in C$, then $$M((x+z)/y;C)=M(x/y;C)\mbox{\quad and \quad } M(y/(x+z);C) = M(y/x;C),$$ so that $d_C(x+z,y)=d_C(x,y)$. Therefore, if $\Sigma$ is a cross-section of the proper open cone $C'=C/[0]_C$, then $(\Sigma,d_{C'})$ is isometric to a Hilbert geometry with dimension $n-\dim [0]_C$. We call $n-\dim [0]_C$ the *dimension of the Hilbert geometry on* $C$. The horoboundary and the detour metric {#sec:3} ====================================== To prove Theorem \[thm:1.1\], we use results from [@Wa1] on the horofunction boundary of the Hilbert geometry. Following [@BGS], recall that if $(X,d)$ is an unbounded locally-compact metric space, then to each $z\in X$ a continuous function $\phi_{z,b}\colon X\to\mathbb{R}$, with $$\phi_{z,b}(x)=d(x,z)-d(b,z)\mbox{\quad for }x\in X,$$ is assigned. Here $b\in X$ is a fixed *base-point*. The map $\Phi\colon X\to C(X)$ given by $\Phi(z)=\phi_{z,b}$ embeds $X$ into the space of continuous functions on $X$, which is endowed with the topology of uniform convergence on compact subsets of $X$. 
The *horoboundary* of $X$ is defined by $$X(\infty)=\overline{\Phi(X)}\setminus\Phi(X),$$ and its members are called *horofunctions*. Since $X$ is locally compact, the space $X\cup X(\infty)$ is a compactification of $X$, and so every unbounded sequence $(z_k)_k$ in $X$ has a subsequence such that $\phi_{z_k,b}$ converges to a point in $X(\infty)$. It is easy to verify that, for any alternative base-point $b'$, $$\phi_{z_k,b'}(x) = \phi_{z_k,b}(x)-\phi_{z_k,b}(b').$$ Therefore, if $\phi_{z_k,b}$ converges to $\xi$, then $\phi_{z_k,b'}$ converges to $\xi-\xi(b')$. If $r\colon [0,\infty)\to X$ is a geodesic ray, then $\phi_{r(t),r(0)}(x)$ is non-increasing and bounded below by $-d(r(0),x)$. Therefore, each geodesic ray yields a horofunction. More generally, one obtains a horofunction from each “almost-geodesic”, a concept introduced by Rieffel [@Rief]. A map $\gamma\colon T\to X$, with $T$ an unbounded subset of $\mathbb{R}$ and $0\in T$, is called an *almost-geodesic* if for each $\epsilon>0$ there exists $M\geq 0$ such that $$\label{eq:3.1} |d(\gamma(t),\gamma(s))+d(\gamma(s),\gamma(0))-t|<\epsilon \mbox{\quad for all $s,t\in T$ with $t\geq s\geq M$}.$$ Rieffel [@Rief] proved that, for any almost-geodesic $\gamma\colon T\to X$, the quantity $d(x,\gamma(t))-d(b,\gamma(t))$ converges to some limit $\xi(x)$ for each $x\in X$. In this case, we say that $\gamma$ *converges to* $\xi$. A horofunction $\xi\in X(\infty)$ is called a *Busemann point* if there exists an almost-geodesic converging to it. We denote by $X_B(\infty)$ the set of all Busemann points in $X(\infty)$. It was shown in [@AGW] that the Busemann points can also be obtained as limits of $\epsilon$-almost-geodesics. 
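The simplest concrete example is $X = \mathbf{R}$ with the Euclidean metric and base-point $b = 0$: as $z \to +\infty$ the functions $\phi_{z,b}$ converge to the horofunction $\xi(x) = -x$, and to $\xi(x) = x$ as $z \to -\infty$. A small numerical sketch of this (ours, for illustration only):

```python
def phi(z, x, b=0.0):
    """phi_{z,b}(x) = d(x,z) - d(b,z) on the real line, d(x,y) = |x - y|."""
    return abs(x - z) - abs(b - z)

# For z far to the right, phi_{z,0}(x) already equals the horofunction -x,
# and for z far to the left it equals +x.
for x in [-3.0, 0.5, 10.0]:
    assert abs(phi(1e9, x) - (-x)) < 1e-6
    assert abs(phi(-1e9, x) - x) < 1e-6
```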
Recall that a sequence $(x_k)_k$ in $X$ is called an *$\epsilon$-almost-geodesic* if $$d(x_0,x_1)+\dots+d(x_m,x_{m+1})\leq d(x_0,x_{m+1})+\epsilon \mbox{\quad for all $m\geq 0$}.$$ In fact, it was shown in [@AGW Proposition 7.12] that every almost-geodesic has a subsequence that is an $\epsilon$-almost-geodesic for some $\epsilon>0$, and, conversely, every unbounded $\epsilon$-almost-geodesic has a subsequence that is an almost-geodesic. For any two Busemann points $\xi$ and $\eta$, we define the *detour cost* by $$\begin{aligned} H(\xi,\eta) &=\sup_{W\ni\xi} \inf_{x\in W} d(b,x)+\eta(x), \end{aligned}$$ where the supremum is taken over all neighbourhoods $W$ of $\xi$ in the compactification $X\cup X(\infty)$. This concept originated in [@AGW]. An equivalent definition is $$\begin{aligned} \label{eq:3.2} H(\xi,\eta) &= \inf_{\gamma} \liminf_{t\to\infty} d(b,\gamma(t))+\eta(\gamma(t)), \end{aligned}$$ where the infimum is taken over all paths $\gamma:T\to X$ converging to $\xi$. The following is a special case of [@Wa0 Lemma 3.3]. \[lem:3.2\] Let $\gamma$ be an almost-geodesic converging to a Busemann point $\xi$. Then, $$\lim_{t\to\infty} d(b,\gamma(t)) + \xi(\gamma(t)) = 0.$$ Moreover, for any horofunction $\eta$, $$\lim_{t\to\infty} d(b,\gamma(t)) + \eta(\gamma(t)) = H(\xi,\eta).$$ Let $\epsilon>0$ and assume that $b=\gamma(0)$. As $\gamma$ is an almost-geodesic we have that $$d(\gamma(0),\gamma(t)) \geq d(\gamma(0),\gamma(s)) + d(\gamma(s),\gamma(t)) - \epsilon$$ for all $s$ and $t$ sufficiently large, with $s\le t$. 
Subtracting $d(\gamma(0),\gamma(t))$ from both sides and letting $t$ tend to infinity gives $$0\geq d(\gamma(0),\gamma(s))+\xi(\gamma(s))-\epsilon \mbox{\quad for all $s$ sufficiently large.}$$ This implies that $$\limsup_{s\to\infty} d(\gamma(0),\gamma(s))+\xi(\gamma(s))\leq 0.$$ As $d(\gamma(0),\gamma(s))+d(\gamma(s),\gamma(t))-d(\gamma(0),\gamma(t))\geq 0$ for all $t$, we see that $$\liminf_{s\to\infty} d(\gamma(0),\gamma(s))+\xi(\gamma(s))\geq 0,$$ which proves the first statement when $b=\gamma(0)$. The equality for general $b$ follows from the fact that if $\gamma$ converges to $\xi$ with respect to the base-point $\gamma(0)$, then $\gamma$ converges to $\xi'=\xi-\xi(b)$ with respect to the base-point $b$. Observe that $$\begin{aligned} \eta(x) \le \Big( d(x,z) - d(b,z) \Big) + \Big( d(b,z) + \eta(z) \Big) \quad \text{for all $x$ and $z$ in $X$}.\end{aligned}$$ It follows that $$\begin{aligned} \eta(x) \le \xi(x) + H(\xi,\eta) \quad \text{for all $x$ in $X$}.\end{aligned}$$ So, $$\begin{aligned} d(b,\gamma(t)) + \eta(\gamma(t)) \le d(b,\gamma(t)) + \xi(\gamma(t)) + H(\xi,\eta) \quad \text{for all $t$}.\end{aligned}$$ Taking the limit supremum as $t$ tends to infinity and using the first part of the lemma, we get that $$\begin{aligned} \limsup_{t\to\infty} d(b,\gamma(t)) + \eta(\gamma(t)) \le H(\xi,\eta).\end{aligned}$$ The lower bound on the limit infimum follows from (\[eq:3.2\]): $$H(\xi,\eta) \le \liminf_{t\to\infty} d(b,\gamma(t)) + \eta(\gamma(t)) .$$ Thus, the second statement is proved. In particular, we see that $\lim_{k\to\infty} d(b,\gamma(t))+\eta(\gamma(t))$ is independent of the almost geodesic $\gamma$ converging to $\xi$. By symmetrising the detour cost, the set of Busemann points can be equipped with a metric. For $\xi$ and $\eta$ in $X_B(\infty)$, we define $$\label{eq:3.4} \delta(\xi,\eta) = H(\xi,\eta)+H(\eta,\xi)$$ and call $\delta$ the *detour metric*. This construction appears in [@AGW Remark 5.2]. 
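Returning for a moment to the $\epsilon$-almost-geodesic condition recalled above: it is straightforward to test on concrete sequences. The following toy check on the Euclidean line (our illustration, not from the cited sources) shows that monotone sequences satisfy it with $\epsilon = 0$, while backtracking violates it:

```python
def is_eps_almost_geodesic(points, dist, eps):
    """Check d(x_0,x_1) + ... + d(x_m,x_{m+1}) <= d(x_0,x_{m+1}) + eps
    for every m >= 0."""
    total = 0.0
    for m in range(len(points) - 1):
        total += dist(points[m], points[m + 1])
        if total > dist(points[0], points[m + 1]) + eps:
            return False
    return True

euclid = lambda a, b: abs(a - b)

# Monotone points on the real line form a 0-almost-geodesic ...
assert is_eps_almost_geodesic([0.0, 1.0, 2.5, 7.0], euclid, 0.0)
# ... whereas backtracking accumulates excess length.
assert not is_eps_almost_geodesic([0.0, 2.0, 1.0, 3.0], euclid, 0.5)
```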
\[prop:3.3\] The function $\delta\colon X_B(\infty)\times X_B(\infty)\to [0,\infty]$ is a metric, which might take the value $+\infty$. Clearly $\delta$ is symmetric. Let $\gamma$ and $\lambda$ be almost-geodesics converging, respectively, to $\xi$ and $\eta$. From the triangle inequality we get that $$d(b,\gamma(t))+d(\gamma(t),\lambda(s))-d(b,\lambda(s))\geq 0.$$ Letting $s$ tend to infinity, we find that $$\label{eq:3.5} d(b,\gamma(t))+\eta(\gamma(t))\geq 0,$$ so that $H(\xi,\eta)\geq 0$. We conclude that $\delta$ is non-negative. From Lemma \[lem:3.2\], it follows that $\delta(\xi,\xi)= 0$ for all $\xi\in X_B(\infty)$. Now suppose that $\delta(\xi,\eta)=0$. To show that $\xi=\eta$, we let $x\in X$. By (\[eq:3.5\]) we know that, for all $s$, $$\begin{aligned} d(x,\gamma(t))-d(b,\gamma(t)) &\leq& d(x,\lambda(s))+d(\lambda(s),\gamma(t))+\eta(\gamma(t))\\ &=& d(x,\lambda(s))+\big{(}d(\lambda(s),\gamma(t))- d(b,\gamma(t))\big{)} \\ & & \quad\quad + \big{(}d(b,\gamma(t))+\eta(\gamma(t))\big{)}. \end{aligned}$$ Taking the limit as $t$ tends to infinity gives, by Lemma \[lem:3.2\], $$\begin{aligned} \xi(x) &\leq& d(x,\lambda(s))+\xi(\lambda(s))+ H(\xi,\eta)\\ &=& \big{(}d(x,\lambda(s))-d(b,\lambda(s))\big{)} +\big{(} d(b,\lambda(s)) + \xi(\lambda(s))\big{)} + H(\xi,\eta). \end{aligned}$$ Subsequently letting $s$ tend to infinity shows that $$\xi(x) \leq \eta(x)+ H(\eta,\xi) + H(\xi,\eta) = \eta(x).$$ Interchanging the roles of $\xi$ and $\eta$ gives the desired equality. It remains to show that $\delta$ satisfies the triangle inequality. Let $\xi$, $\eta$, and $\nu$ be Busemann points with respective almost-geodesics $\gamma$, $\lambda$, and $\kappa$. Clearly $$\begin{aligned} d(b,\gamma(t)) +d(\gamma(t),\kappa(u)) - d(b,\kappa(u)) & \leq d(b,\gamma(t)) + d(\gamma(t),\lambda(s)) -d(b,\lambda(s)) \\ & \quad + d(b,\lambda(s)) +d(\lambda(s),\kappa(u)) - d(b,\kappa(u)). 
\end{aligned}$$ Taking the limits as $u$, $s$, and then $t$ tend to infinity, we get that $H(\xi,\nu)\leq H(\xi,\eta)+H(\eta,\nu)$, which implies that $\delta$ satisfies the triangle inequality. Note that we can partition $X_B(\infty)$ into disjoint subsets such that $\delta(\xi,\eta)$ is finite if and only if the horofunctions $\xi$ and $\eta$ lie in the same subset. We call these subsets the *parts* of the horofunction boundary of $(X,d)$, and $\delta$ is a genuine metric on each one. Consider an isometry $g$ from one metric space $(X,d)$ to another $(Y,d')$. We can extend $g$ to the horofunction boundary $X(\infty)$ of $X$ as follows: $$g(\xi)(y) = \xi(g^{-1}(y))-\xi(g^{-1}(b')),$$ for all $\xi\in X(\infty)$ and $y\in Y$. Here $b'$ is the base-point in $Y$. Observe that if $\lambda\colon T\to X$ is a path converging to a horofunction $\xi$, then $g\circ\lambda$ converges to $g(\xi)$ in the horofunction compactification $Y\cup Y(\infty)$ of $Y$. If, furthermore, $\lambda$ is an almost-geodesic, then $g\circ \lambda$ is an almost-geodesic in $(Y,d')$. The following lemma shows that $g$ is an isometry on $X_B(\infty)$ with respect to the detour metric. The first part has appeared in [@AGW Remark 5.2]. \[lem:3.4\] The detour metric $\delta$ is independent of the base-point. Moreover, if $g\colon (X,d)\to (Y,d')$ is an isometry of $X$ onto $Y$, then $$\delta(\xi,\eta)=\delta(g(\xi),g(\eta)) \mbox{\quad for all $\xi,\eta\in X(\infty)$}.$$ Let $\xi$ and $\eta$ be horofunctions with respect to the base-point $b\in X$. Now let $\hat b\in X$ and note that $\hat\xi = \xi -\xi(\hat b)$ and $\hat\eta=\eta-\eta(\hat b)$ are the corresponding horofunctions when using $\hat b$ as the base-point instead of $b$. 
So, $$\begin{aligned} H(\hat\xi,\hat\eta) &=\inf_\gamma \liminf_{t\to\infty} d(\hat b,\gamma(t)) + \hat\eta(\gamma(t)) \\ &=\inf_\gamma \liminf_{t\to\infty} d(\hat b,\gamma(t)) - d(b,\gamma(t)) +d(b,\gamma(t))+ \eta(\gamma(t))- \eta(\hat b)\\ &= \xi(\hat b) + H(\xi,\eta) -\eta(\hat b),\end{aligned}$$ where each time the infimum is taken over all paths converging to $\xi$. This implies that $\delta(\hat\xi,\hat\eta)=\delta(\xi,\eta)$. Let $b'$ be the base-point of $Y$. We have $$\begin{aligned} H(g(\xi),g(\eta)) &= \inf_\gamma \liminf_{t\to\infty} d'(b',g(\gamma(t)))+\eta(\gamma(t))-\eta(g^{-1}(b')) \\ &= \inf_\gamma \liminf_{t\to\infty} d(g^{-1}(b'),\gamma(t))-d(b,\gamma(t)) +d(b,\gamma(t)) + \eta(\gamma(t)) - \eta( g^{-1}(b'))\\ & = \xi(g^{-1}(b')) + H(\xi,\eta) - \eta( g^{-1}(b')),\end{aligned}$$ where the infimum is as before. We conclude that $g$ preserves the detour cost. Parts of the horoboundary of a Hilbert geometry {#sec:4} =============================================== In this section, we describe the detour metric on parts of the horoboundary of a Hilbert geometry using the characterisation of its Busemann points obtained in [@Wa1]. To present the results it is convenient to work with Hilbert’s projective metric. We begin by recalling some notions from [@Wa1]. Given an open cone $C\subseteq\mathbb{R}^{n+1}$, the *open tangent cone at* $z\in\partial C$ is defined by $$\tau(C,z) = \{\lambda(x-z)\in\mathbb{R}^{n+1}\colon \mbox{$\lambda>0$ and $x\in C$}\}.$$ Observe that $C=\tau(C,0)$. \[lem:tangent\_cone\_formula\] For each $ z\in\partial C$ we have $$\tau(C,z)=\{u\in\mathbb{R}^{n+1}\colon \langle \phi, u\rangle>0\mbox{ for all } \phi\in C^*\setminus\{0\} \mbox{ with }\langle \phi, z\rangle =0\}.$$ The inclusion $\subseteq$ is clear. To prove the opposite inclusion let $Z=\{ \phi\in C^*\setminus\{0\} \colon \langle \phi, z\rangle =0\mbox{ and }\|\phi \|=1\}$. 
Suppose that $u\in\mathbb{R}^{n+1}$ is such that $\langle \phi, u\rangle > 0 $ for all $\phi \in Z$. As $Z$ is compact, $\alpha =\min_{\phi \in Z} \langle \phi, u\rangle>0$. Let $0<\epsilon <\alpha/\|u\|$ and $W_1=\{\psi \in C^*\setminus\{0\}\colon \|\psi\|=1 \mbox{ and }\|\psi -\phi\|<\epsilon \mbox{ for some }\phi\in Z\}$. Then $$\begin{aligned} \langle \psi, u\rangle & = &\langle \phi,u\rangle +\langle \psi,u\rangle-\langle \phi,u\rangle \\ & \geq & \langle \phi,u\rangle -\|\psi-\phi\|\|u\|\\ & \geq & \alpha -\epsilon \|u\|>0, \end{aligned}$$ where $\phi\in Z$ with $\|\psi-\phi\|<\epsilon$. Now let $W_2=\{\psi \in C^*\setminus\{0\}\colon \|\psi\|=1 \mbox{ and }\|\psi -\phi\|\geq \epsilon \mbox{ for all }\phi\in Z\}$. Denote $\beta =\min_{\psi\in W_2} \langle \psi,u\rangle$ and $\gamma = \min_{\psi\in W_2} \langle \psi,z\rangle>0$. Note that it suffices to show that $x=\mu u +z\in C$ for some $\mu>0$. Take $0<\mu<|\gamma/\beta|$ and remark that if $\psi\in W_2$, then $$\langle \psi,\mu u\rangle + \langle \psi,z\rangle\geq \mu\beta +\gamma>0.$$ We also have that $$\langle \psi,\mu u\rangle + \langle \psi,z\rangle>0$$ for all $\psi\in W_1$. Thus, $x\in C$ and we are done. Given a collection $\Pi$ of open cones in $\mathbb{R}^{n+1}$, we write $$\Gamma(\Pi) = \{\tau(T,z)\colon \mbox{$T\in\Pi$ and $z\in\partial T$}\}.$$ Starting with $C$ and iterating this operation gives a collection of open cones $$\mathcal{T}(C)=\bigcup_{k=1}^n \Gamma^k(\{C\}),$$ where $\Gamma^{k+1}(\{C\})=\Gamma(\Gamma^k(\{C\}))$ for all $k$. In particular, if $C\subseteq\mathbb{R}^{n+1}$ is an open polyhedral cone with $N$ facets, then there exist $N$ facet defining functionals $\psi_1,\ldots,\psi_N\in C^*$ such that $$C=\{x\in\mathbb{R}^{n+1}\colon \psi_i(x)>0\mbox{ for }i=1,\ldots,N\}.$$ In this case it can be shown that $$\mathcal{T}(C)=\{C_I\colon \mbox{$I$ is a non-empty subset of $\{1,\ldots,N\}$}\},$$ where $C_I=\{x\in\mathbb{R}^{n+1}\colon \psi_i(x)>0\mbox{ for all }i\in I\}$. 
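In the polyhedral case the tangent cones are thus determined by active sets of facet functionals. A minimal sketch, assuming (hypothetically) that $C$ is the positive orthant in $\mathbb{R}^3$, so that the facet-defining functionals are the coordinate functionals $\psi_i(x)=x_i$:

```python
# Assumed example (not from the paper): C is the positive orthant in R^3,
# with facet-defining functionals psi_i(x) = x_i, here stored as row vectors.
PSIS = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def dot(phi, x):
    return sum(p * xi for p, xi in zip(phi, x))

def active_set(z, tol=1e-12):
    """Indices I of the facet functionals vanishing at the boundary point z;
    the open tangent cone tau(C, z) is then C_I = {x : psi_i(x) > 0, i in I}."""
    vals = [dot(psi, z) for psi in PSIS]
    assert all(v >= -tol for v in vals), "z must lie in the closed cone"
    return frozenset(i for i, v in enumerate(vals) if abs(v) <= tol)

def in_tangent_cone(u, z):
    return all(dot(PSIS[i], u) > 0 for i in active_set(z))

z = (1.0, 1.0, 0.0)                           # only the third functional vanishes
print(active_set(z))                          # frozenset({2})
print(in_tangent_cone((-5.0, -5.0, 1.0), z))  # True: only x_3 > 0 is required
print(active_set((0.0, 0.0, 0.0)))            # all indices: tau(C, 0) = C
```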
It is instructive to determine the Busemann points that come from straight-line geodesics in the Hilbert geometry. In fact, we will need this result later. \[lem:4.1\] If $C\subseteq\mathbb{R}^{n+1}$ is an open cone, and $\gamma(t)=(1-t)z+ty$, with $t\in (0,1]$, is a straight-line geodesic connecting $z\in\partial C$ to $y\in C$, then $$\begin{aligned} \label{eq:4.3} \lim_{t\to 0} d_C(x,\gamma(t))-d_C(b,\gamma(t)) & = \mathcal{RF}_C(x,z)-\mathcal{RF}_C(b,z)\\ &\quad \quad +\mathcal{F}_{\tau(C,z)}(x,y)-\mathcal{F}_{\tau(C,z)}(b,y) \end{aligned}$$ for each $x\in C$. It follows from (\[eq:2.2\]) that $$\label{eq:4.4} \begin{split} \lim_{t\to 0} \mathcal{RF}_C(x,\gamma (t))-\mathcal{RF}_C(b,\gamma (t)) & = \lim_{t\to 0}\quad \log \sup_{\phi\in C^*\setminus\{0\}} \frac{(1-t)\langle \phi,z\rangle + t\langle \phi,y\rangle} {\langle \phi,x\rangle} \\ & \quad \quad - \log\sup_{\phi\in C^*\setminus\{0\}} \frac{(1-t)\langle \phi,z\rangle+ t\langle \phi,y\rangle} {\langle \phi,b\rangle} \\ & =\mathcal{RF}_C(x,z)-\mathcal{RF}_C(b,z)\\ \end{split}$$ for each $x\in C$. By [@Wa1 Lemma 3.3] we also know that $$\mathcal{F}_C(x,\gamma(t))-\mathcal{F}_{\tau(C,z)}(x,\gamma(t))\to 0 \mbox{\quad as $t\to 0$},$$ for all $x\in C$. It follows from (\[eq:2.2\]) and Lemma \[lem:tangent\_cone\_formula\] that $$\label{eq:4.5} \begin{split} \mathcal{F}_{\tau(C,z)}(x,\gamma(t)) & = \log \sup_{\phi\in C^*\setminus\{0\}, \langle \phi,z\rangle =0} \frac{\langle \phi,x\rangle}{\langle \phi, (1-t)z+ty\rangle}\\ & = \log\frac{1}{t} + \log \sup_{\phi\in C^*\setminus\{0\}, \langle \phi,z\rangle =0} \frac{\langle \phi,x\rangle}{\langle \phi,y\rangle}\\ & = \log\frac{1}{t} + \mathcal{F}_{\tau(C,z)}(x,y). \\ \end{split}$$ Thus, $$\lim_{t\to 0} \mathcal{F}_C(x,\gamma(t))-\mathcal{F}_C(b,\gamma(t)) = \mathcal{F}_{\tau(C,z)}(x,y)-\mathcal{F}_{\tau(C,z)}(b,y).$$ Combining this with (\[eq:4.4\]) completes the proof. 
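The scaling step (\[eq:4.5\]) can be checked numerically in the simplest polyhedral setting (an assumed example, not taken from the paper): for $C$ the positive orthant in $\mathbb{R}^3$ and $z=(1,1,0)$, the tangent cone $\tau(C,z)$ is the half-space $\{x_3>0\}$, whose dual is the ray spanned by the third coordinate functional, so the Funk metric there reduces to $\mathcal{F}(x,y)=\log(x_3/y_3)$.

```python
import math

# Assumed example: C is the positive orthant in R^3 and z = (1, 1, 0),
# so tau(C, z) = {x : x_3 > 0} and its Funk metric is F(x, y) = log(x_3/y_3).
def funk_halfspace(x, y):
    return math.log(x[2] / y[2])

z, y, x = (1.0, 1.0, 0.0), (2.0, 3.0, 4.0), (5.0, 6.0, 7.0)
for t in (0.5, 0.1, 0.01):
    gamma_t = tuple((1 - t) * zi + t * yi for zi, yi in zip(z, y))
    # F(x, gamma(t)) = log(1/t) + F(x, y), since <phi, z> = 0 kills the z-part
    assert abs(funk_halfspace(x, gamma_t)
               - (math.log(1 / t) + funk_halfspace(x, y))) < 1e-12
print("identity verified on the orthant example")
```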
To describe all the Busemann points, one needs not only the tangent cones, but all the cones in $\mathcal{T}(C)\setminus\{C\}$. According to [@Wa1 Lemma 4.1], a sequence $(x_k)_k\subseteq C$ is an $\epsilon$-almost-geodesic with respect to Hilbert’s projective metric on $C$ if and only if it is an $\epsilon'$-almost-geodesic under both the Funk metric and the reverse-Funk metric on $C$. For $T\in\mathcal{T}(C)$ and $y\in T$, let $f_{T,y}\colon T\to \mathbb{R}$ be defined by $$f_{T,y}(x)=\mathcal{F}_T(x,y)-\mathcal{F}_T(b,y),$$ where $b\in C$ is the fixed base-point. Likewise, for $z\in\overline{C}$, we define $r_{C,z}\colon C\to \mathbb{R}$ by $$r_{C,z}(x)=\mathcal{RF}_C(x,z)-\mathcal{RF}_C(b,z).$$ Following [@Wa1], we say that a sequence $(x_k)_k\subseteq C$ *converges to $f\colon C\to\mathbb{R}$ in the Funk sense on $C$* if $(f_{C,x_k})_k$ converges pointwise to $f$ on $C$. Similarly, a sequence $(x_k)_k\subseteq C$ *converges to $r\colon C\to\mathbb{R}$ in the reverse-Funk sense* if $(r_{C,x_k})_k$ converges pointwise to $r$ on $C$. Much like the Busemann points in the Hilbert geometry, we can consider Busemann points in the Funk and in the reverse-Funk geometries on $C$, which are defined as follows. A function $f\colon C\to\mathbb{R}$ is a *Busemann point in the Funk geometry on $C$* if there exists a Funk metric $\epsilon$-almost-geodesic $(x_k)_k\subseteq C$ which converges to $f$ in the Funk sense and $f$ is not of the form $\mathcal{F}_C(\cdot,p)-\mathcal{F}_C(b,p)$ for $p\in C$. Similarly, a function $r\colon C\to\mathbb{R}$ is a *Busemann point in the reverse-Funk geometry on $C$* if there exists a reverse-Funk metric $\epsilon$-almost-geodesic $(x_k)_k\subseteq C$ which converges to $r$ in the reverse-Funk sense and $r$ is not of the form $\mathcal{RF}_C(\cdot,p)-\mathcal{RF}_C(b,p)$ for $p\in C$. The following proposition, proved in [@Wa1 Proposition 2.5], describes the Busemann points of the reverse-Funk geometry. 
\[prop:4.2\] Let $C\subseteq\mathbb{R}^{n+1}$ be a proper open cone. The set of Busemann points of the reverse-Funk geometry on $C$ is $$\mathcal{B}_\mathcal{RF}=\{r_{C,x}\colon x\in\partial C\setminus\{0\}\}.$$ Moreover, a sequence $(x_k)_k$ in a cross-section of $C$ converges in the reverse-Funk sense to $r_{C,x}\in \mathcal{B}_\mathcal{RF}$ if and only if it converges to a positive multiple of $x$ in the norm topology. The Busemann points of the Funk geometry are more complicated as the following result [@Wa1 Proposition 3.11] shows. \[prop:4.3\] If $C\subseteq\mathbb{R}^{n+1}$ is a proper open cone, then the set of Busemann points of the Funk geometry on $C$ is $$\mathcal{B}_\mathcal{F} = \{f_{T,p\mid C} \colon \mbox{$T\in\mathcal{T}(C)\setminus\{C\}$ and $p\in T$}\}.$$ Each Busemann point in the Hilbert geometry is the sum of a Busemann point in the Funk geometry and a Busemann point in the reverse-Funk geometry. Indeed, the following characterisation was obtained in [@Wa1 Section 4]. \[thm:4.4\] If $C\subseteq\mathbb{R}^{n+1}$ is a proper open cone, then the set of Busemann points of the Hilbert geometry on $C$ is $$\mathcal{B}=\{r_{C,x}+f_{T,p\mid C} \colon \mbox{$x\in\partial C\setminus\{0\}$, $T\in\mathcal{T}(\tau(C,x))$, and $p\in T$}\}.$$ Moreover, for each $r_{C,x}+f_{T,p\mid C}\in \mathcal{B}$ there exists an almost-geodesic that converges in the norm topology to $x$ and in the Funk sense to $f_{T,p}$. Thus, if $(x_k)$ is an almost-geodesic converging to $g=r_{C,x}+f_{S,p\mid C}\in \mathcal{B}$, and $h=r_{C,y}+f_{T,q\mid C}\in\mathcal{B}$, then, by Lemma \[lem:3.2\], $$\begin{split} H(g,h) & = \lim_{k\to\infty} d_C(b,x_k)+h(x_k)\\ & = \lim_{k\to\infty} \Big{(}\mathcal{RF}_C(b,x_k)+r_{C,y}(x_k)\Big{)} +\ \Big{(}\mathcal{F}_C(b,x_k)+f_{T,q}(x_k)\Big{)}. \end{split}$$ We will consider the two parenthesised expressions separately. 
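The splitting of $d_C$ into its reverse-Funk and Funk summands used above can be made concrete in the simplest assumed example (the positive orthant, not taken from the paper), where the standard formulas $\mathcal{F}(x,y)=\log\max_i x_i/y_i$ and $\mathcal{RF}(x,y)=\log\max_i y_i/x_i$ apply:

```python
import math

# Assumed example: on the positive orthant, Hilbert's projective metric is
# the sum of the Funk metric and the reverse-Funk metric.
def funk(x, y):
    return math.log(max(xi / yi for xi, yi in zip(x, y)))

def reverse_funk(x, y):
    return funk(y, x)

def hilbert(x, y):
    return funk(x, y) + reverse_funk(x, y)

x, y = (1.0, 2.0, 3.0), (3.0, 2.0, 1.0)
print(hilbert(x, y))  # 2*log(3) = 2.1972...
# The sum is projective: scaling an argument shifts F and RF by opposite
# amounts, so hilbert() is unchanged.
assert abs(hilbert(tuple(5 * xi for xi in x), y) - hilbert(x, y)) < 1e-12
```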
Recall that for $x\in\overline{C}$ the *face of* $x$ is defined as the set containing those points $y\in\overline{C}$ such that the straight-line through $x$ and $y$ contains an open line segment $I$ with $x\in I$ and $I\subseteq \overline{C}$. \[prop:4.5\] Let $C\subseteq\mathbb{R}^{n+1}$ be a proper open cone and $x,y\in\partial C\setminus\{0\}$. Let $(x_k)_k$ be an almost-geodesic with respect to the reverse-Funk metric converging to $x$ in the norm topology. If $y$ lies in the face $F$ of $x$, then $$\label{eq:4.9} \lim_{k\to\infty} \mathcal{RF}_C(b,x_k)+r_{C,y}(x_k) = \mathcal{RF}_C(b,x)+\mathcal{RF}_F(x,y)-\mathcal{RF}_C(b,y).$$ The limit is $\infty$ otherwise. Each almost-geodesic (under the reverse-Funk metric) in $C$ that converges in the norm topology to $x$ converges to $r_{C,x}$ in the reverse-Funk sense by [@Wa1 Proposition 2.5]. Therefore, we can argue just as in the proof of Lemma \[lem:3.2\], replacing the metric $d$ by $\mathcal{RF}_C(\cdot,\cdot)$, to conclude that $$\lim_{k\to\infty} \mathcal{RF}_C(b,x_k)+r_{C,y}(x_k)$$ is independent of the (reverse-Funk metric) almost-geodesic $(x_k)_k$ converging to $x$ in norm. Let us consider $(z_k)_k$ with $z_k = \frac{1}{k}b +(1-\frac{1}{k})x$ for all $k\geq 1$. As every straight-line segment is a geodesic under the reverse-Funk metric, $(z_k)_k$ is an almost-geodesic. Note that as $C$ is a proper cone, $C^*$ has non-empty interior. Therefore there exists $\psi\in C^*$ such that $\langle \psi,y\rangle =1$ and $\langle \psi,z\rangle >0$ for all $z\in \overline{C}\setminus\{0\}$. Define $$u=\frac{x}{\langle \psi,x\rangle} \mbox{\quad and\quad } u_k=\frac{z_k}{\langle \psi,z_k\rangle} \quad\mbox{for each $k\geq 1$}.$$ Recall that $\mathcal{RF}_C(\alpha v,\beta w)=\log(\beta/\alpha)+\mathcal{RF}_C(v,w)$ for all $\alpha,\beta>0$. 
Therefore $$\begin{aligned} \lim_{k\to\infty} r_{C,y}(z_k) & = & \lim_{k\to\infty} -\log \langle\psi,z_k\rangle + \mathcal{RF}_C(u_k,y) -\mathcal{RF}_C(b,y)\\ & = & \lim_{k\to\infty} -\log \langle\psi,z_k\rangle + \log \frac{|w_ky|}{|w_k u_k|} -\mathcal{RF}_C(b,y),\end{aligned}$$ where $w_k$ is the point in the intersection of the straight line through $u_k$ and $y$ with $\partial C$ on the same side of $y$ as $u_k$. Suppose that $y=\lambda x$ for some $\lambda>0$. So, $u=y$ and each $u_k$ lies on the straight-line segment connecting $b'=b/\langle \psi,b\rangle$ and $y$. In this case, obviously, $$\frac{|w_ky|}{|w_ku_k|}\to 1$$ as $k$ tends to infinity. Moreover, $\mathcal{RF}_C(b,z_k)$ converges to $\mathcal{RF}_C(b,x)$ and $-\log \langle \psi,z_k\rangle$ converges to $-\log \langle \psi,x\rangle =\log \lambda$ as $k$ tends to infinity. Since $\mathcal{RF}_C(b,x)=\mathcal{RF}_C(b,y)-\log \lambda$, equality (\[eq:4.9\]) holds in this case. Now suppose that $y\in F$ and $y\neq \lambda x$ for all $\lambda>0$. So, $y\neq u$ and $y$ is in the face of $u$, since $u$ has the same face as $x$. Therefore we can define $w$ to be the point in the intersection of $\partial C$ with the straight line through $y$ and $u$ that is on the same side of $y$ as $u$, and farthest away from $y$. Since $y$ is in the face of $u$, we know that $w\neq u$. Thus, $$\begin{aligned} \lim_{k\to\infty} r_{C,y}(z_k) & = & \lim_{k\to\infty} -\log \langle\psi,z_k\rangle + \mathcal{RF}_C(u_k,y) -\mathcal{RF}_C(b,y)\\ & = & -\log \langle\psi,x\rangle + \log \frac{|wy|}{|wu|}-\mathcal{RF}_C(b,y)\\ & = & \mathcal{RF}_F(x,y) -\mathcal{RF}_C(b,y).\end{aligned}$$ As $\mathcal{RF}_C(b,z_k)$ converges to $\mathcal{RF}_C(b,x)$ as $k$ tends to infinity, equality (\[eq:4.9\]) holds. Finally, suppose $y$ is not in the face of $x$. So, $w=u$ and $$\frac{|w_ky|}{|w_ku_k|}\to\infty\mbox{\quad as }k\to\infty.$$ This completes the proof. 
Given an open cone $C\subseteq \mathbb{R}^{n+1}$ and a base-point $b\in C$, we define for $x\in C$ a function $j_{C,x}\colon\mathbb{R}^{n+1}\to \mathbb{R}$ by $$j_{C,x}(y)=\frac{M(y/x;C)}{M(b/x;C)}\mbox{\quad for }y\in\mathbb{R}^{n+1}.$$ It follows from (\[eq:2.2\]) that $j_{C,x}$ is convex. Also note that $f_{C,x}(y) = \log j_{C,x}(y)$ for all $x,y\in C$. We recall several concepts from convex analysis; the reader may consult [@Beer] for details. The *epi-graph* of a convex function $f\colon\mathbb{R}^{n+1}\to\mathbb{R}$ is given by $$\mathrm{epi}(f) =\{(x,\alpha)\in\mathbb{R}^{n+1}\times\mathbb{R} \colon f(x)\leq\alpha\}.$$ The epi-graph is a convex set and can be used to define a topology on the space $\Lambda(\mathbb{R}^{n+1})$ of proper, lower semi-continuous, convex functions on $\mathbb{R}^{n+1}$ as follows. A sequence $(f_k)_k$ in $\Lambda(\mathbb{R}^{n+1})$ is said to converge in the *epi-graph topology* to $f$ if the epi-graphs $\mathrm{epi}(f_k)$ converge to $\mathrm{epi}(f)$ in the Painlevé-Kuratowski topology. Here a sequence of closed sets $(A_k)_k$ in $\mathbb{R}^{n+1}\times \mathbb{R}$ converges to $A$ in the *Painlevé-Kuratowski topology* if $$\mathrm{Ls} A_k :=\bigcap_{k\geq 0}\Big{(}\overline{\bigcup_{i\geq k} A_i}\Big{)}$$ and $$\mathrm{Li} A_k := \bigcap_{(k_i), k_i\to\infty} \Big{(}\overline{\bigcup_{i\geq 0}A_{k_i}}\Big{)}$$ satisfy $A=\mathrm{Li} A_k=\mathrm{Ls} A_k$. We write $j^*_{C,x}\colon\mathbb{R}^{n+1}\to\mathbb{R}\cup\{\infty\}$ to denote the *Legendre-Fenchel transform* of $j_{C,x}$, so $$j^*_{C,x}(\phi)=\sup_{y\in\mathbb{R}^{n+1}}\langle \phi,y\rangle - j_{C,x}(y) \mbox{\quad for $\phi\in\mathbb{R}^{n+1}$}.$$ The Legendre-Fenchel transform is a homeomorphism on the space $\Lambda(\mathbb{R}^{n+1})$ with respect to the epi-graph topology [@Beer Proposition 7.2.11]. 
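A one-dimensional, finite-grid illustration (ours; it is far simpler than the functional-analytic setting of the paper) of the Legendre-Fenchel transform, using the fact that $f(y)=y^2/2$ is its own transform:

```python
def lf_transform(ys, f, phis):
    """Discretised Legendre-Fenchel transform f*(phi) = sup_y (phi*y - f(y)),
    approximated by a maximum over the grid points ys (one dimension only)."""
    return [max(phi * y - f(y) for y in ys) for phi in phis]

def f(y):
    return 0.5 * y * y                              # f(y) = y^2/2 is self-dual

ys = [-3 + 6 * i / 2000 for i in range(2001)]       # grid on [-3, 3]
phis = [-1.0, 0.0, 1.0, 2.0]
fstar = lf_transform(ys, f, phis)
for phi, val in zip(phis, fstar):
    assert abs(val - 0.5 * phi * phi) < 1e-3        # f* = f, up to grid error
print(round(fstar[3], 3))  # 2.0
```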
Furthermore it was proved in [@Wa1 Lemma 3.15] that if $T\subseteq\mathbb{R}^{n+1}$ is an open cone, then for each $x\in T$ we have that $$j^*_{T,x}(\phi) = \left\{\begin{array}{ll} 0 & \mbox{if $\phi\in\{\psi\in T^*\colon M(b/x;T) \langle \psi,x\rangle\leq 1\}$}\\ \infty &\mbox{otherwise}. \end{array}\right.$$ For $T\in\mathcal{T}(C)$ and $x\in T$, define $$U_{T,x}=\{ \psi\in T^*\colon M(b/x;T)\langle \psi,x\rangle > 1\}.$$ \[prop:4.6\] Let $f_{S,p\mid C}$ and $f_{T,q\mid C}$ be Busemann points of the Funk geometry on a proper open cone $C\subseteq\mathbb{R}^{n+1}$, with $S$ and $T$ in $\mathcal{T}(C)\backslash\{C\}$. Then $$\inf_{(x_k)_k}\liminf_{k\to\infty} \mathcal{F}_C(b,x_k)+ f_{T,q}(x_k) = \left\{ \begin{array}{ll} \mathcal{F}_S(b,p)+\mathcal{F}_T(p,q)-\mathcal{F}_T(b,q), &\mbox{if $S\subseteq T$,}\\ \infty, & \mbox{otherwise,} \end{array}\right.$$ where the infimum is taken over all sequences in $C$ converging to $f_{S,p\mid C}$ in the Funk sense on $C$. Let $(x_k)_k$ be any sequence in $C$ converging to $f_{S,p}$ in the Funk sense. By [@Wa1 Lemma 4.15], $j_{C,x_k}$ converges to $j_{S,p}$ in the epi-graph topology, and so $j^*_{C,x_k}$ converges to $j^*_{S,p}$. Let $y\in C^*$ be such that $j^*_{S,p}(y)= \infty$. The properties of epi-convergence imply that $j^*_{C,x_k}(y)$ converges to $\infty$. Therefore, $y\in U_{C,x_k}$ for $k$ large enough. Observe that $$\begin{aligned} \mathcal{F}_C(b,x_k) + f_{T,q}(x_k) &= \log\Big( M(b/x_k;C) \sup_{z\in T^*} \frac{{\langle{z},{x_k}\rangle}}{{\langle{z},{q}\rangle}} \frac{1}{M(b/q;T)} \Big) \\ \label{eqn:logsup} &= \log\Big( \sup_{z\in T^*\cap U_{C,x_k}} \frac{1}{{\langle{z},{q}\rangle}} \frac{1}{M(b/q;T)} \Big).\end{aligned}$$ Suppose that $S$ is not a subset of $T$. Then $T^*$ is not a subset of $S^*$ and we can consider a point $y\in T^* \backslash S^*$. As $j^*_{S,p}(\alpha y)= \infty$ for all $\alpha>0$, we know that $\alpha y\in U_{C,x_k}$ for $k$ large enough. 
So, $$\begin{aligned} \liminf_{k\to\infty} \mathcal{F}_C(b,x_k)+ f_{T,q}(x_k) \geq \log\frac{1}{{\langle{\alpha y},{q}\rangle}}\frac{1}{M(b/q;T)}.\end{aligned}$$ But $\alpha$ can be chosen to be as small as we like, and so, in this case, $$\begin{aligned} \liminf_{k\to\infty} \mathcal{F}_C(b,x_k)+ f_{T,q}(x_k) = \infty.\end{aligned}$$ Now suppose that $S \subseteq T$. For any $y\in U_{S,p}$ we know that $j^*_{S,p}(y)= \infty$. Thus, as before, $y\in U_{C,x_k}$ for all $k$ large enough. Therefore, from (\[eqn:logsup\]), $$\begin{aligned} \liminf_{k\to\infty} \mathcal{F}_C(b,x_k)+ f_{T,q}(x_k) &\geq \log\Big( \sup_{z\in T^*\cap U_{S,p}} \frac{1}{{\langle{z},{q}\rangle}} \frac{1}{M(b/q;T)} \Big) \\ & = \log\Big( M(b/p;S) \sup_{z\in T^*} \frac{{\langle{z},{p}\rangle}}{{\langle{z},{q}\rangle}} \frac{1}{M(b/q;T)} \Big) \\ & = \mathcal{F}_S(b,p) + \mathcal{F}_T(p,q) - \mathcal{F}_T(b,q).\end{aligned}$$ We now wish to show that this bound can be attained by a judicious choice of the sequence $(x_k)_k$. Since $S\in\mathcal{T}(C)\backslash\{C\}$, there exists a finite sequence of cones $(S_k)_{1\le k\le N}$ such that $S_k\in\Gamma(\{S_{k-1}\})$ for all $1<k\le N$, and $S_1=C$ and $S_N=S$. Let $x\in\partial S_{N-1}$ be such that $S_N = \tau(S_{N-1}, x)$. Define the constant sequence $x_k = p$, for all $k\in\mathbb{N}$. Obviously, $(x_k)$ converges to $f_{S,p}$ in the Funk sense on $S_N$. Let $(w_k)_k$ be a sequence of points in $S_{N-1}$ such that $W=\bigcup_k \{w_k\}$ is dense in $S_{N-1}$ and contains the basepoint $b$. 
For each $k\in\mathbb{N}$, let $y_k= (1-\lambda_k) x + \lambda_k x_k$, where the sequence $(\lambda_k)_k$ of positive reals is chosen so that, for each $k\in\mathbb{N}$, $$\begin{aligned} \label{eqn:busemann2} y_k &\in S_{N-1}, \qquad\text{and}\\ \label{eqn:busemann3} \Big|\mathcal{F}_{S_{N-1}}(w,y_k) - \mathcal{F}_{S_N}(w,y_k)\Big| &< \frac{1}{k}, \qquad\text{for all $w\in\{w_0,\dots,w_k\}$}.\end{aligned}$$ Inclusion (\[eqn:busemann2\]) holds when $\lambda_k$ is small enough, and, by [@Wa1 Lemma 3.3], the same is true for (\[eqn:busemann3\]). By [@Wa1 Lemma 3.1], $$\begin{aligned} \label{eqn:busemann6} \mathcal{F}_{S_N}(w,y_k) &= \mathcal{F}_{S_N}(w,x_k) - \log\lambda_k, \qquad \text{for all $k\in\mathbb{N}$ and $w\in S_N$}.\end{aligned}$$ Let $w\in W$. For $k\in\mathbb{N}$ large enough, both $b$ and $w$ are in $\{w_0,\dots,w_k\}$. So, applying (\[eqn:busemann3\]) and (\[eqn:busemann6\]) twice each, we get $$\Big|\mathcal{F}_{S_{N-1}}(w,y_k) - \mathcal{F}_{S_{N-1}}(b,y_k) - \mathcal{F}_{S_N}(w,x_k) + \mathcal{F}_{S_N}(b,x_k) \Big| < \frac{2}{k}.$$ We conclude that $\mathcal{F}_{S_{N-1}}(w,y_k) - \mathcal{F}_{S_{N-1}}(b,y_k)$ converges to $f_{S,p}(w)$ as $k$ tends to infinity. Since this holds for all $w$ in a dense subset of $S_{N-1}$, we see that $(y_k)$ converges to $f_{S,p}$ in the Funk sense on $S_{N-1}$. Since $x\in[0]_S$ and $S\subseteq T$, we have that $x\in[0]_T$. Therefore, by [@Wa1 Lemma 3.1] again, $$\begin{aligned} \label{eqn:busemann7} \mathcal{F}_T(y_k,q) &= \mathcal{F}_T(x_k,q) + \log\lambda_k, \qquad \text{for all $k\in\mathbb{N}$}.\end{aligned}$$ We combine (\[eqn:busemann3\]), (\[eqn:busemann6\]), and (\[eqn:busemann7\]) to get $$\begin{aligned} \mathcal{F}_{S_{N-1}}(b,y_k) + \mathcal{F}_T(y_k,q) < \mathcal{F}_{S_N}(b,x_k)+ \mathcal{F}_T(x_k,q) + \frac{1}{k},\end{aligned}$$ for all $k\in\mathbb{N}$. 
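In more detail, taking $w=b$ in (\[eqn:busemann3\]) and (\[eqn:busemann6\]), the two $\log\lambda_k$ terms produced by (\[eqn:busemann6\]) and (\[eqn:busemann7\]) cancel: $$\begin{aligned} \mathcal{F}_{S_{N-1}}(b,y_k) + \mathcal{F}_T(y_k,q) &< \Big(\mathcal{F}_{S_N}(b,y_k)+\frac{1}{k}\Big) + \mathcal{F}_T(y_k,q)\\ &= \Big(\mathcal{F}_{S_N}(b,x_k)-\log\lambda_k\Big) + \Big(\mathcal{F}_T(x_k,q)+\log\lambda_k\Big)+\frac{1}{k}\\ &= \mathcal{F}_{S_N}(b,x_k)+ \mathcal{F}_T(x_k,q) + \frac{1}{k}.\end{aligned}$$ 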
We can iterate the above argument to get a sequence $(z_k)$ in $C$ such that $z_k$ converges to $f_{S,p\mid C}$ in the Funk sense on $C$, and such that $$\begin{aligned} \mathcal{F}_C(b,z_k) + \mathcal{F}_T(z_k,q) < \mathcal{F}_S(b,p) + \mathcal{F}_T(p,q) + \frac{N}{k},\end{aligned}$$ for all $k\in\mathbb{N}$. Taking the limit inferior and subtracting $\mathcal{F}_{T}(b,q)$, we get $$\begin{aligned} \liminf_{k\to\infty} \mathcal{F}_C(b,z_k)+ f_{T,q}(z_k) & \leq \mathcal{F}_S(b,p) + \mathcal{F}_T(p,q) - \mathcal{F}_T(b,q).\end{aligned}$$ Reasoning exactly as in the proof of Lemma \[lem:3.2\] with $\mathcal{F}_C$ for $d$ gives the following result. \[lem:4.7.2\] If $f_{S,p\mid C}$ and $f_{T,q\mid C}$ are Busemann points of the Funk geometry on a proper open cone $C\subseteq\mathbb{R}^{n+1}$, with $S$ and $T$ in $\mathcal{T}(C)\backslash\{C\}$, and $(z_k)_k$ is an almost-geodesic in $C$ with respect to the Funk metric that converges to $f_{S,p\mid C}$ in the Funk sense, then $$\lim_{k\to\infty} \mathcal{F}_C(b,z_k)+f_{T,q}(z_k) = \inf_{(x_k)_k}\liminf_{k\to\infty} \mathcal{F}_C(b,x_k)+ f_{T,q}(x_k)$$ where the infimum is taken over all sequences in $C$ converging to $f_{S,p\mid C}$ in the Funk sense on $C$. By combining Propositions \[prop:4.5\] and \[prop:4.6\], we obtain the following formula for the detour metric. \[thm:4.7\] If $g= r_{C,x} + f_{S,p\mid C}$ and $h=r_{C,y}+f_{T,q\mid C}$ are Busemann points of the Hilbert geometry on a proper open cone $C\subseteq\mathbb{R}^{n+1}$, then $$\delta(g,h)= \left\{ \begin{array}{ll} d_F(x,y)+d_S(p,q) &\mbox{if $x$ and $y$ have the same face $F$, and $S=T$,} \\ \infty & \mbox{otherwise.} \end{array}\right.$$ Using [@Wa1 Lemma 4.3] and the formulae in the proof of [@Wa1 Theorem 1.1, p. 524], we get that there exists an almost-geodesic $(x_k)_k$ in $C$ converging to $x$ in the norm topology and to $f_{S,p\mid C}$ in the Funk sense. 
Recall that each almost-geodesic under Hilbert’s projective metric is an almost-geodesic under the Funk metric and the reverse-Funk metric. Therefore we can combine Lemmas \[lem:3.2\] and \[lem:4.7.2\] and Propositions \[prop:4.5\] and \[prop:4.6\] to deduce $$\begin{aligned} \delta(g,h) & = & H(g,h)+H(h,g)\\ & = & \mathcal{RF}_F(x,y)+\mathcal{RF}_F(y,x) + \mathcal{F}_T(p,q) + \mathcal{F}_S(q,p)\\ & = & d_F(x,y)+d_S(p,q), \end{aligned}$$ if $x$ and $y$ have the same face $F$ and $S=T$. In the contrary case, we get that $\delta(g,h)=\infty$. Isometric actions on parts {#sec:5} ========================== We now analyse how isometries between polyhedral Hilbert geometries act on parts. By Lemma \[lem:3.4\], each isometry $g\colon X\to Y$ preserves the detour metric, and hence maps parts to parts. If $X$ is a Hilbert geometry, then it follows from Theorem \[thm:4.7\] that there is a one-to-one correspondence between the parts of the horoboundary of $(X,d_X)$ and pairs of the form $(F,U)$, where $F$ is a (relatively) open face of the open cone $C_X$ generated by $X$, and $U\in\mathcal{T}(\tau(C_X,z))$ for some $z$ in $F$. Moreover, the part corresponding to $(F,U)$ is isometric to $(F\times U', d_{F\times U'})$, where $U'=U/[0]_U$ and $$d_{F\times U'}((x,u),(y,v))= d_F(x,y)+d_{U'}(u,v) \mbox{\quad for all $x,y\in F$ and $u,v\in U'$}.$$ A part of a polyhedral Hilbert geometry $(X,d_X)$ is called a *vertex part* if the corresponding pair is of the form $(F_z,\tau(C_X,z))$, where $F_z$ is a ray through a vertex $z\in\partial X\subseteq\partial C_X$ of $X$. It is said to be a *facet part* if the pair is of the form $(F,\tau(C_X,z))$, where $F$ is a (relatively) open facet of $C_X$, i.e., $\dim F =n$, and $z\in F$. Note that for a facet part, $\tau(C_X,z)$ is the open half-space $\{x\in\mathbb{R}^{n+1}\colon \langle\phi,x\rangle>0\}$ with $\phi\in C_X^*$ the facet defining functional of $F$. 
The main objective of this section is to prove that an isometry between polyhedral Hilbert geometries either maps vertex parts to vertex parts, and facet parts to facet parts, or it interchanges them. Recall that, as the topology of the Hilbert metric coincides with the norm topology, isometric Hilbert geometries must have the same dimension. We start with the following basic observation. \[lem:5.1\] If $(X,d_X)$ and $(Y,d_Y)$ are polyhedral Hilbert geometries and $g\colon X\to Y$ is an isometry, then $g$ maps parts corresponding to pairs of the form $(F,\tau(C_X,z))$, with $F$ a relatively open face of the cone $C_X$ generated by $X$ and $z\in F$, to parts corresponding to pairs $(F',\tau(C_Y,z'))$, with $F'$ a relatively open face of the cone $C_Y$ generated by $Y$ and $z'\in F'$. Note that the dimension of the Hilbert geometry on an open cone $T\subseteq\mathbb{R}^{n+1}$ is equal to $n-\mathrm{dim}\,[0]_T$. Thus, for $z\in\partial X\subseteq\mathbb{R}^{n+1}$, the dimension of the Hilbert geometry on $\tau(C,z)$ is greater than the dimension of the Hilbert geometry on any other open cone in $\mathcal{T}(\tau(C,z))$. Clearly, if $F$ is a relatively open face of $C$ and $z\in F$, then $\mathrm{dim}\, [0]_{\tau(C,z)} = \mathrm{dim}\, F$. On the other hand, the Hilbert geometry on $F$ has dimension equal to $\mathrm{dim}\, F -1$. Thus, the parts corresponding to pairs $(F,\tau(C,z))$, with $F$ a relatively open face of $C$ and $z\in F$, are precisely those that have maximal dimension $n-1$. The same is true for parts of $(Y,d_Y)$ corresponding to pairs $(F',\tau(C_Y,z'))$, with $F'$ a relatively open face of the cone $C_Y$ generated by $Y$ and $z'\in F'$. As the topology of the Hilbert geometry coincides with the norm topology, it follows from Theorem \[thm:4.7\] that $g\colon X\to Y$ must preserve the dimension of the parts. This completes the proof. 
Before we start proving the main result of this section we recall, for definiteness, several basic concepts from metric geometry and prove some auxiliary statements. Given a metric space $(X,d)$ and an interval $I\subseteq\mathbb{R}$, a map $\gamma\colon I\to X$ is called a *geodesic* if $$d(\gamma(s),\gamma(t))=|s-t|\mbox{\quad for all }s,t\in I.$$ If $I=[a,b]$ with $-\infty<a<b<\infty$, the image of $\gamma$ is called a *geodesic segment* connecting $\gamma(a)$ and $\gamma(b)$. Likewise if $I=\mathbb{R}$, we call the image of $\gamma$ a *geodesic line*. A geodesic line is said to be *unique* if for each finite interval $[s,t]\subset\mathbb{R}$, the geodesic segment $\gamma([s,t])$ is the only one connecting $\gamma(s)$ and $\gamma(t)$. A subset $U\subseteq X$ is said to be *geodesically closed* if for every $u,v\in U$, every geodesic segment connecting $u$ and $v$ is contained in $U$. In the Hilbert geometry, since straight-line segments are geodesic segments, geodesically closed sets are convex. The following result is well known. \[lem:5.2\] Let $(X,d_X)$ be a Hilbert geometry. If $\ell$ is a straight-line intersecting $X$ and $\ell$ intersects $\partial X$ at an extreme point, then $\ell\cap X$ is a unique-geodesic line. Conversely, if $\Gamma$ is a unique-geodesic line in $(X,d_X)$, then $\Gamma=\ell\cap X$ for some straight-line $\ell$. The following elementary topological fact will be useful. \[lem:5.3\] Let $X\subseteq\mathbb{R}^n$ be an open bounded convex set. If $U$ is a non-empty convex subset of $X$, and $U$ is closed in $X$ and homeomorphic to $\mathbb{R}^m$, then $U$ is the intersection of $X$ with an $m$-dimensional affine space. Let $A=\mathrm{aff}\, U$. Clearly $A$ is $m$-dimensional. Since $U$ is convex and homeomorphic to $\mathbb{R}^m$, it must be open in $A$. Remark that $X\cap A$ is also open in $A$ and contains $U$. Therefore $U$ is open in $X\cap A$. 
But by assumption $U$ is also closed in $X\cap A$, and so $U=X\cap A$, since $U$ is non-empty and $X\cap A$ is connected. We say a Hilbert geometry $(X,d_X)$ is *trivial* if $X$ consists of a single point. \[prop:5.4\] Let $(Y,d_Y)$ and $(Z,d_Z)$ be non-trivial Hilbert geometries and suppose that $Y\times Z$ is equipped with the metric, $$d_{Y\times Z} ((y,z),(y',z'))= d_Y(y,y')+d_Z(z,z') \mbox{\quad for $y,y'\in Y$ and $z,z'\in Z$.}$$ Then $(Y\times Z,d_{Y\times Z})$ is not isometric to any Hilbert geometry. Let $\ell_Y\subseteq Y$ be a geodesic line such that one of its end-points is an extreme point of $Y$. Likewise let $\ell_Z\subseteq Z$ be a geodesic line with one of its end-points an extreme point of $Z$. Note that by Lemma \[lem:5.2\] both $\ell_Y$ and $\ell_Z$ are unique-geodesic lines. Obviously, $\ell_Y\times \ell_Z$ is homeomorphic to $\mathbb{R}^2$ and closed in $(Y\times Z,d_{Y\times Z})$. We now show that $\ell_Y\times \ell_Z$ is also geodesically closed. Let $(y,z)$ and $(y',z')$ be points in $\ell_Y\times \ell_Z$ and let $\Gamma$ be a geodesic segment in $Y\times Z$ connecting them. By definition of the metric $d_{Y\times Z}$, the projection $\Gamma_Y$ of $\Gamma$ to $Y$ is a geodesic segment connecting $y$ and $y'$ in $Y$. As $\ell_Y$ is a unique-geodesic line, the only geodesic segment connecting $y$ to $y'$ in $Y$ is the straight-line segment $[y,y']$. Therefore, $\Gamma_Y\subseteq \ell_Y$. By the same argument $\Gamma_Z\subseteq \ell_Z$. We conclude that $\Gamma \subseteq \ell_Y\times \ell_Z$. For the sake of contradiction suppose that $h$ is an isometry mapping $(Y\times Z,d_{Y\times Z})$ onto a Hilbert geometry $(X,d_X)$. Then $U=h(\ell_Y\times \ell_Z)$ is homeomorphic to $\mathbb{R}^2$ and closed in $(X,d_X)$. Moreover, $U$ is geodesically closed and hence convex. Thus, by Lemma \[lem:5.3\], $U$ is the intersection of $X$ with an affine plane. This implies that it is itself a Hilbert geometry. 
Note that $(\ell_Y\times \ell_Z,d_{Y\times Z})$ is isometric to $\mathbb{R}^2$ with the $\ell_1$-norm, $\|(x_1,x_2)\|_1=|x_1|+|x_2|$ for $(x_1,x_2)\in\mathbb{R}^2$. According to Foertsch and Karlsson [@FK], the only Hilbert geometry isometric to a 2-dimensional normed space is the Hilbert geometry on a $2$-simplex. In that case, however, the unit ball of the norm is hexagonal, and hence it cannot be isometric to the $\ell_1$-norm on $\mathbb{R}^2$. This is the desired contradiction. \[cor:5.5\] If $(X,d_X)$ and $(Y,d_Y)$ are polyhedral Hilbert geometries and $g\colon X\to Y$ is an isometry, then $g$ maps the collection of vertex parts and facet parts of the horoboundary of $(X,d_X)$ to the collection of vertex parts and facet parts of the horoboundary of $(Y,d_Y)$. We may consider $X$ and $Y$ to be open subsets of $\mathbb{R}^n$ for some $n\geq 1$. Let $P$ be a vertex part or facet part of the horoboundary of $(X,d_X)$. According to Theorem \[thm:4.7\], $P$ is isometric to an $(n-1)$-dimensional Hilbert geometry. Therefore the part $g(P)$ of the horoboundary of $(Y,d_Y)$ with the detour metric must also be isometric to such a geometry. If $(F,U)$ is the pair corresponding to the part $g(P)$, then by Lemma \[lem:5.1\], $F$ is a relatively open face of the cone $C_Y\subseteq \mathbb{R}^{n+1}$ generated by $Y$, and $U=\tau(C_Y,z)$ for some $z\in F$. From Proposition \[prop:5.4\] and Theorem \[thm:4.7\], it follows that either $F$ is the ray through a vertex of $Y$, in which case $g(P)$ is a vertex part, or $F$ is a relatively open facet of $C_Y$ and $\tau(C_Y,z)$ is a half-space, in which case $g(P)$ is a facet part. We will now show that there are only two types of isometries between polyhedral Hilbert geometries: namely, those that map vertex parts to vertex parts, and facet parts to facet parts, and those that interchange them. 
\[thm:5.6\] If $(X,d_X)$ and $(Y,d_Y)$ are polyhedral Hilbert geometries and $g\colon X\to Y$ is an isometry, then either $g$ maps vertex parts to vertex parts, and facet parts to facet parts, or it interchanges them. By Corollary \[cor:5.5\], it suffices to prove that if a facet part of the horoboundary of $(X,d_X)$ is mapped to a vertex part of the horoboundary of $(Y,d_Y)$ under $g$, then every facet part gets mapped to a vertex part and every vertex part gets mapped to a facet part. So, suppose that $g$ maps the facet part corresponding to $(F,\tau(C_X,z))$, with $z\in F$, to the vertex part $(F_v,\tau(C_Y,v))$, where $F_v$ is the ray through the vertex $v\in \overline Y$. Now let $F'$ be a facet adjacent to $F$. For the sake of contradiction, suppose that the facet part corresponding to the pair $(F',\tau(C_X,z'))$, with $z'\in F'$, is not mapped to a vertex part of the horoboundary of $(Y,d_Y)$. By Corollary \[cor:5.5\], its image must be a facet part of $(Y,d_Y)$. Let us denote the pair corresponding to this facet part by $(g(F'),\tau(C_Y,v'))$. Note that the vertex $v$ is adjacent to $g(F')$, as otherwise there would be a unique-geodesic line $\Gamma$ connecting $v$ to a point in $g(F')$. This would imply, however, that $g^{-1}(\Gamma)$ is a unique-geodesic line connecting points in the facets $F$ and $F'$, which is impossible by [@dlH Proposition 2]. Now let $\gamma_1\colon \mathbb{R}\to Y$ be a unique-geodesic line such that $\lim_{t\to\infty}\gamma_1(t)\in g(F')$. There exists a unique-geodesic line $\gamma_2\colon \mathbb{R}\to Y$ such that $\lim_{s\to\infty}\gamma_2(s)=v$ and $\gamma_2(0)=\gamma_1(0)$. Put $r=\gamma_1(0)$, and remark that $\mathrm{aff}\,(\gamma_1,\gamma_2)$ is 2-dimensional. Let $$(x\mid y)_r = \frac{1}{2}\Big(d(x,r)+d(y,r)-d(x,y)\Big)$$ denote the Gromov product with base-point $r$. For $i=1,2$, let $\gamma_i(\pm\infty)$ denote the limits as $s,t\to\pm\infty$, respectively. In particular, $\gamma_2(\infty)=v$. 
For each $m>0$, there exist $s_m$ and $t_m$ greater than $m$ such that the straight-line through $\gamma_1(t_m)$ and $\gamma_2(s_m)$ is parallel to the straight-line through $\gamma_1(\infty)$ and $v$. Note that there exists a constant $C_1$ such that $$\label{eq:5.1} \begin{split} d_Y(\gamma_2(s_m),\gamma_2(0)) & = \log \Big{(}\frac{|v\gamma_2(0)|}{|v\gamma_2(s_m)|}\frac{|\gamma_2(s_m)\gamma_2(-\infty)|}{|\gamma_2(0)\gamma_2(-\infty)|} \Big{)}\\ & \geq \log |v\gamma_2(0)|-\log|v\gamma_2(s_m)|\\ &\geq C_1-\log|v\gamma_2(s_m)| \end{split}$$ for all $m>0$. There also exists a constant $C_2$ such that $$\label{eq:5.2} \begin{split} d_Y(\gamma_1(t_m),\gamma_2(s_m)) & = \log [u'_m,\gamma_1(t_m),\gamma_2(s_m),v'_m]\\ & = \log \frac{|u'_m\gamma_2(s_m)|}{|u'_m\gamma_1(t_m)|} + \log |\gamma_1(t_m)v'_m| - \log |\gamma_2(s_m)v'_m|\\ &\leq C_2-\log|\gamma_2(s_m)v'_m| \end{split}$$ for all $m>0$, where $u'_m$ and $v'_m$ are the points in which the straight-line through $\gamma_1(t_m)$ and $\gamma_2(s_m)$ meets $\partial Y$, with $u'_m$ on the side of $\gamma_1(t_m)$. Substituting (\[eq:5.1\]) and (\[eq:5.2\]) into the Gromov product gives $$\limsup_{m\to\infty} 2(\gamma_1(t_m)\mid\gamma_2(s_m))_r\geq \limsup_{m\to\infty} d_Y(\gamma_1(t_m),\gamma_1(0))+\log \frac{|\gamma_2(s_m)v'_m|}{|\gamma_2(s_m)v|}+C_3,$$ for some constant $C_3$. By construction $$\frac{|\gamma_2(s_m)v'_m|}{|\gamma_2(s_m)v|}$$ is constant for large $m$. Since $d_Y(\gamma_1(t_m),\gamma_1(0))\to\infty$ as $m$ tends to $\infty$, we find that $$\limsup_{m\to\infty} 2(\gamma_1(t_m)\mid\gamma_2(s_m))_r = \infty.$$ Note that $g^{-1}$ is an isometry that maps $Y$ onto $X$ and $$(g^{-1}(\gamma_1(t))\mid g^{-1}(\gamma_2(s)))_{g^{-1}(r)} = (\gamma_1(t)\mid\gamma_2(s))_r$$ for each $s$ and $t$. 
Thus, $$\label{eq:5.3} \limsup_{m\to\infty} 2(g^{-1}(\gamma_1(t_m))\mid g^{-1}(\gamma_2(s_m)))_{g^{-1}(r)} =\infty.$$ As $g$ maps the facet part $(F', \tau(C_X,z'))$ to the facet part $(g(F'),\tau(C_Y,v'))$ and the facet part $(F,\tau(C_X,z))$ to the vertex part $(F_v,\tau(C_Y,v))$, it follows from Lemmas \[lem:4.1\] and \[lem:5.2\] that $g^{-1}(\gamma_1(t_m))$ converges to $x\in F'$ and $g^{-1}(\gamma_2(s_m))$ converges to $y\in F$ as $m$ tends to $\infty$. As the straight-line segment $[x,y]\not\subseteq \partial X$, we deduce from [@KN Theorem 5.2] that $$\limsup_{m\to\infty} 2(g^{-1}(\gamma_1(t_m)) \mid g^{-1}(\gamma_2(s_m)))_{g^{-1}(r)} <\infty,$$ which contradicts (\[eq:5.3\]). We can reason in the same way from $F'$, and conclude that $g$ maps each facet part to a vertex part. It remains to show that $g$ maps vertex parts to facet parts. Again we argue by contradiction. So, let $P$ be a vertex part of $(X,d_X)$ corresponding to $(F_v,\tau(C_X,v))$, and suppose that $g$ maps $P$ to a vertex part $(F'_u,\tau(C_Y,u))$ of $(Y,d_Y)$. There exists a unique-geodesic line $\Gamma_{p}\subseteq X$ connecting $v$ to a point $p\in F$, where $F$ is a facet of $C_X$ whose closure does not contain $v$. We already know that the facet part $(F,\tau(C_X,p))$ of $(X,d_X)$ is mapped to a vertex part $(F'_w,\tau(C_Y,w))$ of $(Y,d_Y)$. The image of $\Gamma_{p}$ under $g$ is a unique-geodesic line, $\Gamma'_{p}$, which connects $u$ to $w$ in $(Y,d_Y)$ by Lemmas \[lem:4.1\] and \[lem:5.2\]. This implies that $u$ and $w$ do not lie in the same closed facet of $Y$, and hence $\Gamma'_{p}$ must be the straight-line segment $(u,w)$ in $Y$ for each $p\in F$, which contradicts the fact that $g$ is one-to-one. We shall prove that every isometry between polyhedral Hilbert geometries that maps vertex parts to vertex parts, and hence facet parts to facet parts, is a collineation. 
In addition, we shall see that isometries that interchange vertex parts and facet parts only exist between two $n$-simplices with $n\geq 2$. Isometries that map vertex parts to vertex parts ================================================ We first show that if an isometry between polyhedral Hilbert geometries maps vertex parts to vertex parts, then it admits a continuous extension to the norm boundary of its domain. \[lem:6.1\] Let $(X,d_X)$ and $(Y,d_Y)$ be polyhedral Hilbert geometries and $g\colon X\to Y$ be an isometry. If $g$ maps vertex parts to vertex parts, then $g$ extends continuously to $\partial X$. Let $n=\dim X=\dim Y$. For $m\leq n$, let $X_m$ be the union of the relative open faces of $\overline{X}$ with dimension at least $m$. In particular, $X_n=X$. We use an inductive argument with the following hypothesis: the map $g$ extends continuously to $X_m$, and every straight-line segment $(v,x)\subseteq X_m$ with $v$ a vertex of $X$ is mapped onto a straight-line segment $(g(v),y)$ in $\overline{Y}$, where $g(v)$ is the vertex of $Y$ corresponding to the part that is the image under $g$ of the part of $v$. To see that the assertion is true for $m=n$, remark that $(v,x)$ is (part of) a unique-geodesic line, and hence $g((v,x))$ is a straight-line segment $(w,y)$ with $w\in\partial Y$, by Lemma \[lem:5.2\]. Let $(v_k)_k$ be a sequence in $(v,x)$ converging to $v$. It follows from Lemma \[lem:4.1\] that $(v_k)_k$ converges in the horofunction compactification to a Busemann point of the form $r_{C_X,v} +f_{\tau(C_X,v),p}$ for some $p\in X$. Thus, by assumption, $(g(v_k))_k\subseteq (w,y)$ must converge to $r_{C_Y,g(v)}+f_{\tau(C_Y,g(v)),q}$ for some $q\in Y$. As $g(v_k)$ converges to $w$, it follows that $g(v)=w$. Now suppose the assertion is true for some $m\in\{1,\dots,n\}$. Let $F$ be a relative open face of $\overline{X}$ of dimension $m-1$. Fix a vertex $v_F$ of $X$ not lying in $\overline F$. 
For each $x\in F$, consider the straight-line segment $(v_F,x)$, which, by our choice of $v_F$, is contained in $X_m$. By the induction hypothesis, $g$ maps $(v_F,x)$ onto a straight-line segment $(g(v_F),y)$. Define $g$ on $F$ by $g(x)=y$. We claim that this extension of $g$ to $X_{m-1}$ is continuous. Let $(x_k)_{k}$ be a sequence of points in $X_{m-1}$ converging to some point $x$ of $F$. Without loss of generality we may assume that $(x_k)_k$ lies within $F\cup X_m$. Any point $z\in (g(v_F), g(x))$ is the image under $g$ of a point $u\in(v_F,x)$. Moreover, we can find a sequence $(u_k)_k$ in $X_m$ converging to $u$ such that $u_k \in (v_F, x_k)$ for all $k$. By one part of the induction hypothesis, $g(u_k) \in (g(v_F), g(x_k))$ for all $k$. By the other part, $g(u_k)$ converges to $z= g(u)$, since $u$ is in $X_m$. Therefore, every limit point $y'$ of $(g(x_k))_k$ satisfies $z\in(g(v_F), y']$. By letting $z$ approach $g(x)$, we conclude that $y'=g(x)$, and hence $g$ is continuous on $X_{m-1}$. To complete the induction step, let $(v,x)\subseteq X_{m-1}$ be a straight-line segment with $v$ a vertex of $X$. Suppose $s,t\in (v,x)$ and $s\in (v,t)$. Let $(s_k)_k$ and $(t_k)_k$ be sequences in $X$ with $s_k\in (v,t_k)$ for all $k$, and such that $s_k\to s$ and $t_k\to t$ as $k\to\infty$. By the induction hypothesis, the straight-line segment $(v,t_k)$ is mapped onto $(g(v),g(t_k))$, so that $g(s_k)\in (g(v),g(t_k))$ for all $k$. As $g$ is continuous on $X_{m-1}$, we conclude that $g(s)\in [g(v),g(t)]$. Thus, the image of $(v,x)$ under $g$ is contained in a straight-line segment $(g(v),y)$ for some $y\in\overline{Y}$. Moreover, as $g$ is continuous, $g((v,x))$ must be connected, and hence it is a straight-line segment. We also need the following two lemmas. \[lem:6.2\] Let $U\subseteq\mathbb{R}^n$ be an $n$-dimensional compact convex set. 
If $x_0,\ldots,x_n\in\partial U$ form an $n$-simplex, then for each $u\in U$, there exists $x_m$ such that $\ell_{u,x_m}$ intersects $\mathrm{aff}\,(\{x_0,\ldots,x_n\}\setminus\{x_m\})$ at a point in $U$. Write $S=\mathrm{conv}\,(x_0,\ldots,x_n)$ to denote the $n$-simplex, and for $m=0,\ldots,n$ define $A_m=\mathrm{aff}\,(\{x_0,\ldots,x_n\}\setminus\{x_m\})$. If $u\in S$, then $\ell_{u,x_m}$ intersects $A_m$ at a point in $\mathrm{conv}(\{x_0,\ldots,x_n\}\setminus\{x_m\})\subseteq U$. On the other hand, if $u\not\in S$, then for each $k$ we let $H_k$ be the closed half-space containing $x_k$ with boundary $A_k$. Obviously, $S$ is the intersection of these half-spaces, and so there exists $m\in\{0,\dots,n\}$ such that $u$ is not in $H_m$. Since $x_m$ is in $H_m$ and $u$ is not, the intersection of $\ell_{u,x_m}$ and $A_m$ lies in $[u,x_m]$. But $[u,x_m]$ is a subset of $U$ since $U$ is convex. \[lem:6.3\] Let $(X,d_X)$ and $(Y,d_Y)$ be polyhedral Hilbert geometries and $g\colon X\to Y$ be an isometry having a continuous extension to $\partial X$. If $x_0,\ldots,x_m\in\partial X$ are vertices of $X$ and $u\in \mathrm{aff}\,(x_0,\ldots,x_m)\cap \overline{X}$, then $g(u)\in\mathrm{aff}\,(g(x_0),\ldots,g(x_m))$. We use induction on $m$. The case $m=1$ is a direct consequence of Lemma \[lem:5.2\]. Now suppose that the assertion holds for all $m<k$. Let $x_0,\ldots,x_k$ be vertices of $X$. By removing points we may assume that $\mathrm{conv}\,(x_0,\ldots,x_k)$ is a $k$-simplex. By Lemma \[lem:6.2\], there exists $k^*$ such that $\ell_{u,x_{k^*}}$ intersects $\mathrm{aff}\,(\{x_0,\ldots,x_k\}\setminus\{x_{k^*}\})$ at some point $z$ in $\overline{X}$. By the induction hypothesis $g(z)$ is in $\mathrm{aff}\,(\{g(x_0),\ldots,g(x_k)\}\setminus\{g(x_{k^*})\})$. Let $(v_i)_i$ be a sequence in $X$ converging to $u$. Since $g$ extends continuously to the boundary and $g(\ell_{v_i,x_{k^*}}\cap X) = \ell_{g(v_i),g(x_{k^*})}\cap Y$, we find that $g(z)\in \ell_{g(u),g(x_{k^*})}$. 
Thus, $g(u)$ is an affine combination of $g(z)$ and $g(x_{k^*})$, and hence contained in $\mathrm{aff}\,(g(x_0),\ldots,g(x_k))$. The next theorem shows that every isometry between polyhedral Hilbert geometries mapping vertex parts to vertex parts is a collineation. \[thm:6.4\] Let $(X,d_X)$ and $(Y,d_Y)$ be polyhedral Hilbert geometries and $g\colon X\to Y$ be an isometry. If $g$ and $g^{-1}$ extend continuously to the boundary, then $g$ is a collineation. We will use induction on $n=\dim X=\dim Y$. Assume that $X$ is 1-dimensional. Let $a$ and $b$ be the points of $\partial X$, and let $x$ be any point in $X$. Then $a$, $b$, $x$ form a projective basis for $\mathbb{P}^1$. Hence there exists a unique collineation $h$ that coincides with $g$ on $a$, $b$ and $x$. Let $y\in X$ be between $x$ and $b$. As $g$ extends continuously to $\partial X$, we must have that $g(y)$ is between $g(x)$ and $g(b)$. Since $h$ preserves cross-ratios, $$[g(a),g(x),g(y),g(b)]=[a,x,y,b]=[g(a),g(x),h(y),g(b)].$$ This equality uniquely determines $h(y)$ and hence $g$ and $h$ agree at $y$. By interchanging the roles of $a$ and $b$ we conclude that $g$ and $h$ agree on $X$. Now assume that the assertion is true for all $k<n$. Then we can find $n+1$ vertices $x_0, \dots,x_n$ of $\overline{X}$ that form an $n$-simplex, which we denote by $S$. Choose a point $y$ in the interior of $S$. The points $x_0, \dots, x_n, y$ form a projective basis for $\mathbb{P}^n$. Note that, since $y$ is not in $\mathrm{aff}\,(\{x_0,\ldots,x_n\}\setminus\{x_m\})$ for any $m$, we can apply Lemma \[lem:6.3\] to $g^{-1}$ and conclude that $g(y)$ is not in the affine hull of $\{g(x_0),\ldots,g(x_n)\}\setminus\{g(x_m)\}$ for any $m$. A similar argument shows that $g(x_i)$ is not in the affine hull of $\{g(x_0), \dots,g(x_n)\}\setminus\{g(x_i)\}$ for any $i$. It follows that $g(x_0), \dots,g(x_n),g(y)$ form a projective basis for $\mathbb{P}^n$. 
Therefore, there is a unique collineation $h$ agreeing with $g$ at $x_0, \dots,x_n$ and $y$. For each $i\in\{0,\dots,n\}$, define $$L_i = \overline{X} \cap \ell_{x_i,y} \mbox{\quad and\quad } H_i = \overline{X} \cap \mathrm{aff}\,(\{x_0, \dots,x_n\}\setminus\{x_i\}).$$ Since $y$ is in the interior of $S$, we have that $L_i$ intersects $H_i$ at a single point $z_i$. Note that $g$ maps $L_i$ to $L'_i = \overline{Y}\cap\ell_{g(y),g(x_i)}$. For $i=0,\ldots,n$, let $$H'_i = \overline{Y} \cap \mathrm{aff}\,(\{g(x_0), \dots,g(x_n)\}\setminus\{g(x_i)\}).$$ By applying Lemma \[lem:6.3\] to both $g$ and $g^{-1}$ we also know that $g(H_i)=H_i'$ for all $i$. Therefore, $g(z_i)$ is the unique point of intersection of $L'_i$ and $H_i'$. The collineation $h$ also maps $L_i$ to $L'_i$ and $H_i$ to $H'_i$, and therefore $g(z_i)=h(z_i)$. Let $X_i$ and $Y_i$ denote the relative interiors of $H_i$ and $H'_i$, respectively. Equipped with the restrictions of $d_X$ and $d_Y$ respectively, these sets become Hilbert geometries. Moreover, by Lemma \[lem:6.3\], the map $g$ restricted to $X_i$ is an isometry of $X_i$ onto $Y_i$. Of course, $g_{|X_i}$ extends continuously to $\partial X_i$ and its inverse extends continuously to $\partial Y_i$. So, we may apply the induction hypothesis to deduce that $g_{|X_i}$ is a collineation. As $g$ and $h$ agree on $\{x_0, \dots,x_n,z_i\}\setminus\{x_i\}$, which forms a projective basis for the projective closure of $X_i$, we have that $g$ and $h$ agree on $H_i$, for each $i$. Let $p$ be in the interior of $S$. Define $p_0=\ell_{p,x_0}\cap H_0$ and $p_1=\ell_{p,x_1}\cap H_1$. Since $g$ and $h$ agree on both $x_0$ and $p_0$, they both map $\ell_{p_0,x_0}\cap\overline{X}$ to $\ell_{g(p_0),g(x_0)}\cap\overline{Y}$. Similarly, they both map $\ell_{p_1,x_1}\cap\overline{X}$ to $\ell_{g(p_1),g(x_1)}\cap\overline{Y}$. We conclude that $g(p)=h(p)$, and hence $g$ and $h$ agree on the whole of $S$. 
Let $\{u_0,\dots,u_n\}$ be a set of $n+1$ vertices of $X$ such that $S'= \mathrm{conv}\,(u_0,\dots,u_n)$ is an $n$-simplex. By the basis exchange property for affine spaces, there exists an $i$ such that $u_i, x_1,\dots,x_n$ form an $n$-simplex. Let $q$ be in the interior of $\mathrm{conv}\,(u_i, x_1,\dots,x_n)$. The straight line $\ell_{q,u_i}$ intersects the relative interior of the facet $\mathrm{conv}\,(x_1,\ldots,x_n)$ of $S$. Therefore $\ell_{q,u_i}$ also intersects the interior of $S$. Thus, $g$ and $h$ agree on at least three distinct points $u$, $v$, and $w$ of $\ell_{q,u_i} \cap X$. Let $a$ be the point different from $u_i$ where $\ell_{q,u_i}$ intersects $\partial X$. There exists a unique collineation $f$ that agrees with $g$ on $a$, $u$, and $u_i$. The map $f$ is an isometry on $(a,u_i)$ and hence $f$ and $g$ agree on $\ell_{q,u_i} \cap X$. Since $u$, $v$, $w$ form a projective basis for the $1$-dimensional projective space containing $\ell_{q,u_i}$, we find that $f$ and $h$ agree on $\ell_{q,u_i} \cap X$, and hence $g$ and $h$ also agree on $\ell_{q,u_i}\cap X$. Thus, we have shown that $g$ and $h$ are identical on the interior of $\mathrm{conv}\,(u_i, x_1,\dots,x_n)$. In fact, as $g$ has a continuous extension to $\partial X$, the maps $g$ and $h$ agree on $\mathrm{conv}\,(u_i, x_1,\dots,x_n)$. Now note that we can iterate this procedure and replace, one-by-one, the elements of $\{x_0,\dots,x_n\}$ with elements of $\{u_0,\dots,u_n\}$ to deduce that $g$ and $h$ are identical on $S'$. By Carathéodory’s theorem, every point in $\overline{X}$ can be written as a convex combination of $n+1$ vertices of $X$. Therefore $g$ and $h$ agree on the whole of $\overline{X}$, which shows that $g$ is a collineation. Isometries that interchange vertex and facet parts {#sec:7} ================================================== \[thm:7.1\] Let $(X,d_X)$ and $(Y,d_Y)$ be polyhedral Hilbert geometries with $\dim X=\dim Y\geq 2$. 
If there exists an isometry $g\colon X\to Y$ that maps vertex parts to facet parts, then $X$ and $Y$ are $n$-simplices. By Theorem \[thm:5.6\], we know that both $g$ and $g^{-1}$ map vertex parts to facet parts and *vice versa*. Thus, it suffices to show that $X$ is an $n$-simplex. To establish this we prove that its vertex set $V_X$ is affinely independent. If $v\in V_X$, then there exists a relative open face $F_v$ of $\overline{X}$ such that $v$ is not in $\overline F_v$. Suppose that there exists another vertex $u$ of $X$, different from $v$, that is not in $\overline F_v$. Choose $p\in F_v$. Let $\gamma\colon \mathbb{R}\to X$ and $\mu\colon\mathbb{R}\to X$ be parametrisations of the unique-geodesic lines $(p,v)$ and $(p,u)$, respectively, such that both $\gamma(t)$ and $\mu(t)$ converge to $p$ as $t$ tends to $\infty$. Note that, by Lemma \[lem:4.1\], both $\gamma$ and $\mu$ converge, as $t$ tends to $\infty$, to the same Busemann point $r_{C_X,p} +f_{\tau(C_X,p),q}$, where $q$ is any point in $X$. Thus, $g\circ\gamma$ and $g\circ\mu$ converge, as $t$ tends to $\infty$, to the same Busemann point in $Y_B(\infty)$. By assumption this Busemann point is in a vertex part, and so is of the form $r_{C_Y,w}+f_{\tau(C_Y,w),s}$, where $w$ is a vertex of $Y$ and $s\in Y$. By Lemma \[lem:3.4\], the Busemann points in $X_B(\infty)$ corresponding to $\gamma(t)$ and $\mu(t)$ with $t$ tending to $-\infty$ are mapped to Busemann points in different facet parts of $(Y,d_Y)$. Thus, $g((p,v))=(w,r)$ and $g((p,u))=(w,r')$ for some $r$ and $r'$ lying in distinct facets of $\overline{Y}$. However, by Lemma \[lem:4.1\], this implies that $g\circ\gamma$ and $g\circ\mu$ converge, as $t$ tends to $\infty$, to different Busemann points in the part of $w$, which is a contradiction. Suppose that $g$ is in $\mathrm{Isom}(X)$ and is not a collineation. By Theorem \[thm:6.4\], either $g$ or $g^{-1}$ does not extend continuously to $\partial X$. 
From Theorem \[thm:5.6\] and Lemma \[lem:6.1\], it follows that $g$ has to interchange vertex parts and facet parts and $\dim X\geq 2$. It thus follows from Theorem \[thm:7.1\] that $X$ is an $n$-simplex with $n\geq 2$. The existence of an isometry that is not a collineation on any $n$-simplex with $n\ge 2$ follows immediately from Theorem \[thm:1.2\], which will be proved in the next section. The isometry group of the simplex {#sec:8} ================================= It is known [@Nu1] that the $n$-simplex endowed with the Hilbert metric is isometric to the normed vector space $V=\mathbb{R}^{n+1}/\sim$, where $x\sim y$ if and only if $x= y+ h(1,1,\ldots,1)$ for some $h\in\mathbb{R}$, equipped with the norm $$\| x\|_{\mathrm{var}}= \max_{i} x_i - \min_{j}x_j.$$ We denote the equivalence class of $x\in\mathbb{R}^{n+1}$ by $[x]$. It is obvious that each element of $\mathbb{R}^n\rtimes \Gamma_{n+1}$, where $\Gamma_{n+1} = \sigma_{n+1}\times\langle\rho\rangle$, $\sigma_{n+1}$ acts by permuting the coordinates, and $\rho$ is the map $[x]\mapsto[-x]$, is an isometry of $(V,\|\cdot\|_{\mathrm{var}})$. By the Mazur-Ulam theorem, every isometry of $V$ is affine. Let $g\colon V\to V$ be an isometry that fixes the origin. Clearly the unit ball $B_\mathrm{var}$ of $V$ is a polyhedron, each vertex of which has exactly one representative in the set $$V_{\mathrm{var}} = \big\{ (b_0, \dots, b_n) \mid \text{$b_i \in \{0,1\}$ for all $i$} \big\} \backslash \big\{ (0,\dots,0), (1,\dots,1) \big\}.$$ This is the set of vertices of a hypercube with two diagonally opposite corners removed. We see that there are $2^{n+1}-2$ vertices. Edges of $B_\mathrm{var}$ are segments connecting vertices having representatives in $V_{\mathrm{var}}$ that differ in exactly one coordinate. Thus, there are $n+1$ edges incident to every vertex, except for those whose representative has exactly one coordinate equal to $0$ or $1$. Let $V_0$ be the set of vertices whose representative has exactly one coordinate equal to $0$, and let $V_1$ be the set of vertices whose representative has exactly one coordinate equal to $1$. 
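These counts are easy to confirm by direct enumeration. The following sketch (our own illustration, not from the paper; the helper names are ours) lists the representatives in $V_{\mathrm{var}}$ and checks that there are $2^{n+1}-2$ of them, that a vertex whose representative has exactly one coordinate equal to $0$ or $1$ is incident to $n$ edges, and that every other vertex is incident to $n+1$.

```python
from itertools import product

def vertex_reps(n):
    # representatives in V_var: 0/1 vectors of length n+1,
    # excluding (0,...,0) and (1,...,1)
    return [v for v in product((0, 1), repeat=n + 1) if 0 < sum(v) < n + 1]

def degree(v, reps):
    # neighbours differ from v in exactly one coordinate and stay in V_var
    rep_set = set(reps)
    flips = (tuple(b ^ (i == j) for j, b in enumerate(v)) for i in range(len(v)))
    return sum(1 for w in flips if w in rep_set)

n = 3
reps = vertex_reps(n)
assert len(reps) == 2 ** (n + 1) - 2          # 14 vertices for n = 3
for v in reps:
    expected = n if sum(v) in (1, n) else n + 1
    assert degree(v, reps) == expected

# for n = 2 every vertex has degree 2, so the unit ball is a hexagon
assert all(degree(v, vertex_reps(2)) == 2 for v in vertex_reps(2))
```

For $n=2$ the computation recovers the hexagonal unit ball of the $2$-simplex geometry used earlier in the proof of Proposition \[prop:5.4\].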
Since $(0,\ldots,0)$ and $(1,\ldots,1)$ are not in $V_\mathrm{var}$, each vertex in $V_0\cup V_1$ is incident to exactly $n$ edges. Since $g$ is linear, it preserves the number of edges incident to each vertex, and so we conclude that $g$ leaves $V_0 \cup V_1$ invariant. Now consider a subset $U$ of $V_0\cup V_1$ containing $n+1$ elements and having the following properties: no element of $U$ is the negative of another element in $U$, and $\sum_{[u]\in U}[u]=[0]$. It is straightforward to verify that $U$ is equal to either $V_0$ or $V_1$. Since the properties of $U$ are invariant under linear transformations, $g$ maps $V_1$ either onto itself, or onto $V_0$. As $V_1$ spans $V$, any linear map on $V$ is completely determined by its values on $V_1$. Thus, if $g$ maps $V_1$ onto itself, then $g$ is a permutation in $\sigma_{n+1}$. On the other hand, if $g$ maps $V_1$ onto $V_0$, then $g$ is the composition of a permutation in $\sigma_{n+1}$ and $\rho$, as $V_0=-V_1$. We conclude that $$\mathrm{Isom}(X)\cong \mathbb{R}^n\rtimes \Gamma_{n+1}.$$ To determine the collineation group, let $C_X\subseteq \mathbb{R}^{n+1}$ be the open cone generated by an $n$-simplex $X$ inside a hyperplane not containing the origin. Any element $A$ of $\mathrm{GL}(n+1,\mathbb{R})$ that maps $C_X$ onto itself maps the extreme rays of $C_X$ to extreme rays. As the $n+1$ vertices of $X$ span $\mathbb{R}^{n+1}$, the map $A$ is completely determined by its values on the vertices of $X$. Thus, $A$ can be uniquely represented by a product of an $(n+1)\times (n+1)$ permutation matrix and an $(n+1)\times (n+1)$ positive diagonal matrix. From this we conclude that $\mathrm{Coll}(X)\cong \mathbb{R}^n\rtimes \sigma_{n+1}$. We can go from the normed space representation of simplicial Hilbert geometries given above to the cone setting of Section \[sec:2\] by exponentiating coordinate-wise. Indeed, let $\Phi$ be given by $\Phi(x_1,\dots,x_{n+1})=(e^{x_1},\dots, e^{x_{n+1}})$. 
Then, $\Phi$ is an isometry between $(V,\|\cdot\|_{\mathrm{var}})$ and $(P_{n+1},d_{P_{n+1}})$, where $P_{n+1}$ is the interior of the standard positive cone. The map $\rho$ on $V$ corresponds to the map $\rho' = \Phi\circ\rho\circ\Phi^{-1}$ on $P_{n+1}$, which takes the coordinate-wise reciprocal. It is clear that $\rho'$ is both order-reversing and homogeneous of degree $-1$. Maps with these two properties exist on all *symmetric cones*, of which the cone $P_{n+1}$ is an example. Indeed, recall [@FaK] that a proper open cone $C$ in a finite dimensional real vector space $V$ with inner-product $\langle\cdot,\cdot\rangle$ is called *symmetric* if $\{A\in\mathrm{GL}(V)\colon A(C)=C\}$ acts transitively on $C$ and $C=C^*$, where $$C^*=\{y\in V^*\colon \langle x,y\rangle>0\mbox{ for all }x\in\overline{C}\setminus\{0\}\}$$ is the (open) dual of $C$. The *characteristic function* $\phi$ on $C$ given by $$\phi(x) =\int_{C^*} e^{-\langle x,y\rangle}dy\mbox{\quad for }x\in C,$$ is homogeneous of degree $-\dim V$, so that Vinberg’s *$*$-map*, $x\in C\mapsto x^*\in C^*$, where $x^*=-\nabla\log\phi(x)$ for $x\in C$, is homogeneous of degree $-1$. The $*$-map is order-reversing on symmetric cones; see [@Kai Proposition 3.2]. As a matter of fact, it was proved in [@Kai] that this property of the $*$-map characterises the symmetric cones among the homogeneous cones. The reader can verify that the map $\rho'$ above is the $*$-map for the positive cone. Since the $*$-map is order-reversing and homogeneous of degree $-1$, it is non-expansive in Hilbert’s projective metric on $C$; see [@Nu1]. But $(x^*)^* =x$ for all $x\in C$, so the $*$-map is actually an isometry under this metric. Composing it with the canonical projection yields an isometry of the Hilbert geometry on a section $X$ of $C$. 
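For the standard positive cone the facts just stated can be verified numerically. A minimal sketch (ours, not from the paper), using the formula $d_{P_{n+1}}(x,y)=\log\big(\max_i \tfrac{x_i}{y_i}\cdot\max_j \tfrac{y_j}{x_j}\big)$ for Hilbert's projective metric:

```python
import math
import random

def hilbert_dist(x, y):
    # Hilbert's projective metric on the open positive cone:
    # d(x, y) = log( max_i x_i/y_i * max_j y_j/x_j )
    return math.log(max(a / b for a, b in zip(x, y)) *
                    max(b / a for a, b in zip(x, y)))

def star(x):
    # the *-map of the positive cone: coordinate-wise reciprocal
    return [1.0 / a for a in x]

random.seed(0)
for _ in range(100):
    x = [random.uniform(0.1, 10.0) for _ in range(4)]
    y = [random.uniform(0.1, 10.0) for _ in range(4)]
    # isometry: d(x*, y*) = d(x, y)
    assert abs(hilbert_dist(star(x), star(y)) - hilbert_dist(x, y)) < 1e-9
    # homogeneity of degree -1: (lam * x)* = lam^{-1} * x*
    lam = random.uniform(0.5, 2.0)
    assert all(abs(a - b / lam) < 1e-9
               for a, b in zip(star([lam * c for c in x]), star(x)))
```

The check of the isometry property is, of course, only numerical evidence; the identity itself follows directly from the formula for $d_{P_{n+1}}$.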
This isometry is not a collineation except when the symmetric cone $C$ is a *Lorentz cone*, $$\begin{aligned} \Lambda_n=\{(x_1,\ldots,x_n)\in \mathbb{R}^n \colon \mbox{$x_1>0$ and $x_1^2-x_2^2-\ldots -x_n^2>0$}\},\end{aligned}$$ for some $n\geq 2$. To our knowledge there exist no other cones for which $\mathrm{Isom}\,(X)$ differs from $\mathrm{Coll}\,(X)$. In fact, we conjecture that $\mathrm{Isom}\,(X)$ and $\mathrm{Coll}\,(X)$ differ if and only if the cone generated by $X$ is symmetric and not Lorentzian, in which case we believe the isometry group is generated by the collineations and the isometry coming from the $*$-map. This is known to be true for the cone of positive-definite Hermitian matrices; see [@Mol]. [10]{} M. Akian, S. Gaubert, and C. Walsh, The max-plus Martin boundary. *Doc. Math.*, to appear, `arXiv:math.MG/0412408`. W. Ballmann, M. Gromov, and V. Schroeder, *Manifolds of nonpositive curvature*. Progress in Mathematics, **61**. Birkhäuser Boston, Inc., Boston, MA, 1985. G. Beer, *Topologies on closed and closed convex sets*. Mathematics and its Applications, **268**. Kluwer Academic Publishers Group, Dordrecht, 1993. Y. Benoist, Convexes hyperboliques et fonctions quasisymétriques. *Publ. Math. Inst. Hautes Études Sci.* **97**, (2003), 181–237. G. Birkhoff, Extensions of Jentzsch’s theorems. *Trans. Amer. Math. Soc.* **85**, (1957), 219–277. H. Busemann, Timelike spaces. *Dissertationes Math.* **53**, (1967). P. Bushell, Hilbert’s metric and positive contraction mappings in a Banach space. *Arch. Rat. Mech. Anal.* **52**, (1973), 330–338. B. Colbois and C. Vernicos, Bas du spectre et delta-hyperbolicité en géométrie de Hilbert plane. *Bull. Soc. Math. France* **134**(3), (2006), 357–381. J. Faraut and A. Korányi, *Analysis on Symmetric Cones*. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 1994. T. Foertsch and A. Karlsson, Hilbert metrics and Minkowski norms. *J. Geom.* **83**(1-2), (2005), 22–31. P. 
Funk, Über Geometrien, bei denen die Geraden die Kürzesten sind. *Math. Ann.* **101**(1), (1929), 226–237. D. Hilbert, Über die gerade Linie als kürzeste Verbindung zweier Punkte. *Math. Ann.* **46**, (1895), 91–96. P. de la Harpe, On Hilbert’s metric for simplices. In: *Geometric Group Theory*, Vol. 1 (Sussex, 1991), London Math. Soc. Lecture Note Ser. **181**, 97–119. Cambridge Univ. Press, 1993. C. Kai, A characterization of symmetric cones by an order-reversing property of the pseudoinverse maps. *J. Math. Soc. Japan* **60**(4), (2008), 1107–1134. A. Karlsson and G.A. Noskov, The Hilbert metric and Gromov hyperbolicity. *Enseign. Math.* **48**(2), (2002), 73–89. B. Lins and R. Nussbaum, Denjoy-Wolff theorems, Hilbert metric nonexpansive maps and reproduction-decimation operators. *J. Funct. Anal.* **254**(9), (2008), 2365–2386. L. Molnár, Thompson isometries of the space of invertible positive operators. *Proc. Amer. Math. Soc.*, to appear. R. D. Nussbaum, *Hilbert’s projective metric and iterated nonlinear maps*. Mem. Amer. Math. Soc. **75**, (1988). M. A. Rieffel, Group $C^*$-algebras as compact quantum metric spaces. *Doc. Math.* **7**, (2002), 605–651. C. Sabot, Existence and uniqueness of diffusions on finitely ramified self-similar fractals. *Ann. Sci. École Norm. Sup.* **30**(5), (1997), 605–673. E. Socié-Méthou, *Comportements asymptotiques et rigidités des géométries de Hilbert*. Thèse de doctorat. Univ. de Strasbourg, 2000. E. Socié-Méthou, Behaviour of distance functions in Hilbert-Finsler geometry. *Differential Geom. Appl.* **20**(1), (2004), 1–10. C. Walsh, Minimum representing measures in idempotent analysis, preprint, `arXiv:math.MG/0503716`. C. Walsh, The horofunction boundary of the Hilbert geometry. *Adv. Geom.* **8**(4), (2008), 503–529. [^1]: C. Walsh was partially supported by the joint RFBR-CNRS grant number 05-01-02807
--- abstract: 'The subject of *Polynomiography* deals with algorithmic visualization of polynomial equations, having many applications in STEM and art, see [@Kal04]-[@KalDCG]. Here we consider the polynomiography of the partial sums of the exponential series. While the exponential function is taught in standard calculus courses, it is unlikely that properties of zeros of its partial sums are considered in such courses, let alone their visualization as science or art. The Monthly article by Zemyan [@Zemyan] discusses some mathematical properties of these zeros. Here we exhibit some fractal and non-fractal [*polynomiographs*]{} of the partial sums while also presenting a brief introduction to the underlying concepts. Polynomiography establishes a different kind of appreciation of the significance of polynomials in STEM, as well as in art. It helps in the teaching of various topics at diverse levels. It also leads to new discoveries on polynomials and inspires new applications. We also present a link for the educator to get access to a demo polynomiography software together with a module that helps teach basic topics to middle and high school students, as well as undergraduates.' author: - | Bahman Kalantari\ Department of Computer Science, Rutgers University, NJ\ [email protected] date: - - title: An Invitation to Polynomiography via Exponential Series --- [**Keywords:**]{} Exponential Function, Complex Polynomial, Iterative Methods, Polynomiography. Introduction ============ Ever since introducing the term *polynomiography* for the visualization of polynomial equations via iteration functions, when encountering certain polynomials I have found it tempting to consider the shape of their *polynomiographs* in the complex plane. The word *polynomiography* is a combination of the term *polynomial*, first used in the 17th century, see Barbeau [@Bar], and the suffix *graphy*. 
Polynomiography grew out of my research into the subject of polynomial root-finding, an ancient and historic subject that continues to grow and finds new applications with every generation of mathematicians and scientists. We can create literally hundreds of polynomiographs for a single polynomial equation, even when restricted to the same portion of the complex plane. One familiar with the term *fractal*, coined by Mandelbrot many years earlier, might think polynomiography is just another name for fractal images. This, however, is not accurate and underestimates the significance of the theory and algorithms that have led to polynomiography. One reason to call an image a polynomiograph rather than a fractal is because the image may exhibit no fractal behavior no matter in what part of the Euclidean plane it is generated. It would thus not make sense to call it a fractal. After all, fractal properties that may be inherent in some iterations may not be present everywhere in an image, and an iterative method may be well behaved in certain areas of the Euclidean plane, or exhibit no fractal pattern anywhere. Consider for instance the polynomiograph of the polynomial $1+z+0.5z^2$, the quadratic partial sum of the exponential function, under the iterations of Newton’s method, shown in Figure \[Fig1\], top-left. Given a particular point $z_0$, the iterations of Newton’s method generate a sequence $z_0, z_1, z_2, \dots,$ that may or may not converge to a root. The sequence is called the *orbit* of $z_0$. The *basin of attraction* of a root is the set of all points in the plane whose orbit converges to that root. In the figure the upper half represents the basin of attraction of one root and the lower half the basin of attraction of the other root. The points on the perpendicular bisector of the line connecting the roots, the $x$-axis, do not belong to either one of the two basins. There is no fractal property in this image, no broken lines, no self-similarity. 
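The half-plane basins just described can be reproduced numerically. The following sketch (our own illustration, not part of the article; the helper names are ours) runs Newton's iteration $z_{k+1}=z_k-p(z_k)/p'(z_k)$ for $p(z)=1+z+0.5z^2$, whose roots are $-1\pm i$:

```python
def newton_orbit(z, f, df, max_iter=60, tol=1e-12):
    # Newton's method: z_{k+1} = z_k - f(z_k)/f'(z_k)
    for _ in range(max_iter):
        step = f(z) / df(z)
        z = z - step
        if abs(step) < tol:
            break
    return z

f = lambda z: 1 + z + 0.5 * z ** 2   # quadratic partial sum of exp(z)
df = lambda z: 1 + z                 # its derivative

# seeds in the upper half-plane reach -1+i, seeds in the lower half-plane -1-i
assert abs(newton_orbit(3 + 2j, f, df) - (-1 + 1j)) < 1e-8
assert abs(newton_orbit(3 - 2j, f, df) - (-1 - 1j)) < 1e-8
```

Sampling a grid of seeds and coloring each one by the root its orbit reaches produces exactly the two half-plane basins described above.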
There is chaos corresponding to orbits of the points on the bisecting line; however, this set is not a fractal set. The basins of attraction form the *Fatou* set, and the bisecting line the *Julia* set. This is an example of a non-fractal polynomiograph. On the other hand, for a polynomial of degree three with distinct roots, the corresponding Newton polynomiograph is fractal if it contains a portion of the Julia set. The Julia set is the boundary of each basin of attraction of a root and is fractal, but the Fatou set may contain more than the union of the basins of attraction. There are deep results on the Julia and Fatou sets of polynomials under the iterations of Newton’s method and other iteration functions, including their own iterations. For instance, the animation [@KS] gives a 3D depiction of the dynamics of the Fatou sets for the polynomial $z^3-1$. About fractals and the amazing mathematical properties of iterations of rational functions one can consult Beardon [@Bea], Devaney [@Dev], Mandelbrot [@Man], Milnor [@Milnor], and Kalantari [@Kalbook], the last with emphasis on iteration functions for polynomial root-finding. A polynomiograph may exhibit fractal features, in which case one can refer to it as a *fractal polynomiograph*. Even if a polynomiograph exhibits fractal behavior, it is more informative to refer to it as such, rather than as a plain fractal. Indeed the word fractal is used in very general terms. It may refer to many different types of objects, such as 2D images coming from iterations of all kinds, 3D fractals, and even objects such as trees, clouds, mountains, nature and the universe. After all, just because we may refer to an image or an object as fractal, it does not imply that we understand all its properties. In [@Kalbook] and several other articles I have described reasons in support of the definition of the term. 
While originally polynomiography was meant to represent the algorithmic visualization of a polynomial equation via a specific family of iteration functions, called the *basic family* and discussed later in the article, after many more years of experience I would like polynomiography to refer to the visualization of polynomials in broader terms, allowing the possibility of other iterative methods, even a mixture of iterative methods, 3D visualizations, or visualizations that pertain to the zeros of complex polynomials but do not necessarily arise from iterations. Based on many educational experiences, including those of educators and students who have come to experiment with polynomiography software, there is convincing evidence that the images convey meaningful mathematical attributes of polynomials and algorithmic properties that make the images interesting beyond their aesthetic beauty as art, especially to the youth and students. There have been attempts by educators to popularize fractals in education and to teach some basic properties, for instance at high schools. However, introducing fractals in a very general setting could be confusing to the youth. After all, the underlying mathematics of fractals and iterations is sophisticated. On the other hand, since solving quadratic equations is common knowledge in middle and high school, students can connect to polynomiography in an easy fashion, turning polynomials of any degree into fun objects to deal with. Studying the underlying theory of polynomiography also makes it possible to teach and learn about fractals. Polynomiography helps motivate the teaching of fractals and related material at the K-12 level and beyond in a constructive manner, also connecting geometry and algebra. The present article is an attempt to demonstrate the beauty of a well known class of polynomials, seldom considered as complex polynomials in the manner presented here. 
What makes visualization of a polynomial equation interesting, even when all its coefficients are real numbers, is to view its domain not as the real line, but as the Euclidean plane. The simplest case of polynomiography is the visualization of the basins of attraction of Newton’s method when applied to a quadratic equation, e.g. $z^2-1$, historically considered by Cayley [@cay] in 1879. Except for a shift, its polynomiograph is identical with the one shown in Figure \[Fig1\], top-left image. It is not difficult to mathematically prove this property of the basins of attraction without computer visualization. On the other hand, the analysis of the basins of attraction of $z^3-1$ under Newton’s method is very complex. It is well known that the resulting image is fractal, a fractal polynomiograph. With the advent of computers it became more plausible to understand the shape of basins of attraction, and apparently the first person who tried this and saw the surprising fractal behavior was the mathematician John Hubbard, see Gleick [@Gl]. While polynomiographs of Newton’s method for quadratics are not fractal, one can easily modify Newton’s method to get fractal polynomiographs. This modified method is a *parametrized Newton’s method*, described in more generality in the next section. The subject of fractals is significant, vastly rich and very beautiful. Mandelbrot not only coined the term fractal but undoubtedly played an enormous role in bringing fractals into view, and this in turn has resulted in many theoretical advancements and visualizations, including new kinds of algorithmic mathematical art. Polynomiography does overlap with fractals in many ways. However, it is not a subset of fractals, not in theory, nor in practice, nor in terms of its images as art or otherwise. I believe that polynomiography can play an important role in the teaching of fractals and dynamical systems at various levels. 
To support this point: in numerous personal experiences, including presentations to hundreds of middle and high school students and lectures during first-year seminars or formal courses, only a very small percentage of students have ever heard of the term “fractal.” Even those who had familiarity with fractals could only identify them as visual images that represent self-similarity. Even at universities, topics on fractals and dynamical systems are typically only offered as graduate level courses. For those who wish to teach or learn basic concepts from the theory of fractals and dynamical systems, polynomiography can provide a powerful bridge into these subject areas, as well as many others. Polynomiography appeals to students because they can connect it with a task they have learned early on, namely solving a polynomial equation. This in particular makes polynomiography effective for introducing it to K-16 students at elementary or advanced levels. As an example, consider introducing the Mandelbrot set to middle and high school students beyond just showing the aesthetic beauty of the set. We must first introduce them to Julia sets resulting from the iterations of a quadratic function. However, these iterations attempt to find fixed points, not roots. The notion of fixed points, while implicit in Newton’s method, typically is not taught in K-12, and hence is unfamiliar to students. In order to popularize fractals, first the notion of fixed points and fixed point iterations must be introduced. Having introduced these, we can then consider the task of approximating the roots of a polynomial $p(z)$ as that of finding the fixed points of the polynomial $q(z)=p(z)+z$. A fixed point $\theta$ of $q(z)$ is *attractive, repulsive* or *indifferent* according as the modulus of $q'(\theta)$ is less than one, larger than one, or equal to one. For a complex number $z=x+i y$, $i=\sqrt{-1}$, its modulus is $|z|= \sqrt{x^2+y^2}$. 
The fixed point iteration refers to computing $z_{k}=q(z_{k-1})$ for $k=1, 2, \dots$, where $z_0$ is a starting *seed*. If a fixed point is attractive, the iterations are guaranteed to converge to it, provided $z_0$ is close enough. That iterations are necessary to approximate the roots even of a quadratic equation is clear to a middle schooler who knows that the quadratic formula fails to provide a finite decimal value for the solution of $x^2-2=0$. Students can experience the behavior of iterations of quadratic functions via polynomiography. These experiences will demonstrate that not both fixed points of a quadratic can be attractive. However, Newton’s method will not fail to approximate the roots, because both of them are attractive fixed points of Newton’s iteration function. While Newton’s method does not result in fractal polynomiographs for any quadratic, fractal Julia sets do arise in the polynomiography of a quadratic under a parametrized Newton’s method, see Figure \[Fig3\]. By introducing different values of the parameter, students quickly learn the notion of attractive and repulsive fixed points and come to appreciate Newton’s method for numerically solving a quadratic equation. Students can also be introduced to the notion of open and closed sets, as well as Fatou and Julia sets. By considering polynomiography, students and teachers quickly discover the vastness of the world of polynomials, and they discover new applications of them, distinct from the traditional applications considered in standard textbooks. Among these applications one can include educational lesson plans, visual cryptography, art and much more. In my personal experiences in teaching the subject of polynomiography I am often delighted to find many creative applications that students are able to think of, including those that I did not even imagine. Why limit the applications of polynomials to the standard ones discussed in typical textbooks on algebra, calculus, or numerical analysis? 
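The contrast between the naive fixed-point iteration of $q(z)=p(z)+z$ and Newton’s method can be seen numerically in a few lines of Python. This is a minimal sketch for $p(z)=z^2-2$; the starting seeds are arbitrary choices for the illustration.

```python
# Minimal sketch: naive fixed-point iteration versus Newton's method
# for p(z) = z^2 - 2, whose roots are +/- sqrt(2).

def p(z):
    return z * z - 2.0

def newton_step(z):
    # Newton's function N(z) = z - p(z)/p'(z); both roots of p are
    # attractive fixed points of N.
    return z - p(z) / (2.0 * z)

def naive_step(z):
    # q(z) = p(z) + z recasts root-finding as fixed-point finding, but
    # here |q'(theta)| = |1 + 2*theta| > 1 at both roots, so both fixed
    # points are repulsive and orbits wander away.
    return p(z) + z

z_newton = 1.0
for _ in range(20):
    z_newton = newton_step(z_newton)

z_naive = 1.5  # start close to sqrt(2) = 1.41421...
for _ in range(6):
    z_naive = naive_step(z_naive)

print(z_newton)  # settles on sqrt(2)
print(z_naive)   # the orbit has escaped far from both roots
```

Even a seed as close as $1.5$ escapes under the naive iteration, while Newton’s orbit settles on $\sqrt{2}$ almost immediately.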
Why not think of the shape of the zeros of a polynomial, even if these do not come up in ordinary applications? Once we think of the zeros of a polynomial, its polynomiography becomes a relevant matter of curiosity, leading to new images, new discoveries, new applications, new questions, and new art. Polynomiography may be a mouthful of a word; however, it is a meaningful one. Students quickly accept it. In order to introduce polynomiography we need to consider polynomial equations over the complex plane. Since everyone is already familiar with the Cartesian coordinate system in the plane, it is easy to describe a polynomial equation as a way to encrypt a bunch of points in the Euclidean plane. We think of the points as *complex numbers*. This allows for turning points in the Euclidean plane into objects that inherit the four elementary operations on real numbers. For a middle or high school student, learning about elementary operations on complex numbers is a matter of minutes rather than hours. Once these operations are understood, a polynomial equation together with the *fundamental theorem of algebra* is nothing more than a way to encrypt points. Solving a polynomial equation is a game of hide-and-seek. For a fun introduction to the fundamental theorem of algebra, see Kalantari and Torrence [@KT]. In this article I present some polynomiography for the partial sums of the exponential series, familiar to every student who has come across calculus. The exponential function is considered by some mathematicians to be the most important function in mathematics. Polynomiography for the partial sums of some analytic functions such as sine and cosine is already considered in [@Kalbook]. In fact we can do polynomiography for functions that are not polynomials, and witness a visual convergence in the sense of polynomiography. The irony is that the exponential function itself has no zeros. 
At the end, for the educator, I will provide links to demo polynomiography software and a teaching module. The $n$-th Partial Sum of Exponential Series and The Basic Family {#the-n-th-partial-sum-of-exponential-series-and-the-basic-family .unnumbered} ================================================================= The exponential function and its $n$-th partial sum polynomial are, respectively, $$\exp(z)= \sum_{k=0}^\infty \frac{z^k}{k!}, \quad P_n(z)= \sum_{k=0}^n \frac{z^k}{k!}=1+z+\frac{z^2}{2!}+\cdots + \frac{z^n}{n!}.$$ The shape of the zeros of these polynomials has been studied; see Zemyan [@Zemyan] for a wonderful review. Many results are known, for instance, bounds on the zeros of $P_n(z)$. Some conjectures on the zeros, such as the convexity of their arrangement, are also described in [@Zemyan]. Undoubtedly many research questions can be stated. The polynomiographs of the partial sums to be exhibited here are generated via the *basic family* of iteration functions. For a given arbitrary polynomial $p(z)$, the basic family is an infinite collection of iteration functions. To define the basic family in even more generality, given a complex number $\alpha$ satisfying $\vert 1- \alpha \vert < 1$, the *parametrized basic family* is: $$B_{m, \alpha}(z)=z- \alpha p(z) \frac{D_{m-2}(z)} {D_{m-1}(z)}, \quad m=2,3, \dots$$ where $D_0(z)=1$, $D_k(z)=0$ for $k <0$, and $D_m(z)$ satisfies the recurrence $$D_m(z)= \sum_{i=1}^n (-1)^{i-1}p(z)^{i-1}\frac{p^{(i)}(z)}{i!}D_{m-i}(z).$$ The range for $\alpha$ guarantees that each root of $p(z)$ remains an attractive fixed point of $B_{m, \alpha}(z)$, see [@Kalbook]. When $\alpha=1$ we denote the family by $B_m(z)$. Each member is capable of generating a different polynomiograph of the same polynomial. The first two members, $B_2(z)$ and $B_3(z)$, are Newton’s and Halley’s iteration functions, respectively. 
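To make the definition concrete, here is a minimal Python sketch (the function names are ours, not from any polynomiography package) that evaluates $B_{m,\alpha}(z)$ through the recurrence for $D_m(z)$ and iterates it on the cubic partial sum $P_3(z)$.

```python
# A sketch of the parametrized basic family built from the D_m recurrence,
# applied to P_3(z) = 1 + z + z^2/2 + z^3/6.
from math import factorial

def poly_eval(coeffs, z):
    """Evaluate sum_k coeffs[k] * z**k by Horner's rule."""
    acc = 0j
    for c in reversed(coeffs):
        acc = acc * z + c
    return acc

def derivative(coeffs, i):
    """Coefficients of the i-th derivative."""
    for _ in range(i):
        coeffs = [k * c for k, c in enumerate(coeffs)][1:]
    return coeffs

def basic_step(coeffs, z, m, alpha=1.0):
    """One step z -> B_{m,alpha}(z) of the parametrized basic family."""
    n = len(coeffs) - 1
    pz = poly_eval(coeffs, z)
    # D_0 = 1, D_k = 0 for k < 0, and the recurrence
    # D_m = sum_i (-1)^(i-1) p^(i-1) (p^{(i)}/i!) D_{m-i}.
    D = [1.0 + 0j]
    for mm in range(1, m):
        total = 0j
        for i in range(1, min(mm, n) + 1):
            di = poly_eval(derivative(coeffs, i), z) / factorial(i)
            total += (-1) ** (i - 1) * pz ** (i - 1) * di * D[mm - i]
        D.append(total)
    return z - alpha * pz * D[m - 2] / D[m - 1]

coeffs = [1.0, 1.0, 0.5, 1.0 / 6.0]  # P_3(z)
z = -1.0 + 2.0j                      # an arbitrary seed
for _ in range(30):
    z = basic_step(coeffs, z, m=2)   # m = 2 is Newton's method
print(z)  # a root of P_3; the residual poly_eval(coeffs, z) is essentially zero
```

Replacing `m=2` by `m=3` runs Halley’s method, and choosing an `alpha` with $|1-\alpha|<1$ gives the parametrized variants.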
This family and its variations are studied extensively in [@Kalbook], which establishes many fundamental properties explaining why it is probably the most important family of iteration functions for polynomial root-finding. In particular, for each fixed $m\geq 2$, there exists a disc centered at a root $\theta$ such that for any $z_0$ in this disc the sequence of fixed point iterations $z_{k+1}=B_m(z_k)$, $k=0,1,\dots$, is well-defined and converges to $\theta$. When $\theta$ is a *simple root*, i.e. $p(\theta)=0$, $p'(\theta) \not =0$, the order of convergence is $m$. Variations of the basic family, other than the parametrized version, are described in [@Kalbook]. In contrast to using individual members of the basic family, there is a collective application, using the *basic sequence*, $\{B_m(w), m=2, \dots\}$, where $w$ is some fixed complex number, see [@Kalbook]. To describe the convergence of the basic sequence we need the notion of the *Voronoi diagram* of a set of points in the Euclidean plane. Given a set of points $\theta_1, \dots, \theta_n$ in the Euclidean plane, the *Voronoi cell* of a particular point $\theta_i$ is the set of all points in the plane that are closer to $\theta_i$ than to any other $\theta_j$. If $w$ lies in the Voronoi cell of a particular root $\theta$ of $p(z)$, then it can be shown that the sequence $B_m(w)$ converges to $\theta$. For pointwise convergence see [@Kalbook], and for a proof of a stronger property, uniform convergence of the basic family, see [@KalDCG]. Based on this convergence property we can produce non-fractal polynomiographs that are very different from the usual fractal ones. Polynomiographs for the Partial Sum {#polynomiographs-for-the-partial-sum .unnumbered} =================================== Here we describe several polynomiographs for the first few partial sums based on the basic family. It is easy to show that the roots of $P_n(z)$ are simple, i.e. $P_n(z)$ and its derivative $P'_n(z)$ have no common zeros. 
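The Voronoi-cell limit just described suggests a simple numerical sketch of the non-fractal, point-wise pictures: colour every seed by its nearest root. The following Python sketch (our own naming; `numpy` is assumed available) does this for $P_5(z)$.

```python
# Sketch: in the limit of the basic sequence, a seed w is attracted to
# the root whose Voronoi cell contains it, i.e. the root nearest to w.
import numpy as np
from math import factorial

n = 5
# Coefficients of P_5(z), highest degree first, as numpy.roots expects.
coeffs_desc = [1.0 / factorial(k) for k in range(n, -1, -1)]
roots = np.roots(coeffs_desc)  # the five zeros of P_5(z)

def voronoi_label(w):
    """Index of the root whose Voronoi cell contains the seed w."""
    return int(np.argmin(np.abs(roots - w)))

# Label a coarse grid of seeds by nearest root; a fine grid with one
# colour per root gives the flavour of the point-wise polynomiographs.
seeds = [complex(x, y) for x in np.linspace(-6.0, 2.0, 9)
         for y in np.linspace(-4.0, 4.0, 9)]
labels = [voronoi_label(w) for w in seeds]
```

As a by-product, one can check numerically that every root satisfies $0<|\theta|<n$.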
Also, it can be shown that the modulus of any root $\theta$ of $P_n(z)$ satisfies $0 < |\theta | < n$. The polynomiography of $P_1(z)=1+z$ is quite simple. Any member of the basic family will converge to the root in one iteration. Polynomiographs of $P_n(z)$ under Newton’s method for $n=2, \dots, 10$ are depicted in Figure \[Fig1\], showing them in increasing order from left to right and top to bottom. The zeros form a convex arc, reminiscent of a parabola of the form $x=y^2$. This can be seen in the polynomiographs. The norm of the roots goes to infinity as $n$ does. Figure \[Fig2\] shows the polynomiography of $P_n(z)$, $n=2, \dots, 7$ under point-wise convergence. These are not fractal images. Figure \[Fig3\] shows the polynomiography of $P_n(z)$, $n=2, \dots, 7$ under the parametrized Newton’s method, all for a particular value of $\alpha$. ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p2FinalA.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p3FinalA.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p4FinalA.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p5FinalA.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p6FinalA.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p7FinalA.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p8FinalA.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p9FinalA.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 10$ under Newton’s method.[]{data-label="Fig1"}](p10FinalA.png 
"fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig2"}](Exp2B.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig2"}](Exp3B.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig2"}](Exp4B.png "fig:"){width="1.5"}\ ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig2"}](Exp5B.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig2"}](Exp6B.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig2"}](Exp7B.png "fig:"){width="1.5"}\ ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under parameterized Newton.[]{data-label="Fig3"}](Exp2alpha.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under parameterized Newton.[]{data-label="Fig3"}](Exp3alpha.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under parameterized Newton.[]{data-label="Fig3"}](Exp4alpha.png "fig:"){width="1.5"}\ ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under parameterized Newton.[]{data-label="Fig3"}](Exp5alpha.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under parameterized Newton.[]{data-label="Fig3"}](Exp6alpha.png "fig:"){width="1.5"} ![Polynomiographs of $P_n(z)$, $n=2, \dots, 7$ under parameterized Newton.[]{data-label="Fig3"}](Exp7alpha.png "fig:"){width="1.5"} Polynomiographs of Szegö Partial Sums {#polynomiographs-of-szegö-partial-sums .unnumbered} ===================================== The norm of the roots of $P_n(z)$ get large as $n$ does. Szegö partial sums are $$S_n(z)=P_n(nz)= \sum_{k=0}^n \frac{(nz)^k}{k!}=1+nz+\frac{(nz)^2}{2!}+\cdots + \frac{(nz)^n}{n!}.$$ Many interesting properties of this polynomial are known, see Pólya and G.  
Szegö [@Pol] and Zemyan [@Zemyan]. If $\theta$ is a root of $P_n(z)$ then $\theta/n$ is a root of $S_n(z)$. Thus all the roots of $S_n(z)$ are inside the disc of radius one, centered at the origin. Polynomiographs of $S_n(z)$ for small $n$ look like scaled versions of those of $P_n(z)$. However, as $n$ goes to infinity the zeros of $S_n(z)$ bend, forming an almond shape inside the unit disc, see Zemyan [@Zemyan]. Figure \[Fig4\] shows the polynomiography of $S_n(z) \times (z^n-1)$ under point-wise convergence of the basic family. The reason for multiplying by $z^n-1$, whose zeros are the roots of unity, is two-fold: to show that the roots of $S_n(z)$ lie inside the unit disc, and to show that through multiplication of polynomials we can generate interesting polynomiographs, as science and as art. ![Polynomiographs of $S_n(z) \times (z^n-1)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig4"}](ExpS2z2m1.png "fig:"){width="1.5"} ![Polynomiographs of $S_n(z) \times (z^n-1)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig4"}](ExpS3z3m1.png "fig:"){width="1.5"} ![Polynomiographs of $S_n(z) \times (z^n-1)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig4"}](ExpS4z4m1.png "fig:"){width="1.5"} ![Polynomiographs of $S_n(z) \times (z^n-1)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig4"}](ExpS5z5m1.png "fig:"){width="1.5"} ![Polynomiographs of $S_n(z) \times (z^n-1)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig4"}](ExpS6z6m1.png "fig:"){width="1.5"} ![Polynomiographs of $S_n(z) \times (z^n-1)$, $n=2, \dots, 7$ under point-wise convergence.[]{data-label="Fig4"}](ExpS7z7m1.png "fig:"){width="1.5"} Concluding Remarks {#concluding-remarks .unnumbered} ================== In this article I have demonstrated polynomiography for the partial sums of the exponential series. One can appreciate the images as art, but also as a way to get interested in learning or teaching root-finding algorithms. 
What is intriguing about polynomiography software is that in the course of generating images we can learn about the shape of the zeros and get introduced to many other concepts in math and related areas. Polynomiography is a medium for STEM, a bridge to learning or teaching about different subject areas, and a means of making artistic images by considering variations of polynomials, root-finding algorithms, coloring techniques, and operations such as multiplication of polynomials, scaling their zeros, compositions and more. Indeed we could make many artistic images based on the partial sums alone. Interested educators can get a student module, see [@Ander], as well as a link to free demo polynomiography software upon registration at <http://www.comap.com/Free/VCTAL/>. See also Choate [@jon] for lesson plans and a short manual for the demo software. [99]{} C.  Anderberg, J. Choate and B. Kalantari, “Computational Thinking Module, Polynomiography: Visual Displays of Solutions of Polynomial Equations”, (2016), <http://www.comap.com/Free/VCTAL/PDF/Polynomiography_SE.pdf>. E. J.  Barbeau, [*Polynomials*]{}, Springer,  2003. A.  F.  Beardon, [*Iteration of Rational Functions: Complex Analytic Dynamical Systems*]{}, Springer-Verlag, New York,  1991. A.  Cayley, “The Newton-Fourier imaginary problem”, *American Journal of Mathematics*, 2 (1879), p. 97. J.  Choate, “Polynomiography”, *Geometer’s Corner, Consortium for Mathematics and Its Applications*, 105 (2013), pp. 1-3. R.  L.  Devaney, [*A First Course in Chaotic Dynamical Systems: Theory and Experiment*]{} (ABP), 1992. J.  Gleick, [*Chaos: Making a New Science*]{}, Harmondsworth: Penguin Books, 1988. B.  Kalantari, “Polynomiography and applications in art, education, and science”, *Computers & Graphics*, 28 (2004), pp. 417–430. B.  Kalantari, “A new visual art medium: Polynomiography”, *ACM SIGGRAPH Computer Graphics Quarterly*, 38 (2004), pp. 21–23. B.  
Kalantari, “Polynomiography: From the Fundamental Theorem of Algebra to Art”, *Leonardo*, 38 (2005), pp.  233–238. B.  Kalantari, *Polynomial Root-Finding and Polynomiography*, World Scientific, Hackensack, NJ,  2008. B.  Kalantari, “Polynomial root-finding methods whose basins of attraction approximate Voronoi diagram”, *Discrete & Computational Geometry*, 46 (2011), pp.  187–203. B.  Kalantari and B.  Torrence, “The Fundamental Theorem of Algebra for Artists”, [*Math Horizons*]{}, 20 (2013), pp.  26–29. B. Kalantari and A.  Sinclair, “The Rise of Polynomials”, (2008), <https://www.youtube.com/watch?v=kMP0vclKlDA>. B.  B.  Mandelbrot, [*The Fractal Geometry of Nature*]{}, W. H. Freeman, New York,  1993. J.  Milnor, [*Dynamics in One Complex Variable: Introductory Lectures*]{}, Vol. 160, 3rd ed., Princeton University Press, New Jersey,  2006. G.  Pólya and G.  Szegö, [*Aufgaben und Lehrsätze aus der Analysis*]{}, Erster Band, Springer-Verlag, Berlin,  1964. S.  M.  Zemyan, “On the Zeroes of the Nth Partial Sum of the Exponential Series”, [*The American Mathematical Monthly*]{}, 112 (2005), pp. 891–909.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study the axisymmetric propagation of a viscous gravity current over a deep porous medium into which it also drains. A model for the propagation and drainage of the current is developed and solved numerically in the case of constant input from a point source. In this case, a steady state is possible in which drainage balances the input, and we present analytical expressions for the resulting steady profile and radial extent. We demonstrate good agreement between our experiments, which use a bed of vertically aligned tubes as the porous medium, and the theoretically predicted evolution and steady state. However, analogous experiments using glass beads as the porous medium exhibit a variety of unexpected behaviours, including overshoot of the steady-state radius and subsequent retreat, thus highlighting the importance of the porous medium geometry and permeability structure in these systems.' --- [Axisymmetric viscous gravity currents\ flowing over a porous medium]{} [Melissa J. Spannuth$^1$, Jerome A. Neufeld$^2$,\ J. S. Wettlaufer$^{1,3,4}$, and M. Grae Worster$^2$]{} [*1. Department of Geology and Geophysics, Yale University, New Haven, CT 06520, USA\ 2. Institute of Theoretical Geophysics, Department of Applied Mathematics and Theoretical Physics,\ University of Cambridge, Wilberforce Road, CB3 0WA, UK\ 3. Department of Physics, Yale University, New Haven, CT 06520, USA\ 4. Nordic Institute for Theoretical Physics, Roslagstullsbacken 23, University Center, 106 91 Stockholm, Sweden*]{} Introduction ============ Gravity currents are primarily horizontal fluid flows driven by a density difference between the intruding and ambient fluids. These flows are common in natural systems and industrial processes and describe, for example, the spread of cold air into a room, the dispersal of pollutants from an industrial spill, and the flow of snow and debris avalanches [@huppert-2006]. 
Many previous studies have examined in detail the propagation of currents along impermeable boundaries; here we consider flow over porous substrates through which these currents can also drain. Two-dimensional gravity currents propagating over porous media have been addressed both theoretically and experimentally by several authors. For currents flowing over thin porous substrates, only the weight of the overlying fluid drives drainage [@thomas-1998; @ungarish-2000; @marino-2002; @pritchard-2001]. In contrast, for gravity currents propagating over deep porous media, [-@acton-2001] showed that both the hydrostatic pressure of the fluid in the current and the weight of the fluid within the porous medium drive drainage. They used this description of drainage in a model of experiments in which low Reynolds number gravity currents spread over a deep porous layer in two dimensions. [-@thomas-2004] used this drainage law to describe their experiments on the propagation of high Reynolds number currents over deep porous media. [@pritchard-2002] have also applied the same drainage law to their examination of gravity currents propagating within a porous medium overlying a deep layer of lower permeability. Similar studies have also been conducted that consider two-phase flow within the porous medium [e.g. -@hussein-2002], but these effects are beyond the scope of the present study. Axisymmetric gravity currents propagating over porous media have been studied primarily as microscale flows in which capillary forces drive the drainage and therefore the wetting properties of the medium are important [e.g. @davis-1999; @davis-2000; @kumar-2006]. At the macroscale, [@pritchard-2001] considered gravity-driven drainage of an axisymmetric current flowing through a porous medium overlying a thin layer of lower permeability. In both geometries, previous experiments only involved currents of fixed volume, whereas our experiments explore the fixed flux case. 
Here we examine the axisymmetric propagation of a macroscopic viscous gravity current over a deep porous medium. Our model uses lubrication theory for flow within the current, the drainage law of [@acton-2001], and Darcy flow within the porous medium. While the full spatial and temporal evolution of the current can only be obtained numerically, an analytical expression for the steady-state extent and profile of a current fed by a constant input of fluid is found. Additionally, we develop scaling laws describing the propagation of the current. Our experimental setup, in which a gravity current fed by a constant flux of golden syrup spreads across a bed of vertically aligned straws, conforms closely to the assumptions of our model, so the scaling laws provide a good collapse of all data onto a curve in agreement with the numerical solution. In contrast to these well-behaved currents, we describe the non-ideal behaviour observed in experiments using glycerin and glass beads similar to the system used by [@acton-2001]. We propose that the axisymmetric geometry makes the currents particularly sensitive to any non-uniformities of the porous medium, which leads to the disagreement between these experiments and the theory. Theoretical model ================= We consider the axisymmetric spreading of a fluid of kinematic viscosity $\nu$ and density $\rho$ into an ambient fluid of density $\rho_a \ll \rho$ and viscosity $\nu_a \ll \nu$. As shown schematically in figure \[ASgeom\], fluid is supplied at the origin and spreads radially over a porous medium with porosity $\phi$ and permeability $k$ into which the fluid drains. We consider the general case in which the volume of fluid increases as $q t^{\alpha}$, where $t$ is time and $q$ and $\alpha$ are constants. 
After a brief initial stage, the radial extent of the current $r_N(t)$ is much greater than its height $h(r,t)$, and in this limit we apply the approximations of lubrication theory: velocity within the current is assumed to be predominantly horizontal and pressure within the current is assumed to be hydrostatic. Under this approximation, viscous flow within the current is driven by radial gradients of its thickness. We apply conditions of no slip at $z = 0$ and no tangential stress at $z = h$ to determine the horizontal fluid flux $q_h = -\left( g / 3 \nu \right) \, r h^3 \, \partial h / \partial r$. Conservation of fluid mass through an infinitesimal control volume of the gravity current gives the equation $$\frac{\partial h}{\partial t} - \frac{g}{3\nu}\frac{1}{r}\frac{\partial}{\partial r}\left(rh^3\frac{\partial h}{\partial r}\right) = w(r,0,t) \label{heightevo}$$ governing the current’s structure and evolution, where $w(r,0,t)$ is the drainage velocity from the base of the current into the underlying porous medium. Following [@acton-2001], we assume that drainage into the porous medium is driven both by the weight of the draining fluid and the hydrostatic pressure of the fluid within the current, giving $$w(r,0,t) = -\frac{g k}{\nu}\left(1+\frac{h}{l}\right) = -\phi \frac{\partial l}{\partial t}, \label{depthevo}$$ where $l(r,t)$ is the depth of the fluid within the porous medium. In this analysis, we have made a few assumptions that merit further examination. First, our assumption of no slip at the porous medium surface is valid when the current height is much greater than the pore size, because the presence of a slip velocity is equivalent to extending the fluid region a distance of less than one pore size into the medium [@beavers-1967; @lebars-2006]. Near the nose of the current, or for currents flowing over very rough substrates, apparent slip may be important. Secondly, we have assumed that surface tension is negligible in the drainage law. 
As discussed in [@acton-2001], this is accurate as long as the pressure due to surface tension is much less than the hydrostatic pressure; equivalently, the capillary rise height $h_c \approx \gamma / \rho g a$, where $\gamma$ is the surface tension and $a$ is the pore radius, must be much smaller than $h$. For our experimental setup $h_c \approx 2\ \mbox{mm}$, much less than typical current heights. Finally, equation (\[depthevo\]) assumes that flow within the porous medium is single-phase and that the porous matrix is stationary; therefore we can ignore flow of the displaced fluid and assume that $k$ is constant in time. This is an accurate assumption when the wetting properties of the displaced and displacing fluids with respect to the porous matrix are similar and when the displaced fluid is inviscid. Equation (\[depthevo\]) also assumes that fluid flow is predominantly vertical, implicitly neglecting the potential for the Rayleigh–Taylor instability. The governing equations (\[heightevo\]) and (\[depthevo\]) are subject to one boundary condition specifying the flux near the origin and another requiring zero flux through the nose of the current $r_N(t)$. Respectively, they are $$\lim_{r \to 0} \left[ 2\pi r \frac{g h^3}{3 \nu} \frac{\partial h}{\partial r} \right] = -\alpha q t^{\alpha-1} \label{originflux} \quad \mbox{and} \quad \left[ 2\pi r \frac{g h^3}{3 \nu} \frac{\partial h}{\partial r} \right]_{r_N} = 0. \label{nosedef} \eqno{(\theequation{\mathit{a},\mathit{b}})}$$ We note that boundary condition (\[originflux\]*a*) along with the evolution equation (\[heightevo\]) is equivalent to a statement of global mass conservation, namely $$qt^\alpha = 2\pi\int_{0}^{r_N} r h \, dr - 2\pi\int_0^t\int_{0}^{r_N} r w(r,0,t) \, dr \, dt.
\label{gcons}$$ We non-dimensionalize equations (\[heightevo\])–(\[nosedef\]) by introducing horizontal, vertical and temporal scales $S_H$, $S_V$ and $S_T$ given by $$S_H = \left( q/\Gamma \right)^{1/2} \left( \frac{\Gamma^4 g}{3 q \nu} \right)^{\left(\alpha-1\right)/2\left(\alpha-5\right)}, \quad S_V = \Gamma \left( \frac{\Gamma^4 g}{3 q \nu} \right)^{1/\left(\alpha-5\right)} \quad\mbox{and}\quad S_T = \left( \frac{\Gamma^4 g}{3 q \nu} \right)^{1/\left(\alpha-5\right)}, \eqno{(\theequation{\mathit{a},\mathit{b},\mathit{c}})}$$ where $\Gamma = gk/\nu$ is the characteristic drainage velocity in the porous medium. By introducing dimensionless variables $$H = h/S_V, \quad L = l/S_V, \quad R = r/S_H \quad \mbox{and} \quad T = t/S_T, \eqno{(\theequation{\mathit{a},\mathit{b},\mathit{c},\mathit{d}})}$$ the equations governing the dimensionless height $H(R,T)$ and depth $L(R,T)$ of the intruding fluid become $$\frac{\partial H}{\partial T} - \frac{1}{R}\frac{\partial}{\partial R}\left(RH^3\frac{\partial H}{\partial R}\right) = -\left(1+\frac{H}{L}\right) \label{NDH}$$ and $$\phi\frac{\partial L}{\partial T} = \left(1+\frac{H}{L}\right). \label{NDL}$$ These are subject to the scaled boundary conditions $$\lim_{R \to 0} \left[ 2\pi RH^3\frac{\partial H}{\partial R} \right] = -\alpha T^{\alpha-1} \label{NDoriginflux} \quad \mbox{and} \quad \left[ 2\pi RH^3\frac{\partial H}{\partial R} \right]_{R_N} = 0. \label{NDnosedef} \eqno{(\theequation{\mathit{a},\mathit{b}})}$$ For a fixed flux at the origin ($\alpha = 1$) the scalings simplify to $$S_H = \left(\frac{q\nu}{gk}\right)^{1/2}, \quad S_V = \left(\frac{3q\nu}{g}\right)^{1/4} \quad \mbox{and} \quad S_T = \left(\frac{3q\nu^5}{g^5k^4}\right)^{1/4}. \label{constflux} \eqno{(\theequation{\mathit{a},\mathit{b},\mathit{c}})}$$ This system admits a steady state in which drainage exactly balances the material input. 
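As a concrete illustration of these constant-flux scales, the short sketch below (Python, CGS units; illustrative only) evaluates equations (\[constflux\]*a*–*c*) for the parameters of experiment 1 in table \[ExptValues2\], using the measured permeability:

```python
# Dimensional scales S_H, S_V, S_T for a constant input flux (alpha = 1),
# in CGS units. Parameter values are those of experiment 1 in the table
# of experiments, with the measured permeability k.
g  = 981.0       # gravitational acceleration, cm s^-2
q  = 1.06        # input flux, cm^3 s^-1
nu = 453.0       # kinematic viscosity, cm^2 s^-1
k  = 6.36e-3     # permeability, cm^2

S_H = (q * nu / (g * k)) ** 0.5                  # horizontal scale, cm
S_V = (3.0 * q * nu / g) ** 0.25                 # vertical scale, cm
S_T = (3.0 * q * nu**5 / (g**5 * k**4)) ** 0.25  # time scale, s
# S_H ~ 8.8 cm, S_V ~ 1.1 cm, S_T ~ 80 s
```

These values reproduce the entries $S_H = 8.78$ cm, $S_V = 1.10$ cm and $S_T = 80.0$ s listed for experiment 1 in table \[ExptValues2\].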
In the long-time limit, the depth of drainage greatly exceeds the height of the current, $L \gg H$, and the current has the steady profile $$H = \left[ R^2-R_N^2-2R_N^2\ln{(R/R_N)}\right]^{1/4}, \label{ssshape}$$ plotted in figure \[ss\]. The logarithmic singularity at $R = 0$ accounts for the finite flux there [@huppert-1982]. Balancing the external flux with the drainage flux, we find the steady-state extent $$R_N(t\rightarrow\infty) = \pi^{-1/2} \approx 0.564. \label{sslength}$$ We note that @pritchard-2001 determined an analytical solution for the steady state of currents in a similar system. They considered a two-dimensional current fed by a constant source at the origin spreading through a porous medium of high permeability and underlain by a very thin low-permeability porous layer. Comparing their figure 3 and equation (2.20) with our figure \[ss\] and equation (\[ssshape\]) highlights the importance of the specific system to the current behaviour. Whereas in our system the steady-state profile of the current has an inflection point where the curvature switches from being positive near the source to negative near the nose, in their system the steady-state current surface is concave upwards near the nose. Numerical solution ================== The full time evolution of the current height $H\left(R,T\right)$ and depth $L\left(R,T\right)$ is found numerically by integrating equations (\[NDH\]) and (\[NDL\]) on a uniform grid with spacing $0.001$. We compute the new drainage depth $L_{n+1} = L \left(R, T_{n+1}\right)$ from equation (\[NDL\]) using the height $H_{n} = H\left(R,T_n\right)$ and depth $L_{n} = L\left(R,T_n\right)$ from the previous time step. We then use $L_{n+1}$ to compute the drainage velocity on the right-hand side of equation (\[NDH\]) and solve for the height from equation (\[NDH\]) using the control volume (or flux conservative) method in space and a Crank–Nicolson (semi-implicit) scheme in time [@patankar-1980].
In this computation, $\left(H_n\right)^3$ is our initial estimate for $H^3$ in the non-linear term on the left-hand side of equation (\[NDH\]). We update the height as described above and use the new estimate for $H$ in the non-linear term, iterating this process until the updated value of $H$ converges. This converged value is $H_{n+1}$. Finally, we proceed to the next time step. We introduce fluid into the current by assigning a constant flux at the left-hand boundary of the first control volume ($R = 0$) in the discretized equations. The right-hand boundary of our grid is impermeable to fluid flow and is positioned beyond the steady-state extent. The initial condition is an empty box with no fluid. We record the nose of the current as the position where $H(R)$ falls below a prescribed small tolerance: the height beyond this point is set to $0$. This condition is necessary because the drainage velocity is ill-defined when $L = 0$ and $H \not= 0$, as occurs near the current nose at the beginning of a time step. We have tested the sensitivity of the numerical solution to the choice of grid spacing, time–step size and height tolerance, and found the results to be relatively insensitive to these parameters. Additionally, we have tested our numerical results for non-draining currents against those of [@huppert-1982], and find good agreement for both the constant volume and constant flux cases. Figure \[numprof\] shows three calculated profiles (curves) at different times for a numerically simulated current with porosity $\phi = 0.907$, similar to that of our experimental setup. We note that, as the system approaches steady state, the volume of fluid residing within the porous medium becomes much larger than the volume in the current above the medium. The solid curve in figure \[TheoryExpt\] shows how the extent increases with time, approaching the steady-state value predicted by equation (\[sslength\]). 
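The analytical steady state provides a further check on such a scheme. In the sketch below (illustrative only), the profile (\[ssshape\]) is differentiated numerically and the outward volume flux $-2\pi R H^3\,\partial H/\partial R$ through each radius $R$ is compared with the input flux minus the volume drained inside that radius at unit drainage velocity, $1-\pi R^2$, which vanishes at $R_N = \pi^{-1/2}$:

```python
import numpy as np

R_N = np.pi ** -0.5          # steady-state extent, equation (sslength)

def H(R):
    """Steady-state profile, equation (ssshape)."""
    return (R**2 - R_N**2 - 2.0 * R_N**2 * np.log(R / R_N)) ** 0.25

R = np.linspace(0.05, 0.99 * R_N, 200)
dR = 1e-6
dHdR = (H(R + dR) - H(R - dR)) / (2.0 * dR)   # central-difference derivative

# Outward flux through radius R; in steady state it must equal the unit
# input flux minus the drainage (at unit velocity) over the area pi R^2.
flux_out = -2.0 * np.pi * R * H(R) ** 3 * dHdR
assert np.allclose(flux_out, 1.0 - np.pi * R**2, atol=1e-5)
```

The flux balance also confirms the extent (\[sslength\]): the total drainage $\pi R_N^2$ equals the unit input flux exactly when $R_N = \pi^{-1/2}$.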
Experiments =========== We performed a series of experiments using Lyle’s golden syrup as the viscous intruding fluid and a bed of vertically oriented drinking straws of radius $r_s = 0.29 \pm 0.01$ cm as the underlying porous medium, as shown in figure \[ExptFig\]. Lyle’s golden syrup was used as the working fluid because its viscosity of $\nu \gtrsim 400\ \mbox{cm}^2 \mbox{s}^{-1}$ (as measured by a U-tube viscometer) results in currents with heights much greater than the surface topography of the porous medium. The simple geometry of the porous medium ensures strictly vertical drainage flow and allows for a comparison between the experimentally measured and the theoretically predicted permeability. We measured the permeability of the porous medium by conducting drainage experiments in which syrup with $\nu = 453\ \mbox{cm}^2 \mbox{s}^{-1}$ and $\rho = 1.5\ \mbox{g}\ \mbox{cm}^{-3}$ was maintained at a constant height $h = 10\ \mbox{cm}$ above the porous medium within a large cylinder of radius $r_C = 5.75\ \mbox{cm}$ as it drained through the straws of length $l = 20\ \mbox{cm}$. By measuring the mass flux $dM/dt$ through the straws with a digital scale connected to a computer, we obtained the drainage velocity $$w = \frac{g k}{\nu}\left(1+\frac{h}{l}\right) = \frac{dM/dt}{\rho \pi r_C^2},$$ and consequently a measure of the permeability of the porous medium $$k_{exp} = (6.36 \pm 0.04) \times10^{-3}\ \mbox{cm}^2.$$ The uncertainty in this value comes from estimating $dM / dt$ from the measured mass versus time, which has an uncertainty of $\pm 0.02\ \mbox{g}\ \mbox{s}^{-1}$. This experimental value can be compared with the theoretical permeability for aligned capillary tubes given by [@bear-1972] $$k = \frac{\phi r_s^2}{8} = (9.5 \pm 0.3) \times10^{-3}\ \mbox{cm}^2,$$ where $\phi = \pi \sqrt{3} / 6 \simeq 0.907$ is the packing fraction of the straws for hexagonal close packing.
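Both permeability values follow directly from the quantities quoted above. The sketch below reproduces them and also infers the mass flux implied by the measured permeability; this flux is a derived illustration, not a quoted measurement:

```python
import math

g    = 981.0        # gravitational acceleration, cm s^-2
nu   = 453.0        # kinematic viscosity, cm^2 s^-1
rho  = 1.5          # density, g cm^-3
h, l = 10.0, 20.0   # syrup head and straw length, cm
r_C  = 5.75         # cylinder radius, cm
r_s  = 0.29         # straw radius, cm

# Theoretical permeability for aligned capillary tubes (Bear 1972),
# with hexagonal close packing of the straws:
phi  = math.pi * math.sqrt(3.0) / 6.0   # ~0.907
k_th = phi * r_s**2 / 8.0               # ~9.5e-3 cm^2

# Drainage velocity and mass flux implied by the measured permeability
# (the mass flux is inferred, not a value quoted in the text):
k_exp = 6.36e-3                         # cm^2
w     = g * k_exp / nu * (1.0 + h / l)  # ~0.021 cm s^-1
dMdt  = w * rho * math.pi * r_C**2      # ~3.2 g s^-1
```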
We attribute the approximately $30 \%$ discrepancy between the measured and theoretical values to a slow leakage of golden syrup through the interstices between the straws and imperfections in the straw packing that produced a porosity not equal to that of a close packing. In the following analysis of the experimental data, we use the measured permeability. For each experiment, a fixed flux of syrup was supplied at the origin from a reservoir maintained at a constant gravitational head. The mass flux was measured with a digital balance connected to a computer prior to the initiation of each experiment. The flux, viscosity and resultant scaling laws are summarised in table \[ExptValues2\] for each experiment. ------------ ---------------------------------- ------------------------------------ -------------------- -------------------- ------------------- -- Experiment $q\ (\mbox{cm}^3 \mbox{s}^{-1})$ $\nu\ (\mbox{cm}^2 \mbox{s}^{-1})$ $S_H\ (\mbox{cm})$ $S_V\ (\mbox{cm})$ $S_T\ (\mbox{s})$ 1 $1.06\pm0.01$ $453$ $8.78$ $1.10$ $80.0$ 2 $4.18\pm0.03$ $453$ $17.4$ $1.55$ $113$ 3 $2.09\pm0.01$ $453$ $12.3$ $1.30$ $94.7$ 4 $9.83\pm0.01$ $453$ $26.7$ $1.92$ $139$ 5 $6.74\pm0.01$ $453$ $22.1$ $1.75$ $127$ 6 $2.31\pm0.01$ $401$ $12.2$ $1.30$ $83.4$ 7 $1.27\pm0.01$ $401$ $9.05$ $1.12$ $71.9$ 8 $2.00\pm0.01$ $401$ $11.3$ $1.25$ $80.4$ 9 $6.11\pm0.01$ $401$ $19.8$ $1.65$ $106$ 10 $7.37\pm0.01$ $401$ $21.8$ $1.73$ $111$ 11 $6.63\pm0.01$ $401$ $20.7$ $1.69$ $109$ 12 $20.35\pm0.03$ $401$ $36.2$ $2.24$ $144$ ------------ ---------------------------------- ------------------------------------ -------------------- -------------------- ------------------- -- : Summary of the experimental parameters. For each experiment, the permeability was assumed to be $k = 6.36 \pm 0.04 \times10^{-3}$ cm$^2$. The uncertainty in viscosity is described in the text. 
\[ExptValues2\] Digital images of the side profile of each experiment were made at regular intervals (see figure \[ExptFig\]), and later analysed to obtain the radial extent and height profiles of each current. A comparison between the scaled radial extent of each current and the numerical solution to equations (\[NDH\])–(\[NDL\]) is shown in figure \[TheoryExpt\]. The dotted curves represent a $\pm 10 \%$ error bound in $S_H$ applied to the numerical extent (solid curve). The error in $S_T$ is not represented in the plot. Uncertainties in $\nu$ and $k$ are the main contributors to the overall uncertainty, as the error in $q$ is less than $1\%$. Although the viscosity was measured regularly throughout the set of experiments, the large range of values obtained and the known large temperature dependence of the viscosity ($20\%$ per $^\circ$C) result in a large uncertainty in the actual viscosity. Due to this uncertainty in the viscosity and the discrepancy between the theoretical and measured permeabilities, we estimate the total error to be about $\pm 10\%$. Within the error bounds, the collapse of the scaled data and the agreement with the numerical solution are good. We also obtained height profiles from the images of experiment 9, which are compared to the numerical profiles at the same scaled times in figure \[numprof\]. For clarity, error bounds are not plotted, but the uncertainty is again $\pm 10\%$. Although the finite width and coiling instability of the fluid source cause some discrepancy, the overall agreement is good. Discussion ========== We have shown that for a fixed flux of golden syrup flowing across a bed of vertically aligned straws, a simple model based upon lubrication theory and the drainage law of [@acton-2001] can describe the current propagation and steady state.
In contrast, experiments conducted using glycerin as the working fluid and $\sim 3$ mm diameter spherical glass beads as the porous medium (detailed results not included here) exhibited non-ideal behaviour that violated a number of assumptions in our model. In particular, most currents had a scalloped front as the current propagated across the beads (figure \[badcurrent\]*a*), complicating measurement of the current radius. Many were also non-axisymmetric, as shown in figure \[badcurrent\]*b*. Finally, all of the glycerin currents exhibited a maximum extent from which the current nose then retreated (figure \[badcurrent\]*c*). We attribute these non-ideal behaviours primarily to a sensitive dependence on the characteristics and geometry of the underlying porous medium. For example, at the nose of the current the thickness is small and may be comparable to the surface topography of the porous medium. This could cause the front to stick on surface asperities, producing the scalloped edge. Additionally, the currents are sensitive to inhomogeneities in the bead packing, and thus the permeability of the medium, due to the strong influence of permeability on the drainage velocity in equation (\[depthevo\]) and on the scaling laws in equation (\[constflux\]). This may have contributed to the scalloped front and the non-axisymmetric propagation. The non-axisymmetry also could have arisen from a bead surface that was not sufficiently level. Although care was taken to level the surface, we cannot exclude this possibility. These hypotheses are supported by our experiments using golden syrup and straws, a level and uniform porous medium, and could be tested by conducting more experiments using, for example, glycerin and straws or smaller beads. For the roll-back phenomenon, we have no simple explanation. However, we can rule out some possibilities.
First, we verified that horizontal flow within and immediately above the surface of the porous medium was negligible as assumed in our model (no-slip condition). Powdered dye placed in several small piles on the bead surface along the path of the current was picked up by the draining fluid and carried purely vertically into the beads. Secondly, the geometry of the glycerin experiments afforded us a cross-sectional view of the current and draining fluid from which we observed a uniform saturation of the beads. This supports our assumption of a constant permeability, though as we could not observe the interior of the porous medium, we cannot completely rule out these effects in the bulk of the flow. Because the roll-back phenomenon was not observed in the golden syrup and straws system, we think that it is related to the specific combination of fluid and porous medium properties. Again, this could be tested with experiments involving different fluids and porous media. Finally, we note that no experimental evidence for a Rayleigh–Taylor instability was found at the lower interface of the current on the time scales over which the experiments were conducted. This observation implies that, at least here, vertical drainage is the dominant factor controlling radial spreading of the current. The contrast between our experiments using glycerin and beads and those of [@acton-2001] using glycerin and beads in a linear geometry with a fixed fluid volume suggests that some characteristic of either the axisymmetric geometry or the fixed fluid input results in currents that are much more sensitive to the properties of the porous medium. For example, axisymmetric currents have a much longer front and therefore a larger nose area than linear currents. Therefore their spreading is more strongly influenced by surface roughness and the failure of our model assumptions near the nose. 
To explore these ideas further, we suggest conducting fixed flux experiments in the linear geometry and fixed volume experiments in the axisymmetric geometry using different porous media. The sensitive dependence of propagation and drainage of the current on the spatial structure of the permeability and the surface topography suggests that further studies are needed to characterise fluid flow in these situations. Nonetheless, our model provides a simple framework to estimate the evolution of the current over time and the maximum extent at steady state for currents flowing over simple porous media. We wish to thank Mark Hallworth and Michael Patterson for their assistance with the experiments. This research was partially supported by the U.S. Department of Energy (DE-FG02-05ER15741). MJS acknowledges support from a U.S. National Science Foundation Graduate Research Fellowship. JSW gratefully acknowledges support from the Wenner-Gren Foundation, the Royal Institute of Technology, and NORDITA in Stockholm. 2001 Two-dimensional viscous gravity currents flowing over a deep porous medium. [*J. Fluid Mech.*]{} [**440**]{}, 359–380. 1972 [*Dynamics of [F]{}luids in [P]{}orous [M]{}edia*]{}. Dover. 1967 Boundary conditions at a naturally permeable wall. [*J. Fluid Mech.*]{} [**30**]{}, 197–207. 1999 Spreading and imbibition of viscous liquid on a porous base. [*Phys. Fluids*]{} [**11**]{} (1), 48–57. 2000 Spreading and imbibition of viscous liquid on a porous base. [II]{}. [*Phys. Fluids*]{} [**12**]{} (7), 1646–1655. 1982 The propagation of two-dimensional and axisymmetric viscous gravity currents over a rigid horizontal surface. [*J. Fluid Mech.*]{} [**121**]{}, 43–58. 2006 Gravity currents: a personal perspective. [*J. Fluid Mech.*]{} [**554**]{}, 299–322. 2002 Development and verification of a screening model for surface spreading of petroleum. [*J. Contam. Hydrol.*]{} [**57**]{}, 281–302.
2006 Dynamics of drop spreading on fibrous porous media. [*Colloids Surf., A*]{} [**277**]{}, 157–163. 2006 Interfacial conditions between a pure fluid and a porous medium: implications for binary alloy solidification. [*J. Fluid Mech.*]{} [**550**]{}, 149–173. 2002 Spreading of a gravity current over a permeable surface. [*J. Hydr. Eng. ASCE*]{} [**128**]{} (5), 527–533. 1980 [*Numerical [H]{}eat [T]{}ransfer and [F]{}luid [F]{}low*]{}. Taylor and Francis. 2002 Draining viscous gravity currents in a vertical fracture. [*J. Fluid Mech.*]{} [**459**]{}, 207–216. 2001 On the slow draining of a gravity current moving through a layered permeable medium. [*J. Fluid Mech.*]{} [**444**]{}, 23–47. 1998 Gravity currents over porous substrates. [*J. Fluid Mech.*]{} [**366**]{}, 239–258. 2004 Lock-release inertial gravity currents over a thick porous layer. [*J. Fluid Mech.*]{} [**503**]{}, 299–319. 2000 High-[R]{}eynolds number gravity currents over a porous boundary: shallow-water solutions and box-model approximations. [*J. Fluid Mech.*]{} [**418**]{}, 1–23.
--- abstract: | We present a series of measurements based on $K_{L,S}\to {\mbox{$\pi^{+}\pi^{-}$}}$ and $K_{L,S}\to {\mbox{$\pi^{0}\pi^{0}$}}$ decays collected in 1996-1997 by the experiment (E832) at Fermilab. We compare these four  decay rates to measure the direct CP violation parameter ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}= ( \KtevReepoe \pm \KtevTErr){ \times 10^{-4}}$. We also test CPT symmetry by measuring the relative phase between the CP violating and CP conserving decay amplitudes for  (${\mbox{$\phi_{+-}$}}$) and for  (${\mbox{$\phi_{00}$}}$). We find the difference between the relative phases to be ${\mbox{$\Delta \phi$}}\equiv {\mbox{$\phi_{00}$}}-{\mbox{$\phi_{+-}$}}= \left( \DelPhi \pm \DelPhiTOTerr \right){^{\circ}}$, and the deviation of ${\mbox{$\phi_{+-}$}}$ from the superweak phase to be ${\mbox{$\phi_{+-}$}}- {\mbox{$\phi_{SW}$}}= (\dPhiSW \pm \dPhiSWTOTerr){^{\circ}}$; both results are consistent with CPT symmetry. In addition, we present new measurements of the $K_L$-$K_S$ mass difference and $K_S$ lifetime: ${\mbox{$\Delta m$}}= ( \KtevDelm \pm \KtevDelmTerr ) {\mbox{$\times 10^{6}~\hbar {\rm s}^{-1}$}}$ and ${\mbox{$\tau_{S}$}}= ( \KtevTaus \pm \KtevTausTerr ) {\mbox{$\times 10^{-12}~{\rm s}$}}$ .\ author: - 'A. Alavi-Harati' - 'T. Alexopoulos' - 'M. Arenton' - 'K. Arisaka' - 'S. Averitte' - 'R.F. Barbosa' - 'A.R. Barker' - 'M. Barrio' - 'L. Bellantoni' - 'A. Bellavance' - 'J. Belz' - 'D.R. Bergman' - 'E. Blucher' - 'G.J. Bock' - 'C. Bown' - 'S. Bright' - 'E. Cheu' - 'S. Childress' - 'R. Coleman' - 'M.D. Corcoran' - 'G. Corti' - 'B. Cox' - 'A.R. Erwin' - 'R. Ford' - 'A. Glazov' - 'A. Golossanov' - 'G. Graham' - 'J. Graham' - 'E. Halkiadakis' - 'J. Hamm' - 'K. Hanagaki' - 'S. Hidaka' - 'Y.B. Hsiung' - 'V. Jejer' - 'D.A. Jensen' - 'R. Kessler' - 'H.G.E. Kobrak' - 'J. LaDue' - 'A. Lath' - 'A. Ledovskoy' - 'P.L. McBride' - 'P. Mikelsons' - 'E. Monnier' - 'T. Nakaya' - 'K.S. Nelson' - 'H. Nguyen' - 'V. O’Dell' - 'R. 
Pordes' - 'V. Prasad' - 'X.R. Qi' - 'B. Quinn' - 'E.J. Ramberg' - 'R.E. Ray' - 'A. Roodman' - 'S. Schnetzer' - 'K. Senyo' - 'P. Shanahan' - 'P.S. Shawhan' - 'J. Shields' - 'W. Slater' - 'N. Solomey' - 'S.V. Somalwar' - 'R.L. Stone' - 'E.C. Swallow' - 'S.A. Taegar' - 'R.J. Tesarek' - 'G.B. Thomson' - 'P.A. Toale' - 'A. Tripathi' - 'R. Tschirhart' - 'S.E. Turner' - 'Y.W. Wah' - 'J. Wang' - 'H.B. White' - 'J. Whitmore' - 'B. Winstein' - 'R. Winston' - 'T. Yamanaka' - 'E.D. Zimmerman' date: 'August 6, 2002' title: | Measurements of Direct CP Violation, CPT Symmetry,\ and Other Parameters in the Neutral Kaon System --- Introduction {#sec:intro} ============ The discovery of the ${\mbox{$K_{L}\rightarrow\pi^{+}\pi^{-}$}}$ decay in 1964 [@ccft64] demonstrated that CP symmetry is violated in weak interactions. Subsequent experiments showed that the effect is mostly due to a small asymmetry between the ${\mbox{$K^{0}$}}\to {\mbox{$\overline{K^{0}}$}}$ and ${\mbox{$\overline{K^{0}}$}}\longrightarrow {\mbox{$K^{0}$}}$ transition rates, which is referred to as indirect CP violation. Over the last three decades, significant effort has been devoted to searching for direct CP violation in a decay amplitude. Direct CP violation can be detected by comparing the level of CP violation for different decay modes. The parameters $\epsilon$ and $\epsilon'$ are related to the ratio of CP violating to CP conserving decay amplitudes for ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ and ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$: $$\begin{array}{lcccc} {\mbox{$\eta_{+-}$}}& \equiv & \frac{\textstyle A \left( {\mbox{$K_{L}\rightarrow\pi^{+}\pi^{-}$}}\right)} {\textstyle A \!\left({\mbox{$K_{S}\rightarrow\pi^{+}\pi^{-}$}}\right)} & = & \epsilon + {\mbox{$\epsilon^\prime$}}, \\ {\mbox{$\eta_{00}$}}& \equiv &\frac{\textstyle A \left( {\mbox{$K_{L}\rightarrow\pi^{0}\pi^{0}$}}\right)} {\textstyle A \!\left({\mbox{$K_{S}\rightarrow\pi^{0}\pi^{0}$}}\right)} & = & \epsilon - 2{\mbox{$\epsilon^\prime$}}. 
\end{array}$$ $\epsilon$ is a measure of indirect CP violation, which is common to all decay modes. If CPT symmetry holds, the phase of $\epsilon$ is equal to the “superweak” phase: $${\mbox{$\phi_{SW}$}}\equiv \tan^{-1} \left( 2 {\mbox{$\Delta m$}}/ \Delta \Gamma \right),$$ where ${\mbox{$\Delta m$}}\equiv m_L - m_S$ is the $K_L$-$K_S$ mass difference and $\Delta\Gamma =\Gamma_S - \Gamma_L$ is the difference in the decay widths. The quantity $\epsilon^{\prime}$ is a measure of direct CP violation, which contributes differently to the $\pi^+\pi^-$ and $\pi^0\pi^0$ decay modes, and is proportional to the difference between the decay amplitudes for ${\mbox{$K^{0}$}}\to\pi^+\pi^-(\pi^0\pi^0)$ and ${\mbox{$\overline{K^{0}}$}}\to\pi^+\pi^-(\pi^0\pi^0)$. Measurements of $\pi\pi$ phase shifts [@ochs] show that, in the absence of CPT violation, the phase of $\epsilon'$ is approximately equal to that of $\epsilon$. Therefore, ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is a measure of direct CP violation and ${\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is a measure of CPT violation. Experimentally, ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is determined from the double ratio of the two pion decay rates of $K_L$ and $K_S$: $$\begin{aligned} \frac{\Gamma\!\left({\mbox{$K_{L}\rightarrow\pi^{+}\pi^{-}$}}\right)/\,\Gamma\!\left({\mbox{$K_{S}\rightarrow\pi^{+}\pi^{-}$}}\right)}{ \Gamma\!\left({\mbox{$K_{L}\rightarrow\pi^{0}\pi^{0}$}}\right)/\,\Gamma\!\left({\mbox{$K_{S}\rightarrow\pi^{0}\pi^{0}$}}\right)} \nonumber \\ = \left| \frac{{\mbox{$\eta_{+-}$}}}{{\mbox{$\eta_{00}$}}} \right|^2 \approx 1 + 6 {\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}. 
& & \label{eq:reepoe}\end{aligned}$$ For small $|\epsilon'/\epsilon|$, ${\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is related to the phases of ${\mbox{$\eta_{+-}$}}$ and ${\mbox{$\eta_{00}$}}$ by $$\Delta\phi \equiv {\mbox{$\phi_{00}$}}-{\mbox{$\phi_{+-}$}}\approx -3 {\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}~. \label{eq:delphimpe}$$ The Standard Model accommodates both direct and indirect CP violation [@ckm; @ellis; @gilman_wise]. Unfortunately, there are large hadronic uncertainties in the calculation of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. Most recent Standard Model predictions for ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ are less than $30 \times 10^{-4}$ [@nbp:hambye; @Cheng:1999dj; @jhep:bijnens; @Pallante:2001he; @prd:wu; @Buras:2000qz; @Bertolini:2000dy; @npb:narison; @npps:ciuchini01; @Aoki:2001dw; @Blum:2001yx]. The superweak model [@superweak], proposed shortly after the discovery of ${\mbox{$K_{L}\rightarrow\pi^{+}\pi^{-}$}}$, also accommodates indirect CP violation, but not direct CP violation. Previous measurements have established that  is non-zero [@prl:731; @pl:na31; @prl:pss; @na48:reepoe]. This paper reports an improved measurement of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ by the KTeV Experiment (E832) at Fermilab. This measurement is based on 40 million reconstructed  decays collected in 1996 and 1997, and represents half of the total KTeV data sample. The 1996+1997 sample is four times larger than, and contains, our previously published sample [@prl:pss]. We also present measurements of the kaon parameters ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$, and tests of CPT symmetry based on measurements of ${\mbox{$\Delta \phi$}}$ and ${\mbox{$\phi_{+-}$}}-{\mbox{$\phi_{SW}$}}$. The outline of the paper is as follows. Section \[sec:exp\] describes the  measurement technique, including details about the neutral hadron beams (Sec. 
\[sec:beam\]) and the detector used to identify ${\mbox{$K\rightarrow\pi\pi$}}$ decays (Sec. \[sec:det\]). The detector description also includes the calibration procedures and the performance characteristics relevant to the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ measurement. The Monte Carlo simulation of the kaon beams and detector is described in Section \[sec:mc\]. Section \[sec:ana\] explains the reconstruction and event selection for the ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ and ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$ decay modes, and also the background subtraction for each mode. Section \[sec:extract\] describes the acceptance correction, and the fit used to extract ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ and other physics parameters. Each of these sections is followed by a discussion of systematic uncertainties related to that part of the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ measurement. Section \[sec:reepoe\_measure\] presents the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ result along with several crosschecks. Finally, Section \[sec:kaonpar\] presents our measurements of the kaon parameters ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, ${\mbox{$\Delta \phi$}}$, and ${\mbox{$\phi_{+-}$}}$. Additional details of the work presented here are given in [@reepoe_theses]. Measurement Technique and Apparatus {#sec:exp} =================================== Overview {#sec:overview} -------- The measurement of  requires a source of $K_L$ and $K_S$ decays, and a detector to reconstruct the charged (${\mbox{$\pi^{+}\pi^{-}$}}$) and neutral (${\mbox{$\pi^{0}\pi^{0}$}}$) final states. The strategy of the  experiment is to produce two identical $K_L$ beams, and then to pass one of the beams through a thick “regenerator.” The beam that passes through the regenerator is called the regenerator beam, and the other beam is called the vacuum beam. 
The regenerator creates a coherent $\ket{K_L}+\rho\ket{K_S}$ state, where $\rho$ is the regeneration amplitude chosen such that most of the  decay rate downstream of the regenerator is from the $K_S$ component. The measured quantities are the numbers of  and decays in the vacuum and regenerator beams. The vacuum-to-regenerator “single ratios” for  and   decays are proportional to $|{\mbox{$\eta_{+-}$}}/\rho|^2$ and $|{\mbox{$\eta_{00}$}}/\rho|^2$, and the ratio of these two quantities gives ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ via Eq. \[eq:reepoe\] (also see Appendix \[app:exp details\]). The effect of  interference in the regenerator beam is accounted for in a fitting program used to extract . To reduce systematic uncertainties related to left-right asymmetries in the detector and beamline, the regenerator position alternates between the two beams once per minute. Decays in both beams are collected simultaneously to reduce sensitivity to time-dependent variations in the beam intensity and in detector efficiencies. The fixed geometry of the beamline elements, combined with alternating the regenerator position, ensures a constant vacuum-to-regenerator kaon flux ratio.  decays are detected in a spectrometer consisting of four drift chambers and a dipole magnet; the well-known kaon mass is used to determine the momentum scale. The four photons from  decays are detected in a pure Cesium Iodide (CsI) calorimeter; electrons from ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays (${\mbox{$K_{e3}$}}$) are used to calibrate the CsI energy scale. An extensive veto system is used to reject events coming from interactions in the regenerator, and to reduce backgrounds from kaon decays into non-$\pi\pi$ final states such as ${\mbox{$K_{L}\to\pi^{\pm}{\mu}^{\mp}\nu$}}$ and ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$. 
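The logic of the double-ratio method can be illustrated numerically. The sketch below uses representative world-average values for $\Delta m$ and the lifetimes (not the KTeV results reported in this paper) to evaluate the superweak phase, together with a purely hypothetical double ratio to show how ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ would follow from Eq. \[eq:reepoe\]:

```python
import math

# Superweak phase phi_SW = atan(2*dm/dGamma), using representative
# world-average values (illustrative; not the measurements of this paper):
dm     = 5.27e9       # K_L - K_S mass difference, hbar s^-1
tau_S  = 0.8935e-10   # K_S lifetime, s
tau_L  = 5.12e-8      # K_L lifetime, s
dGamma = 1.0 / tau_S - 1.0 / tau_L
phi_SW = math.degrees(math.atan(2.0 * dm / dGamma))   # ~43 degrees

# Double-ratio relation: for a hypothetical measured double ratio DR,
# Re(eps'/eps) ~ (DR - 1)/6.
DR = 1.0010                        # hypothetical double ratio
re_epoe = (DR - 1.0) / 6.0         # ~1.7e-4
```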
A Monte Carlo simulation is used to correct for the acceptance difference between  decays in the two beams, which results from the very different $K_L$ and $K_S$ lifetimes. The simulation includes details of detector geometry and efficiency, as well as the effects of “accidental” activity from the high flux of particles hitting the detector. The decay-vertex distributions provide a critical check of the simulation. To study the detector performance, and to verify the accuracy of the Monte Carlo simulation, we also collect samples of ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ and  decays with much higher statistics than the ${\mbox{$K\rightarrow\pi\pi$}}$ signal samples. The Kaon Beams {#sec:beam} -------------- Neutral kaons are produced by a proton beam hitting a fixed target (Fig. \[fig:beamline\]). The Fermilab Tevatron provides $3 \times 10^{12}$ 800 GeV/$c$ protons in a 20 s extraction cycle (“spill”) once every minute. The proton beam has a 53 MHz RF micro-structure such that protons arrive in $\sim 1$ ns wide “buckets” and at 19 ns intervals. The bucket-to-bucket variations in beam intensity are typically 10%. The target is a narrow beryllium oxide (BeO) rod, $3 \times 3$ mm$^2$ in the dimensions transverse to the beam, and 30 cm long corresponding to about one proton interaction length. The proton beam is incident on the target at a downward angle of $4.8$ mrad with respect to the line joining the target and the center of the detector. This targeting angle is chosen as a compromise between higher kaon flux at small angles, and a smaller neutron-to-kaon ratio ($n/K$) at large angles. The center of the BeO target defines the origin of the KTeV right-handed coordinate system. The positive $z$-axis is directed from the target to the center of the detector. The positive $y$-axis is directed up. The particles produced in the BeO target include very few kaons compared to other hadrons and photons. 
A $\sim100$ m long beamline is used to remove unwanted particles and to collimate two well-defined kaon beams. In this beamline, charged particles are removed with sweeping magnets, photons are absorbed by a 7.62 cm Pb slab located at $z = 19$ m, and most of the hyperons decay near the target. To reduce the neutron-to-kaon ratio, neutrons (kaons) are attenuated by a factor of 4.6 (2.6) in a beryllium absorber that is common to both beams. An extra “movable” absorber, synchronized with the regenerator position, provides an additional attenuation factor of 3.8 (2.3) for neutrons (kaons) in the regenerator beam. Each neutral kaon beam is defined by two collimators: a 1.5 m long primary collimator ($z = 20$ m), and a 3 m long “defining” collimator ($z = 85$-$88$ m) that defines the size and solid angle of each beam. Each collimator has two square holes, which are tapered to reduce scattering. A “crossover” absorber at $z=40$ m prevents kaons that scatter in the upstream absorbers from crossing over into the other beam. At the defining collimator, the two beams have the same size ($4.4 \times 4.4$ cm$^2$) and solid angle ($0.24~\mu$str). The beam centers are separated by 14.2 cm, and the horizontal angle between the two beams is 1.6 mrad. The two beams pass through an evacuated volume, held at $10^{-6}$ Torr, which extends from 28 m to 159 m from the target. This evacuated region includes the 110 m to 158 m range used in the analysis. The downstream end of the evacuated volume is sealed with a 0.14% radiation length ($X_0$) [vacuum-window]{} made of kevlar and mylar. Most of the $K_S$ component decays near the BeO target. Downstream of the defining collimator, the small “[[target-$K_S$]{}]{}” component that remains in the vacuum beam increases the  decay rate by 0.4% compared with a pure $K_L$ beam of equal intensity. This contribution from [target-$K_S$]{} is essentially zero for kaon momenta below $100~{ {\rm GeV}/c }$ and rises to 15% at $160~{ {\rm GeV}/c }$. 
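The movable-absorber attenuation factors above can be cross-checked against the beam composition quoted below; this is a quick arithmetic consistency check, not part of the analysis:

```python
# Vacuum-beam values quoted at 90 m from the target.
n_over_k_vacuum = 1.3
kaon_flux_vacuum_mhz = 2.0

# Movable absorber in the regenerator beam: neutrons x3.8, kaons x2.3.
atten_neutron, atten_kaon = 3.8, 2.3

n_over_k_regen = n_over_k_vacuum * (atten_kaon / atten_neutron)
kaon_flux_regen_mhz = kaon_flux_vacuum_mhz / atten_kaon

print(round(n_over_k_regen, 1))       # ~0.8, as quoted upstream of the regenerator
print(round(kaon_flux_regen_mhz, 1))  # ~0.9 MHz, as quoted
```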
In the regenerator beam, most of the  decays are from the regenerated $K_S$ component. Decays from the $K_L$ component and from  interference account for about 20% of the  decay rate, ranging from 5% near the regenerator to 90% at the [vacuum-window]{}. The composition of the neutral hadron beams is as follows. At 90 m from the BeO target, the vacuum beam has a 2.0 MHz flux of kaons with $n/K = 1.3$, and an average kaon momentum of about $70~{ {\rm GeV}/c }$. Upstream of the regenerator, the kaon flux is 0.9 MHz with $n/K = 0.8$; downstream of the regenerator the flux of unscattered kaons is 0.15 MHz, or $13$ times smaller than in the vacuum beam. The flux of hyperons at 90 m from the target is about $1$ kHz in the vacuum beam. In addition to hadrons, there are muons that come from the BeO target, the proton beam dump, and kaon decays; the total muon flux hitting the  detector is about 200 kHz. The Detector {#sec:det} ------------ Kaon decays downstream of the defining collimator ($z=85$ m, Fig. \[fig:beamline\]) are reconstructed by the  detector (Fig. \[fig:detector\]), which includes a magnetic spectrometer, CsI calorimeter, and veto system. Downstream of the [vacuum-window]{}, the space between detector components is filled with helium to reduce interactions from the neutral beam, and to reduce multiple scattering and photon conversions of decay products. The total amount of material upstream of the CsI calorimeter corresponds to $4\%$ of a radiation length; about 60% of the material is in the trigger hodoscope just upstream of the CsI calorimeter, and 10% of the material is upstream of the first drift chamber. Each of the two neutral beams passes through holes in the Mask Anti veto, trigger hodoscope, and CsI calorimeter. The beams finally strike a beam-hole photon veto 5 meters downstream of the CsI. The following sections describe the detector components in more detail. 
### Spectrometer {#sec:det_spec} The  spectrometer includes four drift chambers (DCs) that measure charged particle positions. The two downstream chambers are separated from the upstream chambers by a $3\times 2~{\rm m}^2$ aperture dipole magnet. The magnet produces a field that is uniform to better than 1% and imparts a $0.41~{ {\rm GeV}/c }$ kick in the horizontal plane. During data taking, the magnet polarity was reversed every 1-2 days. The DC planes have a hexagonal cell geometry formed by six field-shaping wires surrounding each sense wire (Fig. \[fig:dccell\]). The cells are $6.35$ mm wide, and the drift velocity is about $50 \mu$m/ns in the 50-50 argon-ethane gas mixture. The maximum drift time across a cell is 150 ns, and defines the width of the “in-time” window. A chamber consists of two planes of horizontal wires to measure $y$ hit positions, and two planes of vertical wires to measure $x$ hit positions; the two $x$-planes, as well as the two $y$-planes, are offset by one half-cell to resolve the left-right ambiguity. The $x$ and $y$ hits cannot be associated to each other using only drift chamber information, but can be matched using CsI clusters as explained in Section \[sec:chrg\_evtsel\]. There are a total of 16 planes and $1972$ sense wires in the four DCs. The transverse chamber size increases with distance from the target. The smallest chamber (DC1) is $1.26\times 1.26~{\rm m}^2$ and has 101 sense wires per plane; the largest chamber (DC4) is $1.77\times 1.77~{\rm m}^2$ and has 140 sense wires per plane. Lecroy 3373 multi-hit Time-to-Digital Converters (TDCs) are used to measure the drift times relative to the RF-synchronized Level 1 trigger. The TDC resolution is 0.25 ns, which contributes 13 $\mu$m to the position resolution. The total TDC time window is $2.5$ times longer than the in-time window, and is centered on the in-time window. The track reconstruction software uses only the earliest in-time hit on each wire. 
Hits prior to the in-time window are recorded to study their influence on in-time hits. Each measured drift time, $t$, is converted into a drift distance, $x$, with a non-linear $x(t)$ map. The maps are measured separately for each of the sixteen planes using the uniform hit-illumination across each cell. The $x(t)$ calibrations are performed in roughly 1-day time periods. A charged-particle track produces a hit in each of the two $x$ and $y$ planes. The two $x$ hits (or two $y$ hits) in each plane are referred to as a “hit-pair.” For a track that is perpendicular to a drift chamber with perfect resolution, the sum of drift distances (SOD) from each hit-pair would equal the cell width of 6.35 mm. The measured SOD distribution is shown in Fig. \[fig:sod\]. For inclined tracks, an angular correction is applied to the SOD. The track-finding software requires the SOD to be within $1$ mm of the cell width. The mean SOD is stable to within $10~\mu$m during the run. The single-hit position resolution is typically $110~\mu$m, corresponding to a SOD resolution of $150~\mu$m and a hit-pair resolution of $80~\mu$m. Using the tracking algorithm described in Section \[sec:chrg\_evtsel\], the momentum resolution is $\sigma_p/p \simeq [1.7 \oplus (p/14) ] \times 10^{-3}$, where $p$ is the track momentum in ${ {\rm GeV}/c }$. The average inefficiency for reconstructing a hit-pair is 3.7%. Delta rays and accidental hits contribute to the low-side tail of the SOD distribution (Fig. \[fig:sod\]), and result in a hit-pair loss of 0.5% and 0.7%, respectively. The remaining 2.5% loss is from missing hits and from SOD values more than 1 mm larger than the cell size (high-side tail in Fig. \[fig:sod\]). More details on sources of hit-pair inefficiency are given in the description of the Monte Carlo simulation (Sec. \[sec:mc\_dc\]). 
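Two of the spectrometer figures quoted above can be reproduced numerically; $\oplus$ is read as addition in quadrature, and the TDC estimate assumes the $x(t)$ map is roughly linear at the quoted average drift velocity:

```python
import math

# (1) TDC contribution to position resolution: the 0.25 ns time
# resolution maps onto position through the ~50 um/ns drift velocity.
tdc_contrib_um = 0.25 * 50.0
print(tdc_contrib_um)          # 12.5 um, i.e. the ~13 um quoted above

# (2) Momentum resolution with the quoted parameterization.
def sigma_p_over_p(p_gev):
    """sigma_p/p = [1.7 (+) p/14] x 1e-3, quadrature sum, p in GeV/c."""
    return math.hypot(1.7, p_gev / 14.0) * 1e-3

# The constant (multiple-scattering) term dominates at low momentum;
# the p/14 term takes over above ~24 GeV/c.
print(sigma_p_over_p(70.0))
```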
To reconstruct the trajectories of charged particles accurately, the alignment of the drift chambers relative to each other, to the target, and to the CsI calorimeter must be known. This alignment is determined in roughly 1-day periods using data samples described below. The transverse alignment of the drift chambers is based on muons from dedicated runs with the analysis magnet turned off. The muon intensity is raised to $\sim~1$ MHz by reducing the field in the upstream sweeping magnets, and the neutral hadron beam is absorbed by a 2 m long steel block placed in the beam at $z=90$ m. These muon runs were performed every 1-2 days when the magnet polarity was reversed. Software calibration results in a transverse alignment of each $x$ and $y$ plane to within $10~\mu$m, and relative rotations known to within $20~\mu$rad. The transverse target position is determined with a precision of $35~\mu$m using ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ decays in the vacuum beam and projecting the reconstructed kaon trajectory back to the target. The CsI offset relative to the drift chambers is measured using electrons from ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays. ### The CsI Calorimeter {#sec:det_csi} The KTeV electromagnetic calorimeter consists of 3100 pure CsI crystals as shown in Fig. \[fig:csilayout\]. There are 2232 $2.5 \times 2.5$ cm$^2$ crystals in the central region, each viewed by a 1.9 cm Hamamatsu R5364 photomultiplier tube (PMT). There are also 868 $5 \times 5$ cm$^2$ crystals, each viewed by a 3.8 cm Hamamatsu R5330 PMT. The transverse size of the calorimeter is $1.9 \times 1.9$ m$^2$, and the length of each crystal is 50 cm (27 $X_0$). Two $15\times 15$ cm$^2$ carbon fiber beam pipes allow the few MHz of beam particles to pass through the calorimeter without striking any material. The crystals were individually wrapped and tested with a $^{137}$Cs source to ensure that the response over the length of each crystal is uniform to within $\sim 5$% [@csical]. 
This longitudinal uniformity requirement is necessary to obtain sub-percent energy resolution for electrons and photons. The CsI scintillation light has two components: (i) a fast component with decay times of 10 ns and 36 ns, and maximum light output at a wavelength of 315 nm; (ii) a slow component with a decay time of $\sim 1\mu$s and maximum light output at $480$ nm. To reduce accidental pile-up effects from the slow component, a Schott UV filter is placed between the crystal and the PMT. The filter reduces the total light output by $\sim 20$%, but increases the fast component fraction from about 80% to 90%. The average light yield with the filter gives 20 photo-electrons per MeV of energy deposit. The CsI signal components discussed above include the effect of the PMT spectral response. Digitizing electronics are placed directly behind the PMTs to minimize electronic noise ($<1$ MeV). Each PMT is equipped with a custom 8-range digitizer to integrate the charge delivered by the PMT. This device, which is referred to as a “digital PMT” (DPMT) [@dpmt], has 16 bits of dynamic range with 8-bit resolution, and allows the measurement of energies from a few MeV to 100 GeV. In 1997, the digitization and readout operated at the Tevatron RF frequency of 53 MHz, and the PMT signal integration time was 114 ns (6 RF “buckets”), which permitted collection of approximately $96\%$ of the fast scintillation component. In 1996, the readout frequency was 18 MHz (RF/3) and the integration time was a factor of two longer than in 1997. To convert measurements of integrated charge to energy, a laser system is first used to calibrate the response from each DPMT. Then momentum-analyzed electrons from ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays are used to calibrate the energy scale of each channel. The laser system consists of a pulsed Nd:YAG laser, a fluorescent dye, and a variable filter to span the full dynamic range of the readout system. 
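The $\sim$96% fast-component collection quoted for the 114 ns gate can be checked with a crude model; treating the fast component as a single exponential with the slower 36 ns time constant is our simplification (the 10 ns component is collected essentially in full within the gate either way):

```python
import math

gate_ns = 114.0    # 6 RF buckets x 19 ns (1997 readout configuration)
tau_ns = 36.0      # slower of the two fast-component decay times

# Fraction of a single-exponential pulse integrated within the gate.
collected = 1.0 - math.exp(-gate_ns / tau_ns)
print(f"{100 * collected:.0f}%")   # ~96%, consistent with the text
```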
The laser light ($360$ nm) is distributed via 3100 quartz fibers to each individual crystal; the light level for each quadrant of the calorimeter is monitored with a PIN diode read out by a 20-bit ADC. Throughout the data taking, special hour-long laser scans to calibrate the readout system were performed roughly once per week during periods when beam was not available. Using these laser scan calibrations, deviations from a linear fit of the combined DPMT plus PMT response versus light level are less than 0.1% (rms) for each channel. During nominal data-taking, the laser operated at a fixed intensity with $1$ Hz repetition rate to correct for short-term gain drifts that were typically $< 0.2$% per day. To determine the energy scale in the calorimeter, we collected 600 million ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays during the experiment; this number of events allows us to determine the energy scale for each channel with $\sim 0.03$% precision every 1-2 days. The electron energy is determined by summing energies from a $3 \times 3$ “cluster” of large crystals or a $7 \times 7$ cluster of small crystals centered on the crystal with maximum energy. The cluster energy is corrected for shower leakage outside the cluster region, leakage at the beam-holes and calorimeter edges, and for channels with energies below the $\sim 4$ MeV readout threshold. Figure \[fig:linres\]a shows the $E/p$ distribution for electrons from  decays, where $E/p$ is the ratio of cluster energy measured in the CsI calorimeter to momentum measured in the spectrometer. To avoid pion shower leakage into the electron shower, the $\pi^{\pm}$ is required to be at least 50 cm away from the $e^{\mp}$ at the CsI. The $E/p$ resolution has comparable contributions from both $E$ and $p$. The CsI energy resolution is obtained by subtracting the momentum resolution from the $E/p$ resolution, and is shown as a function of momentum in Fig. \[fig:linres\]b. 
The energy resolution can be parameterized as $\sigma_E/E \simeq 2\%/\sqrt{E} \oplus 0.4\%$, where $E$ is in GeV; the resolution is 1.3% at 3 GeV, the minimum cluster energy used in the  analysis, and is better than 0.6% for energies above 20 GeV. The momentum dependence of the mean $E/p$ (Fig. \[fig:linres\]c) shows that the average energy nonlinearity is 0.5% between 3 and 75 GeV. This energy nonlinearity is measured for each CsI channel and used as a correction. Electromagnetic cluster positions are determined from the fraction of energy in neighboring columns and rows. The conversion from energy fraction to position is done using a map based on the uniform photon illumination across each crystal for  data. The average position resolution for electrons is 1.2 mm for clusters in small crystals, and 2.4 mm for large crystals. ### The Regenerator {#sec:regenerator} KTeV uses an active regenerator as a source of $K_S$ decays. It consists mainly of 84   $10\times10\times 2~{\rm cm}^3$ scintillator modules (Fig. \[fig:regdiagram\]a). Each module is viewed by two photomultiplier tubes (PMTs), one from above and one from below. The downstream end of the regenerator (Fig. \[fig:regdiagram\]b) is a lead-scintillator sandwich, which is also viewed by two PMTs. This last module of the regenerator is used to define a sharp upstream edge for the kaon decay region in the regenerator beam. At the average kaon momentum of $70~{ {\rm GeV}/c }$, the magnitude of the regeneration amplitude is ${\vert\rho\vert}\sim 0.03$. The isoscalar carbon in the regenerator accounts for about 95% of the regeneration amplitude, which simplifies the model used to describe $\rho$ when extracting physics parameters (Sec. \[sec:fitting\]). One can distinguish three main processes that contribute to $K_S$ regeneration. 
These are (i) “coherent” regeneration, which occurs in the forward direction, (ii) “diffractive” regeneration, in which target nuclei do not disintegrate but kaons scatter at finite angle, and (iii) “inelastic” regeneration, characterized by nuclear break-up and often by production of secondary particles. Only the decays of coherently regenerated kaons are used in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ analysis. The other processes are treated as background. The 170 cm length of the regenerator corresponds to about two hadronic interaction lengths. This length maximizes coherent regeneration and suppresses diffractive regeneration [@pr:good]. The diffractive-to-coherent ratio is 0.09 for  decays downstream of the regenerator, and is reduced by kinematic cuts in the analysis. The corresponding inelastic-to-coherent ratio is about $100$. Since inelastic regenerator interactions typically leave energy deposits of a few MeV to 100 MeV from the recoil nuclear fragments, this source of background is reduced using the regenerator PMT signals; events with more than 8 MeV in any scintillator element are rejected. Inelastic interactions with production of secondary particles are suppressed further by other elements of the veto system. After all veto requirements, the level of inelastic scattering is reduced by a factor of a few thousand, making its contribution smaller than that of diffractive scattering. The downstream edge of the regenerator defines the beginning of the regenerator-beam decay region for both the  and modes. A small fraction of decays inside the regenerator enters the signal sample because photons can pass through scintillator and lead, and because charged pions can exit the last regenerator module without depositing enough energy to be vetoed. In the fit to extract ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ (Sec. 
\[sec:fitting\]), the median of the distribution of  decays inside the regenerator is used to define a perfectly sharp “effective edge,” ${\mbox{$z_{\rm eff}$}}$. For the neutral decay mode, ${\mbox{$z_{\rm eff}$}}$ is calculated using the known geometry and regeneration properties, and is $(-6.2\pm 0.1)$ mm from the downstream end of the regenerator; it is shown by the arrow above the lead in Fig. \[fig:regdiagram\]b. For the charged decay mode, ${\mbox{$z_{\rm eff}$}}$ is determined by the veto threshold in the last regenerator module. The threshold is measured using muons collected with a separate trigger, and results in a  edge that is $(-1.65 \pm 0.45)$ mm from the downstream edge of the regenerator; it is shown by the arrow above the scintillator in Fig. \[fig:regdiagram\]b. The uncertainty comes from the geometry of the PMT-scintillator assembly and the threshold measurement from muons. ### The Veto System {#sec:det_veto} The  detector uses veto elements to reduce trigger rates, to reduce backgrounds, and to define sharp apertures and edges that limit the detector acceptance. The regenerator veto was discussed in the previous section. Nine lead-scintillator (16 $X_0$) photon veto counters are positioned along the beam-line, with five located upstream of the [[vacuum-window]{}]{} (Fig. \[fig:RCMA\]a) and four located downstream of the [vacuum-window]{}. These nine veto counters detect escaping particles that would miss any of the drift chambers or the CsI calorimeter. Another $10~X_0$ photon veto is placed behind the CsI to detect photons that go through the beam-holes; this “beam-hole veto” mainly suppresses background from ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$ decays. A scintillator bank behind 4 m of steel ($z=192$ m in Fig. \[fig:detector\]) is used to veto muons, primarily from ${\mbox{$K_{L}\to\pi^{\pm}{\mu}^{\mp}\nu$}}$ decays. 
The upstream distribution of reconstructed kaon decays is determined mainly by the “Mask Anti” (MA, Fig. \[fig:RCMA\]b), which is a 16 $X_0$ lead-scintillator sandwich located at $z = 123$ m. The MA has two $9 \times 9$ [cm]{}$^2$ holes through which the neutral beams pass. At the downstream end of the detector, the CsI crystals around the beam-holes are partially covered by an 8.7 $X_0$ tungsten-scintillator “Collar Anti” (CA, Fig. \[fig:CA\]). In addition to defining a sharp edge, the CA veto rejects events in which more than 10% of a photon’s energy is lost in a beam hole. ### Trigger and Data Acquisition {#sec:trigger}  uses a three-level trigger system to reduce the total rate of stored events to approximately 2 kHz, while efficiently collecting  decays. The Level 1 trigger (L1) has no deadtime and makes a decision every 19 ns (corresponding to the beam RF-structure) using fast signals from the detector. The Level 2 trigger (L2) is based on more sophisticated processors and introduces a deadtime of 2-$3~\mu$s. When an event passes the Level 2 trigger, the entire detector is read out with an average deadtime of $15~\mu$s. Each event is then sent to one of twenty-four 200-MHz SGI processors running a Level 3 (L3) software filter. An event passing Level 3 selection is written to a Digital Linear Tape for permanent storage. An independent set of ten 150-MHz processors is used for online monitoring and calibration. For rate reduction, the most important trigger element is the regenerator veto, which uses the signal from the downstream lead-scintillator sandwich plus signals from 3 of the 84 scintillator modules. This veto is applied in Level 1 triggers to reject events from the 2 MHz of hadrons that interact in the regenerator. After applying the regenerator veto, there is still a 100 kHz rate of kaon decays and another 100 kHz rate of hadron interactions in the [vacuum-window]{} and drift chambers. 
Additional trigger requirements are used to reduce this 200 kHz rate by about a factor of 100 to match the bandwidth of the data acquisition system. Separate triggers are defined for the signal ${\mbox{$\pi^{+}\pi^{-}$}}$ and ${\mbox{$\pi^{0}\pi^{0}$}}$ modes. Each trigger is identical for the two beams and for both regenerator positions. The charged, neutral, and special purpose triggers are described below. Each section includes a brief summary of the trigger inefficiency, which is defined as the fraction of events that pass all analysis cuts, but fail the trigger. The inefficiencies are measured using decays collected in separate minimum-bias triggers. The Level 1 trigger for charged decays requires hits in the two drift chambers upstream of the magnet, and requires hits in the “trigger hodoscope” located 2 m upstream of the CsI. Each drift chamber is required to have at least one hit in both the $x$ and $y$ views. The trigger hodoscope consists of two 5 mm thick scintillation planes, each with 31 individually wrapped counters. There are small gaps between the counters, representing 1.1% of the area of each scintillation plane. The hodoscope counters are arranged to minimize the impact of these gaps; a particle traversing a gap in one plane cannot pass through a gap in the other plane. Each plane has two $14\times 14~{\rm cm}^2$ holes to allow the neutral beams to pass through without interacting. The trigger requires 1 or more hits in both planes, and 2 or more hits in at least one plane; this requirement allows for one of the charged particles to pass through a gap between the counters. The two hodoscope hits must also include both the upper and lower “regions,” as well as the left and right regions. The defined hodoscope regions have sufficient overlap to prevent losses for ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ events, except for decays in which a pion passes through a scintillator gap in the central region defined by $|y| < 7~$cm. 
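The hodoscope counting rules above can be sketched as a boolean function; the `(vertical, horizontal)` region encoding here is our own illustration, and only the counting logic follows the text:

```python
def charged_l1_hodoscope(plane1_hits, plane2_hits):
    """Sketch of the Level 1 trigger-hodoscope requirement described above.

    Hits are (vertical, horizontal) region tags, e.g. ("upper", "left").
    Rules from the text: at least one hit in each plane, at least two
    hits in one plane (so a pion may cross a counter gap in the other
    plane), and hits covering both upper/lower and left/right regions.
    """
    if not plane1_hits or not plane2_hits:
        return False
    if len(plane1_hits) < 2 and len(plane2_hits) < 2:
        return False
    hits = list(plane1_hits) + list(plane2_hits)
    vertical = {v for v, _ in hits}
    horizontal = {h for _, h in hits}
    return {"upper", "lower"} <= vertical and {"left", "right"} <= horizontal

# One pion crosses a gap in plane 1 but still fires plane 2: accepted.
print(charged_l1_hodoscope([("upper", "left")],
                           [("upper", "left"), ("lower", "right")]))
```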
To reduce the trigger rate from non-${\mbox{$\pi^{+}\pi^{-}$}}$ kaon decays, signals from the veto system are used to reject events at the first trigger level. The muon veto is used to suppress ${\mbox{$K_{L}\to\pi^{\pm}{\mu}^{\mp}\nu$}}$ decays, and the photon vetos downstream of the [vacuum-window]{} are used to reject decays with a photon in the final state. The Level 2 trigger requires that the drift chamber hits in the $y$-view be consistent with two tracks from a common vertex; to reduce signal loss from inefficient wires, a missing hit is allowed. The L3 filter selects  candidates by reconstructing two charged tracks in the spectrometer; to reject ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ and ${\mbox{$K_{L}\rightarrow \pi^{+}\pi^{-}\pi^{0}$}}$ decays, the ${\mbox{$\pi^{+}\pi^{-}$}}$ mass is required to be greater than $450~{\rm MeV}/c^2$ and $E/p$ is required to be less than $0.9$ for both tracks. For CsI calibration and detector acceptance studies, $1/7$ of the  candidates are kept by requiring $E/p > 0.9$ for one of the tracks. The total charged trigger inefficiency is 0.5%, and is mainly from the 0.3% loss due to gaps between the scintillator counters. The drift chamber requirements result in a 0.1% inefficiency. Accidental effects result in a 0.06% loss, and there is a 0.04% loss from the trigger hardware. The Level 3 inefficiency is 0.09%. The Level 1 trigger for neutral decays is based on the total energy deposited in the CsI calorimeter. Using a 3100 channel analog sum, the threshold is 24 GeV and the 10-90% turn-on width is 7 GeV. The photon vetos downstream of the [vacuum-window]{} are used in the Level 1 trigger to reduce the rate from  decays. The Level 2 trigger uses a Hardware Cluster Counter [@nim:hcc] to count clusters of energy above 1 GeV in the CsI. Four clusters are required for  decays, and a separate six-cluster trigger is pre-scaled by five to collect  decays. 
The Level 3 filter requires that the invariant mass be greater than $450~{\rm MeV}/c^2$ for both the ${\mbox{$\pi^{0}\pi^{0}$}}$ and ${\mbox{$\pi^{0}\pi^{0}\pi^{0}$}}$ final states. The neutral energy-sum trigger inefficiency has two components: a 0.6% inefficiency from early accidental effects and an inefficiency of $4\times 10^{-5}$ from the trigger hardware. The Level 2 trigger inefficiency of 0.4% comes from the Hardware Cluster Counter. The Level 3 inefficiency is 0.01%. In addition to the triggers used to select specific decay modes, several special-purpose triggers are used for monitoring. These include (i) a “pedestal” trigger, which reads out the entire CsI calorimeter a few times per 20 s spill, (ii) a laser trigger to monitor the CsI (Sec. \[sec:det\_csi\]), (iii) a muon trigger that requires a muon hodoscope signal instead of using it in veto, (iv) a charged mode trigger that does not use the regenerator veto, (v) a trigger using only the Level 1 CsI analog sum with no Level 2 requirement, and (vi) an “accidental” trigger to record random activity in the detector that is proportional to the instantaneous intensity of the proton beam; triggers iii-vi are heavily pre-scaled. For the accidental trigger, we use a telescope consisting of three scintillation counters, each viewed by a photomultiplier tube. It is located 1.8 m from the BeO target and is oriented at an angle of $90{^{\circ}}$ with respect to the beam axis. The target is viewed by the counters through a $6.4 \times 6.4$ [mm]{}$^2$ hole in the stainless steel shielding around the target. A coincidence of signals in all three counters generates an accidental trigger. Under nominal conditions, the total rates passing L1, L2, and L3 are 40, 10, and 2 kHz, respectively; this L3 rate corresponds to approximately $40,000$ events written to tape each minute. The deadtime, which is common to all triggers, is about $33\%$ with roughly equal contributions from Level 2 and readout. 
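The quoted rates are mutually consistent, as a short arithmetic check shows (one 20 s spill per minute, as described in the beam section):

```python
l3_rate_hz = 2000.0     # events passing Level 3 during the spill
spill_s = 20.0          # live beam per one-minute cycle

events_per_minute = l3_rate_hz * spill_s
print(int(events_per_minute))   # 40000, matching the quoted tape-writing rate

# Overall rejection from the ~200 kHz surviving the regenerator veto
# down to the Level 3 output rate:
print(200e3 / l3_rate_hz)       # factor of ~100, as quoted earlier
```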
Data Collection {#sec:datarun} --------------- The data used in this analysis were collected in two distinct “E832” periods: October-December in 1996 and April-July in 1997. In these periods, there were about 200 billion kaon decays between the defining collimator and the CsI calorimeter, of which 5 billion events were written to 3000 15-Gb tapes. About 5% of the events are used to select the ${\mbox{$K\rightarrow\pi\pi$}}$ sample, and the remaining 95% of the events are used to understand the detector. The neutral mode data from 1996 and 1997 are used. For the charged mode, the 1996 sample is not used because the Level 3 rejection of delayed hits in the drift chambers led to a 20% signal loss that is difficult to simulate. In 1997, the Level 3 tracking was modified to avoid this loss. Excluding the 1996 charged mode data has a negligible effect on the overall statistical and systematic uncertainty in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. To ensure that the final data sample is of high quality, periods in which there are known detector problems are excluded. The CsI calorimeter readout suffered DPMT failures at a rate of about one per day, and was the most significant detector problem. Each DPMT failure was identified immediately by the online monitoring, and then repaired. Calibrations were frequent enough so that every repaired DPMT can be calibrated offline. The final data sample does not include periods in which there were dead CsI channels or dead cells in the drift chambers. Approximately 12% of the data on tape are rejected because of detector problems. Monte Carlo Simulation {#sec:mc} ---------------------- The Monte Carlo (MC) simulation consists of three main steps. The first step is kaon generation at the BeO target and propagation along the beamline to the decay point. The second step is kaon decay into an appropriate final state, and tracing of the decay products through the detector. 
The last step is to simulate the detector response including digitization of the detector signals. The simulated event format and analysis are the same as for the data. The detector geometry used in the simulation comes from survey measurements and data. The survey measurements are used for the transverse dimensions and ${\mbox{$z$}}$ locations of beamline and detector elements. The transverse offsets and rotations of the drift chambers, relative to the CsI calorimeter, are determined using various data samples as discussed in Section \[sec:det\_spec\]. The Mask-Anti and Collar-Anti (Sec. \[sec:det\_veto\]) aperture sizes and locations are determined using electrons from ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays. As a result of the high flux of kaons and neutrons in the  apparatus, there can be underlying accidental activity in the detector that is unrelated to the kaon decay. After applying veto cuts, the average accidental energy under each CsI cluster is a few MeV, and there are roughly 20 extra in-time drift chamber hits. To simulate these effects, we use data events from the accidental trigger (Sec. \[sec:trigger\]) to add the underlying accidental activity to each generated MC event. In the CsI calorimeter and veto system, activity from an accidental event is added to the MC energy deposits in a straightforward manner. The procedure for including accidental activity in the drift chamber simulation, however, is more complicated because an empirical model is needed to describe how an accidental hit can obscure a signal hit that arrives later on the same wire. ### Kaon Propagation and Decay {#sec:mc_kaonprop} The kaon energy spectrum and the relative flux of ${\mbox{$K^{0}$}}$ and ${\mbox{$\overline{K^{0}}$}}$ states produced at the target are based on a parameterization [@malensek] that is tuned to match  ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ data. 
The $z$ position of each kaon decay is chosen based on the calculated $z$ distribution for the initial  or  state, and accounts for interference between  and . The simulation propagates the  and  amplitudes along the beamline, accounting for regeneration and scattering in the absorbers and the regenerator. Some small-angle kaon scatters in the absorbers ($z\sim 19$ m, Fig. \[fig:beamline\]) can pass through the collimator system and satisfy the  analysis requirements. The upstream collimators are modeled as perfectly absorbing, while scattering in the defining collimators and the regenerator are treated using models that are tuned to data (Sections \[sec:bkg\_coscat\]-\[sec:bkg\_regscat\]). For ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ decays, radiative corrections due to inner [Bremsstrahlung]{} are included [@prl:belz]. We do not simulate the direct emission part of the radiative spectrum since the invariant mass cut used in the analysis essentially eliminates this component. For the  mode, only the four photon final state is considered. GEANT [@geant] is used to parameterize scattering of final state charged particles in the [vacuum-window]{}, helium bags, drift chambers, trigger hodoscope, and the steel. Electrons can undergo [Bremsstrahlung]{}, photons can convert to $e^{+}e^{-}$ pairs, and charged pions can decay into a muon and a neutrino; these secondary particles are traced through the detector. ### Simulation of the Drift Chambers {#sec:mc_dc} The Monte Carlo traces each charged particle through the drift chambers, and the hit position at each drift chamber plane is converted into a TDC value. The position resolutions measured in data are used to smear the hit positions, and the inverse of the $x(t)$ map is used to convert the smeared hit position into a drift time. The simulation includes four effects that cause drift chamber signals to be corrupted or lost. 1. “Wire inefficiency” results in no in-time TDC hit. 
The inefficiency is measured in 1 cm steps along each wire of each chamber. The average single-hit inefficiency is less than 1%. Since the inefficiency increases with distance from the wire, the measured inefficiency profile within the cell is used in the simulation. 2. A “delayed hit” results in a hit-pair with a sum-of-distance (SOD) that is more than 1 mm greater than the nominal cell size, and therefore does not satisfy the hit-pair requirement (Sec. \[sec:det\_spec\]). The delayed hit probability is a few percent in the regions where the neutral beams pass through the drift chambers, and about 1% over the rest of the chamber area. The effect is modeled by distributing primary drift electrons along the track using a Poisson distribution with an average interval of $340~\mu m$, and then generating a composite signal at the sense wire. A delayed hit occurs when the signal from the nearest ionization cluster is below threshold, but the composite pulse from all ionization clusters is above threshold. The MC threshold is determined empirically by matching the MC delayed hit probability to data; the delayed hit probability in data is measured in 1 cm steps along each wire. 3. When an “in-time accidental” hit arrives before a signal hit on the same wire, the accidental hit is used instead of the signal hit because the tracking program considers only the first in-time hit. For roughly 0.7% of the hit-pairs, this effect causes a “low-SOD” that is more than 1 mm below the nominal cell size, and therefore does not pass the hit-pair requirement. An “early accidental” hit prior to the in-time window can also obscure a signal hit on the same wire for two reasons. First, the discriminator has a deadtime of $42$ ns during which the wire is 100% inefficient. Second, large analog pulses can stay above the discriminator threshold longer than 42 ns; the variation of the pulse-length is modeled and tuned to data. 4. Delta rays also cause a low-SOD for 0.5% of the hit-pairs. 
In the Monte Carlo, delta rays are generated in the same cell as the track, and the rate is tuned to match the low-SOD distribution in data. The quality of the drift chamber simulation is illustrated by the SOD distribution in Fig. \[fig:sod\]. Both the low and high-side tails in the data are well simulated. ### Simulation of the CsI Calorimeter {#sec:mc_csi} The Monte Carlo simulation is used to predict the energy deposit in each crystal when kaon decay products hit the calorimeter. In particular, the MC is needed to model energy leakage in the beam-holes and at the outer edges of the calorimeter, and to model nearby showers that share energy. A library of GEANT-based [@geant] electron and photon showers is used to simulate electromagnetic showers in the calorimeter. In the shower generation, each electron or photon is incident on the central crystal of a $13\times 13$ array of small crystals ($32.5\times 32.5~{\rm cm}^2$). The showers are generated in 6 energy bins from 2 to 64 GeV, and in $x,y$ position bins. The position bin spacing varies from 7 mm at the crystal center to 2 mm at the edge; this binning matches the variation in reconstructed position resolution, which is better for particles incident near the edge of a crystal. Outside the $13\times 13$ array, a GEANT-based parameterization is used instead of a library of individual showers; this parameterization models energy deposits in a $27\times 27$ array. Energy leakage across the beam-holes is modeled based on electron data from ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays. To simulate the DPMT response, the energy deposit in each crystal is distributed among six consecutive RF buckets according to the measured time profile of the scintillation light output. In each 19 ns wide RF bucket, the energy is smeared to account for photo-statistics, and random activity from an accidental trigger is added. Each channel is digitized using the inverse of the calibrations obtained from data. 
A channel is processed if the digitized signal exceeds the $4$ MeV readout threshold that was applied during data-taking. In addition to simulating electromagnetic showers, we also simulate the calorimeter response to charged pions and muons. The energy deposits from charged pions are based on a library of GEANT-based showers using a $50\times 50$ array of small crystals. A continuous energy distribution is generated in $x,y$ position bins with 4 mm separation. For muons, the average CsI energy deposit is 320 MeV, and is simulated using the Bethe-Bloch energy loss formula. ### Simulation of the Trigger {#sec:mc_trg} The  Monte Carlo includes a simulation of the Level 1 and Level 2 triggers, and the Level 3 software filter. For the ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ trigger, the most important effect to simulate is the $0.3$% inefficiency due to scintillator gaps in the hodoscope just upstream of the CsI calorimeter. The gap sizes and positions are measured in data using the ${\mbox{$K_{e3}$}}$ sample. The simulation also includes the drift chamber signals at both Level 1 and Level 2. For the ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$ trigger, we use ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays to determine the calorimeter energy-sum threshold and turn-on width, and to measure the Hardware Cluster Counter threshold for each CsI channel. Data Analysis {#sec:ana} ============= The analysis is designed to identify  decays while removing poorly reconstructed events that are difficult to simulate, and to reject background. The following sections describe the analysis and the associated systematic uncertainties in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. When discussing systematic uncertainties, we typically estimate a potential shift $s\pm \sigma_s$, where $s$ is the shift in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ and $\sigma_s$ is the accompanying statistical uncertainty. 
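The shift-to-error conversion that follows (Eq. \[eq:error\]) can be reproduced numerically. A minimal sketch using stdlib bisection on the Gaussian coverage; illustrative only, not the analysis code:

```python
import math

def gauss_coverage(delta, s, sigma):
    # Area of a Gaussian with mean s and width sigma inside [-delta, +delta]
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - s) / (sigma * math.sqrt(2.0))))
    return cdf(delta) - cdf(-delta)

def symmetric_error(s, sigma, coverage=0.683):
    # Smallest delta whose coverage reaches the target, found by bisection
    lo, hi = 0.0, abs(s) + 10.0 * sigma   # bracket the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gauss_coverage(mid, s, sigma) < coverage:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $s=0$ this returns $\dels\approx\sigma_s$, and for $s=3\sigma_s$ it returns $\dels\approx s+\sigma_s/2$, matching the limiting cases quoted with Eq. \[eq:error\].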
We convert the shift to a symmetric systematic error, $\dels$, such that the range $[-\dels,+\dels]$ includes 68.3% of the area of a Gaussian with mean $s$ and width $\sigma_s$: $$\frac{1}{\sigma_s\sqrt{2\pi}} \int_{-\dels}^{+\dels} dx ~ {\rm exp}{\left[-\frac{(x-s)^2}{2\sigma_s^2}\right]} = 0.683 ~. \label{eq:error}$$ Note that $\dels = \sigma_s$ when $s=0$; when $s > \sigma_s$, $\dels \approx s+\sigma_s/2$. Common Features of  and  Analyses {#sec:ana\_common} --------------------------------- Although many details of the charged and neutral decay mode analyses are different, several features are common to reduce systematic uncertainties. For each decay mode, the same cuts are applied to decays in the vacuum and regenerator beams, so that most systematic uncertainties cancel in the single ratios used to measure $|{\mbox{$\eta_{+-}$}}/\rho|^2$ and $|{\mbox{$\eta_{00}$}}/\rho|^2$. Since the regeneration amplitude $\rho$ depends on the kaon momentum, we select an identical $40$-$160~{ {\rm GeV}/c }$ kaon momentum range for both the charged and neutral decay modes. The $40~{ {\rm GeV}/c }$ cut is chosen because of the rapidly falling detector acceptance at lower kaon momenta; the higher momentum cut is a compromise between slightly higher statistics and [target-$K_S$]{} contamination. We also use the same $z$-vertex range of 110-158 m for each decay mode. The 110 m cut is chosen to be well upstream of the Mask Anti and therefore removes very few decays in the vacuum beam; this cut removes no events in the regenerator beam. The downstream $z$-vertex requirement avoids background from beam interactions in the [vacuum-window]{}. To simplify the treatment of background from kaons that scatter in the regenerator, the veto requirements for the charged and neutral mode analyses are as similar as possible. In both analyses, the main reduction in background from regenerator scattering comes from the requirement that there be less than 8 MeV in every regenerator module (Sec. 
\[sec:regenerator\]). Interactions in the regenerator that are close in time to a  decay can add significant activity in the detector, resulting in events that are difficult to reconstruct. To avoid this problem, the trigger signal from the regenerator is used to reject events in a 57 ns wide window (3 RF buckets), which removes events immediately following or just prior to an interaction in the regenerator. In addition to the regenerator veto, we require that there be less than 150 (300) MeV in the photon vetos upstream (downstream) of the [vacuum-window]{}, and less than 300 MeV in the Mask Anti; note that the photon veto thresholds refer to equivalent photon energy.  Reconstruction and Selection {#sec:chrg_evtsel} ----------------------------- The strategy to identify  decays is to reconstruct two well-measured tracks in the spectrometer, and to reduce backgrounds with particle identification and kinematic requirements. The spectrometer reconstruction begins by finding $y$-tracks using all four drift chambers. In the $x$-view, separate segments are found upstream of the magnet using DC1 and DC2, and downstream of the magnet using DC3 and DC4. The extrapolated upstream and downstream $x$-track segments typically match to within 0.5 mm at the center of the magnet. To reduce sensitivity to multiple-scattering and magnetic fringe fields between the drift chambers, only a loose match of 6 mm is required at the magnet. If two $x$-tracks and two $y$-tracks are found, the reconstruction continues by extrapolating both sets of tracks upstream to define an $x$-$z$ and $y$-$z$ vertex. The difference between these two vertices, ${\Delta z_{vtx}}$, is used to define a vertex-$\chi^2$, $${\chi^2_{vtx}}\equiv \left( {{\Delta z_{vtx}}}/{\mbox{$\sigma_{\Delta z}$}}\right)^2 ~, \label{eq:chisqvtx}$$ where ${\mbox{$\sigma_{\Delta z}$}}$ is the resolution of ${\Delta z_{vtx}}$. This resolution depends on momentum and opening angle, and accounts for multiple scattering effects. 
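The vertex-matching step reduces to Eq. \[eq:chisqvtx\] and a cut; a minimal sketch, with the cut value of 100 taken from the text:

```python
def vertex_chi2(dz_vtx, sigma_dz):
    # Eq. (chisqvtx): mismatch between the x-z and y-z vertices,
    # normalized by its momentum- and opening-angle-dependent resolution
    return (dz_vtx / sigma_dz) ** 2

def common_vertex(dz_vtx, sigma_dz, cut=100.0):
    # chi2 < 100 accepts vertices agreeing within 10 sigma
    return vertex_chi2(dz_vtx, sigma_dz) < cut
```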
The two $x$-tracks and two $y$-tracks are assumed to originate from a common vertex if ${\chi^2_{vtx}}< 100$; this ${\chi^2_{vtx}}$ requirement is sufficiently loose to remain insensitive to the tails in the multiple scattering distribution. At this stage, the $x$ and $y$ tracks are independent. To determine the full particle trajectory, the $x$ and $y$ tracks are matched to each other based on their projections to CsI clusters; the track projections to clusters must match within 7 cm. After combining the $x$ and $y$ tracks, the reconstructed $z$-vertex resolution is about 30 cm near the regenerator and 5 cm near the [vacuum-window]{}. Each particle momentum is determined from the track bend-angle in the magnet and a precise B-field map. An ideal event with two charged tracks results in 32 hits in the four chambers. About 40% of the events have at least one missing hit or a hit-pair that does not reconstruct a proper sum-of-distance (SOD) value. The tracking program can reconstruct events with many of these defects along the tracks. The overall single-track reconstruction inefficiency is measured to be 1% using ${\mbox{$K_{L}\rightarrow \pi^{+}\pi^{-}\pi^{0}$}}$ decays. An event is assigned to the regenerator beam if the regenerator $x$-position has the same sign as the $x$-coordinate of the kaon trajectory at the downstream face of the regenerator; the event is assigned to the vacuum beam if the signs are different. For ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays, this beam assignment definition cannot be used because of the missing neutrino; instead we compare the $x$-coordinate of the decay vertex to that of the regenerator. The reconstruction described above is used for kaon decays with two charged tracks in the final state. The following selection criteria are specific to the  channel, and are designed mainly to reduce background from semileptonic decays and kaon scattering. 
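One kinematic quantity used in this selection is the ${\mbox{$\pi^{+}\pi^{-}$}}$ invariant mass, formed from the two track momenta under the charged-pion hypothesis. A minimal sketch; the pion mass is an assumed PDG value:

```python
import math

M_PI = 0.13957  # GeV/c^2; assumed PDG charged-pion mass

def pipi_mass(p1, p2):
    # p1, p2: track 3-momenta (px, py, pz) in GeV/c, each assigned
    # the charged-pion mass hypothesis
    def energy(p):
        return math.sqrt(M_PI ** 2 + sum(c * c for c in p))
    psum = [a + b for a, b in zip(p1, p2)]
    return math.sqrt((energy(p1) + energy(p2)) ** 2 - sum(c * c for c in psum))
```

Two back-to-back pions with the $\sim 206~{\rm MeV}/c$ momentum of a kaon decay at rest reconstruct to the kaon mass, inside the selection window quoted below.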
Requiring $E/p < \EopCut$ for each track reduces the ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ background by a factor of 1000. The ${\mbox{$K_{L}\to\pi^{\pm}{\mu}^{\mp}\nu$}}$ background is rejected largely by the muon veto in the Level 1 trigger. Since low momentum muons may range out in the 4 m of steel before depositing energy in the muon veto, we also require each track to have a momentum $p > \MinpCut$. To reject radiative  decays, events with an isolated electromagnetic cluster above   are removed if the cluster is at least  away from both extrapolated pion track positions at the CsI; the pion-photon separation requirement avoids removing events that have satellite clusters from hadronic interactions. To remove background from $\Lambda$ and $\overline{\Lambda}$ decays, the higher momentum track is assumed to be from a proton (or antiproton) and the event is rejected if the proton-pion invariant mass is within $3.5$ MeV of the known $\Lambda$ mass. To provide additional background rejection of semileptonic and  decays, the ${\mbox{$\pi^{+}\pi^{-}$}}$ invariant mass is required to be in the range $488$-$508~{\rm MeV}/c^2$. Figures \[fig:mass\]a,b show the invariant mass distributions for the vacuum and regenerator beams after all other selection cuts. The shapes of the vacuum and regenerator beam distributions are nearly the same, and have an RMS resolution of $\sim \MassResln$. The low-mass tail is mainly due to the presence of  events in both beams. The tails in the vacuum beam distribution also include background from semileptonic decays. Background from kaon scattering in the regenerator is suppressed mainly by the regenerator veto requirement. To reduce this background further, a cut is made on the  transverse momentum, which also rejects events from kaons that scatter in the defining collimator. As illustrated in Fig. 
\[fig:pt2diag\], the total momentum of the  system ($\vec{p}$) is projected back to a point (${\mbox{${\sf R}_{\sf reg}$}}$) in the plane containing the downstream face of the regenerator, and $\vec{p}_T$ is the momentum component that is transverse to the line connecting ${\mbox{${\sf R}_{\sf reg}$}}$ to the target. This definition of $\vec{p}_T$ is used for both beams, and is optimized to distinguish between scattered and unscattered kaons in the regenerator beam. The data and MC  distributions in both beams are shown in Fig. \[fig:pt2datamc\]; the selection cut, ${\mbox{$p_T^2$}}< 250~{ {\rm MeV}^2/c^2 }$, is shown by the arrows. Next, we describe selection requirements that define apertures. For ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$, unlike  decays, the edges of the Mask Anti and Collar Anti vetos are not useful apertures because of the difficulty in simulating the interactions of charged pions. For the MA, a charged pion can pass through a small amount of lead-scintillator near the edge without leaving a veto signal, and multiple scattering in the MA results in a poorly reconstructed vertex. For the CA, the difficulty is with pions that miss the CA, but interact in the CsI calorimeter; backscattering from these pion interactions can still deposit sufficient energy in the CA to produce a veto signal. To avoid these apertures, we require that the track projections be away from the physical boundaries of the veto detectors. In a similar fashion, the outer CsI aperture is defined by track projections so that the acceptance does not depend on the CsI energy profile from charged pions. If two tracks pass within the same drift chamber cell or adjacent cells, a hit from one track can obscure a hit from the other track. Since this effect is difficult to simulate, we use a track-separation requirement of two offset-cells (Fig. 
\[fig:dccell\]) between the tracks in both the $x$ and $y$ views for each chamber; offset-cells are used so that tracks near a wire, which have the poorest position resolution, are not near the boundary which defines the track-separation cut. This track-separation requirement results in an effective inner aperture.

Systematic Uncertainties from   Trigger, Reconstruction, and Selection {#sec:chrg\_syst}
----------------------------------------------------------------------

  ------------------------------ --------------- --------------
  Source of uncertainty
  Trigger                                        [**0.58**]{}
     L1 & L2                     0.20
     L3 filter                   $\L3CHRGSYST$
  Track reconstruction                           [**0.32**]{}
     Alignment and calibration   0.28
     Momentum scale              0.16
  Selection efficiency                           [**0.47**]{}
     $p^2_T$ cut                 0.25
     DC efficiency modeling      0.37
     DC resolution modeling      0.15
  Apertures                                      [**0.30**]{}
     Wire spacing                0.22
     Regenerator edge            0.20
  ------------------------------ --------------- --------------

  : \[tb:chrg\_syst\] Systematic uncertainties in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ from the charged mode trigger and analysis. The uncertainties in the 3rd column, which also appear in the systematics summary (Table \[tb:syst\_reepoe\]), are the quadratic sum of contributions in the 2nd column.

In this section, we discuss the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ systematic uncertainties from the charged mode trigger, reconstruction, and selection, which are summarized in Table \[tb:chrg\_syst\]. Systematic uncertainties related to background and acceptance are discussed in Sections \[sec:bkg\_syst\] and \[sec:acceptance\]. The Level 1 and Level 2 trigger requirements are studied with events collected in a trigger based only on CsI calorimeter energy, and contribute a $0.20{ \times 10^{-4}}$ uncertainty to ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. 
The L3 uncertainty is determined using a 1% sample of charged triggers that pass L1 and L2, and are saved without requiring L3. From this sample, $3\times 10^5$ events pass all  analysis cuts. Applying the L3 requirement to this sample results in a $2.2\sigma$ shift in the vacuum-to-regenerator ratio; the same analysis on MC events results in no shift from L3. These shifts in the vacuum-to-regenerator ratio correspond to a $(0.45\pm 0.20){ \times 10^{-4}}$ shift in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$, which leads to an uncertainty of $\L3CHRGSYST{ \times 10^{-4}}$ using Eq. \[eq:error\]. The effects of detector misalignment (Sec. \[sec:det\_spec\]) are studied by evaluating the change in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ as the following are changed within their measured uncertainties: transverse chamber offsets and rotations, non-orthogonality between the $x$ and $y$ wire planes, and the $z$-locations of the drift chambers. The time-to-distance calibration is varied to change the average SOD value within its uncertainty. A ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainty of $0.28{ \times 10^{-4}}$ is assigned based on these tests. The kaon mass is known to $0.031$ MeV [@pdg00], leading to a momentum-scale uncertainty of $1{ \times 10^{-4}}$, and a ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainty of $0.16{ \times 10^{-4}}$. In the charged mode analysis,  is the only variable for which ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is sensitive to the cut value. Figure \[fig:pt2datamc\] shows the  data-MC comparison in both beams after background subtraction (Sec. \[sec:bkg\]), and also illustrates the importance of simulating details of the drift chamber performance. Increasing the  cut value from 250 to $500~{ {\rm MeV}^2/c^2 }$ changes  by $(-0.23 \pm 0.05) { \times 10^{-4}}$, leading to a systematic uncertainty of $\PtsqSyst { \times 10^{-4}}$. 
There is no further statistically significant variation if the  cut value is increased beyond $500~{ {\rm MeV}^2/c^2 }$. The uncertainty in modeling the drift chamber efficiency is related to the effects of delayed hits and early accidentals. The delayed-hit probability (Sec. \[sec:mc\_dc\]) predicted by the MC is compared to data in various regions of the chambers and for different time periods. Residual data-MC differences do not exceed $10\%$ of the effect, which corresponds to a  systematic uncertainty of $0.21 { \times 10^{-4}}$. There is a component of the early accidental inefficiency that may not be modeled properly because the total TDC range covers only about $2/3$ of the relevant early time window that is prior to the in-time window. A systematic error of $0.30{ \times 10^{-4}}$ is assigned based on the change in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ when accidental hits from only half of the early time window are used to obscure simulated in-time hits. The total uncertainty from modeling the effects of delayed hits and accidentals is $0.37{ \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. The modeling of the drift chamber resolutions is checked by comparing the widths of the SOD distributions between data and MC; they agree to within $5\%$, which corresponds to a systematic error of $0.15{ \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. The drift chamber track-separation cut (Fig. \[fig:dccell\]) depends on the wire spacing, which is known to $20~\mu$m on average [@thesis:mbw]. The bias in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ from variations in the wire spacing leads to an uncertainty of $\TrCellSyst { \times 10^{-4}}$. The $0.45$ mm uncertainty on the effective regenerator edge (Fig. \[fig:regdiagram\]b) leads to a small uncertainty in the expected number of  decays, and results in a systematic error of $\RegEdgeSyst { \times 10^{-4}}$ on . 
Reconstruction and Selection {#sec:neut_evtsel} ----------------------------- The strategy to identify  decays is to reconstruct four photon clusters in the CsI calorimeter that are consistent with coming from two neutral pions, and to reduce background with kinematic cuts. Since the reconstruction of  and  decays is almost identical, most of the discussion will be valid for both these decay modes. The calorimeter reconstruction begins by determining the energy and position of each cluster found in the Level 2 trigger. The neutral decay mode analysis requires two additional corrections that are not relevant for the ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ calibration. First, a correction is applied to account for energy shared among nearby clusters. The second correction, described in detail at the end of this section, accounts for a small energy scale difference between data and Monte Carlo. Each cluster is required to have a transverse energy distribution consistent with that of a photon. This requirement rejects decays that have significant accidental energy overlapping a photon cluster; for  decays, it also rejects background from ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$ decays in which two or more photons overlap at the CsI calorimeter. Sensitivity to modeling the transverse energy distribution is reduced by requiring that all clusters be separated from each other by at least 7.5 cm. To reduce the dependence on the MC trigger simulation, the energy of each cluster is required to be greater than 3 GeV. To reconstruct each neutral pion, we assume that two photon clusters originate from a $\pi^0$ decay. 
In the small angle approximation, the distance between the $\pi^0$ decay vertex and the CsI calorimeter is given by $$d_{\pi^0} = r_{12} \sqrt{E_{\gamma_1} E_{\gamma_2}} / {m_{\pi^0}} , \label{eq:neut_z}$$ where $r_{12}$ is the distance between the two photon clusters, $E_{\gamma_1}$ and $E_{\gamma_2}$ are the two photon energies, and $m_{\pi^0}$ is the known mass of the neutral pion. The number of possible photon pairings is 3 for ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$ and 15 for ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$ decays. For each possible photon pairing we compute the quantity $${\chi^2_{\pi^0}}\equiv \sum_{j=1}^{N_{\pi^0}} \left[ \frac{d_{\pi^0}^j - d_{avg}}{\sigma_d^j} \right]^2 ~, \label{eq:chisqzz}$$ where $N_{\pi^0}$ is the number of neutral pions (2 or 3), $d_{\pi^0}^j$ is the distance between the CsI and the vertex of the $j$’th $\pi^0$, $d_{avg}$ is the weighted average of all the $d_{\pi^0}^j$, and $\sigma_d^j$ is the energy-dependent $\pi^0$-vertex resolution, which is roughly 40 cm (30 cm) at the upstream (downstream) end of the decay region. The photon pairing with the minimum ${\chi^2_{\pi^0}}$ value is used because this pairing corresponds to the best agreement of the $\pi^0$ vertices. To reduce the chance of choosing the wrong photon pairing, we require that the minimum ${\chi^2_{\pi^0}}$ value be less than 12 for  decays, and less than 24 for  decays. After all selection cuts, the probability that the minimum ${\chi^2_{\pi^0}}$ value gives the wrong pairing is 0.006% for  and 0.02% for ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$. The ${\mbox{$z$}}$-vertex used in the analysis is given by ${Z_{\rm CsI}}- d_{avg}$, where ${Z_{\rm CsI}}$ is the $z$-position of the mean shower depth in the CsI. For  decays, the $z$-vertex resolution is $\sim 30$ cm near the regenerator and $\sim 20$ cm near the [vacuum-window]{}. 
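The pairing step above can be sketched as follows; `sigma_d` is taken as a common constant here, whereas the analysis uses the energy-dependent per-$\pi^0$ resolution, and the pion mass is an assumed PDG value:

```python
import math

M_PI0 = 0.13498  # GeV/c^2; assumed PDG neutral-pion mass

def d_pi0(r12, e1, e2):
    # Eq. (neut_z): small-angle distance from the pi0 vertex to the CsI
    return r12 * math.sqrt(e1 * e2) / M_PI0

# The three ways to group four photon clusters into two pi0 candidates
PAIRINGS = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]

def best_pairing(photons, sigma_d):
    # photons: four (x, y, E) clusters; sigma_d: common vertex resolution
    best = None
    for pairing in PAIRINGS:
        ds = []
        for i, j in pairing:
            xi, yi, ei = photons[i]
            xj, yj, ej = photons[j]
            ds.append(d_pi0(math.hypot(xi - xj, yi - yj), ei, ej))
        d_avg = sum(ds) / len(ds)  # a weighted average in the real analysis
        chi2 = sum(((d - d_avg) / sigma_d) ** 2 for d in ds)
        if best is None or chi2 < best[0]:
            best = (chi2, pairing, d_avg)
    return best
```

The pairing whose two $\pi^0$ vertices agree best, i.e. with the smallest ${\chi^2_{\pi^0}}$, is retained, as described above.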
The beam assignment is made by comparing the $x$-component of the “center-of-energy” to the regenerator $x$ position. The center-of-energy is defined to be $$x_{\rm coe} \equiv \frac{ \sum x_i E_i }{ \sum E_i }~, ~~~~~~~~ y_{\rm coe} \equiv \frac{ \sum y_i E_i }{ \sum E_i } ~, \label{eq:coe}$$ where $E_i$ are the cluster energies, $x_i$ and $y_i$ are the cluster coordinates at the CsI calorimeter, and the index $i$ runs over the photons. The center-of-energy is the point at which the kaon would have intercepted the plane of the CsI if it had not decayed. The value of $\{ x_{\rm coe},y_{\rm coe}\}$ typically lies well inside the beam-holes, except for kaons that are scattered by a large angle in either the regenerator or the defining collimator (Sec. \[sec:bkg\]). The center-of-energy resolution is $\sim\!1$ mm, which is much smaller than the beam separation. An event is assigned to the regenerator beam if $x_{\rm coe}$ has the same sign as the regenerator $x$-position, or to the vacuum beam if $x_{\rm coe}$ has the opposite sign. Significant background rejection comes from the invariant mass cut, $490 < m_{{\mbox{$\pi^{0}\pi^{0}$}}} < 505~{\rm MeV}/c^2$, and from the photon veto cuts. To calculate the invariant mass, the decay vertex is assumed to be on the line joining the target and the center-of-energy, at a distance $d_{\pi^0}$ from the calorimeter. Figures \[fig:mass\]c,d show the invariant mass distributions. In the vacuum beam, events in the mass side-band regions are mostly from ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$ decays in which two of the six photons are not detected. In the regenerator beam, the side-band regions include comparable contributions from ${\mbox{$\pi^{0}\pi^{0}$}}$ pairs produced in the lead of the regenerator (Section \[sec:bkg\_non2pi\]),  events with the wrong photon pairing, and ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$ decays. 
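The center-of-energy and beam-assignment logic above reduces to a few lines; the inputs are the photon cluster positions and energies at the CsI:

```python
def center_of_energy(clusters):
    # Eq. (coe): energy-weighted mean photon position at the CsI
    etot = sum(e for _x, _y, e in clusters)
    x_coe = sum(x * e for x, _y, e in clusters) / etot
    y_coe = sum(y * e for _x, y, e in clusters) / etot
    return x_coe, y_coe

def beam_assignment(x_coe, x_reg):
    # Regenerator beam if x_coe has the same sign as the regenerator
    # x-position; vacuum beam otherwise
    return "regenerator" if x_coe * x_reg > 0 else "vacuum"
```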
The photon veto cuts are the same as in the charged mode analysis, with three additional cuts to reduce background: (i) the energy of extra isolated EM clusters is required to be below 0.6 GeV instead of 1.0 GeV in the charged analysis, (ii) the photon-equivalent energy in the beam-hole veto behind the CsI must be less than 5 GeV, and (iii) the photon-equivalent energy in the Collar Anti that surrounds the CsI beam-holes (Fig. \[fig:CA\]) must be less than 1 GeV. Five apertures define the  acceptance. 1. The CsI inner aperture near the beam holes is defined by the CA (Fig. \[fig:CA\]). 2. The outer CsI aperture is defined by rejecting events in which a photon hits the outer-most layer of crystals (Fig. \[fig:csilayout\]). This cut is applied based on the location of the “seed” crystals that have the maximum energy in each cluster. 3. The upstream aperture in the vacuum beam is defined by the MA (Fig. \[fig:RCMA\]b). 4. The upstream edge in the regenerator beam is defined by the lead at the downstream end of the regenerator (Figure \[fig:regdiagram\]b). 5. The requirement of at least 7.5 cm between photons results in an effective inner aperture. Since we do not measure photon angles, the transverse momentum is unknown and therefore cannot be used to reject events in which a kaon scatters in the collimator or regenerator. Instead, we use the center-of-energy (Eq. \[eq:coe\]) to define a variable called “Ring Number:” $${{\tt RING}}= 40000 \times {\rm Max}(\Delta x^2_{\rm coe},\Delta y^2_{\rm coe})~, \label{eq:ring_def}$$ where $\Delta x_{\rm coe}$ ($\Delta y_{\rm coe}$) is the $x$-distance ($y$-distance) of the center-of-energy from the center of the closest beam hole. The ${{\tt RING}}$ value is the area, in cm$^2$, of the smallest square that is centered on the beam hole and contains the point $\{ x_{coe}, y_{coe}\}$. The vacuum and regenerator ${{\tt RING}}$ distributions are shown in Fig. \[fig:ring\]. 
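The factor of 40000 in Eq. \[eq:ring\_def\] is pure unit conversion; a sketch assuming coordinates in meters:

```python
def ring_number(x_coe, y_coe, x_hole, y_hole):
    # Eq. (ring_def); coordinates assumed in meters.
    dx = x_coe - x_hole
    dy = y_coe - y_hole
    # The smallest square centered on the beam hole that contains the
    # center-of-energy has side 2*max(|dx|, |dy|) m = 200*max(|dx|, |dy|) cm,
    # so its area in cm^2 is 200^2 * max(dx^2, dy^2) = 40000 * max(dx^2, dy^2).
    return 40000.0 * max(dx * dx, dy * dy)
```

A center-of-energy 4.5 cm from the hole center gives ${{\tt RING}}=81~{\rm cm}^2$, the edge of the nominal beam.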
The beam size at the CsI is approximately $9\times 9~{\rm cm}^2$, corresponding to ${{\tt RING}}$ values less than $81~{\rm cm}^2$. The ${{\tt RING}}$ distribution between 100 and $150~{\rm cm}^2$ is sensitive to scattering in the upstream beryllium absorbers ($z\simeq 19$ m, Fig. \[fig:beamline\]). We require ${{\tt RING}}< 110~{\rm cm}^2$, which is a compromise between background reduction and sensitivity to the beam halo simulation. The final part of the  analysis is to match the photon energy scale for data and Monte Carlo. This matching is performed after the background subtraction that will be described in Sec. \[sec:bkg\]. Since the energy scale affects the determination of both the kaon energy and the $z$-vertex position (through Eq. \[eq:neut\_z\]), events can migrate into and out of the selected event sample depending on the energy scale. The final energy scale is adjusted to match the data and MC reconstructed $z$-vertex distribution of ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$ decays at the regenerator edge, as shown in Fig. \[fig:regmatch\]a,b. Using $\sim 10^6$ events near the regenerator edge, the energy scale is determined in 10 GeV wide kaon energy bins, which is the binning used to extract ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. On average, the regenerator edge in data lies $\sim 5$ cm upstream of the MC edge as seen in Fig. \[fig:regmatch\]c; the data-MC regenerator edge difference varies by a few centimeters depending on the kaon energy. Since the vertex distance from the CsI is proportional to the cluster energy scale (Eq. \[eq:neut\_z\]), the multiplicative energy scale correction is $\sim 0.9992$, which corresponds roughly to the $-5$ cm data-MC difference divided by the 61 meter distance between the regenerator and the CsI. The regenerator edge shift in each 10 GeV kaon energy bin (Fig. \[fig:regmatch\]c) is converted to an energy scale correction, and is applied to each cluster in data. 
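The quoted scale factor follows from simple lever-arm arithmetic; the 61 m regenerator-to-CsI distance is the value given above, and the sign convention (a data edge reconstructing upstream of the MC edge gives a scale below unity) follows from Eq. \[eq:neut\_z\]:

```python
REG_TO_CSI = 61.0  # m; regenerator-to-CsI distance quoted in the text

def scale_from_edge_shift(dz):
    # dz: data-minus-MC shift of the reconstructed regenerator edge, in m.
    # The reconstructed distance to the CsI scales linearly with the
    # cluster energy scale, so an edge reconstructing |dz| upstream
    # (dz < 0) is corrected by scaling data energies down by |dz|/61 m.
    return 1.0 + dz / REG_TO_CSI
```

For the $-5$ cm shift quoted above this gives $\sim 0.9992$.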
Systematic Uncertainties from   Trigger, Reconstruction, and Selection {#sec:neut\_syst}
----------------------------------------------------------------------

  ----------------------------- ------ --------------
  Source of uncertainty
  Trigger                              [**0.18**]{}
     L1 trigger                 0.10
     L2 trigger                 0.13
     L3 filter                  0.08
  Cluster reconstruction               [**1.47**]{}
     Energy scale               1.27
     Non-linearity              0.66
     Position reconstruction    0.35
  Selection efficiency                 [**0.37**]{}
     ${{\tt RING}}$ cut         0.24
     ${\chi^2_{\pi^0}}$ cut     0.20
     Transverse shape           0.20
  Apertures                            [**0.48**]{}
     Collar Anti                0.42
     Mask Anti                  0.18
     Reg edge                   0.04
     CsI size                   0.15
     Photon separation          $-$
  ----------------------------- ------ --------------

  : \[tb:neut\_syst\] Systematic uncertainties in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ from the neutral mode trigger and analysis. The uncertainties in the 3rd column, which also appear in the systematics summary (Table \[tb:syst\_reepoe\]), are the quadratic sum of contributions in the 2nd column.

In this section, we discuss the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ systematic uncertainties from the neutral mode trigger, reconstruction, and selection, which are summarized in Table \[tb:neut\_syst\]. Systematic uncertainties related to background and acceptance are discussed in Sections \[sec:bkg\_syst\] and \[sec:acceptance\]. The Level 1 CsI “energy-sum” trigger is studied using the large ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ sample in the charged trigger which does not have L1 CsI requirements (Sec. \[sec:trigger\]). The Level 2 cluster-counter is studied using a half-million  decays from a trigger that requires only Level 1. The Level 3 systematic uncertainty is based on $1.3\times 10^5$  events that satisfy all analysis cuts and that were accepted online without requiring L3. The combined ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ systematic uncertainty from the trigger is $0.18{ \times 10^{-4}}$. 
The understanding of the cluster reconstruction in the CsI calorimeter, and in particular the energy scale, results in the largest systematic uncertainty in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ analysis. After matching the data and MC at the regenerator edge, we use a variety of other modes that have one or more ${\mbox{$\pi^{0}$}}$s in the final state to check how well the data and MC match at other $z$-locations. All of these crosscheck samples are collected at the same time as ${\mbox{$K\rightarrow\pi\pi$}}$ decays, and therefore the detector conditions and calibration are precisely the same as for the  sample. Data-MC comparisons of the reconstructed neutral vertex, relative to either a charged mode vertex or the known [[vacuum-window]{}]{} location, are made in the vacuum beam for the following:

1.  ${\mbox{$K\rightarrow\pi^{0}\pi^{0}_D$}}$, where $\pi^0_D$ refers to $\pi^0\to{\mbox{$e^+e^-\gamma$}}$; the reconstructed $\pi^0\to{\mbox{$\gamma\gamma$}}$ vertex is compared to the $e^+e^-\gamma$ vertex;

2.  ${\mbox{$K_{L}\rightarrow \pi^{+}\pi^{-}\pi^{0}$}}$, in which the reconstructed $\pi^0\to{\mbox{$\gamma\gamma$}}$ vertex is compared to the ${\mbox{$\pi^{+}\pi^{-}$}}$ vertex;

3.  $\eta\to{\mbox{$\pi^{0}\pi^{0}\pi^{0}$}}$, where $\eta$ mesons are produced by beam interactions in the [vacuum-window]{} at $z=159$ m;

4.   pairs produced by beam interactions in the [vacuum-window]{} at $z=159$ m.

Additional crosschecks at the regenerator edge include  pairs produced by neutron interactions in the regenerator, and $K^{\star}\to{\mbox{$\pi^{0}$}}K_S$ with $K_S\to{\mbox{$\pi^{+}\pi^{-}$}}$. A summary of the neutral vertex crosschecks is given in Fig. \[fig:esclsyst\]a; they are all consistent with the nominal energy scale correction except for the [vacuum-window]{}  pairs. Since the [vacuum-window]{}  pairs result in the largest discrepancy, a discussion of the analysis of these events is given here.
The selection of [vacuum-window]{} ${\mbox{$\pi^{0}\pi^{0}$}}$ pairs is similar to that for the  decay mode; the main difference is that  decays are excluded by selecting events in which the  invariant mass is at least $15~{\rm MeV/c}^2$ away from the kaon mass. The [vacuum-window]{}  sample consists of 45,000 events in the vacuum beam. The [vacuum-window]{} $z$-location is known with $1$ mm precision using charged two-track events produced in the [vacuum-window]{}. The simulation of  pairs is tuned so that the data and MC distributions of reconstructed energy, ${{\tt RING}}$, and  invariant mass agree. Figure \[fig:esclsyst\]b shows the reconstructed [vacuum-window]{}  vertex for data and MC after the regenerator edge $z$-distribution has been matched. The data-MC comparison is complicated by the helium and drift chamber immediately downstream of the [vacuum-window]{}, since this extra material is also a source of  pairs. The production of  pairs is simulated separately in the [vacuum-window]{}, helium, and drift chamber. To evaluate the data-MC discrepancy in the $z$-vertex distribution, a fit is used to determine the relative contribution from each material along with the data-MC vertex shift. The result of this analysis is that the data $z$-vertex distribution at the [vacuum-window]{} lies $[2.46\pm 0.16~({\rm stat}) \pm 0.33~({\rm syst})]$ cm downstream of the MC distribution for the 1997 sample; for the 1996 sample, the corresponding shift is $[1.89\pm 0.24~({\rm stat}) \pm 0.33~({\rm syst})]$ cm. The data-MC shifts are evaluated separately in the 1996 and 1997 samples because of the different CsI PMT signal integration times.
The data-MC “$z$-shift” at the [vacuum-window]{}  is translated into a ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainty by introducing a CsI energy scale distortion to data such that the data and MC $z$-vertex distributions match at both the regenerator and [vacuum-window]{} edges; this distortion changes  by $-1.08{ \times 10^{-4}}$ and $-1.37 { \times 10^{-4}}$ for the 1996 and 1997 data samples, respectively. The combined systematic uncertainty on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is $1.27{ \times 10^{-4}}$. The energy scale distortion leading to this uncertainty varies linearly with decay vertex in both beams, and corresponds to the hatched region in Fig. \[fig:esclsyst\]; non-linear energy scale variations as a function of decay vertex are ruled out because they introduce data-MC discrepancies in other distributions. Some reconstructed quantities in the analysis do not depend on the CsI energy scale, but are sensitive to energy non-linearities. For example, the $m_{{\mbox{$\pi^{0}\pi^{0}$}}}$ peak varies by 0.2 MeV/c$^2$ for kaon energies between 40 and 160 GeV, and the data-MC $z$-difference at the regenerator edge (Fig. \[fig:regmatch\]c) varies by 4 cm over the same kaon energy range. Such data-MC discrepancies can be reproduced with cluster-energy distortions based on energy, angle, position, and pedestal shift. These distortions are not used in the final result, but are used to determine a systematic uncertainty of $0.66{ \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. Cluster position reconstruction is studied using electrons from  decays and comparing the position reconstructed from the CsI to the position of the electron track projected to the mean shower depth in the CsI. The position differences are parameterized and simulated; the maximum ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ shift of $0.35{ \times 10^{-4}}$ is assigned as a systematic uncertainty. 
There are three variables for which ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is sensitive to the cut value. (i) Varying the ${{\tt RING}}$ cut between 100 and $150~{\rm cm}^2$ changes ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ by $0.24{ \times 10^{-4}}$ (Fig. \[fig:ring\]). (ii) Relaxing the ${\chi^2_{\pi^0}}$ requirement such that the inefficiency from this cut is reduced by a factor of 3 changes  by $0.20{ \times 10^{-4}}$. (iii) Removing the transverse energy distribution requirement changes ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ by $0.20{ \times 10^{-4}}$. These three changes added in quadrature contribute an uncertainty of $0.37{ \times 10^{-4}}$ to ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. The aperture uncertainties are mainly from the Collar-Anti (CA, Fig. \[fig:CA\]) and Mask-Anti (MA, Fig. \[fig:RCMA\]b). Their effective sizes and positions are measured with $100~\mu m$ precision using ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ decays, resulting in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainties of $0.42{ \times 10^{-4}}$ and $0.18{ \times 10^{-4}}$ from the CA and MA, respectively. The CsI calorimeter size is known to better than 1 mm from surveys, resulting in a $0.15{ \times 10^{-4}}$ uncertainty on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. The 0.1 mm uncertainty on the effective regenerator edge (Fig. \[fig:regdiagram\]b) leads to a ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainty of $0.04{ \times 10^{-4}}$. Varying the minimum allowed photon separation between 5 cm and 20 cm results in no significant change in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. The total ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainty from the neutral mode apertures is $0.48{ \times 10^{-4}}$. Background to  and   {#sec:bkg} -------------------- Two types of background are relevant for this analysis. 
First, there is “non-$\pi\pi$” background from misidentification of high branching-ratio decay modes such as semileptonic ${\mbox{$K_{e3}$}}$ and ${\mbox{$K_{\mu 3}$}}$ in the charged mode, and ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$ in the neutral mode. The second type of background is from kaons that scatter in the regenerator or the defining collimator and then decay into two pions. Kaon scattering is the same for both the charged and neutral decay modes; it can be largely eliminated in the charged mode analysis using the reconstructed total transverse momentum of the decay products, but the lack of a photon trajectory measurement does not allow a similar reduction in the neutral mode analysis. Regenerator and collimator scattering affect the  reconstruction in two ways. First, a kaon can scatter at a small angle and still reconstruct within the same beam; this “in-beam” background has different regeneration properties and acceptance compared to unscattered kaons, which can cause a bias in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$, as well as in kaon parameters such as $\Delta \phi$ and $\Delta m$. Second, a kaon that undergoes large-angle scattering can traverse from one beam to the other beam and then decay, leading to the wrong beam assignment in the neutral mode analysis; this “crossover background” is mostly $K_S\to{\mbox{$\pi^{0}\pi^{0}$}}$. Background from regenerator scattering is roughly ten times larger than from scattering in the defining collimator. All backgrounds are simulated, normalized to data, and then subtracted from the  signal samples. The background-subtraction procedure is described in the following sections, and the background-to-signal ratios are summarized in Table \[tab:bkgd\]. ### Non-$\pi\pi$ Background {#sec:bkg_non2pi} Charged pion identification and kinematic cuts (Sec. 
\[sec:chrg\_evtsel\]) eliminate most of the charged mode non-$\pi\pi$ background; only a 0.09% contribution from the semileptonic  and  processes remains to be subtracted in the vacuum beam. Both semileptonic background distributions are simulated, and then normalized to data using events reconstructed outside the invariant mass and  signal region. The only substantial non-$\pi\pi$ background in the neutral mode is from  with undetected photons or photons that have merged at the CsI calorimeter. This 0.11% background is simulated and then normalized to data using sidebands in the  invariant mass distribution in the vacuum beam. The decay ${\mbox{$K_L\rightarrow\pi^{0}\gamma\gamma$}}$ has a branching fraction of $1.7\times 10^{-6}$ [@pdg00]; it contributes $2\times 10^{-5}$ background in the vacuum beam and is ignored. We also ignore $\Xi^{0}\to\Lambda\pi^0$ with $\Lambda\to n\pi^0$, which contributes less than $10^{-5}$ background. The charged and neutral decay modes both have misidentification background associated with hadronic production in the lead plate of the last regenerator module (Fig. \[fig:regdiagram\]b). In both the charged and neutral data samples, this “regenerator-hadron” production is easily isolated in the reconstructed ${\mbox{$z$}}$-vertex distribution at the regenerator edge. This background is almost entirely rejected by the  cut in the charged decay mode; the remaining $10^{-5}$ background is ignored. In the neutral decay mode, the regenerator-${\mbox{$\pi^{0}\pi^{0}$}}$ background is $8\times 10^{-5}$ ($2\times 10^{-5}$) in the regenerator (vacuum) beam, and is included in the background simulation. ### Collimator Scattering Background {#sec:bkg_coscat} Scattering in the defining collimators is studied using  and ${\mbox{$K_{L}\rightarrow \pi^{+}\pi^{-}\pi^{0}$}}$ decays in the vacuum beam. Figure \[fig:cosct\] shows the $y$ vs. 
$x$ distribution of the kaon trajectory projected back to the $z$ position of the defining collimator for high ${\mbox{$p_T^2$}}$ vacuum beam events that satisfy all other  requirements. The square bands show kaons that scattered from the defining collimator edges before decaying. The events in Fig. \[fig:cosct\] that lie outside the collimator scattering bands are mainly from semileptonic decays. To determine the number of collimator scatters accurately, the roughly 10% semileptonic component is subtracted. In the charged decay mode, the background from collimator scattering is only 0.01%, and is small mainly because of the  cut; in the neutral decay mode, this background is about 0.1% and therefore requires an accurate simulation. The MC simulation propagates each kaon to the defining collimator, and checks if the kaon strikes the collimator at either the upstream end or anywhere along the 3 meter long inner surface. Kaons that hit the collimator are traced through the steel and allowed to scatter back into the beam. A kaon that scatters in the collimator is parameterized to be either pure $K_S$ or pure $K_L$, with the relative amount adjusted to match the $z$-vertex distribution for the collimator scattering sample shown in Fig. \[fig:cosct\]. The MC treatment of collimator scattering is the same in both the vacuum and regenerator beams. About 1/3 of the collimator-scattered kaons hit the Mask Anti (MA, Fig. \[fig:RCMA\]), and can then punch through and exit the MA as either a $K_L$ or $K_S$. Based on measurements from data, the MC includes a kaon punch-through probability of 60%, and a $K_S$ to $K_L$ ratio of about 50 for kaons that exit the MA. ### Regenerator Scattering Background {#sec:bkg_regscat} In the charged mode analysis, the regenerator scattering background is 0.074% in the regenerator beam, and is not present in the vacuum beam. 
In the neutral mode analysis, the corresponding background levels are 1.13% in the regenerator beam and 0.25% in the vacuum beam. Regenerator scattering is more complicated than collimator scattering, particularly in the time dependence of  decays resulting from the coherent $K_L$-$K_S$ mixture. Figure \[fig:tdk\_rgsct\] shows the observed proper decay time distributions for kaons that scatter with small ${\mbox{$p_T^2$}}$ ($2500{\rm -}10^4~{ {\rm MeV}^2/c^2 }$) and large ${\mbox{$p_T^2$}}$ ($ > 10^5~{ {\rm MeV}^2/c^2 }$), and for the unscattered  signal (${\mbox{$p_T^2$}}< 250~{ {\rm MeV}^2/c^2 }$). Note that the proper time distribution depends strongly on the  value. Large- scattering contributes mainly to crossover background, while small- scattering contributes mostly to in-beam background. A detailed description of the regenerator scattering background is needed for the neutral decay mode because the ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$ sample includes events with ${\mbox{$p_T^2$}}$ values up to about $3\times 10^4~{ {\rm MeV}^2/c^2 }$ in the regenerator beam, and up to $5\times 10^5~{ {\rm MeV}^2/c^2 }$ in the vacuum beam. A  decay from a kaon that has scattered in the regenerator, referred to as a “regenerator scattering decay,” is described in the MC using a function that is fit to acceptance-corrected  data after subtracting collimator scattering and semileptonic decays. The acceptance correction allows us to fit the “true” regenerator scattering decay distribution, so that the scattering simulation can be used to predict background for both charged and neutral mode decays. As described in Appendix \[app:fitfun\], the fit function depends on proper time, ${\mbox{$p_T^2$}}$, and kaon momentum. To remove non- background from charged tracks produced in the regenerator lead (Fig. \[fig:regdiagram\]b), the fit excludes decays within 0.2 $K_S$ lifetimes of the regenerator edge. 
To avoid the ${\mbox{$p_T^2$}}$ tail from coherent events, only decays with ${\mbox{$p_T^2$}}>2500~{ {\rm MeV}^2/c^2 }$ are used in the fit. The fit momentum region is the same as in the signal analysis: 40 to $160~{ {\rm GeV}/c }$. As discussed in Section \[sec:regenerator\], there are two processes that contribute to regenerator scattering. The first process is diffractive scattering, which is identical in the charged and neutral mode analyses because no energy is deposited in the regenerator or photon vetos. The second process is inelastic scattering, which is slightly different in the two modes because of different photon veto requirements. To address this charged-neutral difference, the fit function is based on a phenomenological model that has separate terms for diffractive and inelastic scattering. The  distribution is significantly steeper for diffractive scattering than for inelastic scattering, and this difference allows the two scattering processes to be distinguished. Using the fit function to simulate the ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$ scattering background, and normalizing to data events in the range $300 < {{\tt RING}}\ < 800~{\rm cm}^2$, we find that the neutral mode veto requirements suppress inelastic scattering by an additional 16% compared to the charged mode veto requirements. This 16% charged-neutral difference in the inelastic component corresponds to a 3% charged-neutral difference in the total regenerator scattering background.

### Summary of Backgrounds {#sec:bkg\_summ}

Figure \[fig:ptsq\_shapes\] shows the vacuum and regenerator beam  distributions after all other charged mode analysis requirements. MC simulations of the background processes are also shown. The regenerator beam background is mostly from regenerator scattering, and the vacuum beam background is mostly from semileptonic decays. Figure \[fig:ringvaceps\] shows the neutral mode ${{\tt RING}}$ distribution in both beams, along with MC background simulations.
Figure \[fig:zbkgneut\] shows the neutral mode background-to-signal ratio (B/S) as a function of the  decay vertex. In the regenerator beam, the main background is from regenerator scattering. In the vacuum beam, the largest sources of background are collimator scattering between 110 m and 125 m, crossover regenerator scattering between 125 m and 140 m, and  decays for $z>140$ m. Table \[tab:bkgd\] summarizes the background levels for both decay modes. The charged mode background level is $\sim 10^{-3}$ in both beams; the neutral background level is 1.2% in the regenerator beam and 0.5% in the vacuum beam. The background subtraction results in corrections to ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ of $-12.5{ \times 10^{-4}}$ for the neutral decay mode and $-0.2{ \times 10^{-4}}$ for the charged decay mode.

  ---------------------------------------------------  ---------  ---------  -----------------------
  Background                                           Vac        Reg        $\sigma_{syst}$
  process                                              B/S (%)    B/S (%)    $({ \times 10^{-4}})$
                                                                             0.12
                                                                             0.12
  Collimator scattering                                                      0.01
  Regenerator scattering                                                     0.10
  Total charged
  ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$  0.107      0.003      0.07
  Reg-${\mbox{$\pi^{0}\pi^{0}$}}$ production           0.002      0.008      0.05
  Collimator scattering                                0.123      0.094      0.10
  Regenerator scattering                               0.252      1.130
  Total neutral                                        0.484      1.235
  ---------------------------------------------------  ---------  ---------  -----------------------

  : \[tab:bkgd\] Background-to-signal ratios (B/S) and the resulting ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ systematic uncertainties for the charged (top) and neutral (bottom) decay modes.

### Systematic Uncertainties from Background {#sec:bkg\_syst}

The uncertainties in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ resulting from the background subtraction are shown in the last column of Table \[tab:bkgd\]. In the  analysis, the background contributes an uncertainty of $0.20{ \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$; this uncertainty is based on changes in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ when background rejection cuts are varied for the  invariant mass, $E/p$, and the minimum pion momentum.
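The size of the neutral mode background-subtraction correction quoted above can be recovered, to first order, from the B/S values in Table \[tab:bkgd\]: the neutral vacuum-to-regenerator ratio enters the double ratio $R$ in the denominator, and ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}\simeq (R-1)/6$. A sketch of this bookkeeping (the function name is ours; the actual correction comes from the full fit, and the small charged-mode piece is omitted here):

```python
def neutral_bkg_correction(b_vac, b_reg):
    """First-order shift in Re(eps'/eps) from subtracting neutral-mode
    background fractions b_vac and b_reg: the neutral vac/reg ratio sits
    in the denominator of the double ratio R, and Re(eps'/eps) ~ (R-1)/6."""
    return -(b_reg - b_vac) / 6.0

# Neutral-mode B/S totals from Table [tab:bkgd]: 0.484% (vac), 1.235% (reg)
shift = neutral_bkg_correction(0.00484, 0.01235)
print(f"{shift * 1e4:.1f}e-4")  # -12.5e-4, matching the quoted correction
```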
In the  analysis, the background contributes an uncertainty of $1.07{ \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$, and is mostly from the 5% uncertainty on the background level for in-beam regenerator scattering. This background subtraction depends largely on modeling the acceptance for high ${\mbox{$p_T^2$}}$ ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ events. To check our understanding of the detector acceptance at high ${\mbox{$p_T^2$}}$, we use ${\mbox{$\pi^{+}\pi^{-}$}}$ pairs from ${\mbox{$K_{L}\rightarrow \pi^{+}\pi^{-}\pi^{0}$}}$ decays. Comparing the data and MC   distributions, we limit the data-MC difference to be less than 0.5% per $10000~{ {\rm MeV}^2/c^2 }$. To convert this limit on the “${\mbox{$p_T^2$}}$ slope” into a potential bias on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$, we weight the  distribution in the neutral mode background simulation by this slope; the resulting $0.4{ \times 10^{-4}}$ change in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is included as a systematic uncertainty. Imperfections in the phenomenological parameterization and fitting of the ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ scattering distribution are estimated by comparing charged mode data to the MC simulation. The maximum data-MC difference in the scattering distribution is $3\%$ of the background level, which corresponds to a $0.50{ \times 10^{-4}}$ uncertainty for ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. There is also an uncertainty in how the scattering distribution, measured with  decays, is used to simulate background for  decays. As mentioned in Section \[sec:bkg\_regscat\], the observed charged-neutral difference of 3% in the regenerator scattering level is accounted for by a 16% reduction in the inelastic scattering component in the neutral mode background simulation (Appendix \[app:fitfun\]). 
If we ignore differences between diffractive and inelastic scattering, and simply reduce the total scattering level by 3% in the neutral mode simulation, ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ changes by $+0.3{ \times 10^{-4}}$. We assign $0.3{ \times 10^{-4}}$ as the systematic uncertainty on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ to account for the uncertainty in the inelastic-to-diffractive ratio for  decays, and to account for possible charged-neutral differences in the  distribution. We have also checked the effect of variations in the analysis requirements on the background subtraction. The most significant effect is from increasing the regenerator-veto threshold from 8 MeV to 24 MeV in both the  kaon scattering analysis and the  signal analysis. This change doubles the inelastic regenerator scattering background and shifts  by $(+0.7\pm 0.3){ \times 10^{-4}}$, leading to an additional systematic uncertainty of $0.8{ \times 10^{-4}}$. The other neutral mode background sources (Table \[tab:bkgd\]) have a much smaller effect on the measurement than regenerator scattering. The ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainty from all neutral mode backgrounds is $\BkgdNeutSyst{ \times 10^{-4}}$. Analysis Summary {#sec:ana summary} ---------------- The numbers of events after all event selection requirements and background subtraction are given in Table \[ta:yield\]. The measurement of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is statistically limited by the 3.3 million ${\mbox{$K_{L}$}}\to{\mbox{$\pi^{0}\pi^{0}$}}$ decays. Figure \[fig:pzdata\] shows the ${\mbox{$z$}}$-vertex and kaon momentum distributions for the four event samples. 
  ----------------------------------------- -------------- ------------------
                                             Vacuum Beam    Regenerator Beam
  ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$    $11,126,243$   $19,290,609$
  ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$    $3,347,729$    $5,555,789$
  ----------------------------------------- -------------- ------------------

  : \[ta:yield\]  event totals after all analysis requirements and background subtraction.

Extracting Physics Parameters {#sec:extract}
=============================

To measure physics parameters with the event samples described in the previous section, we correct for detector acceptance and perform a fit to the data. The acceptance correction and the associated systematic error are described in Sections \[sec:acceptance\]-\[sec:acc\_syst\]. The fitting program used to extract physics results from the event samples is described in Section \[sec:fitting\]. Systematic uncertainties associated with fitting are discussed in Section \[sec:fit\_syst\].

The Acceptance Correction {#sec:acceptance}
-------------------------

A Monte Carlo simulation is used to determine the acceptance, which is the fraction of ${\mbox{$K\rightarrow\pi\pi$}}$ decays that satisfy the reconstruction and event selection criteria. The very different $K_L$ and $K_S$ lifetimes cause a difference between the average acceptance for decays in the two beams. Correcting for the acceptance difference in the momentum and $z$-vertex range used in this analysis, the measured value of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ shifts by $\sim 85{ \times 10^{-4}}$. About 85% of this correction is the result of detector geometry, which is known precisely from optical survey and measurements with data. The remaining part of the acceptance correction depends on detailed detector response and resolution in the simulation.
Including accidental activity in the simulation results in corrections to ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ that are about $-0.9{ \times 10^{-4}}$ and $-0.5{ \times 10^{-4}}$ for the charged and neutral decay modes, respectively. As will be discussed in Section \[sec:acc\_syst\], comparisons of data and MC $z$-vertex distributions allow us to estimate the systematic uncertainty associated with the acceptance correction. For a given range of kaon ${\mbox{$p$}}$ and ${\mbox{$z$}}$, the acceptance is defined as $$A_{p,z}=N_{p,z}^{rec}/N_{p,z}^{gen}, \label{eq:acceptance}$$ where $N^{rec}_{p,z}$ ($N^{gen}_{p,z}$) is the number of reconstructed (generated) Monte Carlo events in the specified ${\mbox{$p$}}$, ${\mbox{$z$}}$ range. The generated $p$ and $z$ ranges are slightly larger than the ranges used in the analysis to account for the effects of resolution. Fig. \[fig:mczacc\] shows the acceptance as a function of ${\mbox{$z$}}$ for $70 < {\mbox{$p$}}< 80~{ {\rm GeV}/c }$. The   MC samples used to calculate the detector acceptance correspond to 4.7 times the  data sample and 10.4 times the   data sample. The resulting statistical uncertainties on  from the acceptance correction are $\KtevMCStatChrg { \times 10^{-4}}$ and $\KtevMCStatNeut { \times 10^{-4}}$ for the charged and neutral decay modes, respectively. As will be described in Section \[sec:fitting\], we extract  using twelve $10~{ {\rm GeV}/c }$ bins in kaon momentum and a single, integrated ${\mbox{$z$}}$ bin. This $p$ binning reduces sensitivity to the momentum dependence of the detector acceptance and to our understanding of the kaon momentum spectrum. The use of $p$ bins also allows us to account for the momentum dependence of the regeneration amplitude. 
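The acceptance of Eq. \[eq:acceptance\] is simply a ratio of binned MC counts. A toy sketch (the binning matches the twelve $10~{\rm GeV}/c$ bins used in the fit, but the flat 30% acceptance and all names are illustrative assumptions, not the KTeV simulation):

```python
import random

def binned_acceptance(p_gen, p_rec, lo=40.0, width=10.0, nbins=12):
    """Acceptance per momentum bin (Eq. eq:acceptance): reconstructed
    over generated MC counts, with a single integrated z bin."""
    n_gen = [0] * nbins
    n_rec = [0] * nbins
    for p in p_gen:
        n_gen[min(int((p - lo) // width), nbins - 1)] += 1
    for p in p_rec:
        n_rec[min(int((p - lo) // width), nbins - 1)] += 1
    return [r / g for r, g in zip(n_rec, n_gen)]

# Toy MC: twelve 10 GeV/c bins from 40 to 160 GeV/c; a flat 30% toy
# acceptance stands in for the full detector simulation.
rng = random.Random(1)
p_gen = [rng.uniform(40.0, 160.0) for _ in range(200_000)]
p_rec = [p for p in p_gen if rng.random() < 0.3]
acc = binned_acceptance(p_gen, p_rec)
```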
Systematic Uncertainty From The Acceptance Correction {#sec:acc_syst} ----------------------------------------------------- We evaluate the quality of the simulation by comparing the data and Monte Carlo ${\mbox{$z$}}$-vertex distributions in the vacuum beam, where the generated ${\mbox{$z$}}$ distribution depends only on the well known $K_L$ lifetime. Imperfections in the understanding of detector size and efficiency would change the number of reconstructed events in a non-uniform way along the decay region, and would result in a data-MC difference in the ${\mbox{$z$}}$-vertex distribution. The procedure for converting the data-MC vertex comparison into a systematic uncertainty on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is as follows. Since  is measured in $10~{ {\rm GeV}/c }$ kaon momentum bins, we weight the number of MC events in each energy bin so that the data and MC kaon momentum distributions agree. We then compare the data and the weighted MC ${\mbox{$z$}}$ distributions, and fit a line to the data/MC ratio as a function of ${\mbox{$z$}}$. The slope of this line, $s$, is called an acceptance “$z$-slope.” To a good approximation, a $z$-slope affects the measured value of  as $s\Delta z / 6$, where $\Delta z$ is the difference of the mean $z$ values for the vacuum and regenerator beam vertex distributions, and the factor 6 arises from converting a bias on the vacuum-to-regenerator ratio to a bias on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. $\Delta z = 5.6$ m and $\Delta z = 7.2$ m in the charged and neutral $\pi\pi$ modes, respectively. Equation \[eq:error\] is used to convert the bias on  to a systematic uncertainty. The uncertainty in $\tau_L$ [@pdg00], which affects the MC $z$-vertex distribution, contributes a negligible uncertainty of $0.034{ \times 10^{-4}}~{\rm m}^{-1}$ to the $z$-slope. 
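The $s\Delta z/6$ rule described above can be applied directly; the slope value below is a placeholder for illustration, not one of the measured KTeV slopes:

```python
def reepoe_bias_from_zslope(slope_per_m, delta_z_m):
    """Approximate bias on Re(eps'/eps) from an acceptance z-slope s:
    bias = s * delta_z / 6, where delta_z is the difference in mean
    decay z between the vacuum and regenerator beams."""
    return slope_per_m * delta_z_m / 6.0

# Placeholder slope of 1e-4 per meter; delta_z = 5.6 m (charged) and
# 7.2 m (neutral), as quoted in the text.
bias_chrg = reepoe_bias_from_zslope(1.0e-4, 5.6)
bias_neut = reepoe_bias_from_zslope(1.0e-4, 7.2)
```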
Figure \[fig:vtxz\_slopes\] shows the data-MC $z$-vertex comparisons for the charged and neutral $\pi\pi$ decay modes, and for the high statistics ${\mbox{$K_{L}\to\pi^{\pm}e^{\mp}\nu$}}$ and  modes. The $z$-slope in  is , and leads to a systematic uncertainty of $\ChrgZSlopeSyst { \times 10^{-4}}$ in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. This charged $z$-slope has a significance of $2.3\sigma$, and is mostly from the first 20% of the data sample (i.e., data collected at the start of the 1997 run). The very small  $z$-slope is shown as a crosscheck, but is not used to set the systematic error because of the different particle types in the final state. To assign a systematic uncertainty for the neutral decay mode acceptance, we use ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$ decays. This decay mode has the same particle type in the final state as ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$ decays, and the  reconstruction is more sensitive to the effects of nearby clusters, energy leakage at the calorimeter edges, and low photon energies. Using a sample of 50 million reconstructed ${\mbox{$K_{L}\rightarrow \pi^{0}\pi^{0}\pi^{0}$}}$ decays in both data and MC, the $z$-slope is leading to a neutral mode acceptance uncertainty of $\NeutZSlopeSyst { \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. The  $z$-slope of is consistent with the $z$-slope in  decays. The uncertainty in the $z$-dependence of the acceptance for each decay mode is included in the summary of systematic uncertainties shown in Table \[tb:syst\_reepoe\]. Fitting Decay Distributions {#sec:fitting} --------------------------- For pure $K_L$ and $K_S$ beams, the event yields (Table \[ta:yield\]) and acceptance for each mode (Fig. \[fig:mczacc\]) would be sufficient to determine ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ from the acceptance-corrected double ratio. 
The regenerator, however, produces a coherent $K_S$-$K_L$ mixture, so that a simple double ratio underestimates ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ by a ${\rm few}{ \times 10^{-4}}$. A proper treatment of the regenerator, as well as [target-$K_S$]{} in both beams, is included in a fitting program. In addition to extracting ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$, this fitting program is also used to make measurements of the kaon parameters ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, ${\mbox{$\phi_{+-}$}}$, and ${\mbox{$\Delta \phi$}}$; the fitting procedure described below applies to both ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ and these kaon parameters. Each fit has different conditions related to the ${\mbox{$z$}}$-binning and CPT assumptions, which are summarized in Table \[tb:fits\]. The fitting procedure is to minimize the $\chi^2$ between background-subtracted data yields and a prediction function, and uses the MINUIT [@minuit] program. The prediction function (${\cal P}$) for each beam and decay mode is $${\cal P}_{p,z} = {N_{p,z}^{\pi\pi}}\times A_{p,z}~, \label{eq:predict fun}$$ where ${N_{p,z}^{\pi\pi}}$ is the calculated number of ${\mbox{$K\rightarrow\pi\pi$}}$ decays in the specified $p,z$ range, and $A_{p,z}$ is the detector acceptance determined by a Monte Carlo simulation (Eq. \[eq:acceptance\]). Note that ${N_{p,z}^{\pi\pi}}$ includes full propagation of the kaon state from the target up to the decay point, as in the MC simulation (Sec. \[sec:mc\_kaonprop\]). For all fits, the prediction function is computed in $1~{ {\rm GeV}/c }$ $p$-bins and 2 meter $z$-bins. To evaluate the $\chi^2$ in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ fit, the event yields and prediction function are integrated in $10~{ {\rm GeV}/c }$-wide $p$-bins, and each $p$-bin is integrated over the full $z$-range from 110 m to 158 m.
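The evaluation of Eq. \[eq:predict fun\] on a fine grid followed by integration into coarse momentum bins can be sketched in a few lines; the flat toy inputs are ours and stand in for the kaon-propagation calculation and the MC acceptance:

```python
def predicted_yields(n_pipi, acceptance, fine_per_coarse=10):
    """Prediction function of Eq. eq:predict fun: P = N^pipi * A on a
    fine momentum grid (z already integrated), then summed into coarse
    momentum bins, as in the Re(eps'/eps) fit."""
    fine = [n * a for n, a in zip(n_pipi, acceptance)]
    return [sum(fine[i:i + fine_per_coarse])
            for i in range(0, len(fine), fine_per_coarse)]

# Toy inputs: 120 one-GeV/c bins from 40 to 160 GeV/c with a flat decay
# spectrum and a flat 30% acceptance (illustrative values only).
n_pipi = [1000.0] * 120
acc = [0.3] * 120
coarse = predicted_yields(n_pipi, acc)   # twelve 10 GeV/c bins
```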
For the other kaon parameter fits, the event yields and prediction function are integrated over $10~{ {\rm GeV}/c }\times 2~{\rm m}$ $p$-$z$ bins. To simplify the discussion that follows, the [target-$K_S$]{}component is ignored. For a pure $K_L$ beam, the number of ${\mbox{$K\rightarrow\pi\pi$}}$ decays is $${N_{p,z}^{\pi\pi}}\propto {\cal F}(p) \left| \eta \right|^2 e^{-t/{\mbox{$\tau_{L}$}}}, \label{eq:dnvac_dt}$$ where $t={m_K}(z-z_{\rm reg})/p$ is the measured proper time relative to decays at the regenerator edge, $\eta = \eta_{+-}~(\eta_{00})$ for charged (neutral) decays, ${\cal F}(p)$ is the kaon flux, and ${\mbox{$\tau_{L}$}}$ is the $K_L$ lifetime. The vacuum beam decay distribution is determined by ${\mbox{$\tau_{L}$}}$; the total event yield is proportional to $|\eta|^2$ and the kaon flux. For a pure $K_L$ beam incident on the  regenerator, the number of decays downstream of the regenerator is $$\begin{aligned} {N_{p,z}^{\pi\pi}}& \propto & {\cal F}_{R}(p) {\mbox{$T_{reg}$}}(p) \left[ \left| \rho(p) \right|^2 e^{-t/{\mbox{$\tau_{S}$}}} + \left| \eta \right|^2 e^{-t/{\mbox{$\tau_{L}$}}} + \right. \nonumber \\ & & \left. 2 | \rho | | \eta | \cos( {\mbox{$\Delta m$}}t + \phi_{\rho} - \phi_{\eta} ) e^{ -t/\tau_{avg} } \right] ~, \label{eq:dnreg_dt}\end{aligned}$$ where $\phi_{\eta} = \arg(\eta)$, ${\vert\rho\vert}$ and $\phi_{\rho}$ are the magnitude and phase of the coherent regeneration amplitude [^1], $1/\tau_{avg} \equiv (1/{\mbox{$\tau_{S}$}}+ 1/{\mbox{$\tau_{L}$}})/2$, ${\cal F}_{R}(p)$ is the kaon flux upstream of the regenerator, and ${\mbox{$T_{reg}$}}(p)$ is the kaon flux transmission through the regenerator. The prediction function accounts for decays inside the regenerator by using the effective regenerator edge (Fig. \[fig:regdiagram\]b) as the start of the decay region. All three terms in Eq. \[eq:dnreg\_dt\] are important, as illustrated in Fig. 
\[fig:decay\_dnstream\_reg\], which shows interference effects in the regenerator-beam ${\mbox{$z$}}$-vertex distribution.

  ---------------------------------------------- ------------- -------- -------------------------------------------------------------------------------------------------------------
  Fit Type                                       $z$ binning   Assume   Free Parameters
                                                               CPT
  ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$     No            Yes      ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$
  ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$   Yes           Yes      ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$
  ${\mbox{$\phi_{+-}$}}$                         Yes           No       ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, ${\mbox{$\phi_{+-}$}}$
  ${\mbox{$\Delta \phi$}}$                       Yes           No       ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, ${\mbox{$\phi_{+-}$}}$, ${\mbox{$\Delta \phi$}}$
  ---------------------------------------------- ------------- -------- -------------------------------------------------------------------------------------------------------------

: \[tb:fits\] Fit conditions used to analyze ${\mbox{$K\rightarrow\pi\pi$}}$ data. “$z$-binning” refers to using 2 m $z$-bins in the regenerator beam. “Assume CPT” means that Eq. \[eq:sw\] is a fit constraint. Free parameters common to all fits, but not shown in the table, include the regeneration parameters ${\mbox{$|f_{-}(70~{\rm GeV}/c)|$}}$ and $\alpha$, and the kaon flux in each $10~{ {\rm GeV}/c }$ momentum bin, ${\cal F}(p_1)-{\cal F}(p_{12})$.

Next, we discuss how the various factors in Eqs. \[eq:dnvac\_dt\]-\[eq:dnreg\_dt\] are treated in the fits. The average vacuum-to-regenerator kaon flux ratio (${\cal F}/{\cal F}_R$) and the average regenerator transmission (${\mbox{$T_{reg}$}}$) cancel in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ fit as explained in Appendix \[app:exp details\]. To account for the momentum dependence of the regeneration amplitude in the fits (see below), we need to know the momentum-dependence of ${\cal F}/({\cal F}_R{\mbox{$T_{reg}$}})$; it is measured from the vacuum-to-regenerator ratio of ${\mbox{$K_{L}\rightarrow \pi^{+}\pi^{-}\pi^{0}$}}$ decays, and is found to vary linearly by $(+7.0 \pm 0.7)$% between 40 and $160~{ {\rm GeV}/c }$.
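The proper-time shape of the regenerator-beam distribution of Eq. \[eq:dnreg\_dt\] can be sketched numerically as follows (flux and transmission factors are dropped, and the magnitudes, phases, and lifetimes passed in are illustrative placeholders, not the fitted KTeV values):

```python
import math

def reg_beam_intensity(t, rho_mag, phi_rho, eta_mag, phi_eta,
                       tau_s, tau_l, delta_m):
    """Proper-time dependence of Eq. [eq:dnreg_dt] with the overall
    flux and transmission factors dropped: a K_S term, a K_L term,
    and the K_S-K_L interference term damped by tau_avg."""
    tau_avg = 2.0 / (1.0 / tau_s + 1.0 / tau_l)
    ks = rho_mag ** 2 * math.exp(-t / tau_s)
    kl = eta_mag ** 2 * math.exp(-t / tau_l)
    interference = (2.0 * rho_mag * eta_mag
                    * math.cos(delta_m * t + phi_rho - phi_eta)
                    * math.exp(-t / tau_avg))
    return ks + kl + interference
```

Because $e^{-t/\tau_{avg}}$ is the geometric mean of the $K_S$ and $K_L$ damping factors, the interference term can never exceed the sum of the first two terms, so the intensity stays non-negative at all proper times.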
This variation in ${\cal F}/({\cal F}_R{\mbox{$T_{reg}$}})$ is mostly from the momentum dependence of the regenerator transmission, and to a lesser extent from the movable absorber transmission. The $K_L$ lifetime is taken from [@pdg00]. The values of ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$ are fixed to our measurements (Sec. \[sec:taus\_delm\]) for the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ fit, and are floated in the fits for the other kaon parameters. In the fits that assume CPT symmetry, ${\mbox{$\phi_{+-}$}}$ and ${\mbox{$\phi_{00}$}}$ are set equal to the superweak phase: $$\phi_{\eta} = {\mbox{$\phi_{SW}$}}= \tan^{-1}(2{\mbox{$\Delta m$}}/{\mbox{$\Delta \Gamma$}}). \label{eq:sw}$$ The final component of the prediction function is the regeneration amplitude. We use a model that relates $\rho$ to the difference between the forward kaon-nucleon scattering amplitudes for ${\mbox{$K^{0}$}}$ and ${\mbox{$\overline{K^{0}}$}}$: $$\rho \propto f_{-} \equiv \hbar \frac{ f(0) - \bar{f}(0) }{p},$$ where $f(0)$ and $\bar{f}(0)$ are the forward scattering amplitudes for ${\mbox{$K^{0}$}}$ and ${\mbox{$\overline{K^{0}}$}}$, respectively, and $p$ is the kaon momentum. Additional factors that contribute to $\rho$ are described in [@prd:731]. For an isoscalar target and high kaon momentum, $f_-$ can be approximated by a single Regge trajectory corresponding to the $\omega$ meson. In that case, Regge theory [@pr:gilman] predicts that the magnitude of $f_-$ should vary with kaon momentum as a power law: $$\left| f_{-}(p)\right| = {\mbox{$|f_{-}(70~{\rm GeV}/c)|$}}\left( \frac{ p }{ 70~{ {\rm GeV}/c }} \right)^{\alpha}. \label{eq:rho_pk}$$ The complex phase of $f_{-}$ can be determined from its momentum dependence through an integral dispersion relation, with the requirement that the forward scattering amplitudes be analytic functions. This “analyticity” requirement yields a constant phase for a power-law momentum dependence: $$\phi_{f_{-}} = - \frac{\pi}{2} ( 2 + \alpha).
\label{eq:analyticity}$$ In practice, the kaon-nucleon interactions in carbon are screened due to rescattering processes. The effects of screening modify the momentum dependence of $|f_{-}(p)|$ as well as its phase. Screening corrections are evaluated using Glauber theory formalism [@glauber55; @glauber66] for diffractive scattering, and using various models [@regmodels] for inelastic scattering. The screening corrections in the prediction function used in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ fit result in a $10$% correction to $\alpha$, and a $0.36{ \times 10^{-4}}$ shift in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. Systematic Uncertainties from Fitting {#sec:fit_syst} ------------------------------------- Uncertainties from the fitting procedure are summarized in Table \[tb:syst\_epefit\] and discussed below. These uncertainties are mainly related to regenerator properties, and contribute $\FitSyst{ \times 10^{-4}}$ to the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainty. The uncertainty on the momentum dependence of the regenerator transmission corresponds to a $\AttSyst { \times 10^{-4}}$ uncertainty on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. The sensitivity to [target-$K_S$]{} is checked by floating the ${\mbox{$K^{0}$}}/{\mbox{$\overline{K^{0}}$}}$ flux ratio (Sec. \[sec:mc\_kaonprop\]) in the fit; this changes the [target-$K_S$]{} component by $(2.5\pm 1.6)$% of itself, and leads to a systematic uncertainty of $0.12{ \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. 
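Several closed-form ingredients above lend themselves to a quick numerical sketch: the superweak phase of Eq. \[eq:sw\] (with ${\mbox{$\Delta \Gamma$}} = 1/\tau_S - 1/\tau_L$), and the power-law and analyticity relations of Eqs. \[eq:rho\_pk\]-\[eq:analyticity\]. The inputs below are rounded, PDG-like values, not the exact fitted ones, and screening corrections are ignored.

```python
import math

def superweak_phase_deg(delta_m, tau_s, tau_l):
    """phi_SW = arctan(2 Delta m / Delta Gamma) of Eq. [eq:sw],
    with Delta Gamma = 1/tau_S - 1/tau_L, in degrees."""
    d_gamma = 1.0 / tau_s - 1.0 / tau_l
    return math.degrees(math.atan(2.0 * delta_m / d_gamma))

def f_minus_mag(p, f70, alpha):
    """Power law of Eq. [eq:rho_pk], normalized at 70 GeV/c."""
    return f70 * (p / 70.0) ** alpha

def analyticity_phase_deg(alpha):
    """Constant phase of Eq. [eq:analyticity] for a pure power-law
    momentum dependence, in degrees (before screening corrections)."""
    return math.degrees(-0.5 * math.pi * (2.0 + alpha))

# rounded, PDG-like inputs: Delta m ~ 5270e6 hbar/s,
# tau_S ~ 89.65e-12 s, tau_L ~ 51.7e-9 s, alpha ~ -0.54
phi_sw = superweak_phase_deg(5270.0e6, 89.65e-12, 51.7e-9)   # ~43.4 deg
phi_f = analyticity_phase_deg(-0.54)                         # ~-131 deg
```

With a power-law slope near the fitted value quoted later in the text ($\alpha \simeq -0.54$), the unscreened analyticity phase is $-90^{\circ}(2-0.54) \approx -131^{\circ}$.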
The dependence of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ on  and  is $$\begin{aligned} \Delta {\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}&= & \left( +\ReepoeDelmSyst { \times 10^{-4}}\right) \times \frac{ {\mbox{$\Delta m$}}- \KtevDelm}{\KtevDelmTerr} \nonumber \\ & & + \left( -\ReepoeTausSyst { \times 10^{-4}}\right) \times \frac{ {\mbox{$\tau_{S}$}}- \KtevTaus}{\KtevTausTerr}, \label{eq:reepoe_dmts}\end{aligned}$$ where  and  are in units of $10^{6}~\hbar {\rm s}^{-1}$ and $10^{-12}~{\rm s}$, respectively. Each numerator in Eq. \[eq:reepoe\_dmts\] is the difference between the true value and the  measurement (Sec. \[sec:taus\_delm\]); the denominators are the total  uncertainties on ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$. Since the  measurements of  and  are anti-correlated, the systematic uncertainty on  due to variations in these parameters is $\ReepoeDelmTausSyst { \times 10^{-4}}$. There are also uncertainties in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ associated with the analyticity relation and screening correction used to predict the regeneration phase $\phi_{\rho}$. It has been argued that the analyticity assumption is good to $0.35^{\circ}$ in the E773 experiment[@analphi], which included kaon momenta down to $30~{ {\rm GeV}/c }$. A smaller deviation from analyticity is expected with the $40~{ {\rm GeV}/c }$ minimum momentum cut used in this analysis; this leads to a $0.25^{\circ}$ uncertainty in $\phi_{\rho}$ and a $0.07{ \times 10^{-4}}$ uncertainty in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. Using different screening models in the fit leads to a ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uncertainty of $0.15{ \times 10^{-4}}$. The fitting program uses the same $K_S/K_L$ flux ratio for the charged and neutral decay modes. Since the 1996 $K\to\pi^+\pi^-$ sample is excluded, we consider the possibility of a change in the kaon flux ratio between 1996 and 1997. 
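Equation \[eq:reepoe\_dmts\] is a simple linear propagation; the sketch below mirrors its form. The published coefficients and central values are elided macros in the text above, so the numbers used here are hypothetical placeholders, for illustration only.

```python
def reepoe_shift(delta_m, tau_s, dm_central, dm_err,
                 ts_central, ts_err, c_dm, c_ts):
    """Linear shift of Re(eps'/eps) following Eq. [eq:reepoe_dmts]:
    +c_dm per one-sigma upward shift in Delta m and -c_ts per
    one-sigma upward shift in tau_S (coefficients in units of 1e-4;
    the published values are not reproduced here)."""
    return (c_dm * (delta_m - dm_central) / dm_err
            - c_ts * (tau_s - ts_central) / ts_err)

# hypothetical coefficients and central values, illustration only
shift = reepoe_shift(delta_m=5274.0, tau_s=89.6,
                     dm_central=5260.0, dm_err=14.0,
                     ts_central=89.6, ts_err=0.07,
                     c_dm=0.3, c_ts=0.2)
```

By construction the shift vanishes when ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$ sit at their measured central values, and a one-sigma deviation in either parameter moves ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ by the corresponding coefficient.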
The kaon flux ratio depends only on the physical properties of the movable absorber and regenerator. The density of these two elements could change between the two years because of a possible few degree temperature difference, leading to a systematic uncertainty of $0.05{ \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$.

  ------------------------------------------------- ---------------------------------------------------------
  Source of Uncertainty                             ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$
                                                    Uncertainty (${ \times 10^{-4}}$)
  Regenerator transmission                          $0.19$
  Target-$K_S$                                      $0.12$
  ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$   $\ReepoeDelmTausSyst$
  Regenerator screening                             $0.15$
  $\phi_{\rho}$ (analyticity)                       $0.07$
  1996 vs. 1997 $K_S/K_L$ flux ratio                $0.05$
  ${\mbox{$\tau_{L}$}}$ [@pdg00]                    $0.02$
  Total                                             $\FitSyst$
  ------------------------------------------------- ---------------------------------------------------------

: \[tb:syst\_epefit\] Systematic uncertainties in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ from fitting.

Although fitting uncertainties in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ measurement are a small part of the total uncertainty, they are more significant in the measurements of ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, and ${\mbox{$\phi_{+-}$}}$ (Table \[tb:syst\_kparfit\]). Uncertainties in the regenerator transmission and in the analyticity assumption contribute the largest uncertainties in the measurements of ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$. The uncertainty in the regenerator screening model contributes an uncertainty of $0.75{^{\circ}}$ in the ${\mbox{$\phi_{+-}$}}$ measurement. Fitting uncertainties have a negligible effect on the measurement of ${\mbox{$\Delta \phi$}}$ because of cancellations between the charged and neutral decay modes.
  ----------------------------- ------------------------ ---------------------------------------- ------------------------ --------------------------
  Source of Uncertainty         ${\mbox{$\Delta m$}}$    ${\mbox{$\tau_{S}$}}$                    ${\mbox{$\phi_{+-}$}}$   ${\mbox{$\Delta \phi$}}$
                                $(\times 10^6\hbar/s)$   $({\mbox{$\times 10^{-12}~{\rm s}$}})$   $({^{\circ}})$           $({^{\circ}})$
  Regen. transmission           $10.0$                   $0.020$                                  $0.07$                   $0.01$
  Target-$K_S$                  $1.4$                    $0.017$                                  $0.13$                   $0.01$
  Regen. screening              $3.0$                    $0.020$                                  $0.75$                   $0.03$
  $\phi_{\rho}$ (analyticity)   $8.1$                    $0.030$                                  $0.25$                   $0.00$
  ${\mbox{$\tau_{L}$}}$         $0.0$                    $0.001$                                  $0.00$                   $0.00$
  Total                         $13.3$                   $0.045$                                  $0.80$                   $0.03$
  ----------------------------- ------------------------ ---------------------------------------- ------------------------ --------------------------

: \[tb:syst\_kparfit\] Fitting uncertainties in ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, ${\mbox{$\phi_{+-}$}}$, and ${\mbox{$\Delta \phi$}}$.

Measurement of $R\lowercase{e}({\mbox{$\epsilon'\!/\epsilon$}})$ {#sec:reepoe_measure}
=================================================================

The  measurement of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ uses the background-subtracted  and  samples in the vacuum and regenerator beams, the prediction for the  acceptances using the Monte Carlo, and the fitting program. Section \[sec:reepoe\_results\] presents the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ result and a summary of the systematic uncertainties. Section \[sec:crosschecks\] presents several crosschecks, including a “reweighting” technique which does not use a Monte Carlo acceptance correction.
The ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ Result {#sec:reepoe_results} ------------------------------------------------------------------- There are 48 measured quantities that enter into the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ fit: the observed numbers of  and  decays in the vacuum and regenerator beams, each in twelve $10~{ {\rm GeV}/c }$ wide $p$ bins. Within each momentum bin we use the ${\mbox{$z$}}$-integrated yield from 110 m to 158 m. There are 27 fit parameters including 24 kaon fluxes, two regeneration parameters, and ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. Therefore, the number of degrees of freedom in the fit is $48-27 = 21$. CPT symmetry is assumed (Eq. \[eq:sw\]), and the values of  and  are from our measurements described in Sec. \[sec:taus\_delm\]. For the combined 1996 and 1997 datasets, we obtain $$\begin{aligned} \begin{array}{ccc} {\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}& = & \left( \KtevXReepoe \pm \KtevStat \right) { \times 10^{-4}}\\ {\mbox{$|f_{-}(70~{\rm GeV}/c)|$}}& = & \KtevAmp70andErr \\ \alpha & = & \KtevPwrSlpandErr \label{eq:reepoe fit results} \\ \chi^2/dof & = & 27.6/21 ~, \end{array}\end{aligned}$$ where the errors reflect the statistical uncertainties. Including the systematic uncertainty, $$\begin{aligned} {\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}& = & \left[ \KtevXReepoe \pm \KtevStat~({\rm stat}) \pm \TotSystMC~({\rm syst}) \right] { \times 10^{-4}}\nonumber \\ & = & \left( \KtevReepoe \pm \KtevTErr \right) { \times 10^{-4}}~, \label{eq:reepoe_result}\end{aligned}$$ where the contributions to the systematic uncertainty are summarized in Table \[tb:syst\_reepoe\]. The systematic uncertainties from the charged and neutral decay modes contribute $1.26{ \times 10^{-4}}$ and $2.00{ \times 10^{-4}}$, respectively. 
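If the charged and neutral systematic contributions quoted above are treated as fully independent, they combine in quadrature (a simplification, since the fitting uncertainty is common to both modes):

```python
import math

def quadrature(*terms):
    """Combine independent uncertainty contributions in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

# charged (1.26) and neutral (2.00) systematic contributions, in 1e-4
combined_syst = quadrature(1.26, 2.00)   # ~2.36e-4 under independence
```

The same rule reproduces the reweighting comparison quoted later in this section, where $\pm 2.1$ (stat) and $\pm 3.3$ (syst) combine to $\pm 3.9$.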
The largest uncertainties are from the CsI energy reconstruction ($1.47{ \times 10^{-4}}$), neutral mode background subtraction ($1.07{ \times 10^{-4}}$), $z$-dependence of the acceptance in the charged decay mode ($\ChrgZSlopeSyst{ \times 10^{-4}}$), and the charged mode Level 3 filter ($\L3CHRGSYST{ \times 10^{-4}}$).

  ----------------------------------------- ---------------------- ----------------------
  Source of uncertainty                     Charged                Neutral
                                            (${ \times 10^{-4}}$)  (${ \times 10^{-4}}$)
  Trigger                                   $0.58$                 $0.18$
  CsI energy, position recon                $-$                    $1.47$
  Track reconstruction                      $0.32$                 $-$
  Selection efficiency                      $0.47$                 $0.37$
  Apertures                                 $0.30$                 $0.48$
  Background                                $0.20$                 $1.07$
  ${\mbox{$z$}}$-dependence of acceptance   $\ChrgZSlopeSyst$      $\NeutZSlopeSyst$
  MC statistics                             $\KtevMCStatChrg$      $\KtevMCStatNeut$
  Fitting
  TOTAL
  ----------------------------------------- ---------------------- ----------------------

: \[tb:syst\_reepoe\] Systematic uncertainties in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ measurement from the charged and neutral decay modes.

${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ Crosschecks {#sec:crosschecks}
--------------------------------------------------------------------

### Consistency Among Data Subsets {#sec:subset_checks}

We have performed several crosschecks of our result by dividing the  samples into subsets and checking the consistency of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ and other parameters among the different subsets. Figure \[fig:crosscheck\] shows the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ result in roughly month-long time periods, in each regenerator position, and for the two magnet polarities. These comparisons all show good agreement. The first data point labeled 96/97a corresponds to the current analysis applied to the sample used in our previous publication [@prl:pss]; this reanalysis is discussed in Appendix \[sec:prl99\]. To check the dependence on kaon momentum, 12 separate fits are done in $10~{ {\rm GeV}/c }$ momentum bins. The free parameters in each fit are ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ and ${\mbox{$|f_{-}(70~{\rm GeV}/c)|$}}$, which is proportional to the regeneration amplitude at $70~{ {\rm GeV}/c }$.
The momentum dependence of the regeneration amplitude within each $10~{ {\rm GeV}/c }$ bin is described by the power-law, $\rho \sim p^{\alpha}$, where $\alpha$ is fixed to the value found in the nominal fit (Eq. \[eq:reepoe fit results\]). The $\chi^2$ per degree of freedom is 20.7/11 for  vs. ${\mbox{$p$}}$ (Fig. \[fig:reepoe\_pk\]a) and 5.2/11 for ${\mbox{$|f_{-}(70~{\rm GeV}/c)|$}}$ vs. ${\mbox{$p$}}$ (Fig. \[fig:reepoe\_pk\]b); the combined $\chi^2/{\rm dof}$ is 25.9/22 . The scatter of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ in the higher momentum bins is not present in the kaon parameter measurements (Fig. \[fig:kparvspk\]), and a linear fit to ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ vs. $p$ has no significant slope ($<0.8\sigma$). The 140-$150~{ {\rm GeV}/c }$ bin, which accounts for 6.3 of the $\chi^2$ in Fig. \[fig:reepoe\_pk\]a, also contributes 6.7 to the $\chi^2$ in the nominal fit (Eq. \[eq:reepoe fit results\]), with nearly equal contributions from all four  samples. Another crosscheck is that $\alpha$, which describes the regeneration power law (Eq. \[eq:rho\_pk\]), should be the same for both the charged and neutral decays. A separate fit in each decay mode results in $$\begin{aligned} \begin{array}{ccc} \alpha_{+-} & = & -0.5421 \pm 0.0009~({\rm stat}) \\ \alpha_{00} & = & -0.5445 \pm 0.0017~({\rm stat}) \end{array}\end{aligned}$$ which agree to within $1.2\sigma$. The [target-$K_S$]{} correction is checked in a separate ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ fit that uses only those events with kaon momenta below $100~{ {\rm GeV}/c }$ and a ${\mbox{$z$}}$-vertex farther than 124 m from the BeO target. This sample has a negligible $K_S$ component, and is therefore described by essentially pure $K_L$ beams entering the decay region. Using this sub-sample, the change in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ from the nominal result (Eq. 
\[eq:reepoe\_result\]) is $(+0.85\pm 0.89){ \times 10^{-4}}$, where the error reflects the uncorrelated statistical uncertainty.

### ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ From Reweighting Technique {#sec:rewgt}

As a final crosscheck of our “standard” analysis, we also measure ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ using a reweighting technique that does not depend on a Monte Carlo acceptance correction. This technique is similar to that used by the NA48 experiment [@na48:reepoe]. The “local acceptance” of  decays in each $p$-$z$ bin ($1~{ {\rm GeV}/c }\times 2~{\rm m}$) is nearly identical in both beams, with the only difference arising from the effects of accidental activity. In this method, a weight is applied to vacuum beam events such that the regenerator and weighted vacuum beam events have the same statistical sampling of decay vertex and kaon energy. With the same local acceptance in the two beams as a function of $p$ and $z$, an ideal weight function eliminates differences in the reconstruction efficiencies and resolutions in the two beams. The weight factor, which is applied event-by-event, is the [*a priori*]{} ratio of the regenerator beam and vacuum beam decay rates, $$W(p,z)=\frac{ {d\Gamma_{reg}/dt}(p,z)} { {d\Gamma_{vac}/dt}(p,z)}~. \label{eq:wgt factor}$$ The functions $d\Gamma_{vac}/dt$ and $d\Gamma_{reg}/dt$ are similar to those given by Equations \[eq:dnvac\_dt\] and \[eq:dnreg\_dt\], respectively, with the modification that they are constrained to vanish upstream of the regenerator. The $z$ distribution in each beam, without the weight factor, is shown in Fig. \[fig:pzdata\]a-b, and the pion track $y$-illumination at the first drift chamber is shown in Fig. \[fig\_rwt\_illum\]a. The differences between the vacuum and regenerator beam distributions are due to the average acceptance difference coming from the different $K_L$ and $K_S$ lifetimes.
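A toy version of the weight factor of Eq. \[eq:wgt factor\] can be built from a pure-$K_L$ vacuum rate and a pure-$K_S$ regenerator rate standing in for the full decay-rate functions (the interference term is dropped, the regenerator-edge position is an invented placeholder, and the unit conversion is a toy scale, not the experiment's):

```python
import math

TAU_S, TAU_L, M_K = 0.8954e-10, 5.17e-8, 0.4977  # s, s, GeV (approximate)

def proper_time(p, z, z_reg=126.0):
    # t = m_K (z - z_reg)/p; the 1e-9 factor is a toy unit conversion
    return M_K * (z - z_reg) / p * 1e-9

def rate_vac(p, z):
    """Simplified vacuum-beam rate: pure K_L exponential."""
    return math.exp(-proper_time(p, z) / TAU_L)

def rate_reg(p, z):
    """Simplified regenerator-beam rate: pure K_S exponential."""
    return math.exp(-proper_time(p, z) / TAU_S)

def weight(p, z):
    """Eq. [eq:wgt factor]: a priori regenerator/vacuum rate ratio,
    applied event-by-event to vacuum-beam events."""
    return rate_reg(p, z) / rate_vac(p, z)
```

By construction the weighted vacuum rate, `rate_vac * weight`, equals `rate_reg` exactly, so the weighted vacuum-beam $z$ distribution follows the regenerator-beam shape.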
The effect of $W(p,z)$ is that the vacuum beam distributions match those in the regenerator beam (Fig. \[fig\_rwt\_illum\]b). The main drawback to this reweighting method is that the statistical uncertainty is increased by a factor of 1.7 compared to the standard analysis, because of the loss of vacuum beam events through the reweighting function. In addition, the reweighting technique is more sensitive to the neutral energy reconstruction because the weight factor (Eq. \[eq:wgt factor\]) depends on kaon energy. The event reconstruction and selection are very similar to that of the standard analysis. In the ${\mbox{$K\rightarrow\pi^{0}\pi^{0}$}}$ mode, the event selection cuts are identical. The most significant difference in the reweighting analysis is the energy scale correction, where the absolute energy scale is corrected as opposed to the relative data-MC scale. The energy scale correction is derived from the difference between the $\gamma\gamma$ and $\pi^+\pi^-$ $z$-vertex reconstructed in ${\mbox{$K_{L}\rightarrow \pi^{+}\pi^{-}\pi^{0}$}}$. For the ${\mbox{$K\rightarrow\pi^{+}\pi^{-}$}}$ selection, the reweighting analysis differs from the standard analysis by adding a center-of-energy “${{\tt RING}}$” cut (Eq. \[eq:ring\_def\]). This cut is the same for neutral and charged events, and eliminates the need to correct for the effect of kaon scattering in the movable absorber upstream of the regenerator (Fig. \[fig:beamline\]). ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is extracted in a fit which compares the background-subtracted yields to a prediction function. The prediction function is the same as Eqs. \[eq:dnvac\_dt\]-\[eq:dnreg\_dt\], except that in the vacuum beam, both the data and the prediction function include the weight factor. As in the standard fit, there are 48 measured inputs, which correspond to the numbers of  events in each $10~{ {\rm GeV}/c }$ wide kaon momentum bin, for both decay modes and both regenerator positions. 
The free parameters in the fit include 24 kaon fluxes, the regeneration amplitude and power-law slope (Eq. \[eq:rho\_pk\]), and ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. In the reweighting fit, the two regeneration parameters also appear in the vacuum beam fit functional via the reweighting function, where they are not varied. We find that the value of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is quite insensitive to the regeneration parameters used in the reweighting function. As in the standard  fit, ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$ are fixed to the values from the measurements in Sec. \[sec:taus\_delm\]. There are a total of 27 free parameters and 21 degrees of freedom in the reweighting method fit. The systematic errors for the reweighting analysis are shown in Table \[tab:rewt\_syst\]. In the neutral decay mode, the largest systematic uncertainty results from the sensitivity to the minimum photon cluster energy. The other large source of systematic uncertainty results from accidental activity. The level of accidental activity in the detector is slightly different for the two beams; this affects the local acceptance differently in the vacuum and regenerator beams and is not accounted for by the weight factor. In the standard analysis, accidental effects are accounted for in the Monte Carlo acceptance correction. To determine the uncorrelated statistical uncertainty between the standard and reweighting analyses, a large number of Monte Carlo samples are generated and fit with both methods; the uncorrelated uncertainty results mainly from the effective loss in statistics in the reweighting method. The uncorrelated systematic error is mainly from the uncertainty in the minimum cluster energy requirement in the reweighting analysis; there are also contributions from the acceptance correction in the standard analysis and accidental effects in the reweighting analysis. 
The reweighting and standard analyses were both applied to the part of the 1997 data sample that was not used in the previous publication, corresponding to roughly 3/4 of the total sample. There is good agreement between the two analyses: the difference between the reweighting and standard ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ results is $$\begin{aligned} \Delta[{\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}] & = & [+1.5 \pm 2.1~({\rm stat}) \pm 3.3~({\rm syst})] { \times 10^{-4}}\nonumber \\ & = & (+1.5 \pm 3.9) { \times 10^{-4}}~,\end{aligned}$$ where the errors reflect the uncorrelated uncertainty between the two methods.

  --------- ------------------------ -----------------------
  Sample    Source of Uncertainty    Uncertainty
                                     (${ \times 10^{-4}}$)
  Neutral   Backgrounds              1.31
            Reconstruction           2.93
            Trigger                  0.41
            Accidental Bias          1.46
  Charged   Backgrounds              0.16
            Reconstruction           0.87
            Trigger                  1.09
            Accidental Bias          0.81
            Regenerator location     0.20
            Common Ring Number cut   0.70
            Reweighting Parameters   0.30
            Total                    3.98
  --------- ------------------------ -----------------------

: \[tab:rewt\_syst\] Systematic uncertainties in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ from the reweighting analysis.

Measurements of Kaon Parameters {#sec:kaonpar}
===============================

The regenerator beam decay distribution allows measurements of the kaon parameters ${\mbox{$\tau_{S}$}}$ and ${\mbox{$\Delta m$}}$, and CPT tests based on measurements of ${\mbox{$\phi_{+-}$}}$ and ${\mbox{$\Delta \phi$}}$. The main difference compared to the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ fit is that we fit the shape of the decay distribution instead of the integrated yield. All of the fits discussed below use $2$ meter wide ${\mbox{$z$}}$-bins in the regenerator beam from 124 m to 158 m. In the vacuum beam, one ${\mbox{$z$}}$-bin from 110 m to 158 m is used to determine the kaon flux in each $10~{ {\rm GeV}/c }$ momentum bin. A ${\mbox{$z$}}$-binned fit increases the sensitivity to migrations in ${\mbox{$z$}}$.
To allow for such migrations near the regenerator edge, we include an extra “${\mbox{$z$}}$-shift” parameter which is the shift in the effective regenerator edge relative to the nominal value calculated in Section \[sec:regenerator\]. In all fits, the charged and neutral data are consistent with no ${\mbox{$z$}}$-shift at the regenerator edge. Systematic errors for the kaon parameter measurements are evaluated in a manner similar to the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ analysis, and are summarized in Table \[tb:syst\_kpar\]. The sensitivity to ${\mbox{$z$}}$ migration is most pronounced in the ${\mbox{$\Delta \phi$}}$ uncertainty related to CsI energy reconstruction. The complicated $z$ dependence of the regenerator scattering background (Fig. \[fig:zbkgneut\]) also contributes significant uncertainties to results obtained from $z$-binned fits.

  ----------------------------- ------------------------ ---------------------------------------- ------------------------ --------------------------
  Source of                     ${\mbox{$\Delta m$}}$    ${\mbox{$\tau_{S}$}}$                    ${\mbox{$\phi_{+-}$}}$   ${\mbox{$\Delta \phi$}}$
  Uncertainty                   $(\times 10^6\hbar/s)$   $({\mbox{$\times 10^{-12}~{\rm s}$}})$   $({^{\circ}})$           $({^{\circ}})$
  *Charged mode:*
  Trigger                       $0.2$                    $0.004$                                  $0.10$                   $0.02$
  Track reconstruction          $0.6$                    $0.032$                                  $0.02$                   $0.02$
  Selection efficiency          $3.2$                    $0.011$                                  $0.35$                   $0.06$
  Apertures                     $2.8$                    $0.038$                                  $0.76$                   $0.09$
  Background                    $0.8$                    $0.002$                                  $0.01$                   $0.01$
  Acceptance                    $1.2$                    $0.026$                                  $0.14$                   $0.06$
  MC statistics                 $2.6$                    $0.012$                                  $0.28$                   $0.05$
  Fitting                       $\KtevDelmFITerr$        $\KtevTausFITerr$                        $0.80$                   $0.03$
  Total                         $14.3$                   $0.074$                                  $1.20$                   $0.14$
  *Neutral mode:*
  Trigger                       $0.4$                    $0.013$                                  $-$                      $0.03$
  CsI reconstruction            $8.1$                    $0.094$                                  $-$                      $0.37$
  Selection efficiency          $5.0$                    $0.035$                                  $-$                      $0.06$
  Apertures                     $2.2$                    $0.040$                                  $-$                      $0.14$
  Background                    $7.0$                    $0.030$                                  $-$                      $0.14$
  Acceptance                    $2.0$                    $0.030$                                  $-$                      $0.05$
  MC statistics                 $3.3$                    $0.016$                                  $-$                      $0.06$
  Fitting                       $\KtevDelmFITerr$        $\KtevTausFITerr$                        $-$                      $0.03$
  Total                         $18.3$                   $0.126$                                  $-$                      $0.43$
  Combined Total                $14.2$                   $0.069$                                  $-$                      $\DelPhiTOTerr$
  ----------------------------- ------------------------ ---------------------------------------- ------------------------ --------------------------

: \[tb:syst\_kpar\] Systematic uncertainties in the ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, ${\mbox{$\phi_{+-}$}}$, and ${\mbox{$\Delta \phi$}}$ measurements from the charged and neutral mode analyses.

Measurement of ${\mbox{$\Delta m$}}$ and $\tau_S$ {#sec:taus_delm}
-------------------------------------------------

To measure ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$, we fit the charged and neutral modes separately and then combine results according to the statistical and uncorrelated systematic errors. We assume CPT symmetry (Eq. \[eq:sw\]) by dynamically setting the value of $\phi_{\eta}$ equal to the superweak phase using the floated values of $\Delta m$ and ${\mbox{$\tau_{S}$}}$. The fit values of $\tau_S$ and ${\mbox{$\Delta m$}}$ are independent of the value of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. For each charged and neutral mode fit, there are 216 measured input quantities. The number of vacuum beam decays in each $10~{ {\rm GeV}/c }$ momentum bin gives 12 inputs; the number of regenerator beam decays in each $2~{\rm m} \times 10~{ {\rm GeV}/c }$ bin adds $17\times 12=204$ inputs. The floated parameters include the kaon flux in each of 12 momentum bins, the magnitude and phase of the regeneration amplitude, a ${\mbox{$z$}}$-shift parameter, ${\mbox{$\Delta m$}}$, and ${\mbox{$\tau_{S}$}}$; these 17 floated parameters lead to 199 degrees of freedom. The results of separate fits to the charged and neutral mode data are shown in Table \[tab:delm\_taus\]. The difference between the charged and neutral mode results, after accounting for the common systematic uncertainty described below, is $1.6\sigma$ for ${\mbox{$\Delta m$}}$ and $0.1\sigma$ for ${\mbox{$\tau_{S}$}}$.
  --------- ------------------------------------------------- ---------------------------------------- --------------
  Decay     ${\mbox{$\Delta m$}}$                             ${\mbox{$\tau_{S}$}}$                    $\chi^2/$dof
  mode      $({\mbox{$\times 10^{6}~\hbar {\rm s}^{-1}$}})$   $({\mbox{$\times 10^{-12}~{\rm s}$}})$
  Charged   $5266.7\pm ~5.9\pm 14.3$                          $89.650\pm 0.028\pm 0.074$               228/199
  Neutral   $5237.3\pm 10.6\pm 18.3$                          $89.637\pm 0.050\pm 0.126$               195/199
  --------- ------------------------------------------------- ---------------------------------------- --------------

: \[tab:delm\_taus\] ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$ results for the regenerator beam charged and neutral data samples. The first uncertainty is statistical; the second is systematic.

Systematic errors arising from data analysis are larger in the neutral decay mode than in the charged mode, primarily because of larger background and uncertainties in the CsI energy reconstruction. The systematic uncertainties due to regeneration properties (screening, attenuation, and analyticity) are more significant than in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ analysis because there is no cancellation between charged and neutral mode data. These uncertainties in the regeneration properties are common to the charged and neutral mode fits, and are applied to the final result after averaging. The common systematic uncertainty is $\KtevDelmFITerr {\mbox{$\times 10^{6}~\hbar {\rm s}^{-1}$}}$ on ${\mbox{$\Delta m$}}$, and $\KtevTausFITerr {\mbox{$\times 10^{-12}~{\rm s}$}}$ on ${\mbox{$\tau_{S}$}}$. We combine the charged and neutral mode results weighted by the statistical uncertainty and the independent parts of the systematic uncertainty.
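Such a combination can be sketched as an inverse-variance weighted average using the central values from Table \[tab:delm\_taus\]. The sketch below is a simplification: the per-mode stat and syst errors are merged in quadrature and treated as fully independent, whereas the actual procedure applies the common regeneration systematic only after averaging, so the numbers are not the published result.

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted average and its uncertainty."""
    weights = [1.0 / e ** 2 for e in errors]
    avg = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return avg, 1.0 / math.sqrt(sum(weights))

# Delta m inputs from Table [tab:delm_taus], in units of 1e6 hbar/s,
# with stat and syst merged in quadrature (simplification, see text)
dm_avg, dm_err = weighted_average(
    [5266.7, 5237.3],
    [math.hypot(5.9, 14.3), math.hypot(10.6, 18.3)])
```

The more precise charged-mode measurement receives roughly twice the weight of the neutral-mode one in this average.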
The results are $$\begin{aligned} {\mbox{$\Delta m$}}& = & ( \KtevDelm \pm \KtevDelmTerr ) {\mbox{$\times 10^{6}~\hbar {\rm s}^{-1}$}}~, \\ {\mbox{$\tau_{S}$}}& = & ( \KtevTaus \pm \KtevTausTerr ) {\mbox{$\times 10^{-12}~{\rm s}$}}~,\end{aligned}$$ which correspond to a superweak phase of $${\mbox{$\phi_{SW}$}}= (\PhiSW \pm \PhiSWErr ){^{\circ}}.$$ Measurement of ${\mbox{$\phi_{+-}$}}$ and ${\mbox{$\phi_{+-}$}}-{\mbox{$\phi_{SW}$}}$ ------------------------------------------------------------------------------------- The fit for  is similar to the ${\mbox{$\Delta m$}}$-${\mbox{$\tau_{S}$}}$ fit. The main difference is that we remove the CPT assumption (Eq. \[eq:sw\]) and float $\phi_{+-}$ in addition to ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$. The fit is performed to  data only. Compared to the ${\mbox{$\Delta m$}}$-${\mbox{$\tau_{S}$}}$ fit, we have the same number of measured inputs (216) and one additional free parameter (${\mbox{$\phi_{+-}$}}$), for a total of $216-18=198$ degrees of freedom. There is a large correlation among ${\mbox{$\phi_{+-}$}}$, ${\mbox{$\Delta m$}}$, and ${\mbox{$\tau_{S}$}}$, which is illustrated in Fig. \[fig:phipmcorr\] (Appendix \[app:kparcor\]). The ${\mbox{$\Delta m$}}$-${\mbox{$\tau_{S}$}}$ correlation is much stronger than in a fit using the CPT assumption, and therefore results in a larger statistical uncertainty on ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$. The ${\mbox{$\phi_{+-}$}}$ statistical uncertainty in our fit is 2.4 times larger than in a fit with a fixed value of ${\mbox{$\tau_{S}$}}$. The $0.76{^{\circ}}$ systematic uncertainty from apertures (Table \[tb:syst\_kpar\]) is mainly from the cell separation cut at the drift chambers (Sec. \[sec:chrg\_syst\]). Different screening models result in a $0.75{^{\circ}}$ uncertainty on ${\mbox{$\phi_{+-}$}}$ (Table \[tb:syst\_kparfit\]). 
The value of ${\mbox{$\phi_{+-}$}}$ depends on the regeneration phase $\phi_{\rho}$; a $0.25{^{\circ}}$ uncertainty from the analyticity assumption leads to a $0.25{^{\circ}}$ error on ${\mbox{$\phi_{+-}$}}$. The total systematic uncertainty on  is $\PhipmAllSyst{^{\circ}}$. The results of the fit are: $$\begin{array}{lcl} {\mbox{$\phi_{+-}$}}& = & \left[ \PhipmAll \pm \PhipmAllErr~\mbox{(stat)} \pm \PhipmAllSyst~\mbox{(syst)} \right]{^{\circ}}\\ & = & (\PhipmAll \pm \PhipmAllTerr ){^{\circ}}\\ {\mbox{$\Delta m$}}& =& \left[\PhiAllDM \pm \PhiAllDMErr~\mbox{(stat)}\right] {\mbox{$\times 10^{6}~\hbar {\rm s}^{-1}$}}\\ {\mbox{$\tau_{S}$}}& =& \left[\PhiAllTs \pm \PhiAllTsErr~\mbox{(stat)}\right] {\mbox{$\times 10^{-12}~{\rm s}$}}\\ \chi^2/\nu & = & \PhiAllChi / \PhiAllDOF ~. \label{eq:phipm_fit} \end{array}$$ Next, we fit the deviation from the superweak phase, ${\mbox{$\phi_{+-}$}}-{\mbox{$\phi_{SW}$}}$, which is a direct test of CPT symmetry. Compared to the ${\mbox{$\phi_{+-}$}}$ fit shown above, the fit for ${\mbox{$\phi_{+-}$}}- {\mbox{$\phi_{SW}$}}$ results in slightly reduced statistical and systematic uncertainties because the value of ${\mbox{$\phi_{SW}$}}$ is computed dynamically using the floated values of ${\mbox{$\Delta m$}}$ and ${\mbox{$\tau_{S}$}}$ (Eq. \[eq:sw\]), and is less sensitive to the correlations. The result of this fit is $$\begin{aligned} \begin{array}{lcl} {\mbox{$\phi_{+-}$}}-{\mbox{$\phi_{SW}$}}& = & \left[ \dPhiSW \pm \dPhiSWSTATerr~\mbox{(stat)} \pm \dPhiSWSYSTerr ~\mbox{(syst)} \right]{^{\circ}}\\ & = & ( \dPhiSW\pm \dPhiSWTOTerr){^{\circ}}~, \end{array}\end{aligned}$$ and the $\chi^2$ is the same as for the ${\mbox{$\phi_{+-}$}}$ fit (Eq. \[eq:phipm\_fit\]). Measurement of ${\mbox{$\Delta \phi$}}$ --------------------------------------- The measurement of ${\mbox{$\Delta \phi$}}$ is performed in a simultaneous fit to neutral and charged mode data. 
The number of measured inputs is 432, which is simply twice the number used in the ${\mbox{$\Delta m$}}$-${\mbox{$\tau_{S}$}}$ fits, since both charged and neutral modes are used in the same fit. The floated parameters include the charged and neutral kaon fluxes in each of 12 momentum bins ($12+12=24$), the regeneration amplitude and phase, one ${\mbox{$z$}}$-shift term in charged and one in neutral, the real and imaginary parts of ${\mbox{$\epsilon'\!/\epsilon$}}$, ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, and ${\mbox{$\phi_{+-}$}}$; these 33 floated parameters lead to 399 degrees of freedom. Note that the fit uses ${\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ instead of  as a free parameter (Eq. \[eq:delphimpe\]). The fit for ${\mbox{$\Delta \phi$}}$ benefits from the cancellation of uncertainties in the regenerator properties. Also, there is little correlation with the other kaon parameters such as ${\mbox{$\Delta m$}}$, ${\mbox{$\tau_{S}$}}$, and the phase of $\epsilon$. Consequently, systematic uncertainties due to regenerator properties are small. The largest systematic uncertainty of $0.37{^{\circ}}$ is from the CsI cluster reconstruction in the neutral mode analysis. There is a correlation between the real and imaginary parts of $\epsilon'/\epsilon$, with a correlation coefficient of $-0.565$. As a result, the statistical uncertainty for ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ is increased by about $20\%$ compared to the standard fit that sets ${\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}=0$. The results of the fit without CPT assumptions are $$\begin{aligned} {\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}& = & \left[ \IMEPOE \pm \IMEPOEStatErr~\mbox{(stat)} \pm \IMEPOESystErr~\mbox{(syst)} \right] { \times 10^{-4}}\nonumber \\ & = & \IMEPOEpmErr \nonumber \\ {\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}& = & [+22.5\pm1.9~(\mbox{stat})]{ \times 10^{-4}}\nonumber \\ \chi^{2}/\nu & = & 425 / 398 ~. 
\nonumber \end{aligned}$$ In terms of ${\mbox{$\Delta \phi$}}$, the result is $$\begin{array}{lcl} {\mbox{$\Delta \phi$}}& = & \left[ \DelPhi \pm \DelPhiSTATerr~\mbox{(stat)} \pm \DelPhiSYSTerr~\mbox{(syst)} \right]{^{\circ}}\\ & = & \left( \DelPhi \pm \DelPhiTOTerr \right){^{\circ}}. \\ \end{array}$$ Kaon Parameter Crosschecks {#sec:kaonpar_checks} -------------------------- The  samples are divided into various subsets, among which we check the consistency of , , , and ${\mbox{$\Delta \phi$}}$. For all four measurements, we find good agreement between five month-long time periods, the two regenerator positions, and the two magnet polarities. The consistency of , , , and as a function of kaon momentum is shown in Fig. \[fig:kparvspk\]. There is good agreement among the 12 momentum bins in both the charged and neutral decay modes. Allowing each parameter to have a slope as a function of kaon momentum, the significance of each slope is between $0.5\sigma$ and $1.5 \sigma$, consistent with no momentum dependence. To check the dependence on proper decay time, the regenerator beam   samples are divided into subsets with proper time less than and greater than 3 $K_S$ lifetimes relative to the regenerator edge. This test is also sensitive to the significant background variations as a function of decay vertex (Fig. \[fig:zbkgneut\]). The entire vacuum beam samples are used to determine the kaon flux in each $10~{ {\rm GeV}/c }$ momentum bin, and the statistical uncertainty from the vacuum beam data is subtracted for these comparisons. For the sample with decays near the regenerator edge, 85% of the  decay rate is from the $K_S$ term that is proportional to ${\vert\rho\vert}^2$ in Eq. \[eq:dnreg\_dt\]; for the other sample, 42% of the  decay rate is from the $K_S$ term. Fig. \[fig:kparvstaus\] shows consistent results between the two proper time ranges for the kaon parameter measurements. 
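The degrees-of-freedom bookkeeping quoted for these fits can be checked directly; the counts below are taken from the text (a verification sketch only).

```python
# Degrees-of-freedom bookkeeping for the fits described above.

# phi_+- fit: 216 measured inputs and 18 floated parameters.
dof_phipm = 216 - 18
assert dof_phipm == 198

# Delta-phi fit: simultaneous charged + neutral fit with 2 x 216 = 432 inputs.
n_params = (12 + 12   # charged and neutral kaon fluxes in 12 momentum bins
            + 2       # regeneration amplitude and phase
            + 2       # one z-shift term in each mode
            + 2       # real and imaginary parts of eps'/eps
            + 3)      # Delta-m, tau_S and phi_+-
dof_dphi = 2 * 216 - n_params
assert n_params == 33 and dof_dphi == 399
```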
Note that the measurement of ${\mbox{$\phi_{+-}$}}$, which has strong correlations with  and , is more sensitive to early decay times; the measurement of , which is very weakly correlated with  and , is more sensitive to later decay times. Conclusions {#sec:conclude} =========== In this paper, we report an improved measurement of direct CP violation in the decay of the neutral kaon: $$\begin{array}{lcl} {\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}& = & \left[ \KtevReepoe \pm \KtevStat~({\rm stat}) \pm \TotSystMC~({\rm syst}) \right] { \times 10^{-4}}\\ & = & \left( \KtevReepoe \pm \KtevTErr \right) { \times 10^{-4}}~. \end{array}$$ This result, which supersedes reference [@prl:pss], is consistent with the measurement from the NA48 collaboration [@na48:reepoe; @na48:blois02] ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}= (15.3 \pm 2.6){ \times 10^{-4}}$. The average of our result and measurements from [@prl:731; @pl:na31; @na48:reepoe] gives ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}= (17.2 \pm 1.8){ \times 10^{-4}}$ with a $13\%$ confidence level. In addition, we report new measurements of the $K_L$-$K_S$ mass difference and the $K_S$ lifetime: $$\begin{array}{lcl} {\mbox{$\Delta m$}}& = & ( \KtevDelm \pm \KtevDelmTerr ) {\mbox{$\times 10^{6}~\hbar {\rm s}^{-1}$}}~, \\ {\mbox{$\tau_{S}$}}& = & ( \KtevTaus \pm \KtevTausTerr ) {\mbox{$\times 10^{-12}~{\rm s}$}}~, \label{eq:delmtauS} \end{array}$$ where CPT symmetry is assumed. Although these results are consistent with individual previous measurements used in the PDG averages [@pdg00], our results each differ from the PDG averages by more than two standard deviations. The   ${\mbox{$\tau_{S}$}}$ measurement is consistent with a recent NA48 measurement [@na48:taus], and both results are much more precise than the PDG average. 
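The combination quoted above is a standard inverse-variance weighted average. The sketch below reproduces the procedure with hypothetical stand-in inputs; of the four values, only the NA48 result ($15.3 \pm 2.6$) is quoted in the text, the others are placeholders.

```python
import math

def inverse_variance_average(values, errors):
    """Weighted mean, its uncertainty, and the chi^2 of the combination."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))
    chi2 = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    return mean, err, chi2

# NA48 value from the text; the other three inputs are hypothetical stand-ins.
vals = [15.3, 20.0, 23.0, 7.0]   # units of 1e-4
errs = [2.6, 3.0, 6.5, 6.0]
mean, err, chi2 = inverse_variance_average(vals, errs)

assert min(vals) <= mean <= max(vals)
assert err < min(errs)           # combining always tightens the uncertainty
```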
Finally, we measure phase differences $$\begin{array}{lcl} & {\mbox{$\phi_{+-}$}}- {\mbox{$\phi_{SW}$}}& = (\dPhiSW \pm \dPhiSWTOTerr){^{\circ}}\\ {\mbox{$\Delta \phi$}}\equiv & {\mbox{$\phi_{00}$}}- {\mbox{$\phi_{+-}$}}& = (\DelPhi \pm \DelPhiTOTerr ){^{\circ}}, \end{array}$$ which are consistent with the CPT-symmetry prediction of zero. These phase differences are extracted from fits in which  and  are free parameters to avoid the CPT assumptions used to extract the nominal values in Eq. \[eq:delmtauS\]. The ${\mbox{$\Delta \phi$}}$ result can be expressed in terms of ${\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$: $${\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}= \IMEPOEpmErr ~.$$ Acknowledgments =============== We gratefully acknowledge the support and effort of the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported in part by the U.S. Department of Energy, The National Science Foundation and The Ministry of Education and Science of Japan. In addition, A.R.B., E.B. and S.V.S. acknowledge support from the NYI program of the NSF; A.R.B. and E.B. from the Alfred P. Sloan Foundation; E.B. from the OJI program of the DOE; K.H., T.N. and M.S. from the Japan Society for the Promotion of Science; and R.F.B. from the Fundação de Amparo à Pesquisa do Estado de São Paulo. P.S.S. acknowledges support from the Grainger Foundation. Principles of the  ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ Measurement {#app:exp details} ======================================================================================= A simplified treatment of the   measurement technique is presented here to illustrate some of the important cancellations that reduce systematic uncertainties in the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ measurement. The measurement is based on the number of reconstructed  and  decays in the vacuum and regenerator beams. 
To illustrate how these four quantities are related to ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$, it is convenient to ignore the  interference in both beams. In this simplified case, the four measured quantities are related to experimental parameters as follows: $$\begin{aligned} N(vac~{\mbox{$\pi^{+}\pi^{-}$}}) & \simeq & \tlts B_S^{+-} {\cal F}_V^{+-} {\cal A}_V^{+-} \vert {\mbox{$\eta_{+-}$}}\vert^2 \frac{L}{\gamma c{\mbox{$\tau_{L}$}}} \label{eq:dumfun1} \\ N(reg~{\mbox{$\pi^{+}\pi^{-}$}}) & \simeq & ~~~B_S^{+-}{\cal F}_R^{+-} {\cal A}_R^{+-} {{\vert\rho\vert}}^2 {\mbox{$T_{reg}$}}\label{eq:dumfun2} \\ N(vac~{\mbox{$\pi^{0}\pi^{0}$}}) & \simeq & \tlts B_S^{00}{\cal F}_V^{00} {\cal A}_V^{00} \vert {\mbox{$\eta_{00}$}}\vert^2 \frac{L}{\gamma c{\mbox{$\tau_{L}$}}} \label{eq:dumfun3} \\ N(reg~{\mbox{$\pi^{0}\pi^{0}$}}) & \simeq & ~~~B_S^{00}{\cal F}_R^{00} {\cal A}_R^{00} {{\vert\rho\vert}}^2 {\mbox{$T_{reg}$}}~, \label{eq:dumfun4}\end{aligned}$$ where the $N$’s are the observed number of  decays in each beam, $B_S^{+-(00)}$ is the branching fraction of $K_S\to{\mbox{$\pi^{+}\pi^{-}$}}({\mbox{$\pi^{0}\pi^{0}$}})$, ${\cal F}_V^{+-(00)}$ are the vacuum beam kaon fluxes, ${\cal F}_R^{+-(00)}$ are the kaon fluxes just upstream of the regenerator, the ${\cal A}$’s are the acceptances determined by a Monte Carlo simulation, $L\sim 40~{\rm m}$ is the length of the useful decay region, ${\mbox{$T_{reg}$}}\sim 0.18$ is the kaon flux transmission through the regenerator, $\gamma = E_K/M_K \sim 140$ is the average kaon boost, and $\rho\sim 0.03$ is the regeneration amplitude for forward scatters. The factor $L/\gamma c{\mbox{$\tau_{S}$}}$ is not present because essentially all of the $K_S$ decay within the decay region. 
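These four expressions already exhibit the cancellations exploited below. A short numerical sketch, using the typical magnitudes quoted in the text where given and illustrative $\eta$ and acceptance inputs otherwise, makes them explicit:

```python
import math

# Typical magnitudes from the text; eta and acceptance values are illustrative.
tlts = 580.0          # tau_L / tau_S
T_reg = 0.18          # kaon flux transmission through the regenerator
L_over_gct = 0.02     # L / (gamma c tau_L)
rho2 = 0.03 ** 2      # |rho|^2, forward regeneration amplitude squared
FV_over_FR = 2.32     # vacuum-to-regenerator flux ratio (both modes)

def single_ratio(eta2, AV, AR):
    """r = N(vac)/N(reg) following Eqs. dumfun1-dumfun4; B_S and F_R cancel."""
    return (eta2 / rho2) * tlts * FV_over_FR * (AV / AR) * L_over_gct / T_reg

# Single-ratio estimate with |eta/rho| ~ 0.07 and A_V/A_R ~ 0.8:
r = single_ratio(0.07 ** 2 * rho2, 0.8, 1.0)
assert 0.55 < r < 0.62            # "r ~ 0.6" in the text

# Double ratio: everything except |eta|^2 and the acceptances cancels.
eta_pm2, eta_002 = (2.23e-3) ** 2, (2.22e-3) ** 2            # illustrative
A = {"Vpm": 0.20, "Rpm": 0.25, "V00": 0.05, "R00": 0.06}     # illustrative
double = (single_ratio(eta_pm2, A["Vpm"], A["Rpm"])
          / single_ratio(eta_002, A["V00"], A["R00"]))
expected = (eta_pm2 / eta_002) * (A["Vpm"] / A["Rpm"]) / (A["V00"] / A["R00"])
assert math.isclose(double, expected)
```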
The charged mode vacuum-to-regenerator single ratio is $$r_{+-} = \left\vert \frac{{\mbox{$\eta_{+-}$}}}{\rho} \right\vert^2 \tlts \cdot \frac{ {\cal F}_V^{+-}}{ {\cal F}_R^{+-}} \cdot \frac{ {\cal A}_V^{+-}}{ {\cal A}_R^{+-}} \cdot \frac{L}{\gamma c{\mbox{$\tau_{L}$}}} \cdot \frac{1}{{\mbox{$T_{reg}$}}} \label{eq:single ratio}$$ and similarly the neutral mode single ratio is obtained with $+-$ replaced by $00$. To get a typical value of the single ratio, use $\vert\eta/\rho\vert \sim 0.07$, ${\mbox{$\tau_{L}$}}/{\mbox{$\tau_{S}$}}= 580$, ${\cal F}_V/{\cal F}_R \simeq 2.32$ due to the movable absorber (Fig. \[fig:detector\]), ${\cal A}_V/{\cal A}_R \sim 0.8$ and $L/\gamma c{\mbox{$\tau_{L}$}}\sim 0.02$; this gives $r\sim 0.6$, which shows that  is designed to collect roughly the same statistics in the vacuum and regenerator beams. The statistical precision on  is limited by the number of vacuum beam  decays; with ${\cal F}_V^{00} \simeq 2$ MHz and ${\cal A}_V^{00} \sim 0.05$, the rate of ${\mbox{$K_{L}\rightarrow\pi^{0}\pi^{0}$}}$ is $\sim 2$ Hz. The 5% acceptance used here is defined relative to all kaons and includes a livetime factor of 0.7; it is therefore smaller than the acceptance shown in Figure \[fig:mczacc\] that is defined within a specific momentum and ${\mbox{$z$}}$-vertex range. The desired quantity, $\vert{\mbox{$\eta_{+-}$}}/{\mbox{$\eta_{00}$}}\vert^2$, is proportional to the experimentally measured double ratio, $r_{+-}/r_{00}$. The factors ${\mbox{$\tau_{L}$}}/{\mbox{$\tau_{S}$}}$, ${\mbox{$T_{reg}$}}$, ${\vert\rho\vert}$, and $L/\gamma c{\mbox{$\tau_{L}$}}$ cancel in the double ratio. For the kaon flux cancellation, we use the constraint that the vacuum-to-regenerator kaon flux ratio is the same for both the charged and neutral decay modes, $$R_F \equiv {\cal F}_{V+-}/{\cal F}_{R+-} = {\cal F}_{V00}/{\cal F}_{R00} \simeq 2.32. \label{eq:vacreg}$$ Note that Eq. 
\[eq:vacreg\] requires equal live-time for the vacuum and regenerator beams; the charged and neutral mode live-times, which are more difficult to control experimentally, do not have to be the same. If the beam collimation system results in small kaon-flux differences between the two neutral beams, switching the regenerator position between the two beams ensures that Eq. \[eq:vacreg\] is satisfied. The only quantities in the double ratio that do not cancel are the acceptances, for which we use a detailed Monte Carlo simulation. The quantities $r_{+-}$ and $r_{00}$ can be measured at different times provided that the following are the same for both measurements: (i) the kaon transmission in both the movable absorber and regenerator, (ii) the regeneration amplitude $\rho$, and (iii) the distance between the primary BeO target and the regenerator. When Equations \[eq:dumfun1\]-\[eq:dumfun4\] are modified to account for  interference, there is no algebraic solution for ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$; results are extracted from a fit described in Section \[sec:fitting\]. Function for Kaon Scattering in the Regenerator {#app:fitfun} =============================================== This appendix describes the function used to model the  decay distribution for kaons that scatter in the regenerator. The function depends on decay time, ${\mbox{$p_T^2$}}$, and kaon momentum, and is also used to separate diffractive and inelastic contributions. The functional form is $$N_{\rm regscat} \propto \sum_{j=1}^{6} A_j e^{\alpha_j p_t^2} \vert \hat{\rho}_j e^{\Lambda_S t} + \eta e^{\Lambda_L t} \vert^2 ~, \label{eq:fitfun}$$ where $A_j$, $\alpha_j$, $\vert\hat{\rho}_j\vert$ and $\phi_{\hat{\rho}_j}$ are the 24 fit parameters, $\Lambda_{S,L} = i m_{S,L} - \frac{1}{2}\Gamma_{S,L}$, and $t$ is the proper time of the decay. Note that each $\hat{\rho}_j$ is an independent regeneration amplitude for scattering. The terms in Eq.
\[eq:fitfun\] can be roughly associated with the known properties of kaon scattering in carbon, hydrogen and lead. In addition to these 24 parameters, there are two extra parameters that describe the momentum dependence of the phase ($\phi_{\hat{\rho}_j}$) and ${\mbox{$p_T^2$}}$ slope ($\alpha_j$) associated with diffractive scattering from the lead at the downstream edge of the regenerator. Of the 26 parameters in Eq. \[eq:fitfun\], 8 are fixed based on previously measured properties of kaon scattering. An additional 12 parameters are used to float the momentum dependence of $N_{\rm regscat}$ in $10~{ {\rm GeV}/c }$ bins; the momentum dependence for scattered kaons varies by only a few percent compared to that of unscattered kaons. The total number of free parameters in the regenerator scattering function is $18+12 = 30$. The $\alpha_j$ parameters, which describe the exponential ${\mbox{$p_T^2$}}$ dependence, are used to distinguish between inelastic and diffractive scattering. The term with the broadest  distribution has $\alpha^{-1} = 2.4\times 10^5~{ {\rm MeV}^2/c^2 }$, and is identified with inelastic scattering. The other terms have much steeper ${\mbox{$p_T^2$}}$ distributions, with $5000 < \alpha^{-1} < 70000 ~{ {\rm MeV}^2/c^2 }$, and are associated with diffractive scattering. Figure \[fig:regscat\] shows the diffractive and inelastic contributions to regenerator scattering. After determining the shapes of the diffractive and inelastic scattering distributions, the next step is the absolute normalization of the background relative to the coherent  signal. In the charged analysis, we define $\nscat{+-}$ to be the number of reconstructed  events with ${\mbox{$p_T^2$}}> 2500~{ {\rm MeV}^2/c^2 }$, and $\ncoh{+-}$ to be the number of coherent events with ${\mbox{$p_T^2$}}< 250~{ {\rm MeV}^2/c^2 }$ (Fig. \[fig:ptsq\_shapes\]). 
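The interference structure of Eq. \[eq:fitfun\] can be illustrated with a minimal two-term sketch; the parameters below are illustrative only (the actual function has six terms and 30 free parameters, as described above).

```python
import cmath
import math

# Illustrative parameters; t in units of the K_S lifetime, pt2 in MeV^2/c^2.
eta = 2.23e-3 * cmath.exp(1j * math.radians(43.5))  # illustrative eta
Lam_S = -0.5                 # Lambda_S = i m_S - Gamma_S/2, with m_S -> 0 by convention
Lam_L = 0.47j - 0.5 / 580.0  # Lambda_L, using Delta-m * tau_S ~ 0.47, Gamma_L = Gamma_S/580

# (A_j, alpha_j, rho_hat_j): broad pt2 term ~ inelastic, steep term ~ diffractive.
terms = [(1.0, -1.0 / 2.4e5, 0.003 + 0j),
         (0.5, -1.0 / 2.0e4, 0.010 + 0j)]

def n_regscat(pt2, t):
    """Model decay distribution for kaons that scatter in the regenerator."""
    total = 0.0
    for A, alpha, rho_hat in terms:
        amp = rho_hat * cmath.exp(Lam_S * t) + eta * cmath.exp(Lam_L * t)
        total += A * math.exp(alpha * pt2) * abs(amp) ** 2
    return total

# The distribution falls with pt2; the broad term dominates at large pt2.
assert n_regscat(0.0, 0.0) > n_regscat(5.0e4, 0.0) > 0.0
```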
The scattering level is adjusted in the simulation so that the reconstructed $\nscat{+-}/\ncoh{+-}$ ratio is the same in the data and MC. Note that $\nscat{+-}$ in data is obtained after subtracting collimator scatters and semileptonic decays. In the neutral mode analysis, we define similar quantities in the regenerator beam: $\nscat{00}$ is the number of reconstructed events with $300 < {{\tt RING}}< 800~{\rm cm}^2$ (after subtracting the other background components) and $\ncoh{00}$ is the number with ${{\tt RING}}< 110~{\rm cm}^2$ (Fig. \[fig:ringvaceps\]). Using the scattering-to-coherent ratio determined with acceptance-corrected  decays, the simulation over-predicts the $\nscat{00}/\ncoh{00}$ ratio in data by 3%; this difference results from the additional veto requirements in the neutral mode analysis, and is illustrated in Fig. \[fig:regscat\]. To match the $\nscat{00}/\ncoh{00}$ ratio in data, the neutral mode scattering simulation is adjusted by a 16% reduction in the inelastic scattering level; note that the diffractive scattering level is not affected by the veto cuts, and is therefore the same in the charged and neutral analyses. If we ignore the difference between diffractive and inelastic scattering, the simulated $\nscat{00}/\ncoh{00}$ ratio can be adjusted by a 3% reduction in the total scattering level. Compared with the nominal 16% reduction in the inelastic level, a global 3% adjustment gives equally good data-MC agreement in the ${{\tt RING}}$ distribution from 300 to 800 cm$^2$ because the diffractive and inelastic distributions are very similar for ${\mbox{$p_T^2$}}> 30000~{ {\rm MeV}^2/c^2 }$. In the coherent signal region, however, the diffractive and inelastic scattering distributions are different, leading to a 0.01% difference in the background prediction between these two ways of adjusting the $\nscat{00}/\ncoh{00}$ ratio.
This difference is used in evaluating the systematic uncertainty on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ in Section \[sec:bkg\_syst\]. Discussion of the Previous  Measurement of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ {#sec:prl99} =================================================================================================== The ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ result reported in Section \[sec:reepoe\_results\] includes a full reanalysis of our previously published data sample [@prl:pss]. The reanalysis of that data sample gives $$\begin{aligned} {\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}& = & [ \KtevReepoeA \pm 3.0~({\rm stat}) \pm 2.9~({\rm syst}) ] { \times 10^{-4}}\nonumber \\ & = & [ \KtevReepoeA \pm \KtevTErrA~ ]{ \times 10^{-4}}\\ & & (\mbox{Reference~\cite{prl:pss}~sample}) ~, \nonumber\end{aligned}$$ with $\chi^2/dof = 18.7/21$. The ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ shift is $-4.8 { \times 10^{-4}}$ relative to the previous result in [@prl:pss], and is larger than the previous systematic uncertainty of $2.8 { \times 10^{-4}}$. The shifts in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ resulting from changes in the analysis are summarized in Table \[tab:prl99 shift\]. The change in the regenerator scattering background for  is mainly from correcting an error in the kaon scattering function. The other shifts are due to improvements and are consistent with the systematic uncertainties assigned in [@prl:pss]. All of the changes in Table \[tab:prl99 shift\] are from independent sources which are described below. 
  Source of ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ shift   Change $({ \times 10^{-4}})$   Number of $\sigma_{syst}$
  ------------------------------------------------------------------------   ----------------------------   -------------------------
  Regenerator scattering                                                     $-1.7$                         2.1
  Collimator scattering                                                      $-0.2$                         0.6
  Screening                                                                  $-0.3$                         1.5
  Regenerator transmission                                                   $-0.3$                         1.5
  Regenerator edge                                                           $-0.4$                         1.6
  Neutral analysis                                                           $+0.1$                         0.1
  Mask Anti MC                                                               $+0.3$                         1.3
  Absorber scattering MC                                                     $-0.6$                         0.6
    and                                                                      $-0.5$                         3.1[^2]
  MC fluctuation                                                             $-1.1$                         1.0
  Total                                                                      $-4.8$                         1.7
  ------------------------------------------------------------------------   ----------------------------   -------------------------

  : \[tab:prl99 shift\] Sources of the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ shift between reference [@prl:pss] and re-analysis of the same data sample. The systematic $\sigma$’s ($\sigma_{syst}$) are from [@prl:pss].

- The regenerator scattering function includes two extra scattering terms (Eq. \[eq:fitfun\]) and two extra parameters to simulate subtle momentum-dependent features.

- Collimator scattering (Sec. \[sec:bkg\_coscat\]) is measured after subtracting semileptonic decays, and the simulation includes two extra parameters to better describe the $K_L$ ${\mbox{$p_T^2$}}$-dependence.

- In the ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ fit (Sec. \[sec:fitting\]), a nuclear screening correction is used.

- The energy dependence of the regenerator transmission is measured with four times more ${\mbox{$K_{L}\rightarrow \pi^{+}\pi^{-}\pi^{0}$}}$ data.

- The calculation of the effective regenerator edge in the neutral decay mode (Fig. \[fig:regdiagram\]b) includes kaon decays upstream of the last regenerator-lead piece instead of considering only decays in the last piece of lead. The effective regenerator edge in charged mode is based on a better technique to measure the veto threshold.
- Additional neutral mode veto cuts on hits in the trigger hodoscope and drift chambers reduce sensitivity to the transverse energy cut on CsI clusters.

- The Mask Anti simulation allows kaons and photons to punch through the lead-scintillator outside the beam-hole regions (Fig. \[fig:RCMA\]).

- The absorber scattering simulation is improved to match beam shapes in a special run without the regenerator, but with the movable absorber still in place.

- We use our measurements of  and  that each differ by more than $2\sigma$ from the PDG averages [@pdg00].

The changes listed above account for $-3.7{ \times 10^{-4}}$ of the shift in ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$. We attribute the remaining shift of $-1.1{ \times 10^{-4}}$ to a $1 \sigma$ fluctuation between the two independent MC samples used for the acceptance correction. The largest systematic uncertainty in [@prl:pss] was based on the $z$-dependence of the acceptance for  decays. Unfortunately, this data-MC discrepancy still remains in the reanalysis.

Correlations Among Kaon Parameter Measurements {#app:kparcor}
==============================================

The determination of correlations among different kaon parameters, including correlations arising from systematic uncertainties, is based on the following $\chi^2$: $$\begin{array}{lcl} \label{eq:syst} \chi^2(X,\alpha_i) & = & (X-\overline{X})^{T} C^{-1} (X-\overline{X}) + \sum_i \alpha_i^2 \\ \overline{X} & = & \overline{X_0} + \sum_i \beta_i \alpha_i ~. \end{array}$$ $X$ is the vector of the measured physics quantities from the fit ([*e.g.*]{},  and ), and $C$ is the covariance matrix including statistical uncertainties for both data and MC. The $\alpha_i$ are fit parameters representing the number of “$\sigma$” associated with each source of systematic uncertainty that causes correlations among the physics quantities ($\alpha_i = 1$ corresponds to a $1\sigma$ change).
$\overline{X_0}$ is the vector of physics quantities obtained in the nominal fit with all $\alpha_i = 0$, and $\beta_i$ are the changes in the central value of $X$ corresponding to a $1\sigma$ change in the systematic source $i$. The total uncertainty in a physics quantity obtained from the $\chi^2$ in Eq. \[eq:syst\] is equivalent to adding statistical and systematic uncertainties in quadrature. The minimization of this $\chi^2$ accounts for correlations among the physics quantities for each source of systematic uncertainty. We ignore correlations among the different sources of systematic uncertainty; these correlations are reduced by the grouping of systematic uncertainties shown in Table \[tb:syst\_kpar\]. The total uncertainties and correlation coefficients for all of the fits are given in Table \[tb:syst\_kparcorr\]. Note that the statistical and systematic uncertainties are larger in the fits in which CPT symmetry is not assumed. Figure \[fig:phipmcorr\] shows $1\sigma$ contours of statistical and total uncertainties for the measurements of , , and . For the ${\mbox{$\Delta m$}}$-${\mbox{$\tau_{S}$}}$ fit (Fig. \[fig:phipmcorr\]a), systematic uncertainties have a significant effect on the ${\mbox{$\Delta m$}}$-${\mbox{$\tau_{S}$}}$ correlation. For the ${\mbox{$\phi_{+-}$}}$ fit (Fig. \[fig:phipmcorr\]b-d), systematic uncertainties have a very small effect on the correlations. Figure \[fig:imrep\] shows the correlation between the real and imaginary parts of ${\mbox{$\epsilon'\!/\epsilon$}}$ from the fit without CPT assumptions; systematic uncertainties have a small effect on the correlation.

Table \[tb:syst\_kparcorr\] summarizes the total uncertainties and correlation coefficients:

- ${\mbox{$\Delta m$}}$-${\mbox{$\tau_{S}$}}$ fit: total errors $\KtevDelmTerr{\mbox{$\times 10^{6}~\hbar {\rm s}^{-1}$}}$ on ${\mbox{$\Delta m$}}$ and $\KtevTausTerr{\mbox{$\times 10^{-12}~{\rm s}$}}$ on ${\mbox{$\tau_{S}$}}$; correlation coefficient $({\mbox{$\Delta m$}}, {\mbox{$\tau_{S}$}}) = -0.396$.

- ${\mbox{$\phi_{+-}$}}$ fit: total errors $42{\mbox{$\times 10^{6}~\hbar {\rm s}^{-1}$}}$ on ${\mbox{$\Delta m$}}$, $0.13{\mbox{$\times 10^{-12}~{\rm s}$}}$ on ${\mbox{$\tau_{S}$}}$ and $1.40{^{\circ}}$ on ${\mbox{$\phi_{+-}$}}$; correlation coefficients $({\mbox{$\Delta m$}}, {\mbox{$\tau_{S}$}}) = -0.874$, $({\mbox{$\Delta m$}}, {\mbox{$\phi_{+-}$}}) = +0.987$ and $({\mbox{$\tau_{S}$}}, {\mbox{$\phi_{+-}$}}) = -0.898$.

- Fit without CPT assumptions: total errors $4.0{ \times 10^{-4}}$ on ${\mbox{$Re({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$ and $\IMEPOETotErr{ \times 10^{-4}}$ on ${\mbox{$Im({\mbox{$\epsilon^{\prime}\!/\epsilon$}})$}}$; correlation coefficient $(Re, Im) = -0.647$.

[41]{} , , , , ****, (). , ****, (). , ****, (). , , , ****, (). , ****, (). , ****, (). , ****, (). , ****, (). , ****, (). , ****, (). , ****, (). , , , ****, (), . , ****, (). , ****, (). , ****, (), . , ****, (), . , ****, (). , ****, (). , ****, (). , ****, (). (), ****, (). . , in **, edited by (, ), p. . , ****, (). , ****, (). , Ph.D. thesis, (). , ****, (). , . , ****, (). (), . , ****, (). , Ph.D. thesis, (). (), . , ****, (). , ****, (). , ****, (). , ****, (). , ****, (). , ****, (). () (), .

[^1]: Approximate expressions for the regeneration amplitude resulting from fits to  data are ${\vert\rho\vert}\simeq 0.03 ({{\mbox{$p$}}}/70)^{-0.54}$ and $\phi_{\rho} \simeq [-34 + 12( {{\mbox{$p$}}}/140 -1 )^2]{^{\circ}}$.

[^2]: Reference [@prl:pss] used PDG98 values and errors for  and .
--- abstract: 'We have calculated the binding energy of various nucleobases (guanine (G), adenine (A), thymine (T) and cytosine (C)) with (5,5) single-walled carbon nanotubes (SWNTs) using the ab-initio Hartree-Fock (HF) method together with force field calculations. The gas-phase binding energies follow the sequence G $>$ A $>$ T $>$ C. We show that the main contribution to the binding energy comes from the van der Waals (vdW) interaction between the nanotube and the nucleobases. We compare these results with the interaction of nucleobases with graphene. We show that the binding energy of the bases with SWNTs is much lower than with graphene, but the sequence remains the same. When we include the effect of the solvation energy (Poisson-Boltzmann (PB) solver at the HF level), the binding energy follows the sequence G $>$ T $>$ A $>$ C, which explains the experiment [@zheng] that oligonucleotides made of thymine bases are more effective in dispersing the SWNTs in aqueous solution as compared to poly(A) and poly(C). We also demonstrate experimentally that there is a differential binding affinity of nucleobases with the single-walled carbon nanotubes (SWNTs) by directly measuring the binding strength using isothermal titration (micro)calorimetry. The binding sequence of the nucleobases varies as thymine (T) $>$ adenine (A) $>$ cytosine (C), in agreement with our calculation.' author: | Anindya Das,$^{1}$ A. K. Sood,$^{1,} $[^1]\ Prabal K. Maiti,$^{2}$ Mili Das,$^{3}$ R. Varadarajan,$^{3}$ and C. N. R. Rao$^{4}$ title: 'Binding of Nucleobases with Single-Walled Carbon Nanotubes' --- Introduction ============ Single-walled carbon nanotubes (SWNTs) are one-dimensional systems with different diameters and chiralities. They have large surface areas [@peigney; @eswaramoorthy; @bacsa] with the electrons in intimate contact with the environment.
The superb mechanical [@salvetat] and electrical [@colbert] properties of SWNTs have potential for many applications such as nanoelectronics [@collins; @tans], actuators [@baughman], and chemical [@kong; @staii] and flow [@ghosh] sensors. Recently, DNA-coated CNTs have found potential applications as biosensors [@lin; @cassell]. Single-stranded DNA (ssDNA) coated SWNT field-effect transistors (FETs) have been used to detect various odors [@staii], DNA hybridization [@star] and conformational changes in DNA [@heller] in the presence of varying ionic concentrations. It has also been reported that DNA can be inserted into CNTs, which suggests further potential applications of this nano-bio system. During synthesis SWNTs appear in bundles, but for most of these applications isolated SWNTs are needed. It is, therefore, a big challenge to disperse SWNT bundles in aqueous solution. It has been reported that single-walled carbon nanotube bundles are effectively dispersed in water on sonication in the presence of single-stranded DNA (ssDNA) [@zheng]. The degree of dispersion depends on the oligonucleotide sequence. Jagota $\textit{et al.}$ [@zheng] have shown that poly(T) disperses the nanotubes more efficiently than poly(A) and poly(C). Therefore, the wrapping of carbon nanotubes by ssDNA is sequence dependent. It was also found that the best separation was obtained with a sequence of alternating G and T repeats (poly(GT)) [@mzheng]. Studies on the nature of the interaction of ssDNA, dsDNA and oligonucleotides with nanotubes have made use of classical molecular dynamics simulations, besides experimental techniques such as ion exchange chromatography (IEC) [@mzheng], atomic force microscopy (AFM), resonance Raman spectroscopy (RRS), photoluminescence (PL) [@chou], linear dichroism (LD) [@rajendra] and directional optical absorbance [@meng].
There has, however, been no estimation of the strength of interaction of the individual nucleobases, adenine (A), cytosine (C), thymine (T) and guanine (G), with the SWNTs. The objective of this study is to determine the binding strength of the different nucleobases with SWNTs using ab-initio quantum chemical as well as classical force field calculations. We have also determined the binding strength of thymine and cytosine with the SWNTs by employing isothermal titration calorimetry (ITC), which has emerged as a powerful tool to study the thermodynamics of protein-protein [@thompson], DNA-protein [@kunne] and protein-lipid interactions [@wenk]. The advantage of ITC over kinetic methods is that the same experiment provides the binding constant as well as the enthalpy of binding. In our study we have titrated aqueous SWNT solutions against aqueous solutions of nucleobases. Our results show that the binding affinity of thymine with SWNTs is higher than that of adenine and cytosine. We believe that the present study, besides providing valuable insight into the interactions of SWNTs with nucleobases, may be of potential use in nanotechnology. ![I, II, V $\&$ VI show the cross-sectional views of the optimized structures of the C, G, A and T nucleobases bonded to the nanotube. III, IV, VII $\&$ VIII show the lateral views of the corresponding optimized structures.[]{data-label="Figure 1"}](Fig3.jpg){width="47.50000%"} Theoretical Calculations ======================== We consider a (5, 5) SWNT containing 5 unit cells in the presence of the different nucleobases (C, G, A, T) as the input configuration for our quantum chemical calculations. The bases are kept parallel to the nanotube axis. During the optimization process, all the atoms are free to relax. The optimized structures of the nucleobases, the SWNT and the combined system are obtained using the Jaguar computational package [@jaguar]. The binding energy of the nucleobases with the SWNT is evaluated using the ab-initio restricted open-shell Hartree-Fock (ROHF) method.
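The component-energy arithmetic quoted in the following subsection for the cytosine-nanotube complex can be checked directly. This is only a bookkeeping sketch of the published numbers (all values in eV), not part of the quantum-chemistry workflow itself.

```python
# HF/6-31g** components of the binding energy (cytosine-nanotube complex).
E_A = 62595.74     # nuclear repulsion
E_E = -125302.13   # total one-electron term
E_I = 62706.32     # total two-electron term (Coulomb + exchange)
assert abs((E_A + E_E + E_I) - (-0.07)) < 0.005   # binding energy -0.07 eV

# B3LYP/6-31g** components: without exchange-correlation the sum is repulsive.
E_A, E_E, E_J, E_XC = 68743.93, -137595.49, 68851.62, -0.13
assert abs((E_A + E_E + E_J) - 0.06) < 0.005              # Pauli repulsion
assert abs((E_A + E_E + E_J + E_XC) - (-0.07)) < 0.005    # same -0.07 eV total
```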
In all the calculations, we have used a double-zeta basis set with polarization functions (6-31g\*\*). The optimized geometries for the combined systems are shown in Fig. 1. We notice that a particular orientation of the base with respect to the nanotube, as shown in Fig. 1, is achieved irrespective of the input configuration. Hartree-Fock (HF) and density-functional theory (DFT) describe the charge transfer and chemical bonds between the molecules, but they are inadequate to describe the van der Waals (vdW) interaction [@ortmann; @williams]. It is known that vdW is the main interaction between organic molecules and inert surfaces like graphite [@ortmann; @sowerby]. Here we have calculated the vdW energy between the nanotube and the different bases using the classical MSCFF force field [@brameld] as well as AMBER 7 [@amber]. The solvation energies [@tannor; @tsang] were calculated using the Poisson-Boltzmann (PB) solver of Jaguar, which takes into account the presence of a continuum dielectric medium like water. The total binding energy is thus obtained by adding the HF, vdW and solvation energies. We show that the binding energy between the nanotube and the DNA bases is mainly governed by the vdW interaction, whereas the sequence of binding is influenced by the solvation energy. ![The open circle with line shows the DOS for the combined system (SWNT\_nucleobase), whereas the solid line and stars with dashed line show the DOS for the pristine nanotube and the nucleobases, respectively.[]{data-label="Figure 2"}](Fig4.jpg){width="42.50000%"} Ab-initio Calculations ---------------------- To investigate the binding energy of the different bases with the nanotube, we have calculated the energies of the optimized structures of the nanotube, the bases and the combined system separately using Hartree-Fock (HF-6-31g\*\*).
The binding energy of a base is calculated by subtracting the energies of the isolated nanotube and the isolated base from the energy of the combined system, i.e. $E_{\text{binding}} = E_{\text{adsorbed system}} - (E_{\text{isolated nanotube}} + E_{\text{isolated base}})$. In this scenario the binding sequence is C $>$ G $>$ A $>$ T and the corresponding binding energies are -0.07, -0.04, -0.03 and -0.02 eV, respectively. To investigate the origin of the binding of the nucleobases with the nanotube we examine the different energetic contributions, the Mulliken charge analysis and the electronic density of states (DOS). It is seen that there is no noticeable contribution from covalent or Coulombic energy to the attraction between the nanotube and the bases. The only noticeable contribution to the binding energy obtained from HF (6-31g\*\*) comes from the exchange energy term. The nuclear repulsion ($E_{\sf{A}}$), the total one-electron term ($E_{\sf{E}}$) and the total two-electron term ($E_{\sf{I}}$) are the three components of the binding energy obtained from HF (6-31g\*\*), where the total two-electron term consists of the Coulomb ($E_{\sf{J}}$) and exchange ($E_{\sf{X}}$) energies. The corresponding energies for the cytosine-bonded nanotube are $E_{\sf{A}}$ = 62595.74 eV, $E_{\sf{E}}$ = -125302.13 eV and $E_{\sf{I}}$ = 62706.32 eV, with the total energy $E_{\text{binding}} = E_{\sf{A}} + E_{\sf{E}} + E_{\sf{I}}$ = -0.07 eV. As HF does not give the exchange energy separately, we have calculated the exchange energy with the correlation term ($E_{\sf{XC}}$) for the cytosine-bonded nanotube using density functional theory (DFT) at the theoretical level of B3LYP/6-31g\*\*. The corresponding energies are $E_{\sf{A}}$ = 68743.93 eV, $E_{\sf{E}}$ = -137595.49 eV, $E_{\sf{J}}$ = 68851.62 eV and $E_{\sf{XC}}$ = -0.13 eV.
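As a numerical sanity check (a sketch, not part of the paper's calculations), the quoted HF components for the cytosine-bonded nanotube can be summed to recover the reported binding energy:

```python
# Sum the HF energy components quoted above for the cytosine-bonded
# nanotube (all values in eV) and recover the reported binding energy.
E_A = 62595.74    # nuclear repulsion
E_E = -125302.13  # total one-electron term
E_I = 62706.32    # total two-electron term (Coulomb + exchange)

E_binding = E_A + E_E + E_I
print(round(E_binding, 2))  # -0.07
```

The tiny net value, some six orders of magnitude smaller than the individual terms, is why the sign of the weak exchange contribution matters.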
Therefore $E_{\sf{I}} = E_{\sf{J}} + E_{\sf{XC}}$ = 68851.49 eV and the total energy $E_{\text{binding}} = E_{\sf{A}} + E_{\sf{E}} + E_{\sf{I}}$ = -0.07 eV, which is the same as the binding energy obtained from HF. The binding of cytosine with the nanotube can be explained only if we consider the $E_{\sf{XC}}$ term (-0.13 eV), because the sum of the first three energy components ($E_{\sf{A}} + E_{\sf{E}} + E_{\sf{J}}$) is a positive contribution of 0.06 eV. This sum would be positive if there were no strong covalent bond between the nanotube and cytosine; the positive value arises from a purely repulsive Pauli barrier. As explained in the next paragraph, there is indeed no noticeable covalent bond between the nanotube and the nucleobases. Therefore, the binding energy calculated from HF suggests that the weak attraction between the nanotube and the nucleobases comes mainly from the exchange energy. The DOS of the pristine nanotube, the isolated bases and the combined system are shown in Fig. 2. It clearly shows that there is no shift in the DOS of the combined system with respect to the isolated SWNT, and that the DOS of the combined system is the sum of the DOS of the pristine nanotube and the corresponding base. We note that the (5, 5) nanotube has an energy gap of 4 eV due to its finite length of 1.1 nm. The nearest distances in the optimized structures (see Fig. 1) between the nanotube and the different nucleobases are C-O $\sim$ 3.2 $\sf{\AA}$ (cytosine and guanine), C-H $\sim$ 3.3 $\sf{\AA}$ (adenine) and C-H $\sim$ 3.4 $\sf{\AA}$ (thymine), which certainly exceed the sums of the covalent radii, C-O $\sim$ 1.5 $\sf{\AA}$ and C-H $\sim$ 1.1 $\sf{\AA}$. Therefore, the bond between the nanotube and the bases is unlikely to be covalent. ![The open star with line shows the charge distribution on individual carbon atoms along the nanotube axis near (a) the nitrogen of C, (b) hydrogen of G, (c) nitrogen of A, (d) oxygen of T. 
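The distance argument above reduces to a simple threshold test; a sketch (the distances and covalent-radius sums, in ångströms, are the values quoted in the text, while the code itself is purely illustrative):

```python
# Nearest nanotube-base contact distances and sums of covalent radii
# (in angstroms), as quoted in the text. A contact far beyond the
# covalent-radius sum cannot be a covalent bond.
covalent_radius_sum = {"C-O": 1.5, "C-H": 1.1}
nearest_contact = {
    "cytosine": ("C-O", 3.2),
    "guanine":  ("C-O", 3.2),
    "adenine":  ("C-H", 3.3),
    "thymine":  ("C-H", 3.4),
}

for base, (pair, dist) in nearest_contact.items():
    label = "covalent" if dist <= covalent_radius_sum[pair] else "non-covalent"
    print(base, pair, dist, label)  # every base comes out non-covalent
```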
The cross with line shows the averaged charge on each carbon atom in a ring along the nanotube axis for the neutral nanotube. The open circle with line shows the averaged charge on each carbon atom in a ring along the nanotube axis for the nanotube bonded with the different nucleobases.[]{data-label="Figure 3"}](Fig5.jpg){width="47.50000%"} The adsorption energies in the Hartree-Fock calculations are of similar magnitude to those calculated for adenine adsorbed on Cu (110) [@preeuss], where the mutual polarization and the Coulombic interaction between the molecule and the Cu substrate determine the binding energy. Guided by this, we have carried out a Mulliken charge population analysis to evaluate the charge transfer and charge distribution in the combined system. As shown in Fig. 3, the adsorption does cause a minor charge redistribution. However, there is no charge transfer between the nanotube and the nucleobases. Of all four bases, the most pronounced case is that of the cytosine-bonded nanotube, where the maximum charge redistribution, of the order of $\Delta$q = + 0.01e, occurs on a carbon atom of the nanotube in close proximity to the nitrogen atom of the cytosine (see Fig. 3). This results in an overall dipole moment of the nanotube of $\left|\sf{P}\right|$ = 0.03 Debye (1 D = 3.33 $\times$ 10$^{-30}$ C m). Since this charge redistribution is much smaller in magnitude than that of adenine on copper [@preeuss], we can exclude any noticeable contribution to the binding energy from mutual polarization between the nanotube and the bases. However, this mutual polarization can explain the origin of the specific orientation of the bases with respect to the nanotube in the HF optimized structures. The optimized structures (Fig. 1) indicate that cytosine binds to the nanotube surface via the nitrogen atom of the pyrimidine ring, with a partial contribution from the oxygen of the C=O group.
Similarly for guanine, adenine and thymine, the binding occurs via either the nitrogen atom of the pyrimidine ring or the oxygen atom of the carbonyl group. As these nitrogen and oxygen atoms have higher electron affinity, the resultant dipole moment of the pure bases is determined by these atoms.

![(color online) Classically optimized geometry of nucleobases on nanotube: (a) cytosine, (b) guanine, (c) adenine and (d) thymine.[]{data-label="Figure 4"}](NtBaseMSCFF.jpg){width="47.50000%"}

Classical Force Field Calculations
----------------------------------

Since the vdW interaction is the dominant interaction between organic molecules and nanotubes, we have carried out classical force field calculations to obtain the vdW energy between the nanotube and the nucleobases. Here we have used the MSCFF force field parameters [@brameld] between the different atoms of the nanotube and the nucleobases. The input configuration is the same as that in the quantum chemical calculations. From the optimized structures of the combined systems, we note that all the nucleobases remain parallel to the nanotube at a distance of 3.25 $\sf{\AA}$, as shown in Fig. 4. The parallel configuration is obtained because of the maximum vdW interaction between the substrate and the molecule, and the stacking arrangement (see Fig. 4) is similar to the AB stacking of graphite layers. It deviates from perfect AB stacking, most prominently for G, due to the five- and six-membered rings of the bases and to nanotube curvature effects. The vdW binding energies are calculated by subtracting the vdW energy of the isolated systems from the energy of the adsorbed system; the corresponding energies are -0.54, -0.51, -0.47 and -0.39 eV for G, A, T and C, respectively, and the vdW energies calculated using AMBER 7 remain similar to those of MSCFF (see Table 1).
Now the resultant gas-phase energies are obtained by adding the HF and vdW energies, and the sequence becomes G $>$ A $>$ T $>$ C with energies -0.58, -0.54, -0.49 and -0.46 eV, respectively (see Table 1). It is clear that the vdW energy contributes much more to the binding than the HF exchange energy. We note that the gas-phase binding sequence of the nucleobases with the SWNT remains the same as the binding sequence of the nucleobases with graphite [@shi]. To compare the adsorption of nucleobases on the nanotube and on graphite, we have calculated the binding energies of all the bases with graphite. For adenine, the calculated binding energies using HF (6-31g\*\*) and the classical force field (vdW) are -1.50 kcal/mol (-0.06 eV) and -18.69 kcal/mol (-0.8 eV), respectively. Therefore, the total binding energy (HF+vdW) of adenine on graphite is -20.19 kcal/mol (-0.87 eV). These energies are comparable to the value of -0.07 eV calculated using DFT with the Generalized Gradient Approximation (GGA) and the total energy of -1.07 eV (GGA + vdW) \[26\]. This comparison is important because we calculate the binding energy using HF and the classical force field separately, yet it is as accurate as the DFT (GGA + vdW) calculation. The HF optimized structure for the graphene-adenine combined system is shown in Fig. 5(a) and 5(b). We see that, unlike on the nanotube, adenine remains almost parallel to graphene. The vdW interaction energies of graphite with the other bases G, T and C are, respectively, -0.83, -0.75 and -0.7 eV. These values compare well with the reported values [@shi; @gowtham] of the graphite-base interaction using molecular dynamics and DFT calculations. We note that the vdW interaction for the nanotube is smaller than for graphite due to curvature effects. The classically optimized structures for the different bases with graphene are shown in Fig. 5(c), (d), (e) and (f) for C, G, A and T, respectively.
Like in the nanotube case, the $\pi$-orbitals of the nucleobases and graphene minimize their overlap to lower the repulsive interaction, giving rise to an AB-stacking-like arrangement, which is more perfect than for the nanotube.

![(color online) (a) HF optimized structure for the graphene-adenine combined system. (b) Cross-sectional view. Classically optimized structures for (c) cytosine, (d) guanine, (e) adenine and (f) thymine.[]{data-label="Figure 4"}](HfGrBaseMSCFF.jpg){width="47.50000%"}

Solvation Energy Calculations
-----------------------------

The binding energies calculated so far are for the gas phase. In real experimental situations, the nucleobases bind with the nanotubes in an aqueous solution like water. To calculate the binding energy in the solution phase we have carried out solvation energy calculations using the Poisson-Boltzmann (PB) [@tannor] solver of the Jaguar package at the HF/6-31g\*\* level, where the gas-phase optimized structures (HF) are taken as input and the solvation energies (S.E) are calculated without further optimizing the geometry in the presence of water. Here we have assigned the van der Waals radii for the different atoms according to reference [@lee]. The contribution of the solvation energy to the binding energy is calculated by subtracting the solvation energy of the isolated systems from the solvation energy of the adsorbed system; the energies are -0.02, 0.01, 0.05 and 0.16 eV for T, G, A and C, respectively (see Table 2). The total binding energy is obtained by adding the HF, vdW and solvation energies. The relative binding for the monomeric bases is G $>$ T $>$ A $>$ C with energies -0.57, -0.51, -0.49 and -0.3 eV, respectively.
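The bookkeeping behind the two sequences can be reproduced directly from the component energies of Table 1 (a sketch; the numbers are the eV values quoted above):

```python
# Per-base contributions (eV) as quoted in the text: HF, vdW (MSCFF)
# and solvation energy. Gas phase = HF + vdW; total = gas phase + S.E.
hf  = {"G": -0.04, "T": -0.02, "A": -0.03, "C": -0.07}
vdw = {"G": -0.54, "T": -0.47, "A": -0.51, "C": -0.39}
se  = {"G":  0.01, "T": -0.02, "A":  0.05, "C":  0.16}

gas_phase = {b: round(hf[b] + vdw[b], 2) for b in hf}
total     = {b: round(gas_phase[b] + se[b], 2) for b in hf}

# Sorting by energy (most negative = strongest binding) recovers both
# sequences reported in the text.
print(sorted(gas_phase, key=gas_phase.get))  # ['G', 'A', 'T', 'C']
print(sorted(total, key=total.get))          # ['G', 'T', 'A', 'C']
```

Note how the large positive solvation term of cytosine (0.16 eV) is what pushes it to the bottom of the solution-phase sequence.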
  ---------- ------- --------- -------- ------- ------- ---------
   Nucleo      HF     vdW(eV)   HF+vdW   S.E     Total   vdW(eV)
    base       eV      MSCFF      eV      eV      eV      AMBER

   Guanine    -0.04    -0.54     -0.58    0.01   -0.57    -0.6
   Thymine    -0.02    -0.47     -0.49   -0.02   -0.51    -0.5
   Adenine    -0.03    -0.51     -0.54    0.05   -0.49    -0.53
   Cytosine   -0.07    -0.39     -0.46    0.16   -0.3     -0.45
  ---------- ------- --------- -------- ------- ------- ---------

  : Theoretically calculated values of interaction energies of monomeric nucleobases with SWNT \[table2\]

![image](Fig2.jpg){width="\textwidth"}

Experimental Details
====================

SWNT bundles were prepared by the arc discharge method followed by a purification process involving acid washing and high-temperature hydrogen treatment [@vivekchand]. The nanotubes were characterized by thermogravimetric analysis, transmission electron microscopy (TEM), Raman spectroscopy (RS) and near-infrared spectroscopy (NIR) [@vivekchand]. The average diameter of the nanotubes was about 1.4 nm, the length being a few microns. 100 $\mu$g of the SWNT sample was suspended in 1.6 ml of double distilled water having a pH value of 6.7. The solution was sonicated for 5 hrs at a power level of 3 W. After the sonication, the solution was kept for a day to sediment all the big bundles of SWNTs. The supernatant solution was stable for more than 15 days. The nucleobases obtained from Sigma Aldrich Chemicals were used as received. The binding of the nucleobases with the SWNTs was achieved by mixing 300 $\mu$l of 10 mM nucleobase solution (prepared in double distilled water with a pH of 6.7) with 1.6 ml of the above SWNT solution. We chose to suspend the nanotubes and the nucleobases in water instead of a buffer solution because the latter may influence the surface of the nanotubes. We could not perform the experiment with guanine in aqueous solution due to its insolubility in water. Calorimetric titrations were performed at 5 $^{\circ}$C with a Micro-Cal VP-ITC instrument.
We checked the stabilization of the instrument by performing a water-water run, the heat change so obtained being less than 0.6 $\mu$cal per 10 $\mu$l injection of water. Here a rotating stirrer-syringe (270 rpm) containing 300 $\mu$l of 10 mM nucleobase solution injects equal steps of 10 $\mu$l of solution at 180-sec intervals into a cell containing 1.4 ml of SWNT solution until saturation is reached.

Experimental Results
====================

The ITC responses recorded during the titration of the nucleobases with the SWNTs are shown in Fig. 6. Fig. 6a, 6d and 6g show the ITC data for the blank titrations of 10 mM thymine, cytosine and adenine, respectively, against double distilled water. Fig. 6b, 6e and 6h show the raw data of the heat of reaction due to nucleobase-SWNT binding during each injection for thymine, cytosine and adenine, respectively. We can see that the blank titration of adenine (see Fig. 6g) with double distilled water is an endothermic reaction with a large heat of reaction. The raw ITC response of adenine (see Fig. 6h) with the SWNT solution also shows an endothermic response, with a different magnitude of the heat of reaction. Therefore, the heat of reaction of adenine with the SWNTs is obtained by subtracting the blank titration from the raw titration, as shown in Fig. 6i. The nature of the binding curves (Fig. 6b, 6e and 6i) arises from the lowering of the accessible surface area of the SWNTs in successive injections as the nucleobases bind with the SWNTs. The exothermic tail in the main curve is due to the dilution effect of the nucleobases during the titration, as there is no free surface of SWNTs available for binding. We have also seen that the heat of dilution of the SWNT solution with water is less than 5 cal per 10 $\mu$l injection of water. The raw ITC of thymine (Fig. 6b) shows the appearance of a second peak in each injection, which may be a result of more available free surface due to the possible removal of entanglement between the nanotube bundles. Fig. 
6c, 6f and 6j correspond to the integrated heat change (enthalpy change in cal/mol) due to each injection of thymine, cytosine and adenine, respectively, after correcting for the heat of dilution (at each injection), and are plotted as a function of the injected nucleobase volume. The experiments were repeated twice and show good reproducibility, as seen from the relatively small standard errors. The strength of interaction of the nucleobases with the SWNTs is evaluated from the exothermicity (integrated heat in Fig. 6c, 6f and 6j) of the first injection of the nucleobases with the SWNTs, and the binding sequence is T (-320 cal/mol) $>$ A (-219 cal/mol) $>$ C (-134 cal/mol). Since the heat of reaction curves in Fig. 6 are not sigmoidal in nature, the absolute value of the binding energy cannot be calculated from Fig. 6. This is because of the limited solubility of the nanotubes as well as their bundling effect in aqueous solution.

Conclusions
===========

Our results from HF combined with force field calculations indeed suggest that the binding of the nucleobases with SWNTs is mainly due to the weak vdW force and that the gas-phase binding sequence is G $>$ A $>$ T $>$ C, like the binding sequence of the nucleobases with graphene. We also show that, compared to graphene, the nanotube has a lower binding affinity with the nucleobases due to curvature effects. We have shown that the solution-phase binding sequence of the nucleobases with the nanotube is G $>$ T $>$ A $>$ C, which explains the earlier observations that oligonucleotides made of thymine bases are more effective in dispersing the SWNTs in aqueous solution than poly (A) and poly (C). We have shown by direct calorimetry that there is a differential binding affinity in the interaction of the nucleobases with SWNTs, the sequence being T $>$ A $>$ C, in agreement with the calculations.\ **Acknowledgment** We thank the Department of Science and Technology for financial support. Peigney, A.; Laurent, C.; Flahaut, E.; Bacsa, R. 
R.; Rousset, A. *Carbon* **2001**, *39*, 507-514. Eswaramoorthy, M.; Sen, R.; Rao, C. N. R. *Chem. Phys. Lett.* **1999**, *304*, 207-210. Bacsa, R. R.; Laurent, C.; Peigney, A.; Basca, W. S.; Vaugien, T.; Rousset, A. *Chem. Phys. Lett.* **2000**, *323*, 566-571. Salvetat, J.-P.; Bonard, J.-M.; Thomson, N. H.; Kulik, A. J.; Forró, L.; Benoit, W.; Zuppiroli, L. *Appl. Phys. A* **1999**, *69*, 255-260. Colbert, D. T.; Smalley, R. E. *Trends Biotechnol.* **1999**, *17*, 46-50. Collins, P. G.; Arnold, M. S.; Avouris, P. *Science* **2001**, *292*, 706-709. Tans, S.; Verschueren, A.; Dekker, C. *Nature* **1998**, *393*, 49-52. Baughman, R. H.; Cui, C.; Zakhidov, A. A.; Iqbal, Z.; Barisci, J. N.; Spinks, G. M.; Wallace, G. G.; Mazzoldi, A.; Rossi, D. D.; Rinzler, A. G.; Jaschinski, O.; Roth, S.; Kertesz, M. *Science* **1999**, *284*, 1340-1344. Kong, J.; Franklin, N. R.; Zhou, C.; Chapline, M. G.; Peng, S.; Cho, K.; Dai, H. *Science* **2000**, *287*, 622-625. Staii, C.; Johnson, A. T., Jr. *Nano Lett.* **2005**, *5*, 1774-1778. Ghosh, S.; Sood, A. K.; Kumar, N. *Science* **2003**, *299*, 1042-1044. Lin, Y.; Taylor, S.; Li, H.; Fernando, K. A. S.; Qu, L.; Wang, W.; Gu, L.; Zhou, B.; Sun, Y. P. *J. Mater. Chem.* **2004**, *14*, 527-541. Li, J.; Ng, H. T.; Cassell, A.; Fan, W.; Chen, H.; Ye, Q.; Koehne, J.; Han, J.; Meyyappan, M. *Nano Lett.* **2003**, *3*, 597-602. , *Proc. Natl. Acad. Sci. U.S.A.* **2006**, *103*, 921. , *Science* **2006**, *311*, 508. , *Nature Materials* **2003**, *2*, 338-342. Zheng, M.; Jagota, A.; Strano, M. S.; Santos, A. P.; Barone, P.; Chou, S. G.; Diner, B. A.; Dresselhaus, M. S.; Mclean, R. S.; Onoa, G. B.; Samsonidze, G. G.; Semke, E. D.; Usrey, M.; Walls, D. J. *Science* **2003**, *302*, 1545-1548. Chou, S. G.; Ribeiro, H. B.; Barros, E. B.; Santos, A. P.; Nezich, D.; Samsonidze, G. G.; Fantini, C.; Pimenta, M. A.; Jorio, A.; Filho, F. P.; Dresselhaus, M. S.; Dresselhaus, G.; Saito, R.; Zheng, M.; Onoa, G. B.; Semke, E. D.; Swan, A. 
K.; Ünlü, M. S.; Goldberg, B. B. *Chem. Phys. Lett.* **2004**, *397*, 296-301. Rajendra, J.; Rodger, A. *Chem. Eur. J.* **2005**, *11*, 4841-4847. Meng, S.; Wang, W. L.; Maragakis, P.; Kaxiras, E. *Nano Lett.* **2007**, *7*, 2312-2316. Thompson, G.; Owen, D.; Chalk, P. A.; Lowe, P. N. *Biochemistry* **1998**, *37*, 7885-7891. Kunne, A. G. E.; Sieber, M.; Meierhans, D.; Allemann, R. K. *Biochemistry* **1998**, *37*, 4217-4223. Wenk, M. R.; Seelig, J. *Biochemistry* **1998**, *37*, 3909-3916. www.schrodinger.com Ortmann, F.; Schmidt, W. G.; Bechstedt, F. *Phys. Rev. Lett.* **2005**, *95*, 186101-186104. , *Chemical Physics* **2006**, *327*, 54-62. Sowerby, S. J.; Cohn, C. A.; Heckl, W. M.; Holm, N. G. *PNAS* **2001**, *98*, 820-822. Brameld, K.; Dasgupta, S.; Goddard III, W. A. *J. Phys. Chem. B* **1997**, *101*, 4851-4859. Case, D. A.; Pearlman, D. A.; Caldwell, J. W.; et al. AMBER 7, University of California, San Francisco, CA, 1999. Tannor, D. J.; Marten, B.; Murphy, R.; Friesner, R. A.; Sitkoff, D.; Nicholls, A.; Ringnalda, M.; Goddard III, W. A.; Honig, B. *J. Am. Chem. Soc.* **1994**, *116*, 11875-11882. Tsang, K. Y.; Diaz, H.; Graciani, N.; Kelly, J. W. *J. Am. Chem. Soc.* **1994**, *116*, 3988-4005. Preeuss, M.; Schmidt, W. G.; Bechstedt, F. *Phys. Rev. Lett.* **2005**, *94*, 236102-236105. Shi, X.; Kong, Y.; Zhao, Y.; Gao, H. *Acta Mechanica Sinica* **2005**, *21*, 249-256. Gowtham, S.; Scheicher, R. H.; Ahuja, R.; Pandey, R.; Karna, S. P. *Phys. Rev. B* **2007**, *76*, 033401-4. Lee, B.; Richards, F. M. *J. Mol. Biol.* **1971**, *55*, 379-400. Vivekchand, S. R. C.; Jayakanth, R.; Govindaraj, A.; Rao, C. N. R. *Small* **2005**, *1*, 2-5. [^1]: Corresponding author: [email protected]
---
abstract: 'In electroencephalogram (EEG) signal processing, finding the appropriate information in a dataset has been a big challenge for successful signal classification. Feature selection methods make it possible to solve this problem; however, it is still under investigation which method extracts the most appropriate features of the signal to improve the classification performance. In this study, we use the genetic algorithm (GA), a heuristic search algorithm, to find the optimum combination of feature extraction methods and classifiers in brain-computer interface (BCI) applications. A BCI system can be practical only if it performs with both high accuracy and high speed. In the proposed method, GA acts as a search engine to find the best combination of features and classifiers. The features used here are the Katz, Higuchi, Petrosian, Sevcik, and box-counting dimension (BCD) feature extraction methods. These features are applied to the wavelet subbands and are classified with four classifiers: the adaptive neuro-fuzzy inference system (ANFIS), fuzzy k-nearest neighbors (FKNN), support vector machine (SVM) and linear discriminant analysis (LDA). Due to the huge number of features, GA optimization is used to find the features with the optimum fitness value (FV). Results reveal that the Katz fractal feature estimation method with LDA classification has the best FV. Consequently, due to the low computation time of the first Daubechies wavelet transformation in comparison to the original signal, the final selected methods contain the fractal features of the first coefficient of the detail subbands.'
address:
- 'Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA\'
- 'Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran'
- 'Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran'
author:
- Samira Vafay Eslahi
- Nader Jafarnia Dabanloo
- Keivan Maghooli
title: 'A GA-based feature selection of the EEG signals by classification evaluation: Application in BCI systems'
---

Genetic algorithm; Brain-computer interface; Wavelet transform; EEG; Classification; Fractals

Introduction {#S:1}
============

BCI technology provides a direct communication channel between people and computers. It monitors EEG signals, and consequently brain activity, detecting the related tasks via signal processing algorithms to enable this communication [@c0]. It has the potential to enable physically disabled patients to perform activities with a higher quality and can increase their productivity. The most common motor imagery tasks are hand [@c8], feet [@c0], and tongue [@c9] movements. BCI systems can translate these tasks into computer commands. The signals from the motor cortex should be classified to be converted into the computer language. One of the most important criteria for a good classification is accuracy. As BCI applications demand, the computation time should be very low so that the motor imagery tasks are converted to computer commands as fast as possible. The trade-off between accuracy and computation time has been a big challenge, which calls for EEG optimization methods. To improve the classification accuracy, classifiers and features play important roles. Different studies have investigated the use of GA to find the most efficient feature sets. For instance, continuous and binary GA optimization has been proposed to characterize a patient's epilepsy risk level [@c25].
Another method decomposed the normal and epileptic seizure EEG signals into different frequency bands with a 4th-level wavelet decomposition and used the approximate entropy of the decomposition nodes as the feature sets for classification [@c26]; GA was used as the optimization method to find the risk of epilepsy. Another GA-based optimization was proposed in 2010, in which the EEG signals were decomposed into five subband components and the features considered were nonlinear parameters, classified by support vector machines with a linear kernel function (SVML) and a radial basis function kernel (SVMRBF) [@c27]. Another feature selection method based on GA, evaluated by SVM, has also been proposed [@c28]. A further study showed that with GA optimization the performance of a neural network can be improved [@c29]. Fractal dimension estimation is a statistical measurement indicating the complexity of an object or a quantity that is self-similar over some region of space or time interval. Fractal feature estimation has been successfully used in various domains to characterize objects and quantities, and over the last decades it has attracted growing interest for motor imagery tasks in BCI applications, although its usage there is still under investigation [@c12][@c13]. The biggest challenge in BCI is to extract features that give acceptable speed and accuracy. Numerous feature extraction methods use fractal dimension estimation to extract geometrical features from signals. The most famous methods that calculate the fractal dimension of a signal are the Katz [@c23], Higuchi [@c14], Petrosian [@c15], Sevcik [@c11], and BCD feature extraction methods. The combination of the Katz and Higuchi fractal dimensions with the fuzzy k-nearest neighbors (FKNN) [@c18], SVM [@c19], and linear discriminant analysis (LDA) [@c20] classifiers has been proposed [@c16].
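As an illustration of one of these estimators, the Katz fractal dimension is $\log_{10}(n)/(\log_{10}(n)+\log_{10}(d/L))$, where $L$ is the total length of the waveform, $d$ is the maximum distance from the first sample, and $n$ is the number of steps. A minimal sketch (our own illustrative implementation, not the authors' code):

```python
import math

def katz_fd(x):
    """Katz fractal dimension of a sampled waveform (unit sample spacing)."""
    n = len(x) - 1                       # number of steps in the waveform
    # total length: sum of distances between successive sample points
    L = sum(math.hypot(1.0, x[i + 1] - x[i]) for i in range(n))
    # planar extent: max distance from the first point to any other point
    d = max(math.hypot(i, x[i] - x[0]) for i in range(1, n + 1))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# A straight line has dimension (essentially) 1.
print(round(katz_fd([0.0, 1.0, 2.0, 3.0, 4.0]), 3))  # 1.0
```

Values above 1 indicate increasing waveform complexity, which is what makes the measure useful as a compact EEG feature.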
They showed that the combination of the Katz method with the FKNN classifier has the best performance in terms of time and accuracy in comparison to the other methods. They then modified the best methodology of the combination of these features and classifiers, called time-dependent fractal dimension (TDFD) [@c21], differential fractal dimension (DFD) [@c22], and differential signals (DS) [@c16]. In this study, we investigate the combination of different classifiers, such as ANFIS, FKNN, SVM, and LDA, with different feature extraction methods, such as Katz, Higuchi, Petrosian, Sevcik, and box-counting dimension, first on the original data and second on the detail and approximation subbands of the wavelet transformation. The classification is based on right- or left-hand imagery movements and evaluates the fractal features on both the subbands and the original signal. Finally, the computation time and accuracy are optimized by GA to find the best method with the highest performance.

Materials and Methods {#S:2}
=====================

Dataset
-------

The data is downloaded from BCI Competition 2003, Dataset III [@c30]. A female subject sat in a chair keeping her arms relaxed. The task was performed by means of imagined left- or right-hand movements, and a feedback bar controlled the process. The EEG signal was recorded with a sampling rate of 128 Hz from three channels at standard positions of the 10-20 international system (C3, Cz, and C4). The signals were filtered between 0.5 and 30 Hz. Each run had 40 trials and each trial was nine seconds long. The imagery was performed for about six seconds after a cue indicating the task was presented. The dataset contains 280 trials. In this study, 9 folds are used for training and 1 fold is used for testing the classifiers' performances.

Optimization
------------

A genetic algorithm is a search engine based on natural selection and genetics, proposed by Holland [@c5].
GA has four steps as follows:

Step 1: Initialization and fitness calculation. The first step in the genetic algorithm is generating a random population of chromosomes. These chromosomes encode the solution candidates, called individuals. For optimizing a problem, GA calculates the fitness value of each of the chromosomes.

Step 2: Selection. During each iteration, the chromosomes that have higher fitness values will be eliminated, and consequently the chromosomes with lower fitness values will be selected as the desired parents to produce children for the next generation.

Step 3: Reproduction and mutation. After reproducing the population, the mutation technique is used. This procedure prevents the algorithm from falling into local minima. It randomly flips some bits of chromosomes to change them in that generation.

Step 4: Termination. This process is repeated until the stopping condition is met.

![image](Flowchart02.jpg){width="\textwidth"}

Figure 1 shows the process of the proposed method. With the discrete wavelet transformation, the signal is decomposed into high and low frequencies with db2 of Daubechies, the most popular family of wavelet transformations. This procedure continues to level 10 to find 10 detail and 10 approximation coefficient sets. In this stage, the EEG digital signal is decomposed into detail and approximation subbands. The coefficients of the detail and approximation subbands are calculated up to 10 levels, as deeper levels do not contain enough information about the signal. In consequence, we calculated the fractal dimensions of these subbands and obtained numerous feature sets. In the trade-off between computation time and accuracy, we evaluate the performance of the methods seeking the minimum computation time and maximum classification accuracy. The evaluation is done by GA with a defined fitness function. The fitness function is the ratio of the time to the accuracy, so a high FV shows a poor performance and a low FV shows an optimum performance.
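One level of this decomposition can be sketched in pure Python (an illustrative implementation with periodic boundary handling; the paper presumably used a standard wavelet toolbox). Repeating it on the approximation output yields the deeper levels:

```python
import math

# db2 (Daubechies 4-tap) analysis filters. The high-pass filter is the
# quadrature-mirror pair of the low-pass one: g[k] = (-1)^k * h[3 - k].
s3 = math.sqrt(3.0)
h = [(1 + s3), (3 + s3), (3 - s3), (1 - s3)]
h = [c / (4 * math.sqrt(2.0)) for c in h]     # low-pass, unit norm
g = [h[3], -h[2], h[1], -h[0]]                # high-pass

def dwt_level(x):
    """One decomposition level: periodic filtering + downsample by 2."""
    n = len(x)
    approx = [sum(h[k] * x[(2 * i + k) % n] for k in range(4)) for i in range(n // 2)]
    detail = [sum(g[k] * x[(2 * i + k) % n] for k in range(4)) for i in range(n // 2)]
    return approx, detail

a, d = dwt_level([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0])
print(len(a), len(d))  # 4 4
```

Because the filter bank is orthogonal, the signal energy splits exactly between the approximation and detail outputs, which is why the subbands can be analyzed independently.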
GA optimization works as follows: the initial population consists of N parents and consequently N chromosomes. Each chromosome is a string of genes and has length L. Each gene is a binary allele that indicates whether a feature from the feature space is selected. Calculating the fractal dimensions of the approximation and detail subbands with the five feature estimation methods makes the feature space comprise 100 features: ten approximation and ten detail subbands with five fractal estimation methods. On the other hand, the three channels producing these 100 features each would result in 300 features with different information. We also add 15 more features, obtained from applying the five fractal dimension calculations to the original signals from the three channels. With these combinations, the feature space consists of 315 different features with different information about the signal. In the GA optimization procedure, the initial population therefore contains parents whose chromosomes each consist of 315 genes. The number of combined features for the best classification results depends on the accuracy and the computation time: increasing the number of combined features can increase the computation time, which is not favorable, while it can improve the accuracy of the classification. The number of combined features can be determined by the best fitness value calculated by GA. Each chromosome is a binary string containing only zeros and ones; therefore each gene in the binary string is assigned one of two values, in which a one represents the selection of one specific feature, and a zero means that the same feature is not selected (Fig. 2).
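The encoding just described can be sketched as a 315-bit mask over the feature matrix (illustrative code; names such as `selected_features` are ours, not from the paper):

```python
import random

N_FEATURES = 315  # 300 subband features + 15 original-signal features

def random_chromosome(n=N_FEATURES):
    """A random binary chromosome: one gene per candidate feature."""
    return [random.randint(0, 1) for _ in range(n)]

def selected_features(chromosome, feature_matrix):
    """Keep only the feature columns whose gene is 1."""
    idx = [i for i, gene in enumerate(chromosome) if gene == 1]
    return [[row[i] for i in idx] for row in feature_matrix]

chrom = [0] * N_FEATURES
chrom[0] = chrom[17] = chrom[314] = 1                # select features 0, 17, 314
features = [[float(i) for i in range(N_FEATURES)]]   # one dummy sample
print(selected_features(chrom, features))            # [[0.0, 17.0, 314.0]]
```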
\[fig:figure 2\] In each generation, two parents are selected from the population, and the five classifiers classify the feature combination encoded by each of these parents separately. The fitness value of each of these classifications is calculated and evaluated. The termination criterion of the GA is reaching 100 generations: once the 100th generation has passed, the algorithm stops the optimization. The FV of these parents indicates how effective each parent and its offspring are. The procedure continues as the combinations with high FV are discarded and the combinations with low FV are transferred to the next generation, producing the offspring. The selected parents are sent to the crossover procedure, in which half of the genes of the first parent are combined with half of the genes of the second parent. Since the feature combination encoded in each selected parent's chromosome was the best among those evaluated, these combinations are good enough to be carried into the next generations. The optimization continues until the prescribed number of generations is reached. Because the feature space is very large, a threshold has been defined to eliminate combinations with high FV: it is set to 5% of the maximum FV, which corresponds to the combination of all features. This threshold prevents unnecessary computations; with the settings used here it limits a combination to about 7 features (Fig 3). \[fig:figure 3\] The fitness value is defined as the ratio of the computation time to the accuracy, and the combinations with lower values are transferred to the next generation in order to find the best combination of features, and the best pairing of feature combinations with classifiers.
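The loop described above — selection of low-FV parents, crossover, rare mutation, and termination after 100 generations — can be sketched as follows. This is an illustrative Python skeleton, not the authors' MATLAB implementation: the fitness function is a toy stand-in for the time/accuracy ratio, and a simple one-point crossover is used for brevity.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=20, generations=100,
                      crossover_rate=0.9, mutation_rate=0.001, seed=0):
    """Minimize `fitness` over binary chromosomes (Steps 1-4 of the text)."""
    rng = random.Random(seed)
    # Step 1: random initial population of binary chromosomes.
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):             # Step 4: fixed-generation budget
        pop.sort(key=fitness)                # Step 2: lower FV is better
        parents = pop[:pop_size // 2]        # high-FV chromosomes eliminated
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)  # Step 3: reproduction...
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_genes)
                child = p1[:cut] + p2[cut:]  # one-point crossover, for brevity
            else:
                child = p1[:]
            # ...and rare bit-flip mutation, to escape local minima
            child = [g ^ 1 if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = parents + children             # elitist: best parents survive
    return min(pop, key=fitness)

# Toy fitness standing in for FV = time/accuracy: fewer selected genes is better.
best = genetic_algorithm(lambda c: sum(c), n_genes=16)
```

Because the surviving parents are carried over unchanged, the best fitness value found can never worsen from one generation to the next.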
The fitness function is defined by\ $FV=T/A$,\ where $FV$ is the fitness value, $T$ is the computation time of the classification, and $A$ is the classification accuracy. In GA optimization, the selection, crossover, and mutation must be chosen carefully so that favorable combinations are not eliminated. Selection is governed by a selection rate, with a crossover probability of 0.9, so that parents with lower fitness values are chosen to generate the next population; parents with higher fitness values are eliminated to reduce the optimization time. The crossover of the selected parents, on the other hand, is the operation by which chromosomes exchange genes with each other to produce children for the next population. Because the ordering matters here, the scattered type of crossover is used: randomly selected genes of the two chromosomes are exchanged in place. The scattered crossover is chosen because it changes the order of the feature combination and thus prevents the same combination of features from being repeated in later generations. For instance, if F1-F2-F3-F4-F5-F6-F7 is one generated offspring and F8-F9-F10-F11-F13-F14-F15 is another, the next generation does not necessarily use F1-F2-F3-F4-F13-F14-F15 in this order. In fact, the other combinations with low FV are investigated as well. The new chromosomes are the children of the previous generation or, in other words, the parents of the new generation. Furthermore, in the mutation process some genes are changed at random with a rate of 0.001: a number between zero and one is drawn uniformly, and if it is lower than the defined mutation rate, a random binary string is produced and one random gene of the current chromosome is exchanged with it. This keeps the algorithm from becoming trapped in local minima.
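A minimal sketch of the scattered crossover, the bit-flip mutation, and the fitness value $FV=T/A$ described above; this is illustrative Python, and the function names are ours.

```python
import random

def fitness_value(time_sec, accuracy_percent):
    """FV = T / A: low when classification is fast and accurate."""
    return time_sec / accuracy_percent

def scattered_crossover(p1, p2, rng):
    """Scattered (uniform) crossover: each gene position is drawn at random
    from one parent or the other, so the exchanged positions are not tied
    to a fixed ordering of the feature combination."""
    mask = [rng.randint(0, 1) for _ in p1]
    child1 = [a if m else b for m, a, b in zip(mask, p1, p2)]
    child2 = [b if m else a for m, a, b in zip(mask, p1, p2)]
    return child1, child2

def mutate(chromosome, rng, rate=0.001):
    """Flip each gene independently with the small mutation rate."""
    return [g ^ 1 if rng.random() < rate else g for g in chromosome]

rng = random.Random(42)
c1, c2 = scattered_crossover([1, 1, 1, 1, 0, 0, 0, 0],
                             [0, 0, 0, 0, 1, 1, 1, 1], rng)
# Genes are swapped between the children, never lost:
assert sorted(a + b for a, b in zip(c1, c2)) == [1] * 8
print(fitness_value(0.12, 84))  # ~0.0014, cf. the Katz-LDA row of Table 1
```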
Finally, on meeting the termination criterion, i.e. 100 generations, the algorithm has converged to a minimum value, the least fitness value found over the 100 generations. In addition, since the crossover of the best couple selected from the previous generation does not necessarily produce a value better than that of the previous parents, the parent with the minimum fitness value is carried over as an elite into the next generation, provided its value is low enough for elitism to apply. In assigning values to the GA parameters, preventing GA from being trapped in local optima is one of the most important factors to consider. In addition, the divergence of future generations should be controlled to obtain acceptable results in real-time signal processing. The values attributed to the GA parameters are based on previous similar studies; with small changes to these values we adjusted the GA parameters experimentally to reach the optimum value.

Results {#S:3}
=======

All the simulations were performed in MATLAB R2011b (7.13.0.564). Figure 4 shows the convergence of the GA to the minimum value, i.e. how the optimization process finds the features minimizing the defined FV. \[fig:figure 4\]

  Feature(s)-classifier   Accuracy (%)   Time (sec)   FV
  ----------------------- -------------- ------------ --------
  Petrosian on D1-ANFIS   74             0.29         0.0039
  B.C.D-SVM               71             0.27         0.0038
  DS-ANFIS                67             0.25         0.0037
  DS-FKNN                 70             0.26         0.0037
  B.C.D-FKNN              68             0.25         0.0037
  DS-SVM                  73             0.23         0.0031
  B.C.D on D1-FKNN        77             0.22         0.0029
  Katz on D1-LDA          79             0.19         0.0024
  Katz on D1-SVM          75             0.17         0.0023
  Katz-FKNN               83             0.17         0.0020
  B.C.D on D1-LDA         69             0.14         0.0020
  Katz on D1-FKNN         80             0.14         0.0017
  Katz-LDA                84             0.12         0.0014

  : GA selected methods with the lowest FV. The accuracy and the computation time are calculated.
[]{data-label="table 1"} The results showed that the best performance was obtained whenever the combination did not exceed two features. The computation time increases with the number of features participating in the classification, without a significant change in the classification accuracy. With a different fitness function the results would obviously be different, depending on the weight of the factors entering the final FV. The convergence of the GA shows that it is a strong optimization algorithm that can find the optimum solution in a large feature set. The minimum fitness values found by the proposed method are shown in Table 1. In some applications the accuracy is more important than the computation time; the fitness function can then be changed by adding a weighting factor to the accuracy. In BCI applications the computation time plays a significant role, as the brain signals should be transferred to the computer as fast as possible, so that the hand moves as soon as the brain imagines the movement. The results show that the proposed method can select the best features in a large feature set and can investigate their combinations.

Discussion {#S:4}
==========

In this paper, we optimized the combination of fractal features and the classification of these combined feature sets. The optimization is based on the extraction of fractal features from the approximation and detail subbands acquired by the wavelet transform and from the original signal. For classification, we used four types of classification methods to recognize the class of motor imagery movements, such as the tasks related to right- or left-hand movement. First, we passed the recorded signal through high-pass and low-pass filters and decomposed it into two subband sets, approximation and detail.
Then we used five well-known methods of measuring the fractal dimension, namely the Katz, Higuchi, Petrosian, Sevcik and Box Counting Dimension algorithms, to calculate the geometric dimensions of the extracted subbands. Because of the large number of features in this method, we used GA to reduce the dimension of the feature space. The best accuracy was obtained with the Katz feature estimation method classified by LDA. The program was run multiple times and the results differed from run to run, but the run reaching the minimum fitness value was selected as the best GA solution; Katz features with the LDA classifier were the winner in all of the runs. Each of the feature sets and classification methods has its own advantages and disadvantages; the best method, however, is the one with the lowest fitness value among those investigated that still achieves acceptable performance for EEG classification. In this study, the fitness value is the ratio of the computation time to the classification accuracy; different fitness functions would of course yield different values. The weighting factor considered here for both the computation time and the classification accuracy is one, but adding a separate weighting factor for each quantity in the fitness function could make it more reliable. The proposed method is evaluated by 10-fold cross-validation: as mentioned, the data consist of 280 trials, with nine folds used for training the classifiers and one fold for testing their performance.

Conclusion {#S:5}
==========

The genetic algorithm plays an important role as a feature selection method for choosing optimum features among numerous candidates. Producing each generation of children from the parents with the best fitness values focuses the computation, saving time and avoiding unnecessary calculations.
In this paper, we applied a genetic algorithm as a feature selection method to find the optimum fitness values. These values are based on features that combine the fractal dimensions of the original signal and of the subbands obtained from the wavelet transform. We obtained results with the minimum fitness values among the compared methods, showing that the proposed method performs as a strong feature selection algorithm. In other words, by taking both the computation time and the classification accuracy into account, this search algorithm minimizes the fitness value and extracts the features most appropriate for the motor imagery movements of the right and left hand. In the ANFIS implementation, the computation time was measured on the testing data only. Since each person's EEG signals should be trained individually, training on other persons' signals may not be reliable for a given individual. In a specific case it is reliable to train on data acquired once the person has imagined a task, because a person's EEG signals remain similar each time the same task is imagined; the hypothesis of this study is therefore that each individual repeats a given specific task. It would be of great interest to find an algorithm for training on the signals of different participants and testing on a random individual. In addition, this research opens a way to optimize parameters that are not considered in the classifier training, which can lead to different results in different situations. Furthermore, some parameters of the fractal dimension estimators would also benefit from optimization; the genetic algorithm can be used to optimize these parameters so as to minimize a defined nonlinear ratio of time to accuracy.
Defining a nonlinear fitness value would be more realistic in commercial BCI systems.
--- abstract: 'The influence of structural asymmetries (barrier height and exchange splitting), as well as of inelastic scattering (magnons and phonons), on the bias dependence of the spin transfer torque in a magnetic tunnel junction is studied theoretically using the free electron model. We show that they modify the “conventional” bias dependence of the spin transfer torque, together with the bias dependence of the conductance. In particular, both structural asymmetries and bulk (inelastic) scattering add [*antisymmetric*]{} terms to the perpendicular torque ($\propto V$ and $\propto j_e|V|$), while the interfacial inelastic scattering conserves the junction symmetry and only produces [*symmetric*]{} terms ($\propto |V|^n$, $n\in\mathbb{N}$). The analysis of spin torque and conductance measurements displays a signature revealing the origin (asymmetry or inelastic scattering) of the discrepancy.' author: - 'A. Manchon$^{1,2}$, S. Zhang$^{2}$, K.-J. Lee$^{3}$' title: Signatures of asymmetric and inelastic tunneling on the spin torque bias dependence ---

Introduction
============

The recent observation of current-driven magnetization control [@Slonc96] in Magnetic Tunnel Junctions [@huai] (MTJs) offers promising opportunities for magnetic recording and memory applications [@fullerton]. The observed critical switching current has now reached 10$^6$A/cm$^2$, which makes MTJs competitive candidates for Magnetic Random Access Memories [@ieee]. However, due to the specific transport properties of MTJs, the characteristics of the spin transfer torque in these devices display significant differences from the current-driven torque usually observed in metallic spin-valves [@sunralph]. Uncovering the precise form of the bias dependence of the spin torque is essential to understand and control the dynamical properties of the magnetization.
In MTJs, it has been demonstrated both theoretically [@slonc07; @theo; @kioussis; @manchon; @xiao; @wil; @heiliger] and experimentally [@sankey; @deac; @petit; @li; @sun; @oh] that the torque possesses two components, usually referred to as the in-plane (or Slonczewski) torque, $T_{||}$ and the perpendicular (or out-of-plane) torque, $T_\bot$. The first one is purely non-equilibrium and competes with the damping, whereas the second one arises from spin reorientation at the interfaces, possesses both equilibrium (Interlayer Exchange Coupling [@slonc89]) and non-equilibrium components and acts like an effective magnetic field on the magnetization. The presence of this perpendicular torque results in original dynamical properties of the magnetization [@sankey; @deac; @petit; @li; @sun; @oh]. Up until now, most of the experimental efforts have been focused on the bias-dependence of the perpendicular torque $T_\bot$. Although this component is vanishingly small in metallic spin-valves, it can not be neglected in MTJs, due to the momentum filtering imposed by the tunnel barrier [@manchon]. Most of the theories, using tight-binding [@theo; @kioussis], free-electron [@manchon; @xiao; @wil] or [*ab-initio*]{} [@heiliger] calculations, have addressed the bias dependence of the spin torque within a symmetric and purely elastic tunneling junction (referred to as SE tunneling). It has been shown that for SE tunneling at low bias voltage, the form of the spin torque is: $$\begin{aligned} \label{eq:SEip} {\bf T}_{||}&=&(a_1V+a_2V^2){\bf M}\times({\bf M}\times {\bf P}),\\\label{eq:SEop} {\bf T}_{\bot}&=&(b_0+b_2V^2){\bf M}\times {\bf P},\end{aligned}$$ where ${\bf P}$ and ${\bf M}$ are the magnetization directions of the pinned and free layers, respectively. These bias dependencies have been well observed in spin-diode-type experiments [@sankey] performed on MgO-based MTJs. 
The linear bias-dependence of the in-plane torque that has been measured ($a_2\approx0$) is consistent with Ref. which suggests that MgO-based MTJs behave like half-metallic junctions. In contrast, a number of experiments using dynamical and switching properties of the MTJs [@deac; @petit; @li; @sun; @oh], as well as recent theoretical investigations [@kioussis; @xiao; @wil], have recently questioned the validity of the “conventional” bias-dependencies represented by Eqs. (\[eq:SEip\])-(\[eq:SEop\]). In particular, Xiao et al. [@xiao] and Wilczynski et al. [@wil], employing the free electron model, numerically showed that structural asymmetries could alter the conventional bias dependence of the perpendicular torque, whereas Tang et al. [@kioussis] predicted a non-monotonic bias dependence of $T_\bot$, demonstrating the importance of band filling. From the experimental side, Li et al. [@li] measured a field-like effect of the form $\propto j_e|V|$ and interpreted their data by considering the electron-magnon scattering in the bulk of the ferromagnets. In contrast, Sun et al. [@sun] suggested the possibility of non-macrospin processes or heating artefacts that would induce a bias-dependent effective field. Very recently, Oh et al. [@oh] demonstrated the possibility of tuning the bias dependence of the perpendicular torque by engineering the structural asymmetries of a MgO-based MTJ, consistently with theoretical simulations [@xiao; @wil; @kioussis]. One of the authors also proposed that an incomplete absorption of the transverse spin density within the free layer could lead to an asymmetric perpendicular torque [@manchon]. Finally, we recently studied the influence of interfacial electron-magnon scattering on the bias dependence of the torque [@magnon] and found that an additional symmetric term of the form $\propto|V|$ arises.
As seen from this brief overview, the bias dependence of the spin torque is far from universal, and a number of mechanisms have been shown to modify this dependence. However, the role of asymmetries has been investigated numerically within the tight-binding model [@kioussis] and the free electron model [@xiao; @wil], and little is known concerning the role of inelastic scattering [@li; @magnon]. In this paper, we derive analytic solutions for $T_{||}$ and $T_{\bot}$ in the case of structural asymmetries and (bulk and interfacial) inelastic scattering by magnons and phonons, leading to a discrepancy between the actual torques and the “conventional” ones \[Eqs. (1)-(2)\]. In particular, both structural asymmetries and bulk (inelastic) scattering add [*antisymmetric*]{} terms to the perpendicular torque ($\propto V$ and $\propto j_e|V|$), while the interfacial inelastic scattering conserves the junction symmetry and only produces [*symmetric*]{} terms ($\propto |V|$). Moreover, we suggest that a connection exists between the tunneling conductance and the out-of-plane torque, which constitutes a signature of the origin of the discrepancy. This paper is organized as follows: in section II, we briefly discuss the form of the tunneling spin torque with and without spin diffusion. Section III addresses the bias-dependence of the spin torque and conductance in the presence of structural asymmetries. The influence of bulk and interfacial inelastic scattering is described in section IV and the conclusion is given in section V.

Spin current vs spin density
============================

In most of the theoretical studies on the spin transfer torque in MTJs, the torques are associated with the transverse spin current density at the interfaces between the electrodes and the tunnel barrier [@slonc07; @theo; @kioussis; @manchon; @xiao; @wil; @heiliger]. This definition is only valid in the case of semi-infinite electrodes where the spin diffusion is neglected.
A more correct approach is to relate the spin torque to the spin density rather than to the spin current (see Ref. for a detailed discussion). The spin torque is then the torque exerted by the transverse spin density on the local magnetization and has the form: $$\begin{aligned} {\bf T}=\int_\Omega \frac{J}{\hbar}{\bf m}\times{\bf M}d\Omega,\end{aligned}$$ where $J$ is the $s-d$ exchange coupling, ${\bf m}$ is the itinerant spin density, ${\bf M}$ is the local magnetization and $\Omega$ is the volume of the magnetic layer. The spin density can be computed from the well-known spin continuity equation [@chapter]: $$\begin{aligned} \label{eq:eq6} \frac{\partial {\bf m}}{\partial t}=-\nabla\cdot{\cal J}_s-\frac{J}{\hbar}{\bf m}\times{\bf M}-\frac{{\bf m}}{\tau_{sf}},\end{aligned}$$ where ${\cal J}_s$ is the spin current and $\tau_{sf}$ is the spin relaxation time. In the case of a magnetic tunnel junction, where the resistance is dominated by the barrier, the spatial variation of the spin density is usually neglected and the torque is directly related to the spin current[@slonc07; @theo; @kioussis; @manchon; @xiao; @wil; @heiliger]: ${\bf T}=-\int_\Omega \nabla\cdot{\cal J}_sd\Omega$. Therefore, in the case of a semi-infinite magnetic layer, the torque reduces to the [*interfacial transverse spin current*]{}. However, in realistic junctions, the free layer is usually thin ($t\approx$2-3nm) and the torque arising at the interface between the barrier and the ferromagnetic electrode must be balanced by the torque arising at the second interface: ${\bf T}={\cal J}_s(x=0)-{\cal J}_s(x=t)$, which may introduce some deviations from the “conventional” bias dependence of the torque [@manchon]. On the other hand, in MgO-based junctions, the junction behaves like a half-metallic MTJ [@heiliger] and the spin density (or transverse spin current) is strongly absorbed near the barrier interface [@manchon; @heiliger] (a few monolayers). 
Therefore, the usual identification ${\bf T}={\cal J}_s(x=0)$ is essentially valid in MgO-MTJs if one neglects the spin diffusion in the electrodes. Nevertheless, we will show that it is possible to account for the spin relaxation ($1/\tau_{sf}\neq0$ in Eq. (\[eq:eq6\])) in the bulk of the electrodes (impurities- or magnons-induced spin-flip scattering) as long as this relaxation does not significantly modify the interfacial densities of states, and thereby the tunneling process itself. In this case, we find that the resulting spin torque is a mixing between the two transverse components of the spin current. This issue will be addressed in detail in section IV.

Structural asymmetries\[s:as\]
==============================

In Ref. , the authors demonstrated the possibility to add a linear component to the bias dependence of the perpendicular torque by intentionally introducing structural asymmetries in the junction. Depending on the asymmetry, it is possible to change the sign of the linear component, therefore artificially tuning the form of the spin torque. This finding is consistent with numerical studies [@kioussis; @xiao; @wil]. Although a connection is suggested between the bias dependence of the conductance and the one of the perpendicular torque [@oh; @kioussis; @xiao; @wil], this connection remains unclear and analytical formulae are needed. In this section, we study the influence of two types of structural asymmetries. First, we consider the presence of different exchange splittings in the ferromagnetic electrodes. The exchange splittings of Fe, Co and Ni have been measured experimentally near the $\Gamma$ point [@Eastman], as shown in Table 1. As a consequence, varying the composition of the electrodes, one can obtain different exchange splittings up to $J_R-J_L\approx0.5$ eV.
                         Fe          Co    Ni
  ---------------------- ----------- ----- -----------
  $J$ [@Eastman] (eV)    1.5         1.1   0.6
  W [@Michaelson] (eV)   4.67-4.81   5     5.04-5.35

  : Exchange splitting and work functions for the three standard ferromagnetic transition metals.[]{data-label="table:T1"}

Another type of structural asymmetry is the presence of a different barrier height at the left and right interfaces of the junction. Since the work functions of Co, Fe and Ni are different [@Michaelson] (see Table \[table:T1\]), the asymmetry can be created by using different electrode compositions, but also by modifying the composition of the barrier itself [@asym].

![Potential profile of an asymmetric Magnetic Tunnel Junction. The right and left parabolae represent the dispersion of tunneling electrons.[]{data-label="fig:fig1"}](Fig1.pdf "fig:"){width="8cm"}\

We consider the junction presented in Fig. 1, where two ferromagnetic electrodes are separated by an insulator. The magnetizations form an angle $\theta$ between them. The barrier of average height $\phi=(\phi_R+\phi_L)/2$ possesses an asymmetry $\Delta\phi=\phi_R-\phi_L$, whereas the electrodes have an average exchange splitting $J=(J_R+J_L)/2$ with an asymmetry $\Delta J=J_R-J_L$. To determine the influence of these asymmetries on the spin torque and conductance, we use the same approach as Brinkman et al. [@brinkman]. Using the free electron model within the Keldysh formalism developed in Ref. , the wave functions are determined for the complete structure (see Ref. ). The analytic forms of the torque and current are obtained up to first order in $\exp[-2d\kappa_0]$, where $d$ is the barrier thickness and $\kappa_0=\sqrt{2m\phi/\hbar^2}$ is the barrier wave vector for perpendicularly incident Fermi electrons (see Appendix A). The effective mass of the electrons within the barrier is taken equal to 1.
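As a numerical sanity check on the expansion parameter, the barrier wave vector $\kappa_0=\sqrt{2m\phi/\hbar^2}$ and the factor $e^{-2d\kappa_0}$ can be evaluated for representative parameters ($\phi=5$ eV, $d=1$ nm, free electron mass); the short Python sketch below is only an order-of-magnitude illustration, not part of the model derivation.

```python
from math import exp, sqrt

HBAR = 1.054571e-34  # J s
M_E = 9.109383e-31   # kg (effective mass taken equal to the free mass)
EV = 1.602177e-19    # J per eV

def barrier_wavevector(phi_eV):
    """kappa_0 = sqrt(2 m phi / hbar^2) for perpendicular Fermi electrons."""
    return sqrt(2.0 * M_E * phi_eV * EV) / HBAR

kappa0 = barrier_wavevector(5.0)  # ~1.15e10 m^-1 for phi = 5 eV
beta = 1e-9 * kappa0              # beta = d*kappa0 for d = 1 nm: ~11.5 >> 1
print(beta, exp(-2.0 * beta))     # WKB transmission factor ~1e-10
```

The resulting $\beta=d\kappa_0\approx11.5$ confirms that the thick-and-high-barrier regime assumed in the expansion is well satisfied for these parameters.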
Therefore, the general form of the torques and current is: $$T_{||},\;T_{\bot},\;J_e=\int\int dEd{\bf k}_{||}e^{-2d\kappa(E,{\bf k}_{||})}F(E,{\bf k}_{||}),$$ where $F(E,{\bf k}_{||})$ is a function given explicitly in Appendix B, $E$ is the electron energy and ${\bf k}_{||}$ is the wave vector component in the plane of the layers. The factor $e^{-2d\kappa(E,{\bf k}_{||})}$ arises from the WKB approximation and represents the tunneling transmission. Following the spirit of Brinkman et al. [@brinkman], we assume that the barrier is thick and high enough so that the energy dependence is essentially contained in the exponential factor $e^{-2d\kappa(E,{\bf k}_{||})}$. Therefore, $F(E,{\bf k}_{||})\approx F(E_F\pm eV/2,0)$, and we obtain: $$\begin{aligned} \label{eq:analtip} T_{||}&=&T_{||0}\left[a_1\frac{eV}{\phi}+a_2\left(\frac{eV}{\phi}\right)^2\right]\sin\theta,\\\label{eq:analtop} T_{\bot}&=&T_{\bot0}\left[1+b_1\frac{eV}{\phi}+b_2\left(\frac{eV}{\phi}\right)^2\right]\sin\theta,\\\label{eq:analg} G_p(V)&=&G^p_0\left[1+g^p_1\frac{eV}{\phi}+g^p_2\left(\frac{eV}{\phi}\right)^2\right],\\ G_{ap}(V)&=&G^{ap}_0\left[1+g^{ap}_1\frac{eV}{\phi}+g^{ap}_2\left(\frac{eV}{\phi}\right)^2\right],\end{aligned}$$ at the second order in bias voltage $V$. The torques $T_{||}$, $T_\bot$ are exerted on the [*right*]{} layer and $G_{p,ap}(V)$ is the conductance defined as $G_{p,ap}(V)=\partial J^{p,ap}_e/\partial V$ where $J^{p,ap}_e$ is the charge current in the parallel and antiparallel configurations, respectively. The coefficients $a_1...g_2^{ap}$ are given explicitly in Appendix C. Notice that up to the first order in the barrier, the angular dependence of the in-plane and perpendicular torques is a simple $\sin\theta$. The introduction of asymmetries does not modify the angular dependence of the torque, as long as the barrier is either high enough or thick enough ($\beta=d\kappa_0>>1$). 
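The symmetric/antisymmetric distinction emphasized in the expansions above can be made operational: the even and odd parts in bias of a measured curve isolate the $b_0+b_2(eV/\phi)^2$-type and $b_1\,eV/\phi$-type contributions, respectively. A minimal sketch (ours, not part of the model):

```python
def even_odd_parts(f, v):
    """Split a bias dependence f(V) into bias-symmetric and bias-antisymmetric
    parts: f(V) = f_even(V) + f_odd(V). For the perpendicular torque, a
    nonzero odd part signals structural asymmetry (or bulk inelastic
    scattering), while SE tunneling gives a purely even dependence."""
    even = 0.5 * (f(v) + f(-v))
    odd = 0.5 * (f(v) - f(-v))
    return even, odd

# Model torque with a quadratic bias expansion, as in the text.
b0, b1, b2 = 1.0, -0.3, 0.5
torque = lambda v: b0 + b1 * v + b2 * v * v

even, odd = even_odd_parts(torque, 0.2)
assert abs(even - (b0 + b2 * 0.2 * 0.2)) < 1e-12  # recovers b0 + b2 V^2
assert abs(odd - b1 * 0.2) < 1e-12                # recovers b1 V
```

Applied to experimental torque and conductance data, this decomposition extracts the linear coefficients $b_1$ and $g^{p,ap}_1$ whose signs carry the asymmetry signature discussed below.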
To illustrate the influence of the structural asymmetries on the torques and conductance, the analytical expressions given in Eqs. (\[eq:analtip\])-(\[eq:analg\]) have been plotted in Fig. 2, together with the full numerical simulation of the model developed in Ref. . The torques and conductance are represented in their reduced form: the in-plane torque is normalized to the in-plane torquance ($\partial T_{||}/\partial V$) in the absence of asymmetries, whereas the perpendicular torque and conductance are normalized to their value at zero bias. Several points are worth noting. First, since the in-plane torque is already asymmetric against the bias voltage in SE tunneling ($a_1$ and $a_2$ do not vanish in the absence of structural asymmetries), the “conventional” bias dependence given in Eq. (\[eq:SEip\]) is conserved in the presence of asymmetries and only the actual magnitude of $a_1$ and $a_2$ is modified, as illustrated in Fig. 2(a,b). Note that the small discrepancy between the numerical model (solid lines) and the analytical expressions (squares) can be attributed to the presence of a cubic term $\propto V^3$ in the torque $T_{||}$. The change in the slope of the torque can be simply understood by considering the polarization (defined as Slonczewski’s polarization - see Appendix B) of the electrons responsible for the in-plane torque $T_{||}$. Secondly, the terms $b_1$ and $g^{p,ap}_1$ are proportional to $\Delta J$ and $\Delta \phi$, so that in the absence of structural asymmetry, the perpendicular torque and the conductance are [*quadratic*]{} in bias voltage. However, when structural asymmetries are present, the perpendicular torque and the conductance both acquire an additional [*linear*]{} component ($b_1$ and $g^{p,ap}_1$). This is consistent with numerical simulations reported earlier [@kioussis; @xiao; @wil] and the analytical expressions satisfactorily reproduce the numerical results, as shown in Fig. \[fig:fig2\](c-f).
![Bias dependence of the in-plane torque (a,b), perpendicular torque (c,d) and parallel conductance (e,f) in the case of barrier height (a,c,e) and exchange splitting (b,d,f) asymmetries. The solid lines correspond to numerical calculations based on the model presented in Ref. and the squares are calculations using Eqs. (\[eq:analtip\])-(\[eq:analg\]). The parameters are $E_F=10$eV, $J=1$eV, $\phi=5$eV, $d=1$nm.[]{data-label="fig:fig2"}](Fig2.pdf "fig:"){width="8cm"}\ An interesting feature here is the sign of the deviations. For $\Delta\phi>0$ and $\Delta J=0$, the junction is more conductive for negative bias ($\phi_R>\phi_L$), therefore a shift is observed in the conductance and torque towards [*positive*]{} voltages ($b_1,g^p_1<0$ - see Fig. 2(a) and Fig. 2(c)). However, in the case $\Delta J>0$ and $\Delta\phi=0$, the tunneling from left to right is slightly more efficient (since $J_R>J_L$) and the conductance displays a shift towards [*negative*]{} voltages ($g^p_1>0$ - see Fig. 2(d)). In contrast, the electrons from the right electrode are more polarized than the ones from the left electrode and the torque displays a shift towards [*positive*]{} voltages ($b_1<0$ - see Fig. 2(b)). This difference in the signature of the structural asymmetry allows for the identification of the source of the linear term in the out-of-plane torque, as demonstrated by the study of Oh et al. [@oh]. These results are consistent with previous numerical studies [@kioussis; @xiao; @wil] at low bias. However, the comparison with the tight-binding model studied in Ref. presents some differences. The free electron model yields an open parabolic band dispersion whereas the tight-binding model produces a closed band dispersion. Therefore, the free electron model is only valid at low bias and provides results for [*low band filling*]{}. As a consequence, the free electron model is surprisingly well adapted to the case of Fe. 
It also implies that the bias voltage must be smaller than the half-band width of the conduction electrons. As a consequence, neither band filling-induced sign reversal of IEC nor the oscillatory bias dependence [@kioussis] of the perpendicular torque can be obtained within the free electron model. The above results are limited to reasonably small bias and low band filling systems. Inelastic scattering\[s:ine\] ============================= In this section, we consider that the (bulk or interfacial) scattering by phonons, magnons or impurities in the left and right electrodes is symmetric (i.e., the interactions have the same amplitude in the left and right electrodes). This way, the electron scattering conserves the symmetry of the junction (this may not be true when the electrode compositions are different). Although the symmetry of the system is conserved, the spin torque does not have the same expression in the case of bulk or interfacial scattering. As mentioned in section II, in the case of interfacial scattering the spin torque is directly related to the interfacial spin current, ${\bf T}={\cal J}_s(x=0)$, whereas in the case of bulk scattering, the spin relaxation in the electrodes cannot be neglected anymore and ${\bf T}={\cal J}_s(x=0)-\int_\Omega d\Omega{\bf m}/\tau_{sf}$. Therefore, although the symmetry of the MTJ is conserved in both cases, the bias dependence of the spin torque will experience a different modification depending on whether the scattering occurs at the interfaces or in the bulk of the electrodes. Interfacial scattering ---------------------- We consider two types of interfacial inelastic scattering processes: electron-magnon and electron-phonon. The influence of electron-magnon scattering on TMR [@zhang97; @bratkovsky] and spin transfer torque [@magnon; @swstt] has been studied within the Transfer Hamiltonian formalism. 
The current density is expressed in the form of a $2\times2$ spinor matrix: $$\begin{aligned} \label{eq:5} \hat{J}=2\pi\frac{e}{\hbar}&&\sum_{\bf{k,p}}[\hat{\rho}_{\bf{k}}\hat{T}_{\bf{kp}}\hat{\rho}_{\bf{p}}(\hat{T}_{\bf{kp}})^+f_L(1-f_R)\nonumber\\&&-\hat{\rho}_{\bf{p}}(\hat{T}_{\bf{kp}})^+\hat{\rho}_{\bf{k}}\hat{T}_{\bf{kp}}f_R(1-f_L)],\end{aligned}$$ where $f_{L(R)}$ and ${\hat \rho}_{{\bf k}({\bf p})}$ are the Fermi distribution function and electronic density of states at the left (right) interfaces, and ${\hat T}_{\bf{kp}}$ (${\hat T}_{\bf{pk}}$) is the spin-dependent transfer matrix accounting for both elastic and inelastic tunneling. In the spinor formalism, the charge current and spin current are expressed $j_e=Tr[{\hat J}]$ and $T_{||(\bot)}={\cal J}^{x(y)}_s=Tr[{\hat \sigma}_{x(y)}{\hat J}]$, where ${\hat \sigma}_{x(y)}$ are the Pauli spin matrices. In the case of electron scattering by interfacial phonons, although no spin-flip takes place, the increase of the conductance is expected to modify the TMR[@bratkovsky] and, correspondingly, the spin torque. In the presence of electron-magnon and electron-phonon interactions, the transfer matrix can be written: $$\begin{aligned} \label{eq:sd} \hat{T}_{{\bf kp}}^{e-m}&=&\hat{T}^d_{{\bf kp}}\left(\hat{I}+\sqrt{\frac{Q^m}{N}}(\bm{\sigma}.{\bf S}_{tr}^R+\bm{\sigma}.{\bf S}_{tr}^L)\right),\\\label{eq:ph} \hat{T}_{{\bf kp}}^{e-ph}&=&\hat{T}^d_{{\bf kp}}\left(1+\sqrt{\frac{Q^{ph}_{\bf q}}{N}}(b_{\bf q}+b_{\bf q}^+)\right)\hat{I},\end{aligned}$$ where $\hat{T}^d_{{\bf kp}}$ is the direct tunneling matrix, $Q^m$ ($Q^{ph}_{\bf q}$) is the phenomenological electron-magnon (electron-phonon) efficiency, $N$ is the number of atoms per cell, $\bm{\sigma}$ is the vector of Pauli spin matrices and ${\bf S}_{tr}^{L(R)}$ are the transverse part of the magnetizations of the left and right electrodes. Details about the derivation of Eq. (\[eq:sd\]) can be found in Ref. . 
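The trace relations $j_e=Tr[{\hat J}]$ and $T_{||(\bot)}=Tr[{\hat \sigma}_{x(y)}{\hat J}]$ reduce, for any $2\times2$ spinor current, to simple combinations of matrix entries; a minimal sketch (the sample matrix in the test is arbitrary, purely for illustration):

```python
def spinor_traces(J):
    """Charge and spin currents from a 2x2 spinor current J = [[J_uu, J_ud], [J_du, J_dd]]."""
    j_e = J[0][0] + J[1][1]            # Tr[J]
    t_par = J[0][1] + J[1][0]          # Tr[sigma_x J], in-plane torque component
    t_perp = 1j * (J[0][1] - J[1][0])  # Tr[sigma_y J], out-of-plane torque component
    return j_e, t_par, t_perp
```

The diagonal entries carry the charge current while the off-diagonal (spin-mixing) entries carry the transverse spin current, i.e. the torque.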
The interaction efficiency ($Q^m$ and $Q^{ph}_{\bf q}$) can be related to quantum mechanical quantities, $Q^m\approx J^2/E^2_F$ and $Q^{ph}_{\bf q}\approx|M_{\bf q}^2|/E_F^2$, where $J$ is the exchange splitting, $E_F$ is the Fermi energy and $M_{\bf q}$ is the electron-phonon interaction [@phonons] that depends on the type of coupling (acoustic, optical or polar coupling). ### Electron-Phonon Scattering In the case of electron-phonon scattering, the transfer matrix Eq. (\[eq:ph\]) is diagonal in spin space and obviously, the presence of phonons does not induce spin-flip. However, the direct tunneling matrix is renormalized by $(1+\sqrt{\frac{Q^{ph}_{\bf q}}{N}}(b_{\bf q}+b_{\bf q}^+))$ and becomes bias dependent [@bratkovsky]. We then expect that the modification of the conductance due to phonons alters the bias dependence of the spin torque. Performing the matrix products displayed in Eq. (\[eq:5\]) and using the definition of the spinor current stated above, we find: $$\begin{aligned} j_e(E,{\bf q})&=&\frac{G_0}{e}\left(1+Q_{\bf q}^{ph}\langle b^+_{\bf q}b_{\bf q}\rangle+Q_{\bf q}^{ph}\langle b_{\bf q}b^+_{\bf q}\rangle\right)(1+\cos\theta P^LP^R)(f_L(1-f_R)-f_R(1-f_L)),\\ T_{||}(E,{\bf q})&=&\frac{G_0}{e}P_L\left(1+Q_{\bf q}^{ph}\langle b^+_{\bf q}b_{\bf q}\rangle+Q_{\bf q}^{ph}\langle b_{\bf q}b^+_{\bf q}\rangle\right)(f_L(1-f_R)-f_R(1-f_L))\sin\theta,\\ T_\bot(E,{\bf q})&=&\frac{G_0}{e}\left(1+Q_{\bf q}^{ph}\langle b^+_{\bf q}b_{\bf q}\rangle+Q_{\bf q}^{ph}\langle b_{\bf q}b^+_{\bf q}\rangle\right)(P^R\varphi_Lf_L(1-f_R)+P^L\varphi_Rf_R(1-f_L))\sin\theta,\end{aligned}$$ where $P_{L,R}$ is the polarization at the left (right) interface and $\varphi_{R,L}$ is a coefficient that accounts for the spin rotation during tunneling [@magnon]. The integration rules are described in Ref. . 
Assuming that the electron spin-dependent densities of states do not vary much over the range $eV$, and considering acoustic phonons ($\omega \propto q$, $Q_{\bf q}\propto q$) with a density of states of the form $\rho_{ph}(\omega)\propto \omega^{\nu}$, we obtain, at T=0 K and low bias voltage: $$\begin{aligned} &&G(V)=G_0(1+P^LP^R\cos\theta)(1+\zeta_{ph}|V|^{\nu+2}),\label{eq:16}\\ &&T_{||}=G_0P^L\sin\theta(1+\zeta_{ph}|V|^{\nu+2})V,\label{eq:17}\\ &&T_\bot-T_{\bot0}=G_0P^R\phi_L\sin\theta\zeta_{ph}|V|^{\nu+3},\label{eq:18}\end{aligned}$$ where $\zeta_{ph}$ is a coefficient that depends on the electron-phonon coupling, Fermi energy, Debye temperature $\Theta_D$, etc. The bias dependence of the conductance ($\propto |V|^{\nu+2}$) is consistent with the one suggested by Bratkovsky when $\nu=2$. At larger bias, $|V|^{\nu+2}$ is replaced by $k_B\Theta_D$ and the bias dependence of the torques becomes linear. Note that the symmetry of the out-of-plane torque against the bias is conserved, whereas the in-plane torque acquires an antisymmetric component. At higher temperature and bias, more complex behaviors are found, but the bias dependence of $G(V)$ and $T_\bot$ is always an even function of $V$ ($|V|^n$, $n\in\mathbb{N}$). As an illustration, we provide below the expressions for large bias at finite temperature ($eV>k_BT>k_B\Theta_D$): $$\begin{aligned} &&G(V)= G_0(1+P^LP^R\cos\theta)\left(1+\xi_{ph}\frac{T}{\Theta_D}\left(\frac{|eV|}{k_B\Theta_D}\right)^{\nu+1}\right),\label{eq:phTG}\\ &&T_{||}= G_0\sin\theta P^L\left(1+\xi_{ph}\frac{T}{\Theta_D}\left(\frac{|eV|}{k_B\Theta_D}\right)^{\nu+1}\right)V,\label{eq:phTTip}\\ &&T_\bot-T_{\bot0}= G_0P^R\phi_L\sin\theta\xi_{ph}\frac{T}{\Theta_D}\left(\frac{|eV|}{k_B\Theta_D}\right)^{\nu+2}.\label{eq:phTTop}\end{aligned}$$ Again $\xi_{ph}$ depends on the electron-phonon coupling, Fermi energy, Debye temperature etc. 
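The even-in-bias structure of the $T=0$ power laws in Eqs. (\[eq:16\]) and (\[eq:18\]) can be checked directly; $\zeta_{ph}$ and the prefactors below are hypothetical placeholder values, the point being only the $|V|$ dependence:

```python
import math

def conductance_ph(V, G0, PL, PR, theta, zeta, nu):
    # Eq. (16): phonon-assisted correction ~ |V|^(nu+2), even in V
    return G0 * (1 + PL * PR * math.cos(theta)) * (1 + zeta * abs(V)**(nu + 2))

def torque_perp_ph(V, G0, PR, phiL, theta, zeta, nu):
    # Eq. (18): deviation T_perp - T_perp0 ~ |V|^(nu+3), also even in V
    return G0 * PR * phiL * math.sin(theta) * zeta * abs(V)**(nu + 3)
```

Both quantities are manifestly symmetric under $V\to-V$, in contrast with the antisymmetric linear terms generated by structural asymmetries in the previous section.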
At finite temperatures, the conductance is enhanced due to phonon-assisted tunneling and, therefore, both in-plane and out-of-plane torques are enhanced. The temperature dependence is expected to be linear. Notice that although the magnitude of the torque increases with the temperature, its efficiency (ratio between spin torque and current density) is not modified, since the electron-phonon interaction does not affect the spin itself, but rather the tunneling rate. ### Electron-Magnon Scattering In the case of electron-magnon interaction, the transfer matrix \[Eq. (\[eq:sd\])\] possesses non-diagonal elements that are responsible for spin-flip scattering. We then expect a much more complex influence on the torque. Assuming a magnon density of states of the form $\rho_m(\omega)=\omega^\nu$, symmetric electrodes ($P^L=P^R=P$, $\phi_L=\phi_R=\phi$) and T=0 K, we find: $$\begin{aligned} &&G(V)\propto (1-P^2\cos\theta)|V|^{\nu+1},\\ &&T_{||}-T_{||0}\propto \sin\theta[P(1+P)-(1-P)(1+P\cos\theta)] V^{\nu+2},\label{eq:19}\\ &&T_{\bot}-T_{\bot0}\propto P\phi\sin\theta(1-\cos\theta) |V|^{\nu+2}.\label{eq:20}\end{aligned}$$ The details of these expressions can be found in Ref. . Interestingly, the perpendicular torque and the conductance both acquire a component that is symmetric against the bias. Furthermore, since the electron-magnon interaction mixes the majority and minority channels, the angular dependence is also affected, contrary to the case of electron-phonon coupling. The finite temperature situation has been studied in Ref. and gives rise to a non-linear dependence as a function of both voltage and temperature. Actually, competing mechanisms take place when both magnon emission and absorption are accounted for. Let us consider the torque exerted on the right electrode magnetization. 
Magnon emission (absorption) occurring at the left interface increases (reduces) the effective spin-polarization of the incoming electrons, therefore enhancing (lowering) the spin torque exerted on the right electrode. Symmetrically, electron-magnon interactions occurring at the right interface also affect the effective polarization of electrons coming from the right reservoir. Finally, we must stress that the detailed temperature and bias dependencies presented here are strongly conditioned by the electron, phonon and magnon band structures. Bulk scattering --------------- In contrast with interfacial scattering, in the case of bulk scattering (by impurities or magnons) the spin torque is no longer described by the purely interfacial spin current since spin relaxation cannot be neglected in the bulk of the layers. Therefore, the spin torque reads: $${\bf T}=\int_\Omega \frac{J}{\hbar}{\bf m}\times{\bf M}d\Omega=\int_\Omega [-\nabla{\cal J}_s-\frac{{\bf m}}{\tau_{sf}}]d\Omega$$ The presence of a finite spin relaxation time ($1/\tau_{sf}\neq0$) induces a coupling between the two components of the spin torque, so that the perpendicular torque now involves a contribution of both in-plane and perpendicular interfacial spin current densities. In an MTJ, the interfacial densities of states are usually only affected by the first few monolayers away from the interface. Therefore, since the spin-diffusion length is on the order of 5-15 nm, we can assume that the tunneling process is almost unaffected by the presence of spin-flip scattering. Then, the interfacial spin current can be identified with the spin torque without spin-flip: ${\cal J}_s(x=0)={\bf T}_0$. As a consequence, the actual spin torque has the form: $$\begin{aligned} T_{||}&=&T_{||0}+\frac{\tau_J}{\tau_{sf}}T_{\bot0}\\ T_{\bot}&=&T_{\bot0}-\frac{\tau_J}{\tau_{sf}}T_{||0}\end{aligned}$$ where $\tau_J=\hbar/J$. 
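The component mixing above is a simple linear (infinitesimal-rotation-like) map on the pair of bare torques; a minimal sketch:

```python
def mix_torques(t_par0, t_perp0, tau_J, tau_sf):
    # Bulk spin relaxation couples the two torque components with strength
    # r = tau_J / tau_sf; for tau_sf -> infinity (no spin-flip) the bare
    # interfacial torques are recovered.
    r = tau_J / tau_sf
    return t_par0 + r * t_perp0, t_perp0 - r * t_par0
```

In particular, a bias-even $T_{\bot0}$ feeding into $T_{||}$ (and vice versa) is what transfers the symmetry properties of one component onto the other.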
In the case of a symmetric magnetic tunnel junction in the absence of interfacial inelastic scattering, $T_{||0}$ and $T_{\bot0}$ are given by Eqs. (\[eq:SEip\])-(\[eq:SEop\]). For low bias voltage, when the spin-flip is dominated by Elliott-Yafet spin scattering, $\tau_{sf}$ is bias-independent (but temperature dependent) and the perpendicular torque gains a linear component $a_1V\tau_J/\tau_{sf}$. At large bias, or non-zero temperature, the spin-flip scattering is dominated by the electron-magnon interaction. As shown by Li et al. [@deac], the spin-flip relaxation time due to electron-magnon interaction is inversely proportional to $|V|$. Consequently, the presence of bulk magnons results in an additional component in the perpendicular torque of the form $\propto j_e|V|$. This is in sharp contrast with the case of interfacial magnons, where the additional component is simply $|V|$. Conclusion\[s:con\] =================== In summary, we studied the influence of structural asymmetries and inelastic tunneling on the bias dependence of the spin transfer torque in MTJs, using either the free electron model or the Transfer Hamiltonian formalism. Our results are summarized below: 1. Structural asymmetries: the perpendicular torque and the conductance acquire an [*antisymmetric*]{} linear component of the form $\propto V$, while the bias dependence of the in-plane torque is still described by Eq. (\[eq:SEip\]). The obtained formulae provide consistent results in the low-bias region and at low band filling with the numerical results of the tight-binding model [@kioussis] and are in good agreement with the numerical results of the free electron model [@xiao; @wil]. Consequently, they can serve as a guideline to design the spin torque bias dependence, as demonstrated by Oh et al. [@oh]. 2. 
Inelastic interfacial scattering: the symmetry of the MTJ is conserved and the perpendicular torque and conductance acquire a [*symmetric*]{} linear component of the form $\propto |V|^n$, $n\in\mathbb{N}$. The presence of magnon or phonon interactions is usually revealed through peaks in the conductance derivative. The influence of the temperature has been briefly discussed. 3. Bulk spin-flip scattering: the spin torque is no longer equal to the net transfer of angular momentum. The relaxation of the spin accumulation induces a mixing between the two components of the torque, giving rise to an [*antisymmetric*]{} component of the form $V$ and $j_e|V|$ in the case of impurity- and magnon-induced spin scattering, respectively. Since the resistance is dominated by the barrier, the contribution of bulk scattering to the conductance is usually negligible. Finally, we suggest that a link exists between the signature of asymmetry and inelastic scattering in the perpendicular torque and conductance. Since both are symmetric against the bias in a symmetric MTJ, the introduction of structural asymmetries or inelasticity affects both quantities, but in different ways. The careful analysis of the perpendicular torque together with the conductance should give important clues on the origin of the additional linear terms, as suggested in Ref. in the case of structural asymmetries. Note, however, that the conductance remains unaffected by bulk scattering and therefore, the influence of bulk magnons cannot be analyzed by comparing the perpendicular torque and the conductance. This work was supported by NSF (DMR-0704182) and DOE (DE-FG02-06ER46307). A. M. acknowledges fruitful discussions with M. Chshiev. Wave functions in the large barrier approximation ================================================= We use the free electron model within the Keldysh formalism as described in Ref. . 
The electron wave vectors for majority and minority spin in the left and right electrodes and in the barrier are then: $$\begin{aligned} &&k_{1,2}=\sqrt{\frac{2m}{\hbar^2}\left(E-E_{||}\pm J_L-\frac{eV}{2}\right)}\\ &&k_{3,4}=\sqrt{\frac{2m}{\hbar^2}\left(E-E_{||}\pm J_R+\frac{eV}{2}\right)}\\ &&\kappa=\sqrt{\frac{2m}{\hbar^2}\left(\phi_L+\frac{eV}{2}+E_F-E+E_{||}-\frac{x}{d}(eV-\Delta\phi)\right)}\end{aligned}$$ The indices 1,3 (2,4) refer to the majority (minority) spin. The wave function of an electron injected from the $i$-th electrode with an initial spin $\sigma$ is represented in the vector form $\Psi_{\sigma}^i=(\Psi_{\uparrow\sigma}^i,\Psi_{\downarrow\sigma}^i)$. The wave functions for the electrons from the left and right electrode at the interfaces are then [@manchon]: $$\begin{aligned} &&\Psi_{\uparrow\uparrow}^L=\frac{\sqrt{2k_1}}{k_1+i\kappa_1}\\ &&\Psi_{\downarrow\uparrow}^L=4\sqrt{2k_1}\frac{\kappa_1\kappa_2(k_3-k_4)}{den}\sin\theta\\ &&\Psi_{\downarrow\downarrow}^L=\frac{\sqrt{2k_2}}{k_2+i\kappa_1}\\ &&\Psi_{\uparrow\downarrow}^L=4\sqrt{2k_2}\frac{\kappa_1\kappa_2(k_3-k_4)}{den}\sin\theta\\ &&\Psi_{\uparrow\uparrow}^R=4iE_n\frac{\sqrt{2k_3\kappa_1\kappa_2}}{den}(k_2+i\kappa_1)(k_4+i\kappa_2)\cos\frac{\theta}{2}\\ &&\Psi_{\downarrow\uparrow}^R=4iE_n\frac{\sqrt{2k_3\kappa_1\kappa_2}}{den}(k_1+i\kappa_1)(k_4+i\kappa_2)\sin\frac{\theta}{2}\\ &&\Psi_{\downarrow\downarrow}^R=4iE_n\frac{\sqrt{2k_4\kappa_1\kappa_2}}{den}(k_1+i\kappa_1)(k_3+i\kappa_2)\cos\frac{\theta}{2}\\ &&\Psi_{\uparrow\downarrow}^R=-4iE_n\frac{\sqrt{2k_4\kappa_1\kappa_2}}{den}(k_2+i\kappa_1)(k_3+i\kappa_2)\sin\frac{\theta}{2}\end{aligned}$$ with $den=2E_n^2(k_1+i\kappa_1)(k_2+i\kappa_1)(k_3+i\kappa_2)(k_4+i\kappa_2)$, $E_n=\exp\left[-d\sqrt{\frac{2m}{\hbar^2}}\int_0^d\kappa dx\right]$ is the exponential factor and $\kappa(x=0)=\kappa_1$, $\kappa(x=d)=\kappa_2$. The above equations together with the integration rules mentioned in section II.A. 
are sufficient to describe the transport properties of the junction. Currents and Torques in the large barrier approximation ======================================================= The charge and spin currents are defined as $$\begin{aligned} J_i=\frac{2e}{h}\int\int dE d{\bf k}_{||}\left(\langle\sigma_i\otimes\nabla\rangle_L f_L+\langle\sigma_i\otimes\nabla\rangle_R f_R\right),\end{aligned}$$ where $i=0,x,y,z$ and ${\bm \sigma}=(\sigma_x,\sigma_y,\sigma_z)$ are the Pauli spin matrices, $\sigma_0$ is the identity and $\langle...\rangle_{L,R}$ denotes quantum mechanical averaging, involving the rightward and leftward spin-dependent wave functions defined in Ref. . $f_L$ and $f_R$ are the Fermi distribution functions of the left and right reservoirs. Expanding these wave functions up to the lowest order in the barrier height, the charge and spin currents for a majority electron issued from the left reservoir are: $$\begin{aligned} &&J_{eL}^{\uparrow}=\frac{2e}{h}\int\int dE d{\bf k}_{||}\frac{8k_1\kappa_1\kappa_2(k_3+k_4)(\kappa_1^2+k_2^2)(\kappa_2^2+k_3k_4)}{(\kappa_1^2+k_1^2)(\kappa_1^2+k_2^2)(\kappa_2^2+k_3^2)(\kappa_2^2+k_4^2)}[1+P_L\cos\theta]f_L\\ &&T_{||L}^{\uparrow}=\int\int dE d{\bf k}_{||}\frac{4k_1\kappa_1\kappa_2(k_3-k_4)(\kappa_1^2+k_2^2)(\kappa_2^2-k_3k_4)}{(\kappa_1^2+k_1^2)(\kappa_1^2+k_2^2)(\kappa_2^2+k_3^2)(\kappa_2^2+k_4^2)}f_L\sin\theta\\ &&T_{\bot L}^{\uparrow}=\int\int dE d{\bf k}_{||}\frac{4k_1\kappa_1\kappa_2^2(k_3^2-k_4^2)(\kappa_1^2+k_2^2)}{(\kappa_1^2+k_1^2)(\kappa_1^2+k_2^2)(\kappa_2^2+k_3^2)(\kappa_2^2+k_4^2)}f_L\sin\theta\end{aligned}$$ and $P_L=\frac{(k_1-k_2)(\kappa_1^2-k_1k_2)}{(k_1+k_2)(\kappa_1^2+k_1k_2)}$ is Slonczewski’s polarization [@slonc89]. The contribution for a minority electron is obtained by performing the following replacements: $k_{1,3}\leftrightarrow k_{2,4}$ and $\theta\rightarrow-\theta$. 
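For reference, the spin-dependent wave vectors of Appendix A and Slonczewski's polarization can be evaluated directly; $\hbar^2/2m \approx 3.81$ eV·Å$^2$ for a free electron, and the energies in the test are illustrative values only:

```python
import math

HBAR2_2M = 3.81  # hbar^2/(2m) in eV*Angstrom^2, free-electron value

def wave_vectors(E, E_par, JL, JR, V):
    # k1, k2 (k3, k4): majority/minority wave vectors in the left (right)
    # electrode; evanescent (negative-argument) channels are set to zero here.
    def k(arg):
        return math.sqrt(arg / HBAR2_2M) if arg > 0 else 0.0
    return (k(E - E_par + JL - V / 2), k(E - E_par - JL - V / 2),
            k(E - E_par + JR + V / 2), k(E - E_par - JR + V / 2))

def slonczewski_polarization(k_up, k_dn, kappa):
    # P = (k_up - k_dn)(kappa^2 - k_up k_dn) / ((k_up + k_dn)(kappa^2 + k_up k_dn))
    return ((k_up - k_dn) * (kappa**2 - k_up * k_dn)
            / ((k_up + k_dn) * (kappa**2 + k_up * k_dn)))
```

For a symmetric junction at zero bias the left and right wave vectors coincide, and the polarization vanishes when the exchange splitting does.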
Similarly, the contribution of electrons issued from the right reservoir is obtained by performing the following replacements: $\kappa_1\leftrightarrow\kappa_2$, $(1,2)\leftrightarrow(3,4)$ and $f_L\rightarrow f_R$. The final expressions are then: $$\begin{aligned} &&J_{e}=J_{eL}^{\uparrow}+J_{eL}^{\downarrow}-J_{eR}^{\uparrow}-J_{eR}^{\downarrow}\\ &&T_{||}=T_{||L}^{\uparrow}+T_{||L}^{\downarrow}-T_{||R}^{\uparrow}-T_{||R}^{\downarrow}\\ &&T_{\bot}=T_{\bot L}^{\uparrow}-T_{\bot L}^{\downarrow}+T_{\bot R}^{\uparrow}-T_{\bot R}^{\downarrow}\end{aligned}$$ Analytical expressions for Spin Torques and Conductance ======================================================= After some algebra using Eqs. (\[eq:analtip\])-(\[eq:analg\]), we obtain the following results: $$\begin{aligned} \label{eq:22} T_{||0}&=&\frac{\hbar^2}{2m}\frac{\kappa_0^6}{2\pi^2\beta}\frac{(k_\uparrow^2-k_\downarrow^2)(\kappa_0^4-k_\uparrow^2k_\downarrow^2)}{(\kappa_0^2+k_\uparrow^2)^2(\kappa_0^2+k_\downarrow^2)^2}e^{-2\beta}\\\label{eq:23} T_{\bot0}&=&\frac{\hbar^2}{2m}\frac{\kappa_0^7}{\pi^2\beta^2}\frac{(k_\uparrow^2-k_\downarrow^2)(k_\uparrow-k_\downarrow)(\kappa_0^2-k_\uparrow^2k_\downarrow^2)}{(\kappa_0^2+k_\uparrow^2)^2(\kappa_0^2+k_\downarrow^2)^2}e^{-2\beta}\\\label{eq:24} G^{p,ap}_0&=&\frac{2e^2}{\hbar}\frac{\kappa_0^4}{\pi^2\beta}\frac{(k_\downarrow+k_\uparrow)^2(k_\uparrow k_\downarrow+\kappa_0^2)^2\pm(k_\downarrow-k_\uparrow)^2(k_\uparrow k_\downarrow-\kappa_0^2)^2}{(\kappa_0^2+k_\uparrow^2)^2(\kappa_0^2+k_\downarrow^2)^2}e^{-2\beta}\end{aligned}$$ and $$\begin{aligned} a_1&=&1+\frac{\kappa_0^2[2(\beta-1)k_\uparrow^2k_\downarrow^2+\kappa_0^2(k_\uparrow^2+k_\downarrow^2)]}{2k_\uparrow k_\downarrow ((\beta-1)k_\uparrow^2k_\downarrow^2-(\beta-3)\kappa_0^4+(k_\uparrow^2+k_\downarrow^2)\kappa_0^2)}\frac{\Delta\phi}{\phi}\nonumber\\ 
&&-\frac{2(\beta-1)k_\uparrow^2k_\downarrow^2(k_\uparrow^2+k_\downarrow^2)+5\kappa_0^2(k_\uparrow^4+k_\downarrow^4)-2k_\uparrow^2k_\downarrow^2\kappa_0^2}{8k_\uparrow k_\downarrow ((\beta-1)k_\uparrow^2k_\downarrow^2-(\beta-3)\kappa_0^4+(k_\uparrow^2+k_\downarrow^2)\kappa_0^2)}\frac{\Delta J}{\phi}\\ a_2&=&\frac{\kappa_0^2[2(\beta-1)k_\uparrow^2k_\downarrow^2-\kappa_0^2(k_\uparrow^2+k_\downarrow^2)](\kappa_0^2+k_\uparrow^2)(\kappa_0^2+k_\downarrow^2)}{4k_\uparrow^3 k_\downarrow^3 ((\beta-1)k_\uparrow^2k_\downarrow^2-(\beta-3)\kappa_0^4+(k_\uparrow^2+k_\downarrow^2)\kappa_0^2)}-\frac{\beta}{24}\frac{\Delta\phi}{\phi}\nonumber\\ &&-\frac{k_\uparrow^4(\kappa_0^2+k_\uparrow^2)^2((\beta-1)k_\downarrow^2-\kappa_0^2)+k_\downarrow^4(\kappa_0^2+k_\downarrow^2)^2((\beta-1)k_\uparrow^2-\kappa_0^2)}{8k_\uparrow^4 k_\downarrow^4 ((\beta-1)k_\uparrow^2k_\downarrow^2-(\beta-3)\kappa_0^4+(k_\uparrow^2+k_\downarrow^2)\kappa_0^2)}\frac{\Delta J}{J}\\ b_1&=&\frac{\beta-2}{8}\frac{k_\uparrow k_\downarrow(k_\uparrow k_\downarrow-3\kappa_0^2)(k_\uparrow-k_\downarrow)-(2k_\uparrow k_\downarrow-\kappa_0^2)(k_\uparrow^3-k_\downarrow^3)+(k_\uparrow^5-k_\downarrow^5)}{k_\uparrow k_\downarrow(k_\uparrow-k_\downarrow)(\kappa_0^2-k_\uparrow k_\downarrow)}\frac{\Delta J}{\phi}\nonumber\\ &&-\frac{k_\uparrow k_\downarrow(\beta+3)+(2\beta-9)\kappa_0^2}{6(k_\uparrow k_\downarrow-\kappa_0^2)}\frac{\Delta\phi}{\phi}\\ b_2&=&\beta\left(\frac{\beta}{8}-\frac{5}{12}\right)\\ g^p_1&=&\frac{\kappa_0^4}{2k_\uparrow^2 k_\downarrow^2}\frac{k_\uparrow^2(\kappa_0^2+k_\uparrow^2)^2-k_\downarrow^2(\kappa_0^2+k_\downarrow^2)^2}{k_\uparrow^2(\kappa_0^2+k_\downarrow^2)^2+k_\downarrow^2(\kappa_0^2+k_\uparrow^2)^2}\frac{\Delta J}{\phi}-\left(\frac{\beta}{12}-\frac{3}{8}\right)\frac{\Delta \phi}{\phi}\\ g^{ap}_1&=&\frac{\kappa_0^2(k_\uparrow^2- 
k_\downarrow^2)^2[k_\uparrow^2(3k_\downarrow^2+\kappa_0^2)(k_\uparrow^2+\kappa_0^2)+k_\downarrow^2(3k_\uparrow^2+\kappa_0^2)(k_\downarrow^2+\kappa_0^2)]}{16k_\uparrow^4k_\downarrow^4(\kappa_0^2+k_\uparrow^2)(\kappa_0^2+k_\downarrow^2)}\frac{\Delta J}{J}-\frac{\beta}{12}\frac{\Delta\phi}{\phi}\\ g^{p,ap}_2&=&\frac{\beta}{8}(\beta-1)\end{aligned}$$ where $k_{\uparrow,\downarrow}=\sqrt{\frac{2m}{\hbar^2}(E_F\pm J)}$ and $\kappa_0=\sqrt{\frac{2m}{\hbar^2}\phi}$. When the barrier becomes thinner, a corrective multiplication factor of the form $(1+\frac{3}{2\beta}+\frac{3}{4\beta^2})$ should be inserted into Eqs. (\[eq:22\])-(\[eq:24\]). Note that $T_{\bot0}$ and $G_0$ are similar to previous derivations using a free electron model [@slonc89; @brinkman]. The above relations are limited to low bias voltage in low band filling systems. Using more realistic densities of states, these relations may be modified. [200]{} J. C. Slonczewski, J. Magn. Magn. Mater. [**159**]{}, L1 (1996); L. Berger, Phys. Rev. B [**54**]{}, 9353 (1996). J.Z. Sun, J. Magn. Magn. Mater. [**202**]{}, 157 (1999); Y. Huai, F. Albert, P. Nguyen, M. Pakala, and T. Valet, Appl. Phys. Lett. [**84**]{}, 3118 (2004); G. D. Fuchs, N. C. Emley, I. N. Krivorotov, P. M. Braganca, E. M. Ryan, S. I. Kiselev, J. C. Sankey, D. C. Ralph, R. A. Buhrman, and J. A. Katine, Appl. Phys. Lett. [**85**]{}, 1205 (2004); D. Chiba, Y. Sato, T. Kita, F. Matsukura, and H. Ohno, Phys. Rev. Lett. [**93**]{}, 216602 (2004). J. A. Katine, and E. E. Fullerton, J. Magn. Magn. Mater. [**320**]{}, 1217 (2008). S. Ikeda, J. Hayakawa, Y. M. Lee, F. Matsukura, Y. Ohno, T. Hanyu, and H. Ohno, IEEE Trans. Elec. Dev. [**54**]{}, 991 (2007). J. Z. Sun, and D. C. Ralph, J. Magn. Magn. Mater. [**320**]{}, 1227 (2008). J. C. Slonczewski, Phys. Rev. B [**71**]{}, 024411 (2005); J.C. Slonczewski and J.Z. Sun, J. Magn. Magn. Mater. [**310**]{}, 169-175 (2007); See also, J. C. Slonczewski, Phys. Rev. B [**39**]{}, 6995 (1989). I. Theodonis, N. 
Kioussis, A. Kalitsov, M. Chshiev, and W. H. Butler, Phys.Rev. Lett. [**97**]{}, 237205 (2006); A. Kalitsov, M. Chshiev, I. Theodonis, N. Kioussis, and W. H. Butler, Phys. Rev. B [**79**]{}, 174416 (2009). Y.-H. Tang, N. Kioussis, A. Kalitsov, W. H. Butler, and R. Car, Phys. Rev. Lett. [**103**]{}, 057206 (2009); Phys. Rev. B [**81**]{}, 054437 (2010). A. Manchon, N. Ryzhanova, N. Strelkov, A. Vedyayev, M. Chshiev and B. Dieny, J. Phys.: Condens. Matter [**20**]{}, 145208 (2008); [*ibid*]{} [**19**]{}, 165212 (2007). J. Xiao, G. E. W. Bauer, and A. Brataas, Phys. Rev. B [**77**]{}, 224419 (2008). M. Wilczynski, J. Barnas, and R. Swirkowicz, Phys. Rev. B [**77**]{}, 054434 (2008). C. Heiliger and M. D. Stiles, Phys. Rev. Lett. [**100**]{}, 186805 (2008). J. C. Sankey, Y.-T. Cui, R. A. Buhrman, D. C. Ralph, J. Z. Sun, and J. C. Slonczewski, Nature Physics [**4**]{}, 67 (2008); H. Kubota, A. Fukushima, K. Yakushiji, T. Nagahama, S. Yuasa, K. Ando, H. Maehara, Y. Nagamine, K. Tsunekawa, D. D. Djayaprawira, N. Watanabe, and Y. Suzuki, Nature Physics [**4**]{}, 37 (2008). A. M. Deac, A. Fukushima, H. Kubota, H. Maehara, Y. Suzuki, S. Yuasa, Y. Nagamine, K. Tsunekawa, D. D. Djayaprawira, N. Watanabe, Nature Physics [**4**]{}, 803 (2008). S. Petit, C. Baraduc, C. Thirion, U. Ebels, Y. Liu, M. Li, P. Wang, and B. Dieny, Phys. Rev. Lett. [**98**]{}, 077203 (2007). Z. Li, S. Zhang, Z. Diao, Y. Ding, X. Tang, D.M. Apalkov, Z. Yang, K. Kawabata, and Y. Huai, Phys. Rev. Lett. [**100**]{}, 246602 (2008). J. Z. Sun, M. C. Gaidis, G. Hu, E. J. O’Sullivan, S. L. Brown, J. J. Nowak, P. L. Trouilloud, and D. C. Worledge, J. Appl. Phys. [**105**]{}, 07D109 (2009); T. Min, J. Z. Sun, R. Beach, D. Tang, and P. Wang, J. Appl. Phys. [**105**]{}, 07D126 (2009). S.-C. Oh, S.-Y. Park, A. Manchon, M. Chshiev, J.-H. Han, H.-W. Lee, J.-E. Lee, K.-T. Nam, Y. Jo, Y.-C. Kong, B. Dieny,and K.-J. Lee, Nature Physics [**5**]{}, 898 (2009). J.C. Slonczewski, Phys. Rev. B [**39**]{}, 6995 (1989). A. 
Manchon and S. Zhang, Phys. Rev. B [**79**]{}, 174401 (2009). A. Manchon and S. Zhang, [*Spin Transfer Torque: Theory*]{}, in [*Spin Transport and Magnetism in Electronics Systems*]{}, Eds. E. Y. Tsymbal and I. Zutic, Taylor and Francis (2010). P. M. Levy and A. Fert, Phys. Rev. Lett. [**97**]{}, 097205 (2006); Phys.Rev. B [**74**]{}, 224446 (2006). D. E. Eastman, F. J. Himpsel, and J. A. Knapp, Phys. Rev. Lett. [**44**]{}, 95-98 (1980). H. B. Michaelson, J. Appl. Phys. [**48**]{}, 4729-4733 (1977). See for example M. Sharma, S. X. Wang, and J. H. Nickel, Phys. Rev. Lett. [**82**]{}, 616 (1999). W. F. Brinkman, R. C. Dynes and J. M. Rowell, J. Appl. Phys. [**41**]{}, 1915 (1970). S. Zhang, P. M. Levy, A. C. Marley and S. S. P. Parkin, Phys. Rev. Lett. [**79**]{}, 3744 (1997). A.M. Bratkovsky, Appl. Phys. Lett. [**72**]{}, 2334 (1998). D. Pines, Elementary excitations in solids, Westview Press, 1999. See also J.M. Ziman, Electrons and phonons, Oxford Classic Texts, 2001; G. D. Mahan, Many-Particle Physics, 2nd Ed. (Plenum Press, New York and London) (1990).
--- abstract: | We give an explicit formula for a quasi-isomorphism between the operads $\Hycomm$ (the homology of the moduli space of stable genus $0$ curves) and $\BV/\Delta$ (the homotopy quotient of the Batalin-Vilkovisky operad by the $\BV$-operator). In other words, we derive an equivalence of $\Hycomm$-algebras and $\BV$-algebras enhanced with a homotopy that trivializes the $\BV$-operator. These formulas are given in terms of the Givental graphs, and are proved in two different ways. One proof uses the Givental group action, and the other proof goes through a chain of explicit formulas on resolutions of $\Hycomm$ and $\BV$. The second approach gives, in particular, a homological explanation of the Givental group action on $\Hycomm$-algebras. address: - 'A. Khoroshkin:Simons Center for Geometry and Physics, State University of New YorkStony Brook, NY 11794-3636, U.S.A. and ITEP, Bolshaya Cheremushkinskaya 25, 117259, Moscow, Russia ' - 'N. Markarian:Department of Mathematics, National Research University Higher School of Economics, Ul. Vavilova 7, Moscow 117312, Russia' - 'S. Shadrin:Korteweg-de Vries Institute for Mathematics, University of Amsterdam, P. O. Box 94248, 1090 GE Amsterdam, The Netherlands' author: - 'A. Khoroshkin' - 'N. Markarian' - 'S. Shadrin' title: Hypercommutative operad as a homotopy quotient of BV --- Introduction ============ The main purpose of this paper is to describe a natural equivalence between the category of differential graded Batalin-Vilkovisky algebras enhanced with a trivialization of the BV-operator and the category of formal Frobenius manifolds without a pairing (also known under the name of hypercommutative algebras). The problem we are discussing has an explicit topological origin. That is, we are looking for an equivalence of the operad of moduli spaces of stable curves and a homotopy quotient of the framed discs operad by the circle action. 
Bearing in mind that both topological operads under consideration are known to be formal, we restrict ourselves to the corresponding relationship of the homology operads. We suggest a purely algebraic solution of the problem, accompanied by an exact formula for the desired quasi-isomorphism. Let us first briefly recall the definitions of the two categories under consideration using the language of operads. Consider the moduli spaces of stable genus $0$ curves $\oM_{0,n+1}$, $n=2,3,\dots$. A stable genus $0$ curve is a nodal curve of arithmetic genus $0$ with $(n+1)$ pairwise distinct marked points in its smooth part, and it has at least three special points (nodes or marked points) on each of the irreducible components. The points are labeled by the numbers $0,1,\dots,n$. There is a natural stratification by the topological types of nodal curves. The strata of codimension one can be realized as the images of the gluing morphism $\rho=\rho_i\colon\oM_{0,n_1+1}\times\oM_{0,n_2+1}\to\oM_{0,n+1}$, $n=n_1+n_2-1$, $i=1,\dots,n_1$, where the new nodal curve is obtained by attaching the zero point of a curve in $\oM_{0,n_2+1}$ to the $i$-th point of a curve in $\oM_{0,n_1+1}$. These morphisms define on the spaces $\oM_{0,n+1}$, $n=2,3,\dots$, the structure of a topological operad. Therefore, the homologies of the spaces $\oM_{0,n+1}$, $n=2,3,\dots$, are endowed with the structure of an algebraic operad. This operad is called *the hypercommutative operad* and we denote it by $\Hycomm$. We recall an explicit description of the *hypercommutative algebra* in Section \[sec::open\_close\]. We refer to [@Man] for details and to [@Keel] for the description of the intersection theory on $\oM_{0,n+1}$. (Note that Manin uses in [@Man] the notation $\mathcal{C}om_\infty$ for the operad of hypercommutative algebras.) Another important topological operad under consideration is the framed little discs operad. 
The set of $n$-ary operations of this operad consists of configurations of the disjoint union of $n$ small discs inside the unit disc, each inner disc having a marked point on its boundary. Marking a point on the boundary circle is equivalent to fixing a rotation of the inner disc, which gives an identification of the inner disc with a standard disc of the same radius. The gluing of the outer boundary of the unit disc coming from a configuration of $n_1$ small discs with the boundary of the $i$’th inner disc of the configuration of $n_2$ small discs defines a configuration of $n_1+n_2-1$ small pointed discs, which prescribes the composition rules in the operad. The homology of this operad is known under the name of the Batalin-Vilkovisky operad and has a very simple description in terms of generators and relations. Namely, a (differential graded) Batalin-Vilkovisky algebra is a graded commutative associative algebra with two operators, $d$ of degree $1$ and $\Delta$ of degree $-1$, such that $d^2$, $\Delta^2$, and $d\Delta+\Delta d$ are equal to zero, $d$ is a derivation and $\Delta$ is a differential operator of the second order with respect to the multiplication. These two algebraic structures, hypercommutative algebras and Batalin-Vilkovisky algebras, are known to be closely related. The hypercommutative algebra structure is the most important ingredient of a formal Frobenius manifold structure. A typical application of a relation between $\BV$-algebras and hypercommutative algebras is that under some conditions, like the Hodge property or some trivialization of the $\BV$-operator $\Delta$, we obtain a Frobenius manifold structure on the cohomology of a $\BV$-algebra; we refer to [@BerCecOogVaf; @BarKon; @LosSha; @Katzarkov_Pantev; @DotKho; @DruVal; @DotShaVal; @Dru] for different aspects and different examples of this kind of correspondence and relations between them. The topological origin of all these statements looks as follows. 
The homotopy quotient of the framed little discs operad by rotations is weakly equivalent to the operad of moduli spaces of stable genus $0$ curves. This statement was mentioned in [@Mar] and written in detail in [@Dru]. We are focused on the algebraic counterpart of this statement equipped with precise formulas. In topology the homotopy quotient functor by the group $G$ is a functor from the category of $G$-spaces to the category of spaces which is defined as a left adjoint functor to the trivial embedding: any topological space admits a trivial action of the group $G$. The algebra over the homotopy quotient by $G$ of a given operad $\P$ is an algebra over $\P$ where the action of $G$ is trivialized. We will show the equivalence of these two definitions in the particular case $G=S^1$ and $\P=\BV$. In general, the condition on trivialization of the $\BV$-operator $\Delta$ that one has to use can be formulated in several different ways. First, we require that $\Delta$ is homotopically trivial, that is, the full homotopy transfer of $\Delta$ on the cohomology of $d$ is equal to zero. Equivalently, we can say that the spectral sequence (if it exists) for $(d,\Delta)$ converges on the first page. (See [@DotShaVal] for details of this approach.) We use a different but similar approach. Consider the bi-complex $V[[z]]$ with differential $d+z\Delta$, where $z$ is a formal parameter of homological degree $2$, and consider a particular trivialization (homotopy) for the action of $\Delta$. Namely, we choose a particular automorphism of the space $V[[z]]$ which gives a quasi-isomorphism of complexes with respect to the differentials $d$ and $d+z\Delta$. Another way to say the same is that $d+z\Delta=\exp(-\phi(z))d\exp(\phi(z))$, where $z$ is a formal variable, and $\phi(z)=\sum_{i=1}^\infty \phi_iz^i$ is some series of operators. 
We consider these extra operators $\phi_i$, $i=1,2,\dots$, as a part of the algebraic structure we have, and a $\BV$ algebra equipped with this extra trivialization data is a representation of the homotopy quotient of the $\BV$ operad. We denote this model of the homotopy quotient by $\BV/\Delta$. The main result of this paper is an explicit formula for a quasi-isomorphism $\theta\colon\Hycomm\to\BV/\Delta$. This result summarizes the relations between hypercommutative algebras and Batalin-Vilkovisky algebras mentioned above. The equivalence of the homotopy categories of $\Hycomm$-algebras and the homotopy quotient of $\BV$-algebras was given in [@Dru] on the level of chains. There are two ways to construct this map. The first approach goes through a careful analysis of a system of relations between the operads $\Hycomm$, $\BV/\Delta$, the operad of Gerstenhaber algebras and the gravity operad. It deals with different precise relationships between homotopy quotients and equivariant (co)homology first discovered by Getzler in [@Getzler_grav; @Get; @Getzler_genus0]. [**Theorem \[thm::diag:Hycom-&gt;BV\]**]{} summarizes these relationships in the main [**Diagram ** ]{} of quasi-iso relating $\BV/\Delta$ and $\Hycomm$. We go through Diagram  specifying the generating cocycles in the cohomology at each step. As a result we get a formula for $\theta$ given in terms of summations over three-valent graphs. The second approach is a generalization of the interpretation of the BCOV theory suggested in [@Sha]. There is an action of the loop group of the general linear group on the representations of $\Hycomm$ in a given vector space. It was constructed by Givental, and the action of its Lie algebra was studied by Y.-P. Lee; see [@Giv3; @Lee1]. We generalize this group action to an action on the space of morphisms from $\Hycomm$ to an arbitrary operad. 
This way we can describe the map $\theta$ as an application of a particular Givental group element to a very simple morphism from $\Hycomm$ to $\BV/\Delta$, the one that preserves the commutative associative product and ignores all the rest. In this case the final formula is given in terms of summations over graphs with arbitrary valencies of vertices. We state that these two formulas for $\theta$ coincide; however, we prefer to omit the direct proof of this statement and use uniqueness arguments in order to explain the coincidence. The Givental-style formula is simpler for applications and already contains all cancellations; however, the homological approach is of its own interest. In particular, it allows us to give an additional point of view on the $\psi$-classes, which we want to use elsewhere. For now, we show how one can get the topological recursion relations using this homological interpretation. Finally, our result on an explicit quasi-isomorphism formula allows us to give a new interpretation of the Givental group action mentioned above. It appears that the action of the Givental group on morphisms of $\Hycomm$ corresponds to the ambiguity of a particular choice of a trivialization for $\Delta$ in $\BV/\Delta$. Outline of the paper -------------------- We repeat once again that in spite of the topological motivation all proofs and all expositions are purely algebraic. All operads involved and algebras over them are defined in purely algebraic terms of generators and relations. In Section \[sec:explicitformula\] we formulate our main result on an explicit formula for the quasi-iso $\theta\colon\Hycomm\to\BV/\Delta$. Section \[sec::circle\] deals with the circle action. Namely, the categorical definition of the homotopy quotient by $\Delta$ is given in \[sec::homotopy\_quotients\_def\] and the algebraic counterpart of Chern classes is presented in \[sec::chern\]. 
In Section \[sec::operad::definitions\] we introduce notations and definitions for all operads involved in the main chain of quasi-isomorphisms between $\Hycomm$ and $\BV/\Delta$ (Diagram ). This part is quite technical and is needed mainly to fix the notation. Section \[sec::main\_diagram\] contains the main Diagram  of quasi-iso connecting $\Hycomm$ and $\BV/\Delta$. We play around with it in order to get an algebraic description of $\psi$-classes and a useful dg-model of the $\BV$-operad. In Section \[sec:diagrammatic\] we go through all these quasi-isomorphisms specifying generating cocycles in the cohomology, and this way we obtain a direct map $\theta\colon\Hycomm\to\BV/\Delta$. In Section \[sec:givental\] we recall the Givental theory, apply it in order to get a formula for $\theta$ from Section \[sec:explicitformula\], and then use the existence of such a map in order to give a new interpretation for the Givental theory. Readers who are more interested in the results than in the proofs may skip the technical Sections \[sec::operad::definitions\] and \[sec:diagrammatic\]. Acknowledgment -------------- We are grateful to V. Dotsenko, E. Getzler, A. Givental, G. Felder, A. Losev, and B. Vallette for useful discussions on closely related topics. An explicit formula {#sec:explicitformula} =================== In this section we give an explicit formula for a map $\Hycomm\to \BV/\Delta$ that takes $\Hycomm$ isomorphically to the cohomology of $\BV/\Delta$. A presentation of $\BV/\Delta$ {#sec::bv/delta} ------------------------------ The definition of a homotopy quotient given below is more convenient for applications than the standard categorical definition. We discuss the equivalence of these definitions in Section \[sec::homotopy\_quotients\_def\]. 
The algebras over the homotopy quotient $\BV/\Delta$ are in one-to-one correspondence with the $\BV$-algebras where $\Delta$ acts trivially on homology and moreover one chooses a particular trivialization for this action. I. e. the $\BV/\Delta$ algebra on a complex $(V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}},d)$ consists of commutative multiplication, differential operator $\Delta: V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}\rightarrow V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}[-1]$ of order at most $2$ and an isomorphism of complexes $$\label{eq::homotopy_quotients_def} \Phi(z): (V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}[[z]],d + z\Delta) \rightarrow (V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}[[z]],d),$$ where $z$ is a formal parameter of degree $2$ and $\Phi(z)$ is a formal power series in $z$. I. e. $\Phi(z) = \sum_{i\geq 0} \Phi_i z^i$. $\Phi_i$ should be linear endomorphisms of the vector space $V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}$ of pure homological degree $-2i$ and $\Phi_0=Id_{V}$. Our formulas below become simpler if we consider exponential coordinates for trivialization. Namely, we represent $\Phi(z)$ as a series $\exp( \phi(z))$, $ \phi(z):=\sum_{i\geq 1} \phi_i z^i$ that is, $$Id_{V} + \Phi_1 z + \Phi_2 z^2 +\ldots = \exp( \phi_1 z +\phi_2 z^2 +\ldots ).$$ This allows us to describe the operad $\BV/\Delta$ in the following way. In order to homotopically resolve the operation $\Delta$ in the operad $\BV$ we have to add a number of generators $\phi_i$, $i\geq 1$, $\deg \phi_i=-2i$, and define a differential $d$ that vanishes on all generators of $\BV$ operad and such that $\Delta$ itself becomes an exact cocycle, while the rest of the $\BV$-structure survives in the cohomology (and no new cohomology cycles appear). 
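The exponential coordinates are convenient because the defining identity $\exp(-\phi(z))\,d\,\exp(\phi(z))=d+z\Delta$ can be expanded order by order in $z$. Since this expansion is a formal consequence of noncommutative algebra, it can be sanity-checked with random matrices standing in for $d$, $\phi_1$, $\phi_2$ (a check of ours, not part of the paper; no relations among the matrices are needed):

```python
import numpy as np
from math import factorial

# Random matrices stand in for the operators d, phi_1, phi_2.
rng = np.random.default_rng(0)
N = 6
d, p1, p2 = (rng.standard_normal((N, N)) for _ in range(3))
I, O = np.eye(N), np.zeros((N, N))
ORDER = 3                      # work modulo z^ORDER

def smul(A, B):
    """Product of two matrix power series (coefficient lists), mod z^ORDER."""
    return [sum(A[i] @ B[k - i] for i in range(k + 1)) for k in range(ORDER)]

def sexp(P):
    """exp of a matrix power series with zero constant term, mod z^ORDER."""
    E, term = [I, O, O], [I, O, O]
    for n in range(1, ORDER):
        term = smul(term, P)
        E = [e + t / factorial(n) for e, t in zip(E, term)]
    return E

# conj = exp(-phi(z)) d exp(phi(z)) with phi(z) = p1 z + p2 z^2
conj = smul(smul(sexp([O, -p1, -p2]), [d, O, O]), sexp([O, p1, p2]))

Delta = d @ p1 - p1 @ d                          # Delta = [d, phi_1]
assert np.allclose(conj[0], d)                   # z^0 coefficient
assert np.allclose(conj[1], Delta)               # z^1 coefficient: [d, phi_1]
# z^2 coefficient: [d, phi_2] + (1/2)[[d, phi_1], phi_1]
assert np.allclose(conj[2], (d @ p2 - p2 @ d) + 0.5 * (Delta @ p1 - p1 @ Delta))
```

The $z^2$ assertion exhibits the combination of commutators that the differential of $\BV/\Delta$ has to kill, matching the relation $[d,\phi_2]=-\frac{1}{2}[\Delta,\phi_1]$.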
We rewrite the formula $d\exp(\phi(z))=\exp(\phi(z)) (d+z\Delta)$ as $$\label{eq:d-delta} \Phi(z)^{-1}d\Phi(z) = \exp(-\phi(z))d\exp(\phi(z))=d+z\Delta$$ and use the expansion of the left hand side of the latter equation in order to define a differential that we denote by $\Delta\frac{\d}{\d\phi}$ on the generators $\phi_i$ as an expression for $[d,\phi_i]$. That is, the formulas $$\begin{aligned} \label{eq:d-a-0} \Delta & = [d,\phi_1], \\ \notag 0 & = [d,\phi_2]+\frac{1}{2} [[d,\phi_1],\phi_1],\\ \notag 0 & = [d,\phi_3]+ \frac{1}{2}[[d,\phi_1],\phi_2]+ \frac{1}{2}[[d,\phi_2],\phi_1] + \frac{1}{6}[[[d,\phi_1],\phi_1],\phi_1], \end{aligned}$$ turn into $$\begin{aligned} \label{eq:d-a} \Delta\frac{\d}{\d\phi}(\phi_1) & = [d,\phi_1] = \Delta \\ \notag \Delta\frac{\d}{\d\phi}(\phi_2) & = [d,\phi_2] = -\frac{1}{2}[\Delta,\phi_1] \\ \notag \Delta\frac{\d}{\d\phi}(\phi_3) & = [d,\phi_3] =-\frac{1}{2}[\Delta,\phi_2] +\frac{1}{12}[[\Delta,\phi_1],\phi_1], \end{aligned}$$ respectively. We define the operad $\BV/\Delta$ to be the operad obtained by adding to $\BV$ the generators $\phi_i$, $i\geq 1$, with the differential $\Delta\frac{\d}{\d\phi}$ equal to zero on $\BV$ and given by Equations . We use the notation $\Delta\frac{\d}{\d\phi}$ for the differential in order to point out that it decreases the degree in $\phi$ by $1$ and increases the degree in $\Delta$ also by $1$, so it looks like a differential operator $\Delta\frac{\d}{\d\phi}$. A formula for quasi-isomorphism {#sec:quasi} ------------------------------- We construct a map $\Hycomm\to\BV/\Delta$. To the generator $\mm_n\in \Hycomm(n)$ given by the fundamental cycle $[\oM_{0,n+1}]$ we associate an element $\theta_n$ of $\BV/\Delta(n)$ represented as a sum over all possible rooted trees with $n$ leaves, where - at each vertex with $k$ inputs we put the $(k-1)$-times iterated product in the $\BV$-algebra. 
The iterated product $m(x_1,\dots,x_k)$ is defined as $m(x_1,\dots m(x_{k-2},m(x_{k-1},x_k))\dots)$, where $m(x_1,x_2)$ denotes the usual binary multiplication from $\BV(2)$. Abusing the notation we denote the iterated product by the same letter $m$. - Each input/output $e$ of any given vertex in a graph is enhanced by a formal parameter $\psi_{e}$. I. e. a vertex with $k$ inputs will be equipped with $k+1$ additional parameters. These parameters will be used to determine the combinatorial coefficient of the graph. - On each leaf $e$ (an input of the graph) we put the operator $\exp(-\phi(-\psi_e))$, where $\psi_e$ is the formal parameter defined above, associated to the corresponding input $e$ of the vertex where the leaf is attached. - At the root (the output of the graph) we put the operator $\exp(\phi(\psi))$. Again, $\psi$ is a formal parameter associated to the output of the vertex, where the root is attached. - At the internal edge that serves as the output of a vertex $v'$ and an input of a vertex $v''$ we put the operator $$\mathcal{E}:=-\frac{\exp(-\phi(-\psi''))\exp(\phi(\psi'))-1}{\psi''+\psi'},$$ where $\psi'$ (respectively, $\psi''$) is attached to the output of $v'$ (respectively, the corresponding input of $v''$) in the same way as above. Each graph should be considered as a sum of graphs obtained by expansion of all involved series in $\psi$’s, and each summand has a combinatorial coefficient equal to the product over all vertices of the integrals $$\label{eq:psi-int} \int_{\oM_{0,k+1}}\psi_0^{d_0}\psi_1^{d_1}\cdots \psi_k^{d_k} := \begin{cases} \frac{(k-2)!}{d_0!d_1!\cdots d_k!}, & \mbox{if } k-2=d_0+d_1+\cdots+d_k; \\ 0, & \mbox{otherwise, } \end{cases}$$ where the degrees $d_0,d_1,\dots,d_k$ are precisely the degrees of $\psi$-classes associated to the inputs/output of a vertex. Note that after expansion of all exponents there are only finitely many monomials in $\psi$’s that contribute to the total sum for $\theta_n$. 
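The combinatorial constants above are the standard genus-zero $\psi$-intersection numbers, so they obey the usual string and dilaton equations; the following script (our own check, not part of the paper) verifies both directly from the multinomial formula:

```python
from itertools import product
from math import factorial

def psi_integral(ds):
    """Genus-0 psi-intersection number <tau_{d_0} ... tau_{d_k}>:
    (k-2)!/(d_0! ... d_k!) when d_0 + ... + d_k = k - 2, else 0."""
    k = len(ds) - 1
    if any(d < 0 for d in ds) or sum(ds) != k - 2:
        return 0
    n = factorial(k - 2)
    for d in ds:
        n //= factorial(d)   # exact at every step under the sum constraint
    return n

# String equation: <tau_0 tau_{d_1}...tau_{d_m}> = sum_j <... tau_{d_j - 1} ...>
for ds in product(range(4), repeat=4):
    ds = list(ds)
    rhs = sum(psi_integral(ds[:j] + [ds[j] - 1] + ds[j+1:]) for j in range(len(ds)))
    assert psi_integral([0] + ds) == rhs

# Dilaton equation: <tau_1 tau_{d_1}...tau_{d_m}> = (m - 2) <tau_{d_1}...tau_{d_m}>
for ds in product(range(4), repeat=5):
    ds = list(ds)
    assert psi_integral(ds + [1]) == (len(ds) - 2) * psi_integral(ds)
```

Both identities follow from the closed formula by a one-line computation, which is one way to see that these constants are consistent with their geometric origin.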
Consequently, $\theta_n$ is represented by a finite sum of combinations of multiplications and $\phi_i$’s. In particular, the total degree of each nonzero term is equal to $4-2n$ (recall that $\deg m =0$ and $\deg \phi_i=-2i$). Here $\psi$-classes and their integrals over the space $\oM_{0,k+1}$, as in Equation , should be understood as a formal notation for some combinatorial constants (multinomial coefficients). However, in Sections \[sec:givental\] and \[sec::TRR\] we clarify the geometric meaning and the origin of this formula. \[ex::theta\_23\] Explicit formulas for $\theta_2$ and $\theta_3$: $$\begin{aligned} \theta_2\left(x_1,x_2\right) = & m\left(x_1,x_2\right) \\ \theta_3\left(x_1,x_2,x_3\right) = & \phi_1\left(m\left(x_1,x_2,x_3\right)\right) + \left( m\left(x_1,x_2,\phi_1(x_3)\right) + m\left(x_2,x_3,\phi_1(x_1)\right) + m\left(x_3,x_1,\phi_1(x_2)\right)\right)\\ & -\left(m\left(x_1,\phi_1\left(m(x_2,x_3)\right)\right) + m\left(x_2, \phi_1\left(m(x_1,x_3)\right)\right) +m\left(x_3, \phi_1\left(m(x_1,x_2)\right)\right)\right).\end{aligned}$$ \[thm::formula\_BV-Hycomm\] Using the Leibniz rule, the map $\theta$ defined on generators by $\theta\colon \mm_n\mapsto \theta_n$ extends to a morphism of operads $\theta\colon\Hycomm\rightarrow \BV/\Delta$. Moreover, $\theta$ is a quasi-isomorphism of operads. We present two ways to prove this theorem. The first proof uses computations with equivariant homology. It is presented in Section \[sec:diagrammatic\]. First, we give a sequence of natural quasi-iso connecting $\Hycomm$ and $\BV/\Delta$. Second, a careful diagram chase allows us to obtain a formula for $\theta$, and, in addition, a natural homological explanation of the Givental group action on representations of $\Hycomm$. The second proof also consists of several steps. The first step is the same. We observe that the cohomology of $\BV/\Delta$ coincides with $\Hycomm$. 
Second, we notice that the expression for $\theta_k$ does not contain $\Delta$ and therefore $\theta_k\notin Im(\Delta\frac{\d}{\d\phi})$. Third, using a certain generalization of the Givental theory we show that $\theta_k$ are $\Delta\frac{\d}{\d\phi}$-closed. The degree count implies that $\theta$ defines a quasi-isomorphism of operads. This proof is explained in detail in Section \[sec:givental\]. Examples -------- There are natural examples of $\BV/\Delta$-algebra structures on the de Rham complexes of Poisson and Jacobi manifolds. These examples are discussed in detail in [@DotShaVal] from a different perspective. In the case of a Poisson manifold, we consider its de Rham complex with the de Rham differential $d^{dR}$ and wedge product, and the operator $\phi_1$ equal to the contraction with the Poisson structure and $\phi_i=0$, $i=2,3,\dots$. The operator $\Delta=[d^{dR},\phi_1]$ is a $\BV$-operator, thus we have a natural structure of a $\BV/\Delta$ algebra. In the case of a Jacobi manifold the $\BV/\Delta$ structure exists on the space of basic differential forms; the construction is very similar, and we refer to [@DotShaVal] for details. In both cases, the explicit formulas for $\theta_k$, $k=2,3,\dots$, give a structure of a $\Hycomm$ algebra on the cohomology in these examples. In fact, with these formulas it is easy to see that in these cases the structure of a $\Hycomm$-algebra gives rise to a full structure of a Frobenius manifold, that is, we also have a scalar product and homogeneity with all the necessary properties. In [@DotShaVal] the structure of a $\Hycomm$ algebra is obtained in a different way, using a general result of Drummond-Cole and Vallette in [@DruVal] on a homotopy Frobenius structure on the cohomology of a $\BV$-algebra, where the homotopy transfer of the $\BV$ operator $\Delta$ vanishes. 
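The Poisson case is already visible in a toy model. The sketch below (our own illustration, not taken from [@DotShaVal]) works on $\mathbb{R}^2$ with the constant bivector $\pi=\partial_x\wedge\partial_y$, takes $\phi_1=\iota_\pi$ on a three-component model of de Rham forms, and checks that $\Delta=[d^{dR},\iota_\pi]$ squares to zero and satisfies the 7-term relation of a second-order operator on even-degree forms (where all Koszul signs disappear):

```python
import sympy as sp

x, y = sp.symbols('x y')

# A differential form on R^2 is a triple (f, (P, Q), h),
# standing for f + P dx + Q dy + h dx^dy.
def wedge(a, b):
    f, (P, Q), h = a
    g, (R, S), k = b
    return (f*g, (f*R + g*P, f*S + g*Q), f*k + g*h + P*S - Q*R)

def add(a, b):
    return (a[0] + b[0], (a[1][0] + b[1][0], a[1][1] + b[1][1]), a[2] + b[2])

def sub(a, b):
    return (a[0] - b[0], (a[1][0] - b[1][0], a[1][1] - b[1][1]), a[2] - b[2])

def d(a):      # de Rham differential
    f, (P, Q), h = a
    return (0, (sp.diff(f, x), sp.diff(f, y)), sp.diff(Q, x) - sp.diff(P, y))

def iota(a):   # phi_1: contraction with the Poisson bivector pi
    return (a[2], (0, 0), 0)

def Delta(a):  # the induced BV operator Delta = [d^{dR}, iota]
    return sub(d(iota(a)), iota(d(a)))

def is_zero(a):
    return all(sp.expand(t) == 0 for t in (a[0], a[1][0], a[1][1], a[2]))

# arbitrary even-degree forms (function + 2-form parts)
a = (x**2 + y, (0, 0), x*y)
b = (x - y**3, (0, 0), sp.Integer(1))
c = (x*y, (0, 0), y**2)

assert is_zero(Delta(Delta(a)))        # Delta squares to zero
seven = Delta(wedge(wedge(a, b), c))   # 7-term relation, signs trivial here
for u, v, w in [(a, b, c), (b, c, a), (c, a, b)]:
    seven = sub(seven, wedge(Delta(wedge(u, v)), w))
    seven = add(seven, wedge(Delta(u), wedge(v, w)))
assert is_zero(seven)
```

On odd-degree arguments the relation holds only up to the suppressed Koszul signs, which is why the check is restricted to even forms.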
In fact this result in [@DruVal] is completely parallel to ours, as is shown in [@DotShaVal], though an exact match of the formulas would require more work. Circle action {#sec::circle} ============= In this section we compare our definition of a homotopy quotient with a categorical one and show how this affects the Chern classes. Homotopy quotient in Topology ----------------------------- Consider a topological space $X$ with a chosen action of the group $S^1$. If the action of $S^1$ is not free then the quotient space $X/S^{1}$ is badly behaved. Therefore in order to define the quotient one has to replace the space $X$ by a homotopy equivalent space $X\times ES^1$ with the free action of $S^1$. Recall that $ES^1$ is a contractible space with a free $S^1$-action. The corresponding bundle $ES^{1}\stackrel{S^1}{\rightarrow} BS^1$ is called the universal $S^1$-bundle and its base $BS^1$ is called the classifying space, known to coincide with ${\mathbb{C}\mathrm{P}^{\infty}}$. The homotopy quotient “$X/S^{1}$” is defined as the factor $\frac{X\times ES^1}{S^1}$. There is another categorical definition which we find useful to recall. Denote by $S^{1}\tt\Top$ (resp. $\Top$) the homotopy categories of topological spaces with (and without) an action of $S^1$. There is a natural exact functor $Triv^{S^1}:\Top \rightarrow S^{1}\tt\Top$ which assigns a trivial action of $S^1$ to any topological space. The left adjoint functor to the functor $Triv^{S^1}$ is called the homotopy quotient by $S^1$: $${\mathrm{Hom}}_{\Top} ( X/ S^1, Y) \simeq {\mathrm{Hom}}_{S^1\tt\Top}(X,Triv^{S^1}(Y))$$ In particular, if $X$ is isomorphic to the direct product $Z\times S^{1}$ the homotopy quotient $X/S^1$ is isomorphic to $Z$. Homotopy quotient for algebraic operads {#sec::homotopy_quotients_def} --------------------------------------- Replace the category $\Top$ by the category $dg\tt\mathcal{O}p$ of differential graded operads. 
The cohomology ring of the circle is the Grassman algebra $\kk[\Delta]$ with one odd generator of degree $-1$ such that $\Delta^2=0$. The category ${\Delta\tt dg\tt\mathcal{O}p}$ of dg-operads with a chosen embedding of the Grassman algebra $\kk[\Delta]$ replaces the category of $S^{1}\tt\Top$ of topological spaces with a circle action. An object of the category ${\Delta\tt dg\tt\mathcal{O}p}$ is a dg-operad with a chosen unary operation of degree $-1$, such that its square is equal to zero. Any dg-operad $\Q$ admits a trivial map $\kk[\Delta]\rightarrow \Q$ with $\Delta\mapsto 0$. This defines a functor $Triv^{\Delta}: dg\tt\mathcal{O}p \rightarrow \Delta\tt dg\tt \mathcal{O}p$. \[def::hom\_quotient\] The homotopy quotient by $\Delta$ is the left adjoint functor to the enrichment by trivial embedding of Grassman algebra: I. e. it is a functor $(\tt)/\Delta: \Delta\tt dg\tt\mathcal{O}p \rightarrow dg\tt \mathcal{O}p$ such that for any pair of operads $\P,\Q$ there exists a natural equivalence $${\mathrm{Hom}}_{dg\tt \mathcal{O}p}(\P/\Delta,\Q)\simeq {\mathrm{Hom}}_{\Delta\tt dg\tt\mathcal{O}p} ( \P, Triv^{\Delta}(\Q))$$ which is functorial in $\P$ and in $\Q$. In Section \[sec::bv/delta\] we have already chosen a particular model of the homotopy quotient by $\Delta$. Let us show that this model indeed satisfies the adjunction property required by Definition \[def::hom\_quotient\]. First, let us repeat the construction from Section \[sec::bv/delta\] in a general setting. Any given operad $\Q$ with a chosen unary operation $\Delta\in\Q(1)$ (such that $\Delta^2=0$) may be extended by a collection of unary operations $\phi_i$, $i=1,2,\dots$, of homological degree $\deg \phi_i = -2i$, and the differential prescribed by Equation . 
We recall that the generating series in $z$ of the sequence of identities on the commutators $[d,\phi_i]$ defines a differential: $$\exp(-\phi_1 z^1 -\phi_2 z^2 -\ldots) d \exp(\phi_1 z^1 +\phi_2 z^2 +\ldots) = d + z\Delta.$$ Note that the differential decreases the degree in $\phi_i$’s by $1$ and increases the degree in $\Delta$ by $1$. We want to keep this property in the notation for the differential; therefore, we denote it by $\Delta\frac{\d}{\d\phi}$ and this notation should be understood just as a single symbol. \[lem:adjunction\] The functor that sends an operad $\Q$ with a chosen square-zero unary operation $\Delta$ to the dg-operad $\left(\Q\star\kk\langle\phi_1,\phi_2,\ldots\rangle,\Delta\frac{\d}{\d\phi}\right)$ gives a particular model of the homotopy quotient $\Q/\Delta$. I. e. the twisted free product with $\phi$’s is the left adjoint functor to the trivial action of $\kk[\Delta]$. Recall that $\kk[\Delta]$ is a skew commutative algebra with one odd generator $\Delta$, where the skew-commutativity implies the relation $\Delta^2=0$. This algebra is Koszul and its Koszul dual is the free algebra $\kk[\delta]$ with one even generator of degree $2$. The free product of the Grassman algebra $\kk[\Delta]$ and the free algebra $F$ generated by the augmentation ideal of the Koszul dual coalgebra, together with the Koszul differential, is acyclic. We state that the $\phi_i$’s are just one possible way to choose generators in the free algebra generated by the augmentation ideal of $\kk[\delta]$, and the differential $\Delta\frac{\d}{\d\phi}$ is the corresponding description of the Koszul differential. Therefore, the free product $\kk[\Delta]\star\kk\langle\phi_1,\phi_2,\ldots\rangle$ is the quotient of the free associative algebra generated by $\Delta$ and $\phi_i$, $i=1,2,\ldots$, by the single relation $\Delta^2=0$. 
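As a quick sanity check of ours, the acyclicity can be verified by hand in the lowest degrees. In degree $-1$ the only cycle is $\Delta=\Delta\frac{\d}{\d\phi}(\phi_1)$, which is therefore exact, and in degree $-2$ the differential is injective on the span of $\phi_1$. In degree $-3$ the cycles are spanned by $\Delta\phi_1$ and $\phi_1\Delta$ (both closed because $\Delta^2=0$), and they are boundaries: $$\Delta\frac{\d}{\d\phi}(\phi_1^2)=\Delta\phi_1+\phi_1\Delta, \qquad \Delta\frac{\d}{\d\phi}(\phi_2)=-\frac{1}{2}(\Delta\phi_1-\phi_1\Delta),$$ and since these two images are linearly independent, the differential is also injective on the span of $\phi_1^2$ and $\phi_2$.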
This algebra is acyclic with respect to the differential $\Delta\frac{\d}{\d\phi}$, admits the natural splitting: $$\label{eq::ES1::noncom} \kk \stackrel{1\mapsto 1}{\hookrightarrow} \left(\kk\langle\Delta,\phi_1,\phi_2,\ldots\rangle/(\Delta^2),\Delta\frac{\d}{\d\phi}\right) \stackrel{\Delta,\phi_i\mapsto 0}{\twoheadrightarrow} \kk$$ and satisfies the following universal categorical property: For any dg-algebra $(A,d_A)$ with a chosen dg-subalgebra $(\kk[\Delta_A],0)$ there exists a map of dg-algebras $\varphi_{A}:(\kk[\Delta]\star\kk\langle\phi_1,\phi_2,\ldots\rangle,\Delta\frac{\d}{\d\phi})\rightarrow (A,d_A)$ that sends $\Delta\mapsto \Delta_A$ and is functorial with respect to $A$. One should think about the dg-algebra $(\kk[\Delta]\star\kk\langle\phi_1,\phi_2,\ldots\rangle,\Delta\frac{\d}{\d\phi})$ as a noncommutative algebraic replacement of the universal bundle $ES^1$. We will come back to the connection with the universal bundle in the next Section \[sec::chern\]. For any given dg-operad $(\P,d_{\P})$ we define the quasi-isomorphic inclusion of dg-operads $$\label{eq::eps_P} \varepsilon_{\P}:(\P,d_{\P}) \rightarrow \left(\P\star\kk[\Delta]\star\kk\langle\phi_1,\phi_2,\ldots\rangle, d_{\P} + \Delta\frac{\d}{\d\phi}\right)$$ that sends $\P$ to $\P$; and with any dg-operad $(\Q,d_{\Q})$ with a chosen unary operation $\Delta_{\Q}\in\Q(1)$ we associate the projection of $S^{1}\tt$dg-operads $$\label{eq::eta_Q} \eta_{\Q}: \left(\Q\star\kk\langle\phi_1,\phi_2,\ldots\rangle\star\kk[\Delta], d_{\Q}+(\Delta-\Delta_{\Q})\frac{\d}{\d\phi} \right) \rightarrow (\Q,d_{\Q})$$ that sends identically $\Q$ to $\Q$, $\Delta\mapsto\Delta_{Q}$ and $\phi_i$ maps to $0$ for all $i$. The morphisms $\varepsilon_{\P}$ and $\eta_{\Q}$ are quasi-isomorphisms for all $\P$ and $\Q$. The proof follows from the acyclicity of the dg-algebra $(\kk[\Delta]\star\kk\langle\phi_1,\phi_2,\ldots\rangle,\Delta\frac{\d}{\d\phi})$. 
Let us also give one more explanation of why we call the data $\phi_i$ a choice of trivialization of the action of $S^1$. The action of $S^1$ on a topological space $X$ is encoded in the fibration $X\times ES^1\stackrel{S^1}{\rightarrow} B$. The trivialization of the $S^1$ action is the isomorphism of this fibration and the trivial one. I.e. it is given via an isomorphism $\Phi$ of the base $B$ and the product $X\times BS^1$. The algebraic counterpart of this isomorphism looks as follows: $$\Phi: Tor_{{{\:\raisebox{1.5pt}{\selectfont\text{\circle*{1.5}}}}}}^{\kk[\Delta]}(V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}},\kk)\stackrel{\cong}{\longrightarrow} V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}\otimes Tor_{{{\:\raisebox{1.5pt}{\selectfont\text{\circle*{1.5}}}}}}^{\kk[\Delta]}(\kk,\kk)$$ where $V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}=C^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}(X)$. The trivial module $\kk$ admits a Koszul resolution $$(\kk[\Delta]\otimes \kk[z], z\frac{\d}{\d\Delta} )\rightarrow \kk$$ and we end up with the following isomorphism of complexes: $$\Phi=\Phi(z): (V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}[z],d +z\Delta) \rightarrow (V^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}[z],d)$$ that is called the trivialization of the action of $\Delta$ (the trivialization of the $S^1$-action). Chern character {#sec::chern} --------------- Suppose that $\Q$ is a topological operad with a chosen embedding $S^1\hookrightarrow \Q(1)$. Note that the latter embedding gives, in particular, the action of ($n+1$) copies of $S^1$ on the space of $n$-ary operations $\Q(n)$ via substitution at the inputs/output of operations. It is possible to take a homotopy quotient with respect to the action on each particular input/output on the space of $n$-ary operations of $\Q$. We denote by $\Q/(\circ_i\Delta)$ the quotient with respect to the action of $S^1$ on the $i$-th slot. 
Moreover, the $S^1$-action on each particular input produces a canonical $S^1$-fibration on the space of $n$-ary operations of the entire quotient $\Q/\Delta$. It is simpler to describe the algebraic counterpart of this fibration in order to define the first Chern class of this fibration, which gives a canonical operation on the quotient. This description will be used later on to give another algebraic description of the $\psi$-classes in the moduli spaces of curves. We hope that the reader will not be confused by our use of the same notation for the topological operad and the corresponding algebraic operad of its singular chains. From now on $\Q$ means an algebraic operad with a chosen unary odd operation $\Delta$ with $\Delta^2=0$. Let $(\Q/\Delta)_{\epsilon_i}$ be the subset of $n$-ary operations in $\Q/\Delta$ where we take the augmentation map $$\epsilon: \kk\langle\phi_1,\phi_2,\ldots\rangle \twoheadrightarrow \kk$$ with respect to the $i$-th input of operations. I. e. we consider only those elements of $\Q/\Delta$ which do not contain any nonconstant element from the algebra $\kk\langle\phi_1,\phi_2,\ldots\rangle$. The natural inclusion of complexes $(\Q/\Delta)_{\epsilon_i}(n) \rightarrow (\Q/\Delta)(n)$ gives the algebraic model of the $S^1$-fibration described above. Let $\frac{\d}{\d \phi_1}$ be the derivation of the algebra $\kk\langle\phi_1,\phi_2,\ldots\rangle$ that sends the generator $\phi_1$ to $1$ and all other generators $\phi_i$ for $i\geq 2$ to zero. Let $\circ_i\frac{\d}{\d \phi_1}$ be the derivation of the space of $n$-ary operations $\Q/\Delta(n)$ obtained by applying the derivation $\frac{\d}{\d \phi_1}$ in the $i$-th slot of the operation. \[prop::psi\_algebraic\] The derivation $\circ_i\frac{\d}{\d \phi_1}$ of the complex of $n$-ary operations $\Q/\Delta(n)$ represents the evaluation of the first Chern class of the $S^1$-fibration over $\Q/\Delta$ associated to the $S^1$ action in the $i$-th slot. 
The Chern class is defined as a generator of the cohomology of the Eilenberg-MacLane space $BS^1$ (the base of the universal bundle). In order to switch to algebra we have to reformulate the required categorical properties of the universal bundle in algebraic terms. First, let us formulate the desired property in the category of commutative dg-algebras, since the cohomology functor maps topological spaces to commutative algebras. The commutative dg-algebra $(\kk[\Delta,u],\Delta\frac{\d}{\d u})$ is an acyclic dg-algebra that satisfies the following universal property: for any commutative dg-algebra $(A,d_A)$ with a chosen dg-subalgebra $(\kk[\Delta],0)$ there exists a map of dg-algebras $\varphi_{A}:(\kk[\Delta,u],\Delta\frac{\d}{\d u})\rightarrow (A,d_A)$ that sends $\Delta\mapsto \Delta$ and is functorial with respect to $A$. The generator $u$ is the multiplicative generator of $H^{{{\:\raisebox{4pt}{\selectfont\text{\circle*{1.5}}}}}}(BS^1;\kk)$ and the derivation $\frac{\d}{\d u}$ coincides with the evaluation of the first Chern class of the circle bundle. Second, we notice that the dg-algebra $(\kk[\Delta]\star\kk\langle\phi_1,\phi_2,\ldots\rangle,\Delta\frac{\d}{\d\phi})$ is an acyclic dg-algebra satisfying the same universal property, but in the category of noncommutative algebras. Now the generator $\phi_i$ corresponds to the additive generator of $H^{2i}(BS^1;\kk)$. There exists a natural quasi-iso projection between these two algebras: $$\label{eq::phi_i->u} \xymatrix{ ab: \left(\kk\langle\Delta,\phi_1,\phi_2,\ldots\rangle / (\Delta^2=0), \Delta\frac{\d}{\d \phi}\right) \ar@{->>}[r] & \left( \kk[\Delta,u], \Delta\frac{\d}{\d u}\right). }$$ that sends $\Delta$ to $\Delta$, $\phi_1$ to $u$, and all other $\phi_i$, $i\geq 2$, to $0$. 
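Note that $ab$ is indeed a map of dg-algebras (a remark of ours): on $\phi_1$ we have $ab(\Delta\frac{\d}{\d\phi}(\phi_1))=ab(\Delta)=\Delta=\Delta\frac{\d}{\d u}(ab(\phi_1))$, while for the higher generators both sides vanish; e.g. $$ab\left(\Delta\frac{\d}{\d\phi}(\phi_2)\right)=ab\left(-\frac{1}{2}[\Delta,\phi_1]\right)=-\frac{1}{2}[\Delta,u]=0=\Delta\frac{\d}{\d u}\big(ab(\phi_2)\big),$$ since the target algebra is graded commutative.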
Moreover, the derivation $\frac{\d}{\d\phi_1}$ of the left hand side of  commutes with the differential and is mapped to the derivation $\frac{\d}{\d u}$ on the right and, therefore, coincides (on the homology level) with the evaluation of the first Chern class. Operads involved: definition and notation {#sec::operad::definitions} ========================================= In this section we recall the definitions of algebraic operads that correspond to the topological operads of open and closed moduli spaces of curves of genus zero. We follow the papers of Getzler [@Getzler_genus0; @Getzler_grav]. Since we want to work with precise formulas, we specify algebraic generators and relations in these operads. We also give definitions of the operads involved in Section \[sec:diagrammatic\] in the main commutative diagram  used to derive the equivalence of $\Hycomm$ and $\BV/\Delta$. We use the notation $\circ_l$ for the operadic compositions in the $l$’th slot. I. e. for an operad $\P$ and a pair of finite sets $I,J$ the composition $\circ_{l}:\P(I\sqcup\{l\})\otimes\P(J)\rightarrow \P(I\sqcup J)$ is a substitution of operations from $\P(J)$ into the slot $l$ of operations from $\P(I\sqcup\{l\})$. The corresponding cocomposition map $\P^{\dual}(I\sqcup J)\rightarrow \P^{\dual}(I\sqcup\{l\})\otimes\P^{\dual}(J)$ for the dual cooperad $\P^{\dual}$ will be denoted by $\mu_l$ or just by $\mu$ if the precise index is clear from the context. There are two standard ways to think of elements of an operad/cooperad in terms of its (co)generators. The first way is in terms of tree monomials, represented by planar trees, and the second one is in terms of compositions/cocompositions of operations, presented by formulas with brackets. Our approach is somewhere in the middle: in most cases, we prefer (and strongly encourage the reader) to think of tree monomials, but we write the formulas required for definitions and proofs in the language of operations, since it makes things more compact. 
While using the language of operations/cooperations we always suppose that the (co)operation attached to the root vertex is written in the leftmost term. $\BV$ and framed little discs operad {#sec::BV::framed_little_discs} ------------------------------------ The spaces of configurations of disjoint small discs inside the unit disc form one of the best known topological operads. The boundary of the unit disc is considered as an output and the boundaries of the inner small discs are considered as inputs. This means that the composition rules are defined by gluing the boundary of the inner disc of the outgoing operation with the outer boundary of the incoming operation. Following May [@May] we use the name $E_d$ for this operad, where $d$ is the dimension of the disc. We restrict ourselves to the case $d=2$. It is also known that the operad $E_2$ is formal over $\mathbb{Q}$ (see e.g. [@Tamarkin_form; @Kontsevich_form]) and its homology operad coincides with the operad of Gerstenhaber algebras. Recall that the operad $\Gerst$ of Gerstenhaber algebras is a quadratic operad generated by two binary operations: the commutative associative multiplication and the Lie bracket of degree $-1$. The quadratic operadic relations consist of the associativity of the multiplication, the Jacobi identity for the bracket and the Leibniz identity for their composition: $$[a\cdot b, c] = \pm[a,c]\cdot b \pm a\cdot [b,c]$$ Moreover, the space of $n$-ary operations $\Gerst(n)$ forms a coalgebra, such that the composition maps are compatible with the comultiplications in these coalgebras. We will come back to this description of the Gerstenhaber operad in Section \[sec::grav\]. Let us mark a point on the boundary circle of each inner disc in a configuration from $E_2(n)$. This leads to a description of the space of $n$-ary operations of the operad of framed little discs, which we denote by $FE_2$. 
The composition rules in $FE_2$ are also defined by gluing the boundary of the inner disc of the outgoing operation with the outer boundary of the incoming operation, but now the marked point of the inner circle should be glued with the north pole of the outer circle. That is, one has to rotate the incoming configuration by the angle prescribed by the marked point in the inner circle of the outgoing configuration. This operad is also known to be formal ([@Severa; @Salvatore]) and its homology operad coincides with the operad of Batalin–Vilkovisky algebras (denoted by $\BV$ for short). The operad $\BV$ is generated by the binary commutative associative multiplication and a unary operation $\Delta$ of degree $-1$ such that $\Delta^2=0$ and $\Delta$ is a differential operator of second order with respect to the multiplication. The latter statement is equivalent to the following so-called $7$-term relation: $$\Delta(abc) - (\Delta(ab)c + \Delta(bc)a +\Delta(ca)b) + (\Delta(a)bc + \Delta(b)ca +\Delta(c)ab) = 0$$ We omit the precise signs that come from the Koszul sign rule in the ${\mathbb Z}$-graded setting. Note that the topological operad of framed little discs can be presented as a semi-direct product (or semi-direct composition) of the little discs operad $E_2$ and the group of rotations $S^1$. The topological definition of the semi-direct composition of a group and an operad is given in [@Salvatore_fE_d]. In our case, the group is $S^1$, and the algebraic counterpart is the semi-direct product of the Gerstenhaber operad with the free skew-commutative algebra $\kk[\Delta]$ generated by a single generator $\Delta$ of degree $-1$. 
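The relation between the second-order property of $\Delta$ and the Gerstenhaber structure can be made concrete by a standard check (signs below depend on conventions and are only indicative): the failure of $\Delta$ to be a derivation of the product defines a binary operation

```latex
\[
  [a,b] \;:=\; \pm\Bigl( \Delta(ab) \;-\; \Delta(a)\,b \;\mp\; a\,\Delta(b) \Bigr),
\]
```

and the $7$-term relation is precisely the statement that this $[\,\cdot\,,\cdot\,]$ is a graded derivation of the product in each argument, i.e. satisfies the Leibniz identity from the Gerstenhaber relations. In the semi-direct product description this bracket is exactly the operadic commutator of $\Delta$ with the multiplication.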
This leads to the following equality of operads: $$\BV = \Gerst \ltimes \kk[\Delta].$$ Here the semi-direct product $\Gerst \ltimes \kk[\Delta]$ means the operad generated by the binary commutative multiplication, the Lie bracket and the unary operation $\Delta$, subject to the relations for the multiplication and the bracket as in $\Gerst$, the relation $\Delta^2=0$ as in the skew-commutative algebra $\kk[\Delta]$, and the following commutation relations between $\Delta$ and the generators of $\Gerst$: $$\{ \Delta, \text{ multiplication}\} = \text{ Lie bracket} , \qquad \{ \Delta, \text{ Lie bracket}\} = 0$$ The braces denote the operadic commutator. In particular, the operadic commutator of a unary operation $\Delta$ and an $n$-ary operation $\alpha(\tt,\ldots,\tt)$ means the following expression with $n+1$ terms: $$\{\Delta, \alpha(\tt,\ldots,\tt)\}:= \Delta\circ\alpha(\tt,\ldots,\tt) - \sum_{i=1}^{n} \alpha(\tt,\ldots,\tt)\circ_i \Delta.$$ We will come back to the precise description of the spaces of $n$-ary operations $\Gerst(n)$ and $\BV(n)$ in Sections \[sec::grav\] and \[sec::BV=Gerst\*Delta\] respectively. Closed moduli spaces of zero genus {#sec::open_close} ---------------------------------- The compactified moduli spaces of genus zero curves form an operad. This operad is formal. Its homology is called $\Hycomm$ (the operad of hypercommutative algebras). The algebraic description of the operad $\Hycomm$ looks as follows. The operad $\Hycomm$ has one generator in each arity greater than or equal to $2$. The generator $\mm_k$ of arity $k$ is of degree $(4-2k)$ and is given by the fundamental cycle $\mm_k:=[\oM_{0,k+1}]$. 
The generators satisfy the following quadratic relations (here $a,b,c,x_1,\dots,x_n$, $n\ge 0$, are elements of a $\Hycomm$-algebra): $$\label{hycom_rel} \sum_{S_1\amalg S_2=\{1,\dots,n\}} \pm \mm_{|S_2|+2}(\mm_{|S_1|+2}(a,b,x_{S_1}),c,x_{S_2}) = \sum_{S_1\amalg S_2=\{1,\dots,n\}} \pm \mm_{|S_2|+2}(a,\mm_{|S_1|+2}(b,c,x_{S_1}),x_{S_2}).$$ Here, for a finite set $S=\{s_1,\dots,s_k\}$, $x_S$ stands for $x_{s_1},\dots,x_{s_k}$, and $\pm$ means the Koszul sign rule. Let us define a family of binary operations $m_x(\tt,\tt)$ on a $\Hycomm$-algebra $V$, parametrized by the same space $V$: $$\forall x\in V \text{ let } m_{x}(a,b):= \sum_{n\geq 0} \frac{1}{n!}\mm_{n+2}(a,b,x,\ldots,x)$$ Then Equation  is equivalent to the associativity of the multiplication $m_x(\tt,\tt)$ for all $x\in V$. This observation explains the relation between hypercommutative algebras and Frobenius manifolds. The first Chern class of the cotangent line at the $i$’th marked point on $\oM_{0,n+1}$ is usually denoted by $\psi_i$. Let $\mm_{n}^{d_0 d_1\ldots d_n}$ be the cycle corresponding to the evaluation of the product of $\psi$-classes of the corresponding degrees on the fundamental cycle of the space of curves: $$\mm_{n}^{d_0 d_1\ldots d_n} := \psi_0^{d_0}\psi_1^{d_1}\ldots\psi_n^{d_n}[\oM_{0,n+1}].$$ These classes satisfy the so-called *Topological Recursion Relations*, which are quadratic-linear relations in the operadic sense: $$\begin{aligned} & \mathrm{m}^{(d_0+1)d_1\cdots d_n} + \mathrm{m}^{d_0\cdots d_{i-1}(d_i+1) d_{i+1}\cdots d_n} = \sum_{\begin{smallmatrix} S_1\sqcup S_2 \sqcup \{0,i\} \\ = \{0,\dots,n\} \end{smallmatrix}} \mathrm{m}^{d_0d_{S_1}0} \circ_{|S_1|+1} \mathrm{m}^{0d_id_{S_2}} & & \forall 1\leq i \leq n ; \\ & \mathrm{m}^{(d_0+1)d_1\cdots d_n} = \sum_{\begin{smallmatrix} S_1\sqcup S_2 \sqcup \{0,i,j\} \\ = \{0,\dots,n\} \end{smallmatrix}} \mathrm{m}^{d_0d_{S_1}0} \circ_{|S_1|+1} \mathrm{m}^{0d_id_jd_{S_2}} & & \forall 1\leq i, j \leq n. 
\end{aligned}$$ Here we denote by $d_S$, $S=\{s_1,\dots,s_k\}$, the sequence $d_{s_1}\cdots d_{s_k}$. We will come back to the TRR equations in Section \[sec::TRR\]. For more details see [@Man]. Open moduli spaces of zero genus {#sec::open:moduli} -------------------------------- The shifted homology of the open moduli spaces of genus zero curves also forms a formal operad. The corresponding algebraic operad is called $\Grav$ (the operad of gravity algebras). It was studied by Getzler in [@Getzler_genus0]; in particular, he proved that $\Grav$ and $\Hycomm$ are Koszul dual to each other. An algebra over $\Grav$ is a chain complex with graded anti-symmetric products $$\label{eq:mmg-first} \mmg_n[x_1,\dots,x_n]\colon A^{\otimes n}\to A$$ of degree $2-n$ that satisfy the relations: $$\begin{aligned} \label{relgrav} & \sum_{1\le i<j\le k} \pm \mmg_{k+l-1}[\mmg_2[a_i,a_j],a_1,\dots,\widehat{a_i},\dots,\widehat{a_j},\dots,a_k, b_1,\dots,b_l] \\ \notag & = \begin{cases} \mmg_{l+1}[\mmg_k[a_1,\dots,a_k],b_1,\dots,b_l] , & l>0 , \\ 0 , & l=0, \end{cases} \end{aligned}$$ for all $k>2$, $l\ge0$, and $a_1,\dots,a_k,b_1,\dots,b_l\in A$. For example, in the case $k=3$ and $l=0$ we obtain the Jacobi relation for $\mmg_2[\cdot,\cdot]$. Once again, Getzler proved in [@Getzler_genus0] that $\Hycomm$ and $\Grav$ are Koszul dual operads. Moreover, for all $n\geq 2$ the generators $\mm_n\in\Hycomm(n)$ and $\mmg_n\in\Grav(n)$ are Koszul dual generators of these operads.[^1] In particular, the associativity relation for the commutative multiplication $\mm_2\in\Hycomm(2)$ is Koszul dual to the Jacobi relation for the Lie bracket $\mmg_2\in\Grav(2)$. Let us also mention another result due to Getzler which hints at the desired connection between $\Hycomm$ and $\BV$. The space of cotangent lines at the $i$-th marked point of curves from $\mathcal{M}_{0,n+1}$ forms a line bundle over the open moduli space $\mathcal{M}_{0,n+1}$. 
Consider the product of the corresponding $(n+1)$ principal $U(1)$-bundles over $\mathcal{M}_{0,n+1}$, where the factors are numbered by the marked points. ([@Getzler_grav]) \[stat::S1->BV->Grav\] The homology of the total space of the $(S^1)^{\times (n+1)}$-bundle over $\mathcal{M}_{0,n+1}$ (associated with the product of the cotangent lines at the marked points of a curve) coincides with the space of $n$-ary operations in the operad $\BV$. We give the algebraic counterpart of this statement in the next section. Gerstenhaber and gravity operads {#sec::grav} -------------------------------- Getzler observed that the $S^1$-equivariant homology of the Gerstenhaber operad is isomorphic to the gravity operad. This statement has a very clear geometric background, see [@Getzler_grav], since the Gerstenhaber operad is the homology of the little discs operad. We recall the algebraic counterpart of this isomorphism. It is easier to compute the cohomology rather than the homology of the space of little discs (this was done by Arnold in [@Arnold]). This way we obtain a description of the cooperad dual to the Gerstenhaber operad. The space of $n$-ary cooperations of the cooperad $\Gerst^{\dual}$ forms a so-called Orlik–Solomon algebra: $$\Gerst^{\dual}(n):=\frac{\kk\left[\left\{w_{ij}\right\}_{1\leq i,j\leq n,\ i\ne j}\right]}{ \left( w_{ij} - w_{ji}, w_{ij} w_{jk} + w_{jk} w_{ki} + w_{ki} w_{ij} \right) }$$ Here we mean that $\Gerst^{\dual}(n)$ is the quotient, modulo an ideal, of the free graded commutative algebra generated by the $w_{ij}$, $\deg\, w_{ij}=1$. The cooperad structure satisfies the Leibniz rule with respect to the product structure in the algebra $\Gerst^{\dual}(n)$, $n\geq 2$. Therefore, it is enough to define the cooperad structure $\mu:\Gerst^{\dual}(I\sqcup J) \rightarrow \Gerst^{\dual}(I\sqcup\{*\})\otimes\Gerst^{\dual}(J)$ on the generators $w_{ij}$. 
By definition, $$\mu(w_{ij}) = \begin{cases} w_{ij}\otimes 1 \text{, if } i,j\in I; \\ w_{i*}\otimes 1 \text{, if } i\in I, j\in J; \\ 1\otimes w_{ij}\text{, if } i,j\in J. \end{cases} \label{eq::coordinates_in_BV}$$ There is an action of the circle $S^1$ on the little discs operad via the rotation of the outer circle. The corresponding coaction of the generator $\Delta$ of the first cohomology of the circle $S^1$ on the space $\Gerst^{\dual}(n)$ is given by the following operator: $$\label{eq::Delta_act_gerst} \frac{\d}{\d{w}} := \sum_{1\leq i<j \leq n} \frac{\d}{\d w_{ij}}.$$ The action of the operator $\frac{\d}{\d{w}}$ on $\Gerst^{\dual}$ is dual to the action of the operator $\Delta$ on $\Gerst$. ([@Getzler_grav]) The action of the operator $\Delta$ is free on the Gerstenhaber operad $\Gerst$. The image of $\Delta$ coincides with its kernel and is isomorphic to the gravity operad. \[stat::Gerst\_Grav\] Let us define a homotopy model for the gravity operad. We use standard manipulations with equivariant homology. We consider the free polynomial algebra $\kk[\delta]$, $\delta$ is even, as the Koszul dual of the algebra $\kk[\Delta]$. By $\kk[\delta]\otimes\Gerst$ we denote a dg-operad with $(\kk[\delta]\otimes\Gerst)(n):= \kk[\delta]\otimes(\Gerst(n))$ for $n\geq 2$ and $(\kk[\delta]\otimes\Gerst)(1):= \Gerst(1) = \kk$. The composition is defined by $$(\delta^{a}\otimes \alpha )\circ (\delta^b\otimes \beta ) := \delta^{a+b} \alpha\circ\beta \qquad \text{ for } \alpha,\beta \in \Gerst(n).$$ The $\BV$-operator defines the differential $\delta\Delta\colon \delta^{a}\alpha\mapsto \delta^{a+1}\Delta(\alpha)$ on this operad. Let us rephrase Statement \[stat::Gerst\_Grav\] in the language of cooperads. We use the notation $u$ for the even variable linear dual to $\delta$ and $\kk[u]\otimes\Gerst^{\dual}(n)$ for the space linear dual to $\kk[\delta]\otimes\Gerst(n)$ for all $n\geq 2$. 
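In the smallest arity this statement can be traced explicitly (an illustrative computation under the conventions above). The space $\Gerst^{\dual}(2)=\kk\cdot 1\oplus\kk\, w_{12}$ is two-dimensional ($w_{12}^2=0$ since $w_{12}$ is odd), and the operator $\frac{\d}{\d w}$ acts by

```latex
\[
  \frac{\d}{\d w}\colon\quad w_{12} \longmapsto 1, \qquad 1 \longmapsto 0,
\]
```

so its image coincides with its kernel $\kk\cdot 1$; dually, $\Delta$ acts freely on $\Gerst(2)$, sending the multiplication to the Lie bracket and the bracket to $0$. The quotient of $\Gerst^{\dual}(2)$ by the image of $\frac{\d}{\d w}$ is one-dimensional, spanned by the class of $w_{12}$, which matches the single binary cogenerator $\mmg_2$ of the gravity cooperad.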
\[lem::gerst->grav\] The augmentation map of dg cooperads $$\varepsilon:(\kk[u]\otimes\Gerst^{\dual},\frac{\d}{\d{u}}\frac{\d}{\d{w}}) \twoheadrightarrow \left( \Grav^{\dual} , 0\right) \label{eq::augmentation:Gerst->Grav}$$ that maps $u\mapsto 0$ and $\Gerst^{\dual}\twoheadrightarrow \Gerst^{\dual}/(\operatorname{Im}\frac{\d}{\d w}) = \Grav^{\dual}$ is a quasi-isomorphism. In particular, the map $\varepsilon$ maps any basic element $w_{ij}\in\Gerst^{\dual}(n)$ to the unique $n$-ary cogenerator $\mmg_n$ of the gravity cooperad. The precise homological grading is discussed in the next section. Bar complexes {#sec::Bar_Gerst} ------------- \[sec::bar\_complex\] In this section we recall the general definition of the cobar complex, the precise formulation of the Koszul self-duality of the operad $\Gerst$, and the Koszul resolution of $\Hycomm$ via a cobar complex of $\Grav$. Consider a cooperad $\P^{\dual}$ with a cocomposition $\mu: \P^{\dual} \rightarrow \P^{\dual}\circ \P^{\dual}$. Let $\P^{\dual}_{+}$ be the augmentation ideal. In all our examples $\P^{\dual}(1)=\kk$ and the augmentation ideal $\P^{\dual}_{+}$ is equal to $\oplus_{n\geq 2}\P^{\dual}(n)$. The cobar complex $\Bar(\P^{\dual})$ is the free dg-operad generated by the shifted space $\P^{\dual}_{+}[-1]$. The cocomposition $\mu$ defines a differential of degree $1$ on the generators. Using the Leibniz rule we extend it to the whole cobar complex $\Bar(\P^{\dual})$. In [@Getzler_Jones] it is proved that the operad $\Gerst$ is Koszul self-dual up to an appropriate even shift of homological degree. A purely algebraic proof of this fact was first given in [@Markl_distr_law]. Let us specify the desired homological shift. Note that Getzler defined two different gradings on $\Grav$ in [@Getzler_genus0; @Getzler_grav]. They differ by the even shift $s^2$ on the Gerstenhaber operad that we define now. By $s^{2}\Gerst^{\dual}$ we denote the quadratic cooperad whose $n$-th space is given by $s^2\Gerst^{\dual}(n) = \Gerst^{\dual}(n)[2n-2]$. 
In other words, we can define $s^2\Gerst^{\dual}$ as a quotient of a free cooperad generated by binary operations modulo an ideal exactly in the same way as $\Gerst^{\dual}$, but we shift the homological degree of the binary generators by $2$. The Koszul self-duality means that the natural projection of dg-operads $$\label{eq::BGerst->Gerst} \pi: \left(\Bar(s^{2}\Gerst^{\dual}),\mu\right) \twoheadrightarrow \left(\Gerst,0\right)$$ is a quasi-isomorphism. Here the map $\pi$ interchanges the multiplication and the bracket. In particular, under $\pi$ $$\begin{aligned} \label{eq::pi::BGerst->Gerst} w_{12} \in \Gerst^{\dual}(2) & \mapsto \text{multiplication} \\ \notag 1\in \Gerst^{\dual}(2) & \mapsto \text{Lie bracket} \\ \notag \Gerst^{\dual}(k) & \to 0 \text{ for } k >2.\end{aligned}$$ In order to give a similar construction for the resolution of the operad $\Hycomm$, we consider the cobar complex of the equivariant model of the gravity operad: $$\Bar(\kk[u]\otimes s^2\Gerst^{\dual}) \stackrel{\varepsilon}{\twoheadrightarrow} \Bar(\Grav) \stackrel{\kappa}{\twoheadrightarrow} \Hycomm. \label{eq::res_gerst_to_hycom}$$ The differential $d$ on $\Bar(\kk[u]\otimes s^2\Gerst^{\dual})$ is a sum of two parts. The first summand is equal to the inner differential $\frac{\d}{\d{u}}\frac{\d}{\d{w}}$. The second summand is given by the cocomposition $\mu$ defined by Equation . 
For example, on a generator $\frac{u^k}{k!} f(w_{ij})$, where $f$ is a monomial in $w_{ij}$, $1\leq i\neq j \leq n$, the differential is given by $$d\left( \frac{u^k}{k!} f(w_{ij}) \right) = \frac{u^{k-1}}{(k-1)!} \sum_{i,j} \frac{\d f}{\d w_{ij}} + \sum_{\begin{smallmatrix}I\sqcup J = [n], |J|\geq 2, |I|\geq 1,\\k_1+k_2=k\end{smallmatrix}} (-1)^{\deg_{w}{f^{I}}} \frac{u^{k_1}}{k_1!}f^{I}\otimes \frac{u^{k_2}}{k_2!}f^{J}$$ Since $f\in \Gerst^\dual(n)$ is a monomial in $w_{ij}$, for each decomposition $I\sqcup J = [n]$ we have a uniquely defined pair of monomials $f^{I}\in \Gerst^\dual(|I|+1)$ and $f^{J}\in \Gerst^\dual(|J|)$. For the Koszul sign rule in future computations it is important to recall once again the degree of a particular generator of the cobar complex: $$\deg (\frac{u^k}{k!} f(w_{ij})) = 2-2n +2k +\deg_{w}f + 1= 3-2(n-k) +\deg_{w} f$$ Applying the homotopy quotient and the free product to the gravity operad --------------------------------------------------------------------- Let us apply the composition of functors defined in Section \[sec::homotopy\_quotients\_def\] to the free dg-model of the operad of hypercommutative algebras discussed in Equation . That is, in this section we describe the dg-operad which is the homotopy quotient by $\Delta$ of the free product with $\kk[\Delta]$ of the dg-operad $\Bar(\kk[u]\otimes s^2\Gerst^{\dual})$. Consider first the image of the free product functor, $\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta]$. 
Note that $\kk[u]$ comes from the cohomology ring of $BS^1$ and, therefore, it is natural to define the differential which intertwines the action of $\Delta$ and $u$: $$\Delta^{ad}\frac{\d}{\d u}\colon \gamma \mapsto \left\{\Delta,\frac{\d \gamma}{\d u}\right\} = \Delta \circ \frac{\d \gamma}{\d u} - \sum_{i=1}^{n} \pm \frac{\d \gamma }{\d u}\circ_{i} \Delta.$$ That is, the operator $\frac{\d}{\d u}$ acts on an $n$-ary operation $\gamma$, and $\Delta^{ad}\frac{\d}{\d u}$ acts as the commutator of $\frac{\d \gamma}{\d u}$ and $\Delta$. Note that the operators $\Delta^{ad}$ and $\frac{\d}{\d u}$ commute. The following corollary follows directly from the proof of Proposition \[lem:adjunction\]: \[lem::delta/delta\] The natural projection that takes $\Delta,\phi_1,\phi_2,\ldots$ to $0$ is a quasi-isomorphism of dg-operads $$\label{eq::j_def} \left( \frac{ \Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta]}{\Delta}, \frac{\d}{\d{u}} \frac{\d}{\d{w}} + \mu + \Delta^{ad}\frac{\d}{\d u} + \Delta\frac{\d}{\d\phi} \right) \longrightarrow \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual}), \frac{\d}{\d{u}} \frac{\d}{\d{w}} + \mu \right).$$ The operad $\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta]$ will be referred to as the *equivariant cobar complex*. This operad is spanned by trees whose vertices are marked by elements of the cooperad $\kk[u]\otimes s^2\Gerst^{\dual}$ and some of whose edges are marked by $\Delta$. BV and semi-direct composition of operads {#sec::BV=Gerst*Delta} ----------------------------------------- In this section we recall the presentation of the $\BV$ operad in terms of the semi-direct composition. The topological definition of the semi-direct composition of a group and an operad is given in [@Salvatore_fE_d]. In our case, the group is $S^1$, and the algebraic counterpart is the semi-direct composition of the Gerstenhaber operad with the free algebra $\kk[\Delta]$ generated by a single generator $\Delta$ of degree $-1$. 
As we have already mentioned in Section \[sec::grav\], the circle acts by inner rotations of the disc and the corresponding coaction is given by the operator $\frac{\d}{\d w}$ defined by Equation [(\[eq::Delta\_act\_gerst\])]{}. We have already mentioned in Section \[sec::BV::framed\_little\_discs\] that the operad $\Gerst\ltimes\kk[\Delta]$ coincides with $\BV$. Let us describe $\Gerst\ltimes\kk[\Delta]$ in a bit more detail. The space of $n$-ary operations of $\Gerst\ltimes\kk[\Delta]$ is equal to $\Gerst(n)\otimes\kk[\Delta_1,\ldots,\Delta_n]$. In particular, $\Gerst\ltimes\kk[\Delta](1)=\kk[\Delta]$. By definition, for any $\gamma\in\Gerst(n)$ we have: $$\begin{aligned} \gamma\circ_i \Delta & := \gamma\otimes \Delta_i; \\ \notag \Delta\circ \gamma & := \sum_{i=1}^{n} \gamma\circ_i\Delta + \Delta(\gamma),\end{aligned}$$ where in the last summand we use the action of $\Delta$ on $\Gerst$. These two formulas allow us to extend unambiguously the operadic product on $\Gerst$ to an operadic product on $\Gerst\ltimes\kk[\Delta]$. Moreover, the projection $\pi:\Bar(s^2\Gerst^{\dual})\twoheadrightarrow \Gerst$ from Equation  extends to a quasi-isomorphism of semi-direct compositions: $$\label{eq::def_BGerst->BV} \pi: (\Bar(s^2\Gerst^{\dual})\ltimes \kk[\Delta], \mu) \twoheadrightarrow (\Gerst\ltimes \kk[\Delta],0) = (\BV,0).$$ \[lem::uDelta->semiDelta\] The natural projection $$\epsilon\colon \left( \Bar (\kk[u]\otimes s^2\Gerst^\dual)\star\kk[\Delta], \frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \mu + \Delta^{ad}\frac{\d}{\d u}\right) \twoheadrightarrow \left(\Bar(s^2\Gerst^\dual)\ltimes \kk[\Delta], \mu\right)$$ that sends $u\mapsto 0$, $\Delta\mapsto \Delta$, and $\Gerst\to \Gerst$, is a quasi-isomorphism of dg-operads. First, we check that $\epsilon$ is a morphism of dg-operads. Indeed, a direct computation shows that $\epsilon$ is compatible with the differentials. 
Since the cobar complexes are free operads, we immediately get the compatibility with the operadic structures. Then we consider the filtration by the number of internal edges in the cobar complexes, both in the source and in the target of $\epsilon$. The associated graded differential in the target is equal to $0$, and the associated graded differential in the source dg-operad is equal to $\frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \Delta^{ad}\frac{\d}{\d u}$. At that point it is possible to choose a filtration (or rather a sequence of filtrations) in the source dg-operad such that the associated graded differential simplifies further and is equal to $\Delta^{out}\frac{\d}{\d u}$. Here $\Delta^{out}$ is the operator defined by $\Delta^{out}(\gamma)=\Delta\circ \gamma$; that is, we create a new $\Delta$ only at the output of a vertex. The cohomology of the complex $(\kk[\Delta]\otimes\kk[u],\Delta\frac{\d}{\d {u}})$ is equal to $\kk$. Therefore, the cohomology with respect to the differential $\Delta^{out}\frac{\d}{\d u}$ is generated by the graphs whose vertices are decorated by $u^0$ and which have no $\Delta$’s on the outputs of the vertices. This means that the whole graph is allowed to have $\Delta$’s only at the global inputs of the graph. Such graphs span, by definition, the semi-direct composition $\Bar(s^2\Gerst^{\dual})\ltimes \kk[\Delta]$. Main diagram of quasi-isomorphisms {#sec::main_diagram} ================================== In this section we present the full diagram of quasi-isomorphisms that connects $\Hycomm$ and $\BV/\Delta$. We show how the $\psi$-classes appear in the picture and how one can obtain an algebraic model of the Kimura–Stasheff–Voronov operad. In the forthcoming Section \[sec:diagrammatic\] we are going to move the generators $\mm_k$ of $\Hycomm$ through this diagram, and in this way we obtain a quasi-isomorphism $\theta:\Hycomm\to\BV/\Delta$. 
\[thm::diag:Hycom->BV\] We have the following sequence of quasi-isomorphisms: $$\xymatrix{ \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual}),\right. \left.\frac{\d}{\d{u}}\frac{\d}{\d w} + \mu\right) \ar@{->}[d]^{\varepsilon} & & *{\begin{array}{l} \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta]/\Delta, \right.\\ {\ } \quad \left.\frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \mu + \Delta^{ad}\frac{\d}{\d u} + \Delta\frac{\d}{\d\phi}\right) \end{array}} \ar@{->}[ll]_-{j} \ar@{->}[d]^{\epsilon} \\ \left(\Bar(\Grav^{\dual}),\mu^{\Grav}\right) \ar@{->}[d]^{\kappa} & & \left( \frac{\Bar(s^2 \Gerst^{\dual})\ltimes \kk[\Delta]}{\Delta}, \mu + \Delta\frac{\d}{\d\phi} \right) \ar@{->}[d]^{\pi} \\ (\Hycomm,0) \ar@{..>}[rr]^{\theta} & & \left(\BV/\Delta, \Delta\frac{\d}{\d\phi} \right) } \label{eq::diag::Hycom_BV}$$ In Section \[sec::operad::definitions\] we give a detailed description of all the morphisms involved in Diagram  and prove that they are quasi-isomorphisms, except for $\theta$. Indeed, - The morphism $\kappa$ is a quasi-isomorphism because the operads $\Hycomm$ and $\Grav$ are Koszul dual to each other, see Section \[sec::open:moduli\]. - The equivariant model of the operad $\Grav$ is discussed in Section \[sec::grav\]. We apply the cobar functor to the quasi-isomorphism $\varepsilon:\left(\kk[u]\otimes \Gerst^{\dual},\frac{\d}{\d u}\frac{\d}{\d w}\right)\rightarrow \Grav^{\dual}$ described in Lemma \[lem::gerst->grav\]. - The morphism $j$ is a special case of the composition of the free product functor and the homotopy quotient functor discussed in Section \[sec::homotopy\_quotients\_def\], see Corollary \[lem::delta/delta\]. - The existence of $\epsilon$ is discussed in Section \[sec::BV=Gerst\*Delta\]. The quasi-isomorphism property of $\epsilon$ is proved in Lemma \[lem::uDelta->semiDelta\] via a sequence of filtrations. - The map $\pi$ is obtained as a homotopy quotient of the quasi-isomorphism given by Equation . 
The latter one is obtained from the standard Koszul resolution of $\Gerst$ (see Equations ,). - Section \[sec:diagrammatic\] contains a careful description of $\theta$, together with the proof that it is a quasi-isomorphism and that the diagram commutes. We take the generators of $\Hycomm$ and move the corresponding cocycles through the diagram above in a clockwise direction. We will show that the resulting map of generators from $\Hycomm$ to $\BV/\Delta$ defines a morphism of operads and does not depend on the particular choices of cocycles made in between. In particular, the image of the map $\theta$ coincides with the intersection of the kernel of the differential $\Delta\frac{\d}{\d\phi}$ with the suboperad of $\BV/\Delta$ generated by the multiplication and the $\phi_i$’s. Recall from Section \[sec::chern\] that any given $S^1$-operad $\Q$ and a pair of natural numbers $i<n$ define an $S^1$-fibration over $\Q/\Delta(n)$ associated with the $S^1$-rotations in the $i$-th slot. We will apply this construction to the operad $\BV$ in order to obtain another description of the line bundles over the moduli space $\oM_{0,n+1}$ formed by the cotangent lines at the marked points. Recall that the $\psi$-classes are the first Chern classes of these line bundles. Theorem \[thm::psi-classes\] below explains the algebraic counterpart of the action of the $\psi$-classes in Diagram . \[thm::psi-classes\] The $S^1$-fibration over the space of $n$-ary operations of the homotopy quotient by $S^1$ of the framed little discs operad, associated to the rotations in the $i$’th slot, coincides with the $S^1$-bundle over $\oM_{0,n+1}$ coming from the line bundle of cotangent lines at the $i$-th marked point. 
The algebraic models of the evaluation of the first Chern class of the $S^1$-bundles under consideration are underlined in the following refinement of commutative Diagram : $$\label{eq::diag::psi_clas} \xymatrix{ \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual}), \frac{\d}{\d{u}}\frac{\d}{\d w} + \mu\right) \ar@{->}[d]^{\kappa\circ\varepsilon} & & *{\begin{array}{l} \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta]/\Delta, \right.\\ {\ } \quad \left.\frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \mu + \Delta^{ad}\frac{\d}{\d u} + \Delta\frac{\d}{\d\phi}\right) \end{array}} \ar@{->}[ll]_-{j} \ar@{->}[d]^{\pi\circ\epsilon} \\ (\Hycomm,0) \ar@{..>}[rr]^{\theta} & & \left(\BV/\Delta, \Delta\frac{\d}{\d\phi} \right) {\ar_{{\circ_i\frac{\d}{\d\phi_1}}}@/_1pc/(70,-23)*{};(78,-23)*{}} {\ar_{{\psi_i}}@/_1pc/(-4,-23)*{};(4,-23)*{}} {\ar^{{\circ_i\frac{\d}{\d u} + \circ_i\frac{\d}{\d \phi_1}}}@/^1pc/(70,8)*{};(78,8)*{}} {\ar^{{\circ_i\frac{\d}{\d u}}}@/^1pc/(-4,5)*{};(4,5)*{}} }$$ Each operator drawn as a loop near the appropriate complex defines an operator which commutes with the differential in this complex, and the vertical and horizontal arrows map these derivations to one another. For example, the derivation $\circ_i\frac{\d}{\d u}$ is the differentiation with respect to the $u$-variable in the vertex attached to the $i$-th slot (input/output) of the element in the cobar complex $\Bar(\kk[u]\otimes s^2\Gerst^{\dual})$, and the differentiation $\circ_i\frac{\d}{\d \phi_1}$ means the noncommutative differentiation with respect to $\phi_1$ in the algebra $\kk\langle\phi_1,\phi_2,\ldots\rangle$ which is also attached to the $i$-th slot. We omit the detailed proof of this theorem because it repeats the one of Theorem \[thm::diag:Hycom->BV\] and is based on the results of Getzler mentioned in Statement \[stat::S1->BV->Grav\]. It is a direct check that the diagram commutes everywhere except at the leftmost arrow. 
From Proposition \[prop::psi\_algebraic\] we know that the corresponding derivations drawn in the loops represent the evaluation map with the first Chern class on the homology level. Statement \[stat::S1->BV->Grav\] establishes the coincidence of the corresponding bundles and Chern classes. Recall that for any operad $\Q$ with a chosen $S^1$-action we construct a functorial quasi-isomorphism projection $\eta_{\Q}: \left(\left(\Q /\Delta\right) \star \kk[\Delta], (\Delta-\Delta_{\Q})\frac{\d}{\d \phi}\right) \rightarrow \Q$ (compare with Equation ). We want to apply the functor $\eta_{\Q}\circ (\tt\star\kk[\Delta])$ to the main Diagram . This operation is well defined because all operads involved in Diagram  are quasi-isomorphic to the image of the functor of homotopy quotient by $\Delta$. Moreover, the composition of functors $\eta_{\Q}\circ (\tt\star\kk[\Delta])$ applied to the second column of Diagram  just removes the homotopy quotient. On the other hand, we show how this functor affects the differential when we apply it to the left column of Diagram . Indeed, we have the following dg-model for the $\BV$-operad (the image under $\eta_{\Q}\circ (\tt\star\kk[\Delta])$ of the top-left operad from Diagram ): $$\left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta], \frac{\d}{\d{u}}\frac{\d}{\d w} + \mu + \Delta^{ad}\frac{\d}{\d u}\right).$$ Theorem \[thm::psi-classes\] says that the differential at the bottom of the column should replace the operator $\frac{\d}{\d u}$ by the evaluation of the corresponding $\psi$-class. That is, 
the image of the bottom complex is $(\Hycomm\star\kk[\Delta], \Delta\psi)$, where the differential “$\Delta\psi$” is defined on the generators by the following formula: $$(\Delta\psi) \cdot \mm_n = \sum_{i=0}^{n} (\psi_i \mm_n)\circ_i \Delta - \sum_{S_1\sqcup S_2= \{0,\ldots,n\}} \mm_{|S_1|+1} \circ_{*} \Delta\circ_{*} \mm_{1+|S_2|}$$ The formulas take the same form when one uses the $\psi$-class description of the $\Hycomm$-operad: $$\begin{aligned} (\Delta\psi) \cdot \psi_0^{d_0}\ldots \psi_{i}^{d_i}\ldots\psi_n^{d_n} [\oM_{0,n+1}] = & \sum_{i=0}^{n} \psi_i \prod_{s=0}^{n} \psi_s^{d_s} [\oM_{0,n+1}]\circ_i\Delta \\ & - \sum_{S_1\sqcup S_2= \{0,\ldots,n\}} \prod_{s\in S_1} \psi_{s}^{d_s} [\oM_{0,|S_1|+1}] \otimes \Delta \otimes \prod_{s\in S_2} \psi_{s}^{d_s} [\oM_{0,|S_2|+1}] \end{aligned}$$ We finally end up with the following corollary, which seems quite useful for describing the Quillen homology and a minimal resolution of the $\BV$-operad: There exists a commutative diagram of quasi-isomorphisms of operads: $$\xymatrix{ *{\begin{array}{l} \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta],\right. \\ {\ } \quad \left.\frac{\d}{\d{u}}\frac{\d}{\d w} + \mu + \Delta^{ad}\frac{\d}{\d u}\right) \end{array}} \ar@{->}[d]^{\kappa\circ \varepsilon} & & \left(\Bar(\kk[u]\oplus \kk[u]\otimes s^2\Gerst^{\dual}), \frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \mu \right) \ar@{->}[ll]_-{j} \ar@{->}[d]^{\pi\circ\epsilon} \\ (\Hycomm\star\kk[\Delta],\Delta\psi) \ar@{..>}[rr]^{\theta} & & (\BV,0) }$$ Note that the operad $(\Hycomm\star \kk[\Delta], \Delta\psi)$ is an algebraic model of the Kimura–Stasheff–Voronov operad (see e.g. [@Kimura] for details). Moreover, the map $\theta$ becomes the obvious projection that sends the operation $\mm_2\in\Hycomm(2)$ to the multiplication in $\BV$, $\Delta$ to $\Delta$, and all other generators $\mm_k$, $k\geq 3$, of the operad $\Hycomm$ to $0$. 
Diagram chase {#sec:diagrammatic} ============= This technical section gives a precise description of the inverse maps that appear in Diagram . The aim is to obtain explicit formulas for the cocycles in this diagram. We move our cocycles through Diagram  step by step in the clockwise direction, starting with the operad $\Hycomm$. The inverse of $\kappa$ ----------------------- The generators of the cohomology of $\left(\Bar(\Grav^{\dual}),\mu^{\Grav}\right)$ that project under $\kappa$ to the cocycles $\mm_i$, $i=2,3,\dots$, are the elements $\mmg_i$ described in Section \[sec::open:moduli\], see Equation . The inverse of $\varepsilon$ {#sec::inv::epsilon} ---------------------------- The complex $\left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual}), \frac{\d}{\d{u}}\frac{\d}{\d w} + \mu\right)$ has two differentials. The quasi-isomorphism $\varepsilon$ is the projection to the cohomology with respect to the differential $\frac{\d}{\d{u}}\frac{\d}{\d w}$. Let us describe an inductive procedure for constructing an inverse map to $\varepsilon$. We will show how one can increase the number of inputs in order to write down a sequence of representing cocycles. The construction is not symmetric in the inputs; each cocycle will depend on the ordering of the inputs, but different orderings give homologous cocycles. The map that increases the number of inputs is defined as a linear combination of auxiliary maps that we introduce now. Consider the natural embedding of the Orlik-Solomon algebras: $$\begin{aligned} \iota_{I,n}\colon & \kk[u]\otimes \Gerst^{\dual}(I) \rightarrow \kk[u]\otimes \Gerst^{\dual}(I\sqcup\{n\}); \\ \notag \iota_{I,n}\colon & w_{i j} \mapsto w_{i j}, \qquad \forall\ i,j\in I; \\ \notag \iota_{I,n}\colon & u \mapsto u. \end{aligned}$$ The meaning of this formula is the following: we simply increase the number of inputs, replacing the set of inputs $I$ by the set $I\sqcup \{n\}$.
We extend the map $\iota_{I,n}$ to a derivation of the bar-complex $\Bar(\kk[u]\otimes s^2\Gerst^{\dual})$. It is not well defined on the operations of arity $\geq n$, because in that case it may happen that $n\in I$. We therefore restrict the resulting map to the operations of arity $n-1$, and we denote this restriction by $\iota_{n}\colon \Bar(\kk[u]\otimes s^2\Gerst^{\dual})(n-1)\to \Bar(\kk[u]\otimes s^2\Gerst^{\dual})(n)$. Now we define a collection of derivations $\varsigma_{s n}$, $s=0,\ldots,n-1$, of the bar-complex $\Bar(\kk[u]\otimes s^2\Gerst^{\dual})$. Again, we need this definition only in arity $n-1$; it does not work in arity $\geq n$. The map $\varsigma_{s n}$ adds the input $n$ to the set of inputs in the same sense as $\iota_n$. Since $\varsigma_{s n}$ is a derivation, it is enough to describe what happens when we apply it to a corolla $\gamma$. It produces a tree with one internal edge and two internal vertices. One vertex coincides with the corolla $\gamma$, and the remaining vertex corresponds to a binary operation, that is, it has two inputs and one output. There are two cases, $s=1,\dots,n-1$ and $s=0$. For $s=1,\dots,n-1$ we have the map: $$\begin{aligned} \varsigma_{s n}\colon & \kk[u]\otimes\Gerst^{\dual}(I) \rightarrow \left( \kk[u]\otimes\Gerst^{\dual}(I\sqcup\{*\}\setminus \{s\})\right) \otimes \left( \kk[u]\otimes\Gerst^{\dual}(\{s,n\})\right); \\ \notag & \frac{u^{k}}{k!} f(w_{ij}) \mapsto \sum_{k_1+k_2=k} \frac{u^{k_1}}{k_1!} f(w_{ij})\otimes \frac{u^{k_2+1}}{(k_2+1)!} w_{s n}.\end{aligned}$$ Note that in the first factor on the right-hand side we identify $w_{is}$ and $w_{i*}$, as prescribed by the cocomposition rules defined in Equation .
For $s=0$ we have: $$\begin{aligned} \varsigma_{0 n}\colon & \kk[u]\otimes\Gerst^{\dual}(I) \rightarrow \left( \kk[u]\otimes\Gerst^{\dual}(\{*,n\})\right)\otimes \left( \kk[u]\otimes\Gerst^{\dual}(I)\right); \\ \notag & \frac{u^{k}}{k!} f(w_{ij}) \mapsto - \sum_{k_1+k_2=k} \frac{u^{k_2+1}}{(k_2+1)!} w_{* n} \otimes \frac{u^{k_1}}{k_1!} f(w_{ij}).\end{aligned}$$ \[lem::homotopy:Gerst->grav\] The map $\zeta_n:= \iota_n + \sum_{s=0}^{n-1} \varsigma_{s n}$ is a chain map of homological degree $(-2)$ between the subcomplexes spanned by operations of arity $(n-1)$ and $n$: $$\zeta_n\colon \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual})(n-1),\frac{\d}{\d{u}}\frac{\d}{\d{w}} + \mu\right) \rightarrow \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual})(n),\frac{\d}{\d{u}}\frac{\d}{\d{w}} + \mu\right)[-2]$$ The only thing that we have to check is that $\zeta_n$ commutes with the differential. Since $\iota_n$ and $\varsigma_{s n}$, $s=0,\dots,n-1$, as well as $\frac{\d}{\d{u}}\frac{\d}{\d{w}}$ and $\mu$ are all derivations of the cobar complex, it is enough to check the compatibility on the generators. First, observe that $[\iota_n,\frac{\d}{\d{u}}\frac{\d}{\d{w}}]=0$, because neither operator interacts with the $n$-th input.
Then we compute the image of the commutator $[\iota_n,\mu]$ applied to the monomial $\frac{u^{k}}{k!} f(w_{ij})$, where the indices $i,j$ belong to a given set $K$: $$\begin{aligned} & (\iota_n\mu - \mu\iota_n) \left( \frac{u^{k}}{k!} f(w_{ij})\right) = \iota_n\left(\sum_{\begin{smallmatrix}I\sqcup J = K,\\ |J|\geq 2, |I|\geq 1,\\k_1+k_2=k\end{smallmatrix}} (-1)^{\deg_{w}{f^{I}}} \frac{u^{k_1}}{k_1!}f^{I}\otimes \frac{u^{k_2}}{k_2!}f^{J} \right) \\ \notag & - \left( \sum_{\begin{smallmatrix}(I\sqcup\{n\})\sqcup J = K\sqcup\{n\},\\ |J|\geq 2, |I|+1\geq 1,\\k_1+k_2=k\end{smallmatrix}} (-1)^{\deg_{w}{f^{I}}} \frac{u^{k_1}}{k_1!}f^{I}\otimes \frac{u^{k_2}}{k_2!}f^{J} + \sum_{\begin{smallmatrix}I\sqcup(J\sqcup\{n\}) = K\sqcup\{n\},\\ |J|+1\geq 2, |I|\geq 1,\\k_1+k_2=k\end{smallmatrix}} (-1)^{\deg_{w}{f^{I}}} \frac{u^{k_1}}{k_1!}f^{I}\otimes \frac{u^{k_2}}{k_2!}f^{J} \right).\end{aligned}$$ Since $\iota_n$ increases the number of inputs in the operations but does not change the monomial, the only summands that are not canceled in the difference above are the ones with $|J|=1$ or $|I|=0$. Therefore, $$[\iota_n,\mu] \cdot \left( \frac{u^{k}}{k!} f(w_{ij}) \right) = - \sum_{k_1+k_2=k} \left(\sum_{s\in K} (-1)^{\deg_{w}{f}} \frac{u^{k_1}}{k_1!}f^{*}\otimes \frac{u^{k_2}}{k_2!} 1^{sn} + \frac{u^{k_1}}{k_1!}1^{*n}\otimes \frac{u^{k_2}}{k_2!}f \right)$$ The monomial $f^{*}$ is obtained from $f$ by replacing the index $s$ by an additional index $*$ that appears in the cocomposition. Observe that $[\varsigma_{s n},\mu]=0$, $s=0,\dots,n-1$, since $\mu$ vanishes on binary operations.
Meanwhile, for $s=1,\dots,n-1$ we have: $$\begin{aligned} & \left[\varsigma_{s n},\frac{\d}{\d{u}}\frac{\d}{\d{w}}\right] \left( \frac{u^{k}}{k!} f(w_{ij})\right) \\ \notag & = \sum_{k_1+k_2=k-1} \frac{u^{k_1}}{k_1!} \frac{\d f}{\d w} \otimes \frac{u^{k_2+1}}{(k_2+1)!} w_{sn} -\frac{\d}{\d u}\frac{\d}{\d w} \left(\sum_{k_1+k_2 = k} \frac{u^{k_1}}{k_1!} f \otimes \frac{u^{k_2+1}}{(k_2+1)!} w_{sn}\right) \\ \notag & = \sum_{k_1+k_2=k-1} \frac{u^{k_1}}{k_1!} \frac{\d f}{\d w} \otimes \frac{u^{k_2+1}}{(k_2+1)!} w_{sn} -\sum_{k_1+k_2 =k}\left( \frac{u^{k_1-1}}{(k_1-1)!} \frac{\d f}{\d w} \otimes \frac{u^{k_2}}{k_2!} w_{sn} +(-1)^{\deg_{w}f -1} \frac{u^{k_1}}{k_1!} f \otimes \frac{u^{k_2}}{k_2!} 1^{sn}\right) \\ \notag & = (-1)^{\deg_{w}f} \sum_{k_1+k_2=k} \frac{u^{k_1}}{k_1!} f \otimes \frac{u^{k_2}}{k_2!} 1^{sn}.\end{aligned}$$ Here the sign $(-1)^{\deg_{w}f -1}$ comes from the Koszul sign rule. Similarly, for $s=0$ we have: $$\left[\varsigma_{0 n},\frac{\d}{\d{u}}\frac{\d}{\d{w}}\right] \left( \frac{u^{k}}{k!} f(w_{ij})\right) = \sum_{k_1+k_2=k} \frac{u^{k_1}}{k_1!} 1^{* n} \otimes \frac{u^{k_2}}{k_2!} f.$$ Finally, we see the cancellation: $$\begin{aligned} & \left[\zeta_n, \frac{\d}{\d{u}}\frac{\d}{\d{w}} +\mu\right] \left( \frac{u^{k}}{k!} f(w_{ij})\right) \\ \notag & = - \sum_{k_1+k_2=k} \left(\sum_{s\in K} (-1)^{\deg_{w}{f}} \frac{u^{k_1}}{k_1!}f\otimes \frac{u^{k_2}}{k_2!} 1^{sn} + \frac{u^{k_1}}{k_1!}1^{*n}\otimes \frac{u^{k_2}}{k_2!}f \right) \\ \notag & \phantom{ = }\ + \sum_{k_1+k_2=k} \frac{u^{k_1}}{k_1!} 1^{* n} \otimes \frac{u^{k_2}}{k_2!} f + \sum_{s=1}^{n-1} \sum_{k_1+k_2=k} (-1)^{\deg_{w}f} \frac{u^{k_1}}{k_1!} f \otimes \frac{u^{k_2}}{k_2!} 1^{sn} \\ \notag & = 0.\end{aligned}$$ We define a sequence of elements $\nu_n\in \Bar(\kk[u]\otimes s^2\Gerst^{\dual})(n)$, $n=2,3,\dots$. We set $\nu_2=w_{12}$ and define $\nu_{i+1}:=\zeta_{i+1}(\nu_i)$, $i=2,3,\dots$.
Lemma \[lem::homotopy:Gerst->grav\] implies the following statement. The elements $\nu_n$, $n=2,3,\dots$, are cocycles that project to the generators of the hypercommutative operad. That is, for all $n\geq 2$ we have: $$\left(\frac{\d}{\d{u}}\frac{\d}{\d{w}} +\mu\right)\nu_n=0 \quad \mbox{ and }\quad \kappa(\varepsilon(\nu_n))=\mm_n.$$ \[rem::S\_n::action::cocycles\] Any permutation $\sigma$ of the inputs provides another choice of a cocycle, given by $$\zeta_{\sigma(n)}( \zeta_{\sigma({n-1})}(\ldots(\zeta_{\sigma(3)} (w_{\sigma(1)\sigma(2)}))\ldots)).$$ It is homologous to $\nu_n$ for any $\sigma\in S_n$. ### The topological recursion relation {#sec::TRR} In this section we show how the formulas for $\nu_n$, $n=2,3,\dots$, imply the topological recursion relations. \[lem::TRR\] The following two cocycles are homologous: $$\nu_{n} \circ_1 \frac{\partial}{\partial u} \quad \mbox{ and } \quad \sum_{S_1\sqcup S_2 = \{3,\ldots,n\}} \nu_{S_1\sqcup \{2,*\}} \otimes \nu_{S_2 \sqcup \{1\}}.$$ Similarly, the cocycle $\frac{\partial}{\partial u}\circ_0 \nu_{n}$ is homologous to the sum $\sum_{S_1\sqcup S_2 = \{3,\ldots,n\}} \nu_{S_1\sqcup \{*\}} \otimes \nu_{S_2 \sqcup \{1,2\}}$. Recall that the derivation $\circ_i\frac{\partial}{\partial u}$ takes the partial derivative with respect to the variable $u$ attached to the $i$-th input (or, in the case $i=0$, to the output) of the element of the cobar complex (cf. Theorem \[thm::psi-classes\]).
A direct computation similar to the one we made in the proof of Lemma \[lem::homotopy:Gerst->grav\] shows that the commutator $[\circ_i\frac{\d}{\d u},\zeta_n]$ acts on the monomial generator $\frac{u^{k}}{k!} f(w_{ij})$ by the following formula: $$\left[\circ_i\frac{\d}{\d u},\zeta_n\right] \left( \frac{u^{k}}{k!} f(w_{ij}) \right) = \left[\circ_i\frac{\d}{\d u},\varsigma_{in} \right] \left( \frac{u^{k}}{k!} f(w_{ij}) \right) = \frac{u^{k}}{k!} f^{*}\otimes w_{in}.$$ Here $f^{*}$ is obtained from $f$ by replacing the index $i$ with the index $*$ corresponding to the coproduct. Note that two cocycles are homologous if and only if they have the same image under the morphism $\kappa\circ \varepsilon$, since this morphism is a projection onto the homology. Recall that the augmentation map $\varepsilon$ annihilates all positive powers of $u$; in particular, $\varepsilon\circ \varsigma_{sn}=0$. This implies the following sequence of identities: $$\begin{aligned} \varepsilon\left(\nu_n\circ_1 \frac{\partial}{\partial u}\right) & = \varepsilon\left(\sum_{j=3}^{n}\zeta_n\ldots[\circ_1\frac{\d}{\d u},\zeta_j]\ldots\zeta_3 w_{12}\right) \\ \notag & = \varepsilon\left(\sum_{j=3}^{n}\iota_n\cdots\iota_{j+1}\left[\circ_1\frac{\partial}{\partial u},\varsigma_{1j}\right] \iota_{j-1}\cdots \iota_{3}(w_{12})\right) \\ \notag & = \varepsilon\left(\sum_{j=3}^{n} \sum_{S_1\sqcup S_2 = \{j+1,\ldots,n\}}\left( \left( \prod_{s\in S_1}\iota_{s} \right) \iota_{j-1}\cdots\iota_3 (w_{2 *})\right) \otimes \left( \prod_{s\in S_2}\iota_{s} (w_{1 j})\right)\right) \\ \notag & =\varepsilon\left(\sum_{S_1\sqcup S_2 = \{3,\dots,n\}} \nu_{S_1\sqcup \{2,*\}} \otimes \nu_{S_2 \sqcup \{1\}}\right).\end{aligned}$$ The second statement of Lemma \[lem::TRR\] deals with the derivation $\frac{\d}{\d u}\circ_0$ with respect to the variable $u$ attached to the output. The proof is completely analogous. These homology relations between the cocycles $\nu_n$ imply the topological recursion relations.
\[cor::TRR\] We have: $$\label{eq::TRR} \psi_0^{d_0}\psi_1^{d_1+1}\psi_2^{d_2}\cdots \psi_n^{d_n} [\oM_{0,n+1}] = \sum_{S_1\sqcup S_2= \{3,\ldots,n\}} \prod_{s\in S_1\sqcup\{0,2\}} \psi_{s}^{d_s} [\oM_{0,|S_1|+3}] \otimes \prod_{s\in S_2\sqcup\{1\}} \psi_{s}^{d_s} [\oM_{0,|S_2|+2}].$$ Similarly, $$\label{eq::TRR-0} \psi_0^{d_0+1}\psi_1^{d_1}\cdots \psi_n^{d_n} [\oM_{0,n+1}] = \sum_{S_1\sqcup S_2= \{3,\ldots,n\}} \prod_{s\in S_1\sqcup\{0\}} \psi_{s}^{d_s} [\oM_{0,|S_1|+2}] \otimes \prod_{s\in S_2\sqcup\{1,2\}} \psi_{s}^{d_s} [\oM_{0,|S_2|+3}].$$ It follows from Theorem \[thm::psi-classes\] that we can use the partial derivative with respect to the variable $u$ attached to the $i$-th input (respectively, to the output) instead of taking the $\psi$-class at the $i$-th marked point (respectively, at the $0$-th marked point). Therefore, $$\begin{aligned} & \psi_1\prod_{s\in\{0,\ldots,n\}}\psi_s^{d_s} [\oM_{0,n+1}] = \kappa\circ\varepsilon\left(\left(\frac{\partial}{\partial u}\circ_0\right)^{d_0}\prod_{s=1}^{n} \left(\circ_s \frac{\partial}{\partial u}\right)^{d_s} \circ_1\frac{\d}{\d u} \nu_n \right) \\ \notag & = \kappa\circ\varepsilon\left(\left(\frac{\partial}{\partial u}\circ_0\right)^{d_0}\prod_{s=1}^{n} \left(\circ_s \frac{\partial}{\partial u}\right)^{d_s} \sum_{S_1\sqcup S_2 = \{3,\ldots,n\}} \nu_{S_1\sqcup \{2,*\}} \otimes \nu_{S_2\sqcup\{1\}}\right) \\ \notag & = \sum_{S_1\sqcup S_2 = \{3,\ldots,n\}} \kappa\circ\varepsilon\left(\left(\frac{\partial}{\partial u}\circ_0\right)^{d_0}\prod_{s\in S_1\sqcup\{2\}} \left(\circ_s \frac{\partial}{\partial u}\right)^{d_s} \nu_{S_1\sqcup \{2,*\}}\right) \otimes \kappa\circ\varepsilon\left(\prod_{s\in S_2\sqcup\{1\}} \left(\circ_s \frac{\partial}{\partial u}\right)^{d_s} \nu_{S_2\sqcup\{1\}}\right) \\ \notag & = \sum_{S_1\sqcup S_2 = \{3,\ldots,n\}} \prod_{s\in S_1\sqcup\{0,2\}} \psi_{s}^{d_s} [\oM_{0,|S_1|+3}] \otimes \prod_{s\in S_2\sqcup\{1\}} \psi_{s}^{d_s} [\oM_{0,|S_2|+2}] \end{aligned}$$ The proof of the second statement of the corollary is exactly the same.
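To illustrate Equation \[eq::TRR\] (an unwinding added here, not part of the original argument; we use the standard convention that unstable factors $\oM_{0,2}$ do not occur in the splitting), take $n=3$ and $d_0=\dots=d_3=0$. The only admissible splitting is then $S_1=\emptyset$, $S_2=\{3\}$, and the relation reduces to $$\psi_1\, [\oM_{0,4}] = [\oM_{0,3}]\otimes [\oM_{0,3}],$$ recovering the classical fact that on $\oM_{0,4}\simeq \mathbb{P}^1$ the class $\psi_1$ is represented by a boundary point.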
The symmetric group acts on the cocycles $\nu_n$, changing them into homologous ones. Therefore, one can change the indices $1,2$ in the statement of Lemma \[lem::TRR\] and Corollary \[cor::TRR\] to any other pair of indices $i,j\in\{1,\ldots,n\}$. This completes our algebraic proof of the topological recursion relations. In particular, Equations  and  imply combinatorially that in the case $d_0+\cdots+d_n = n-2$ the product of $\psi$-classes evaluated on the fundamental class coincides with the iterated multiplication up to a multinomial coefficient: $$\psi_0^{d_0}\ldots\psi_n^{d_n}[\oM_{0,n+1}](x_1,\ldots,x_n) = \frac{(n-2)!}{d_0!\ldots d_n!} m(x_1,\ldots,x_n)$$ This formula explains the factors used in the definition of the map $\theta$ and, in particular, in Equation . The inverse of $j$ ------------------ In this section we construct the cocycles in the complex $$\label{eq:complex} \left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta]/\Delta, \frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \mu + \Delta^{ad}\frac{\d}{\d u} + \Delta\frac{\d}{\d\phi} \right)$$ that represent the generators $\mm_n$, $n=2,3,\ldots$, of $\Hycomm$. The construction uses the definition of the homotopy quotient. Recall that the defining Equation  implies the following two identities: $$(d+\Delta\frac{\d}{\d\phi}) \Phi(z) = \Phi(z)(d+ z\Delta), \quad \Phi(z)^{-1} (d+\Delta\frac{\d}{\d\phi}) = (d+ z\Delta)\Phi(z)^{-1}$$ Therefore, the adjoint action of $\Phi$ on the complex  given by $\Phi^{ad}(z)\colon \gamma \mapsto \Phi(z) \gamma \Phi(z)^{-1}$ satisfies the following equation: $$\label{eq::Phi_adjoint} \left(d+\Delta\frac{\d}{\d\phi}\right) \Phi^{ad}(z)(\gamma) = \Phi^{ad}(z)( d\gamma + z[\Delta,\gamma]).$$ We use $\Phi(z)$ as a group-like element. This means that we require $\Phi^{ad}(z)$ to preserve the operadic composition, that is, $\Phi^{ad}(z)(\alpha\circ\beta)=(\Phi^{ad}(z)\alpha)\circ(\Phi^{ad}(z)\beta)$, where $z$ is an operator acting on corollas.
\[lem::inv::j\] Let $\nu$ be a cocycle in $\left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual}), \frac{\d}{\d{u}}\frac{\d}{\d{w}} + \mu\right)$. The cochain $\Phi^{ad}(\frac{\d}{\d u})\nu$ is a cocycle in the dg-operad $\left(\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta]/\Delta, \frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \mu + \Delta^{ad}\frac{\d}{\d u} + \Delta\frac{\d}{\d\phi} \right)$. Moreover, $j(\Phi^{ad}\left(\frac{\d}{\d u}\right)\nu) = \nu$. Equation  implies that $$\left(\frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \mu+\Delta\frac{\d}{\d \phi} + \Delta^{ad}\frac{\d}{\d u}\right) \Phi^{ad}\left(\frac{\d}{\d u}\right)\nu = \Phi^{ad}\left(\frac{\d}{\d u}\right) \left( \frac{\d}{\d{u}} \frac{\d}{\d{ w}} + \mu+ \Delta^{ad}\frac{\d}{\d u} -\Delta^{ad}\frac{\d}{\d u}\right)\nu = 0.$$ Since $j$ annihilates $\phi_i$, $i=1,2,\ldots$, the second statement of the lemma is obvious. Therefore, cocycles representing the generators $\mm_n$, $n=2,3,\ldots$, of $\Hycomm$ in the dg-operad  can be given by the formula $$\label{eq:cocycles} \Phi^{ad}\left(\frac{\d}{\d u}\right)\nu_n = \Phi^{ad}\left(\frac{\d}{\d u}\right)\zeta_{n}\cdots\zeta_{3}(w_{12}).$$ The projection $\pi\circ\epsilon$ --------------------------------- In this section we apply the projection $\pi\circ\epsilon$ to the cocycles given by Equation . Recall that the projection $\epsilon$ from Section \[sec::BV=Gerst\*Delta\] maps $u$ to $0$. The projection $\pi$ given by Equations  and  annihilates all non-binary trees in the cobar complex. In particular, $\pi$ vanishes on all contributions of the operators $\iota_n$ in the formulas for $\nu_m$, $3\leq n\leq m$.
Therefore, $$\label{eq::Hycom_to_BV_with_u} \theta_n:= \pi\circ\epsilon\left(\Phi^{ad}\left(\frac{\d}{\d u}\right)\cdot \nu_n\right) = \pi\circ\epsilon\left(\Phi^{ad}\left(\frac{\d}{\d u}\right) \sum_{\begin{smallmatrix} (i_3,\ldots,i_n) :\\ \forall s \ 0\leq i_s\leq s-1 \end{smallmatrix}} \varsigma_{i_n n}\cdots\varsigma_{i_3 3} (w_{12}) \right).$$ Finally we are able to state our main result: \[thm:theta\] The map $\theta\colon\Hycomm\to\BV/\Delta$ defined by $\theta\colon \mm_n\mapsto \theta_n$ is a quasi-isomorphism of dg-operads. It makes Diagram  commutative. Theorem \[thm::diag:Hycom->BV\] implies that the cohomology of $\left(\BV/\Delta,\Delta\frac{\d}{\d\phi}\right)$ is isomorphic to $\Hycomm$. We denote by $\Q\subset \BV/\Delta$ the intersection of the kernel of $\Delta\frac{\d}{\d\phi}$ with the suboperad of $\BV/\Delta$ generated by the multiplication and the $\phi_i$’s. Observe that the suboperad $\Q\subset\BV/\Delta$ belongs to the cohomology. Indeed, by definition $\Q$ does not intersect the image of $\Delta\frac{\d}{\d\phi}$ and belongs to the kernel of $\Delta\frac{\d}{\d\phi}$. Note that $\Delta$ does not appear in the representing cocycles $\Phi^{ad}(\frac{\d}{\d u}) \nu_n$ and, therefore, $\theta_n$ also does not contain $\Delta$ in its presentation in terms of the generators. This implies that the cocycles $\theta_n$ belong to $\Q$, $n=2,3,\dots$. The same is true if we apply the diagram chase to any element of $\Hycomm$. Therefore the full cohomology of $\left(\BV/\Delta,\Delta\frac{\d}{\d\phi}\right)$ is equal to $\Q$, and the map $\mm_n\mapsto \theta_n$, $n\geq 2$, defines an isomorphism between $\Hycomm$ and $\Q$.
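As a small combinatorial sanity check (our addition, not in the original; we assume, as in the definition of $\zeta_s$, that each index $i_s$ ranges over $0,\ldots,s-1$), the sum defining $\theta_n$ runs over $\prod_{s=3}^{n} s = n!/2$ tuples $(i_3,\ldots,i_n)$; for $n=3$ this gives exactly the three summands $\varsigma_{03},\varsigma_{13},\varsigma_{23}$ that appear in the computation of $\theta_3$ below.

```python
from itertools import product
from math import factorial

def index_tuples(n):
    """All tuples (i_3, ..., i_n) with 0 <= i_s <= s - 1, indexing the
    summands varsigma_{i_n n} ... varsigma_{i_3 3}(w_{12}) in theta_n."""
    return list(product(*(range(s) for s in range(3, n + 1))))

# The number of summands is 3 * 4 * ... * n = n!/2.
for n in range(3, 8):
    assert len(index_tuples(n)) == factorial(n) // 2
```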
We finish this section with a diagram that summarizes our chase of cocycles in Diagram : $$\xymatrix{ { \nu_n \in \Bar(\kk[u]\otimes s^2\Gerst^{\dual}) } \ar@{->}[d]^{\kappa\circ\varepsilon} & { \Phi^{ad}\left(\frac{\d}{\d u}\right)\nu_n \in \frac{\Bar(\kk[u]\otimes s^2\Gerst^{\dual})\star \kk[\Delta]}{\Delta} } \ar@{->}[l]_-{j} \ar@{->}[d]^{\pi\circ\epsilon} \\ { \mm_n \in \Hycomm } \ar@{..>}[r]^{\theta} & { \theta_n \in \BV/\Delta } }$$ Examples for $n=2$ and $3$ {#sec::computation:2:3} -------------------------- In this section we compute Formula  for $n=2$ and $n=3$ and show that the two morphisms (one defined via Givental graphs, the other via the diagram chase) coincide for $n=2,3$. A direct computation for $n=2$ gives $$\theta_2=\pi\circ\epsilon\left( \Phi^{ad}\left(\frac{\d}{\d u}\right) \left(w_{12}\right) \right)=\pi\left(w_{12}\right)=m_2,$$ which is exactly the formula for $\theta_2$ described in Section \[sec:quasi\]. In the case $n=3$, we have: $$\theta_3 = \pi\circ\epsilon\circ \Phi^{ad}\left(\frac{\d}{\d u}\right) \left(\varsigma_{03} (w_{12}) + \varsigma_{13}(w_{12}) + \varsigma_{23}(w_{12})\right)$$ By definition, $$\varsigma_{03} (w_{12}) + \varsigma_{13}(w_{12}) + \varsigma_{23}(w_{12}) = - (u w_{3*}) \circ_{*} w_{12} + w_{2*} \circ_{*} (u w_{13}) + w_{1*} \circ_{*} (u w_{23}).$$ Using that $$\begin{aligned} \Phi^{ad}\left(\frac{\d}{\d u}\right) ((u w_{3*}) \circ_{*} w_{12}) = & \phi_1 \circ( w_{3*} \circ_{*} w_{12}) - (w_{3*}\circ_3 \phi_1) \circ_* w_{12} - w_{3*}\circ_{*}\phi_1\circ_* w_{12} \\ \notag \Phi^{ad}\left(\frac{\d}{\d u}\right) (w_{2*} \circ_{*} (u w_{13})) =& w_{2*} \circ_{*} \phi_1\circ_{*} w_{13} - w_{2*}\circ_{*} w_{13}\circ_1 \phi_1 - w_{2*}\circ_{*} w_{13}\circ_3 \phi_1 \\ \notag \Phi^{ad}\left(\frac{\d}{\d u}\right) (w_{1*} \circ_{*} (u w_{23})) =& w_{1*} \circ_{*} \phi_1\circ_{*} w_{23} - w_{1*}\circ_{*} w_{23}\circ_2 \phi_1 - w_{1*}\circ_{*} w_{23}\circ_3 \phi_1 \end{aligned}$$ it is then straightforward to compute the final
expression for $\theta_3$, which turns out to be a sum of $7$ terms and coincides with the formula for $\theta_3$ described in Example \[ex::theta\_23\]. The fact that we obtain the same formula for all $n\geq 2$ as in Section \[sec:quasi\] is based on Lemma \[lem::TRR\] and, in particular, on the topological recursion relations considered in Theorem \[thm::psi-classes\]. An easier proof is given in the next section using a uniqueness argument. Uniqueness {#sec::uniqueness} ---------- In order to establish the coincidence of the morphisms $\theta$ (the first defined via summation of Givental graphs in Section \[sec:quasi\], the second via the diagram chase in Formula ) we explain in the lemma below that there is little freedom in the possible morphisms from $\Hycomm$ to $\BV/\Delta$. \[thm::unique\] Any graded automorphism of the operad $\Hycomm$ is determined by arbitrary dilations of $\mm_2$ and $\mm_3$. That is, for a given pair $\lambda_2,\lambda_3$ there exists a unique automorphism of $\Hycomm$ given by the formulas $\mm_n \mapsto \lambda_2 \lambda_3^{n-2} \mm_n$, $n\geq 2$; moreover, every automorphism belongs to this family. Indeed, note that for all $n\geq 2$ the subspace of $\Hycomm(n)$ of homological degree $4-2n$ is one-dimensional and is generated by the generator of the operad $\Hycomm$ denoted earlier by $\mm_n$. Therefore any graded automorphism of $\Hycomm$ has the form $\mm_n\mapsto \lambda_n \mm_n$. The quadratic relations $\sum_{i+j=n+1}\mm_i\circ\mm_j =0$ in the operad $\Hycomm$ imply that the product $\lambda_i\lambda_j$ depends only on the sum $i+j$; in particular, $\lambda_2\lambda_n=\lambda_3\lambda_{n-1}$. By induction, $\lambda_2$ and $\lambda_3$ determine all the $\lambda_n$, so the automorphisms form the two-parameter family of the statement. The morphism $\theta:\Hycomm\rightarrow \BV/\Delta$ given by Formula  via summation over binary trees coincides with the morphism $\theta$ described in Section \[sec:quasi\] via summation of Givental graphs.
In the proof of Theorem \[thm:theta\] we explained that the suboperad $\Q\subset\BV/\Delta$, defined as the intersection of the kernel of the differential $\Delta\frac{\d}{\d\phi}$ with the suboperad generated by the multiplication and the $\phi_i$’s, is isomorphic to $\Hycomm$. The two maps $\theta$ that we have constructed define two particular (iso)morphisms from $\Hycomm$ to $\Q$. We checked that these two morphisms coincide on $\mm_2$ and $\mm_3$. Therefore, our uniqueness Proposition \[thm::unique\] implies that they coincide on all $\mm_k$. It is possible to show the coincidence of the two formulas for $\theta$ without using uniqueness arguments. The proof we know is technical and is based on a generalization of Lemma \[lem::TRR\]. Givental theory {#sec:givental} =============== In this section we prove Theorem \[thm::formula\_BV-Hycomm\] using the Givental theory of a loop group action on the morphisms from $\Hycomm$ to an arbitrary operad. In fact, the action of the loop group on $\Hycomm$-algebras also has a homological explanation: it comes from the action on trivializations of the $\BV$-operator, and we explain this at the end of this section. Lie algebra action on morphisms of $\Hycomm$ -------------------------------------------- Consider an arbitrary operad $\P$ and morphisms of operads $\Hycomm\to \P$. We are going to introduce an infinitesimal action of the Lie algebra $\mathfrak{g}:=\P(1)\otimes {\mathbb C}[[z]]$ on the space of such morphisms, where $z$ is a formal variable and $\P(1)$ is considered as a Lie algebra with respect to the commutator $[x,y]=xy-yx$, $x,y\in \P(1)$. In order to fix a morphism from $\Hycomm$ to $\P$, we consider a system of cohomology classes $\alpha_n\in H^{\bullet}(\oM_{0,n+1},{\mathbb C})\otimes \P(n)$.
These classes must satisfy the following condition: - For any map $\rho \colon \oM_{0,n_1+1}\times \oM_{0,n_2+1} \to \oM_{0,n+1}$, $n_1+n_2=n-1$, that realizes a boundary divisor in $\oM_{0,n+1}$ and induces the operadic composition $\circ_i\colon \Hycomm(n_1)\otimes \Hycomm(n_2)\to \Hycomm(n)$, we have: $$\label{eq:factorization} \rho^*\alpha_n = \alpha_{n_1} \circledast_i \alpha_{n_2},$$ where by $\circledast_i$ we denote the simultaneous product of cohomology classes and the $\circ_i$-composition in $\P$. The infinitesimal action of the Lie algebra $\mathfrak{g}$ is given by explicit formulas. Consider an element $r_\ell z^\ell\in\mathfrak{g}$ for some $\ell\geq 0$. We have: $$\begin{aligned} \label{eq:r-action} r_\ell z^\ell. \alpha_n := & r_\ell \circ_1 \psi_0^\ell \alpha_n + (-1)^{\ell+1} \sum_{m=1}^n \psi_m^\ell \alpha_n \circ_m r_\ell \\ \notag & +\sum_{I\sqcup J=[n]} \sum_{i+j=\ell-1} (-1)^{i+1} \rho_* \left(\psi_1^i\alpha_{|I|+1} \circ_1 r_\ell \circledast_1 \psi_0^j\alpha_{|J|}\right)\end{aligned}$$ Here in all cases $\circ_m$ denotes the operation in $\P$; $\psi_m$ denotes the $\psi$-class in the corresponding moduli space ($\oM_{0,n+1}$ in the second summand, or $\oM_{0,|I|+2}$ and $\oM_{0,|J|+1}$ in the third summand), that is, the first Chern class of the line bundle with the fiber $T^*_{x_m} C$ over the curve $(C,x_0,x_1,\dots,x_k)\in \oM_{0,k+1}$ ($k$ is then equal to $n$, $|I|+1$, and $|J|$, respectively). Moreover, we always assume that the “output” marked point is $x_0$, and, in the third summand, we assume that the map $\rho$ attaches the output point of $\oM_{0,|J|+1}$ to the first input (that is, the point $x_1$) of $\oM_{0,|I|+2}$. In the case $\ell=0$ we simply have $r_0z^0.\alpha_n=[r_0,\alpha_n]$ in the sense of commutation of operadic compositions in $\P$.
The formula for the $\mathfrak{g}$-action is a generalization of the formulas considered in [@Giv3; @Lee1; @Sha; @Tel], and we refer the reader to these papers for a more detailed introduction to the Givental theory. \[lem:infinitesimal\] For any $r=\sum_{\ell=0}^\infty r_\ell z^\ell\in\mathfrak{g}$ and any system of classes $\alpha_n\in H^{\bullet}(\oM_{0,n+1},{\mathbb C})\otimes \P(n)$, $n\geq 2$, that satisfies the factorization condition , the classes $\alpha_n+\epsilon \cdot r.\alpha_n\in H^{\bullet}(\oM_{0,n+1},{\mathbb C})\otimes \P(n)$ also satisfy the factorization condition  to first order in $\epsilon$. It is a straightforward generalization of Proposition 6.9 in [@Tel]. It follows from Lemma \[lem:infinitesimal\] that for any morphism $g\colon\Hycomm\to \P$ and an arbitrary sequence of elements $r_\ell\in \P(1)$, $\ell=1,2,\dots$, we obtain a new morphism $\exp(r.)g\colon \Hycomm\to \P$, $r=\sum_{\ell=1}^\infty r_\ell z^\ell$, by exponentiation of the infinitesimal Lie algebra action defined above. This means that we define an action of the Lie group $G=\{M(z) \in \P(1)\otimes {\mathbb C}[[z]], M(0)=1\}$ on the space of morphisms $\Hycomm\to \P$. Application to the $\BV$-operad ------------------------------- We consider the morphism $\theta_0\colon \Hycomm\to \BV$ that sends the generator $\mm_k$ to the iterated multiplication, $k\geq 2$. In terms of the infinitesimal Givental action, the condition that $\Delta$ is a second-order operator with respect to the multiplication can be written as $$\label{eq:BV} (\Delta z^1).\theta_0 = 0$$ (this is proved in slightly different terms in [@Sha Proposition 1]). The same map $\theta_0$ can also be considered as a map to $\BV/\Delta$.
In this case, in addition to Equation  we also have $$\label{eq:d} \left(\Delta\frac{\d}{\d \phi} z^0\right).\theta_0 = 0.$$ (Slightly abusing notation, we think of $\Delta \frac{\d }{\d \phi}$ as an element of $\BV/\Delta$ such that the differential is given by the commutator with this element.) Consider the map $\theta\colon \Hycomm\to \BV/\Delta$ defined by $\exp(\phi(z).)\theta_0$. There are several observations. First of all, just by construction, $\theta$ is a morphism of operads. Second, we want to show that $\theta$ is a morphism of dg-operads, that is, $(\Delta\frac{\d}{\d\phi} z^0).\theta = 0$. This follows from the computation $$\left(\Delta\frac{\d}{\d\phi} z^0\right).\theta = \left(\Delta\frac{\d}{\d\phi}z^0\right).\exp(\phi(z).)\theta_0 = \exp(\phi(z).)\left(\Delta\frac{\d}{\d\phi}z^0+\Delta z^1\right). \theta_0 = 0.$$ Here the first equality is the definition of $\theta$, the second one is a consequence of Equation , and the third equality follows from Equations  and . Thus we see that $\theta(\Hycomm)\subset \Q\subset \BV/\Delta$, where $\Q$ is the suboperad considered in the proof of Theorem \[thm:theta\], that is, $\Q$ is the intersection of the kernel of $\Delta\frac{\d}{\d \phi}$ with the suboperad generated by the multiplication and the $\phi_i$’s, $i=1,2,\ldots$. In the proof of Theorem \[thm:theta\] we observed that $\Q$ is isomorphic to $\Hycomm$. Moreover, a simple degree count shows that the map $\theta\colon \mm_k\mapsto \theta_k$, $k=2,3,\dots$, preserves the degrees. Therefore, $\theta$ maps generators to generators, and it is an isomorphism between $\Hycomm$ and $\Q$. The last observation is that $\theta$ is exactly the map constructed in Section \[sec:quasi\] in terms of graphs. This can be seen by an explicit exponentiation of Formula , and it is also explained, for example, in [@Tel Section 6.14] and [@DunShaSpi]. This completes the proof of Theorem \[thm::formula\_BV-Hycomm\].
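Both this argument and the uniqueness argument of Section \[sec::uniqueness\] ultimately rest on the rigidity of graded automorphisms of $\Hycomm$: once the product $\lambda_i\lambda_j$ of dilation factors depends only on $i+j$, the relation $\lambda_2\lambda_n=\lambda_3\lambda_{n-1}$ propagates the dilations of $\mm_2$ and $\mm_3$ to all generators. The following sketch (our addition, writing $\lambda_n$ for the dilation of $\mm_n$; the function name is hypothetical) checks the resulting closed form $\lambda_n=\lambda_2(\lambda_3/\lambda_2)^{n-2}$ with exact rational arithmetic; substituting $\lambda_3\mapsto\lambda_2\lambda_3$ recovers the parametrization $\lambda_2\lambda_3^{n-2}$ of Proposition \[thm::unique\].

```python
from fractions import Fraction

def dilations(lam2, lam3, n_max):
    """Dilation factors lam[n] of the generators mm_n, solved from the
    constraint lam2 * lam[n] == lam3 * lam[n-1] (i.e. lam_i * lam_j
    depends only on i + j), starting from lam[2], lam[3]."""
    lam = {2: lam2, 3: lam3}
    for n in range(4, n_max + 1):
        lam[n] = lam3 * lam[n - 1] / lam2
    return lam

lam2, lam3 = Fraction(5), Fraction(7)
lam = dilations(lam2, lam3, 10)
for n in range(2, 11):
    # closed form: lam[n] = lam2 * (lam3 / lam2)**(n - 2)
    assert lam[n] == lam2 * (lam3 / lam2) ** (n - 2)
```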
Homological origin of the Givental action ----------------------------------------- In this section we explain how the Givental group action emerges naturally via the loop group action on trivializations of $\Delta$. Consider a finite-dimensional $\Hycomm$-algebra $V$ with zero differential. Let $\bar V$ be the corresponding differential graded $\BV$-algebra with the differential $d$, and denote by $\phi_i$ the corresponding additional operators coming from the $\BV/\Delta$-structure on $\bar V$. Consider an arbitrary sequence of endomorphisms $\alpha_i\in \End(V)$. Since the cohomology of $\bar V$ coincides with $V$, we can define a sequence of endomorphisms $\bar{\alpha}_i\in\End(\bar V)$ that commute with the differential on $\bar V$ and whose restrictions to the cohomology coincide with $\alpha_i$, $i=1,2,\ldots$. We have: $$\exp\left(-\sum_{i=1}^\infty\bar{\alpha}_i z^{i}\right) d \exp\left(\sum_{i=1}^\infty\bar{\alpha}_i z^i\right) = d$$ Therefore, $$\exp(-\phi(z))\exp\left(-\sum_{i=1}^\infty\bar{\alpha}_i z^{i}\right) d \exp\left(\sum_{i=1}^\infty\bar{\alpha}_i z^i\right)\exp(\phi(z))= d+z\Delta$$ Thus we see that the sequence of operators $\phi'_i$ given by the formula $$\phi'(z)=\sum\phi'_i z^{i} := \ln\left(\exp(\bar{\alpha}(z))\exp(\phi(z))\right)$$ defines a new $\BV/\Delta$-algebra structure on $(\bar V,d)$. This structure induces a new $\Hycomm$-algebra structure on $V=H^\bullet(\bar V, d)$. The new $\Hycomm$-algebra structure on $V$ coincides with the one obtained by the Givental group action of the element $\exp\left(\sum_{i=1}^\infty\alpha_iz^i\right)$ applied to the original $\Hycomm$-algebra. It is easier to compare the infinitesimal deformations. Indeed, assume that $\sum_{i=1}^\infty\bar{\alpha}_i z^i= r_\ell z^\ell$ and consider the first-order deformation in $r_\ell$. In this case $\phi'(z)=\phi(z)+ r_\ell z^\ell$.
Then it is just a tautological observation to see that the corresponding deformation of the formulas for $\theta_k$ in Section \[sec:quasi\], $k\geq 2$, is given by Equation . V. I. Arnold, The cohomology ring of the group of dyed braids (in Russian), Mat. Zametki 5 (1969), 227–231. S. Barannikov, M. Kontsevich, Frobenius manifolds and formality of Lie algebras of polyvector fields, Int. Math. Res. Not. **1998**, no. 4, 201–215. M. Bershadsky, S. Cecotti, H. Ooguri, C. Vafa, Kodaira-Spencer theory of gravity and exact results for quantum string amplitudes, Comm. Math. Phys. **165** (1994), no. 2, 311–427. V. Dotsenko, A. Khoroshkin, Free resolutions via Gröbner bases, arXiv:0912.4895v3, 1–31. V. Dotsenko, S. Shadrin, B. Vallette, De Rham cohomology and homotopy Frobenius manifolds, arXiv:1203.5077v1, 1–10. P. Dunin-Barkowski, S. Shadrin, L. Spitz, Givental graphs and inversion symmetry, arXiv:1201.4930, 1–23. G. Drummond-Cole, Homotopically trivializing the circle in the framed little disks operad, arXiv:1112.1129v1, 1–37. G. Drummond-Cole, B. Vallette, The minimal model for the Batalin-Vilkovisky operad, preprint 2011. I. Galvez-Carrillo, A. Tonks, B. Vallette, Homotopy Batalin-Vilkovisky algebras, arXiv:0907.2246v3, 1–57. E. Getzler, Two-dimensional topological gravity and equivariant cohomology, Comm. Math. Phys. **163** (1994), no. 3, 473–489. E. Getzler, Batalin-Vilkovisky algebras and two-dimensional topological field theories, Comm. Math. Phys. **159** (1994), 265–285. E. Getzler, Operads and moduli spaces of genus $0$ Riemann surfaces, arXiv:alg-geom/9411004v1, 1–24. E. Getzler, J. D. S. Jones, Operads, homotopy algebra and iterated integrals for double loop spaces, arXiv:hep-th/9403055v1, 1–70. J. Giansiracusa, P. Salvatore, Formality of the framed little 2-discs operad and semidirect products, Contemp. Math. 519, AMS, Providence, 2010, 115–121. A. B. Givental, Symplectic geometry of Frobenius structures, in: Frobenius manifolds. 
Quantum cohomology and singularities. Proceedings of the workshop, Bonn, Germany, July 8–19, 2002 (C. Hertling et al., eds.), 91–112, Vieweg, Wiesbaden, Aspects of Mathematics E 36, 2004. T. Kimura, J. Stasheff, A. Voronov, On operad structures of moduli spaces and string theory, Comm. Math. Phys. **171** (1995), 1–25. L. Katzarkov, M. Kontsevich, T. Pantev, Hodge theoretic aspects of mirror symmetry, in: From Hodge theory to integrability and TQFT tt\*-geometry, Proc. Sympos. Pure Math. 78, 87–174, Amer. Math. Soc., Providence, RI, 2008. S. Keel, Intersection theory of moduli space of stable $n$-pointed curves of genus zero, Trans. Amer. Math. Soc. 330 (1992), no. 2, 545–574. M. Kontsevich, Operads and Motives in Deformation Quantization, Lett. Math. Phys. 48 (1999), 35–72. Y.-P. Lee, Invariance of tautological equations II: Gromov–Witten theory (with Appendix A by Y. Iwao and Y.-P. Lee), J. Amer. Math. Soc. **22** (2009), no. 2, 331–352. J.-L. Loday and B. Vallette, Algebraic operads, Grundlehren Math. Wiss. 346, Springer, Heidelberg, 2012. A. Losev, S. Shadrin, From Zwiebach invariants to Getzler relation, Comm. Math. Phys. **271** (2007), no. 3, 649–679. N. Markarian, [$\Hycomm=\BV/\Delta$]{}, see [http://nikitamarkarian.wordpress.com]{}. Yu. Manin, Frobenius manifolds, quantum cohomology, and moduli spaces. American Mathematical Society Colloquium Publications, 47. American Mathematical Society, Providence, RI, 1999. M. Markl, Distributive laws and Koszulness. Ann. Inst. Fourier (Grenoble) **46** (1996), no. 2, 307–323. J. P. May, The geometry of iterated loop spaces. Lecture Notes in Mathematics, Vol. 271, Springer, 1972. J. Milnor, J. Stasheff, Characteristic classes. Annals of Mathematics Studies, No. 76. Princeton, N.J.: Princeton University Press and University of Tokyo Press, 1974. P. Salvatore, N. Wahl, Framed discs operads and Batalin-Vilkovisky algebras, Q. J. Math. **54** (2003), no. 2, 213–231. P. Ševera, Formality of the chain operad of framed little disks. 
Lett. Math. Phys. 93 (2010), 29–35. S. Shadrin, BCOV theory via Givental group action on cohomological field theories, Mosc. Math. J. **9** (2009), no. 2, 411–429. D. Tamarkin, Formality of chain operad of little discs. Lett. Math. Phys. 66 (2003), no. 1-2, 65–72. C. Teleman, The structure of 2D semi-simple field theories, arXiv:0712.0160v2, 1–34. [^1]: We are cheating a bit here because Koszul duality gives the duality between generators and cogenerators. But there is no reason to separate generators and cogenerators in our particular situation because the corresponding subspaces of homological degrees $2n-4$ and $n-2$ in $\Hycomm(n)$ and $\Grav(n)$ respectively are one-dimensional.
--- abstract: '[image rotation, polarisation, rotating dielectric, specific rotary power]{} When light is passing through a rotating medium the optical polarisation is rotated. Recently it has been reasoned that this rotation applies also to the transmitted image (Padgett *et al.* 2006). We examine these two phenomena by extending an analysis of Player (1976) to general electromagnetic fields. We find that in this more general case the wave equation inside the rotating medium has to be amended by a term which is connected to the orbital angular momentum of the light. We show that optical spin and orbital angular momentum account respectively for the rotation of the polarisation and the rotation of the transmitted image.' author: - 'Jörg B. Götte$^1$, Stephen M. Barnett$^1$ and Miles Padgett$^2$' title: On the dragging of light by a rotating medium --- \[firstpage\] Introduction ============ Jones (1976) studied the propagation of light in a moving dielectric and showed by experiment that a rotating medium induces a rotation of the polarisation of the transmitted light. Player (1976) confirmed that this observation could be accounted for through an application of Maxwell’s equations in a moving medium. More recently Padgett *et al.* (2006) reasoned that the rotation of the medium turns a transmitted image by the same angle as the polarisation. This is in contrast to the Faraday effect (Faraday 1846), where a static magnetic field in a dielectric medium, parallel to the propagation of light, causes a rotation of the polarisation but not a rotation of a transmitted image. Rotation of the plane of polarisation and image rotation in a rotating medium may be attributed respectively to the spin and orbital angular momentum of light (Allen *et al.* 1999, 2003). The first theoretical treatment of this problem was published by Fermi (1923), who considered plane waves and a non-dispersive medium. 
The theoretical analysis of Player (1976) was also restricted to the propagation of plane waves, but took the dispersion of the medium into account. Player assumed that the dielectric response does not depend on the motion of the medium. In our treatment we follow his assumption although a more careful analysis by Nienhuis *et al.* (1992) showed that there will be an effect of the motion on the refractive index for a dispersive medium near to an absorption resonance (see also Baranova & Zel’dovich (1979) for a discussion on the effect of the Coriolis force on the refractive index). In contrast to Player we allow for more general electromagnetic fields that can carry orbital angular momentum (OAM). This leads to an additional term in our wave equation, which corresponds to a Fresnel drag term familiar from analysis of uniform motion. For a rotating medium, however, this drag leads to a rotational shift of the image. The propagation of light in a rotating medium thus involves both spin angular momentum (SAM) and OAM. We solve the wave equation for circularly polarised Bessel beams and consider two different superpositions of such Bessel beams to quantify the effects of both polarisation and image rotation. For rotation of the polarisation we examine a superposition of left- and right-circularly polarised Bessel beams carrying the same amount of OAM. For image rotation we consider a superposition of Bessel beams with the same circular polarisation but opposite OAM values. Such a superposition creates an intensity pattern with lobes or ‘petals’. In both cases the constituent Bessel beams propagate differently in the medium, which leads to a change in their relative phase. This is the origin of the rotation of both the polarisation and the transmitted image. For both phenomena we derive an expression for the angle per unit length of dielectric through which the image or the polarisation is rotated. 
The significance of the total angular momentum can be most easily seen in the wave equation for the propagation of light in a rotating medium. We derive this wave equation in section \[sec:waveeq\]. In the remaining sections we calculate the rotation of the polarisation (section \[sec:polrot\]) and the image rotation (section \[sec:imgrot\]) and reveal their common form. Wave equations {#sec:waveeq} ============== The wave equation for a general electric displacement $\mathbf{D}$ in a rigid dielectric medium rotating with angular velocity $\mathbf{\Omega}$ is given by: $$\label{eq:finalwaveeq} -\nabla^2 \mathbf{D} = - \epsilon(\omega') \ddot{\mathbf{D}} + 2 [\epsilon(\omega') - 1] \left[ \mathbf{\Omega} \times \dot{\mathbf{D}} - (\mathbf{v} \cdot \nabla) \dot{\mathbf{D}} \right].$$ An analogous wave equation can be derived for the magnetic induction $\mathbf{B}$. Compared to the form derived by Player (1976), who considered the special case of a plane wave propagating along the direction of $\mathbf{\Omega}$, these wave equations contain an additional term $2 [\epsilon(\omega') - 1] (\mathbf{v} \cdot \nabla) \dot{\mathbf{D}}$. This term is responsible for the Fresnel drag effect, which modifies the speed of light in a moving medium (McCrea 1954; Barton 1999; Rindler 2001). In the following we will derive this wave equation for the electric displacement. Our analysis starts with the same considerations as Player (1976), by introducing a rest frame and a moving frame. In the rest frame the dielectric medium rotates, so that it moves with the local velocity $\mathbf{v} = \mathbf{\Omega} \times \mathbf{r}$, and in the moving frame the medium is at rest. We restrict our analysis to small velocities with $v \ll c$ and use Maxwell’s equations in both reference frames (Landau & Lifshitz 1975). 
For the medium at rest we assume the following constitutive relations: \[eq:constitutive\] $$\begin{aligned} \mathbf{D}' & = \epsilon(\omega') \mathbf{E}',\\ \mathbf{B}' & = \mathbf{H}',\end{aligned}$$ where we have used primes to denote the fields and their frequency $\omega'$ in the moving frame. The fields in the moving frame can be expressed in the rest frame by a Lorentz transformation (Stratton 1941; Jackson 1998), which gives to first order in $v/c$: \[eq:transforms\] $$\begin{aligned} \mathbf{D}' & = \mathbf{D} + \mathbf{v} \times \mathbf{H}, \\ \mathbf{B}' & = \mathbf{B} - \mathbf{v} \times \mathbf{E}, \\ \mathbf{E}' & = \mathbf{E} + \mathbf{v} \times \mathbf{B}, \\ \mathbf{H}' & = \mathbf{H} - \mathbf{v} \times \mathbf{D},\end{aligned}$$ where we have set $c=1$ and work with units in which $\epsilon_0 = \mu_0 = 1$. The two constitutive relations in (\[eq:constitutive\]) are thus given in the rest frame by $$\begin{aligned} \mathbf{D} + \mathbf{v} \times \mathbf{H} & = \epsilon(\omega') \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right), \\ \mathbf{B} - \mathbf{v} \times \mathbf{E} & = \mathbf{H} - \mathbf{v} \times \mathbf{D}.\end{aligned}$$ The dielectric constant is still given as a function of the frequency in the moving frame. We also assume that the dielectric constant depends only on the frequency and is otherwise independent of the state of motion of the medium. On combining these two equations we can express $\mathbf{D}$ and $\mathbf{B}$ in terms of the two other fields $\mathbf{E}$ and $\mathbf{H}$ to the first order in $v$: \[eq:dbfirstorder\] $$\begin{aligned} \mathbf{D} & = \epsilon(\omega') \mathbf{E} + [\epsilon(\omega') - 1] \mathbf{v} \times \mathbf{H}, \label{eq:dfirstorder}\\ \mathbf{B} & = \mathbf{H} - [ \epsilon(\omega') - 1] \mathbf{v} \times \mathbf{E}. 
\label{eq:bfirstorder}\end{aligned}$$ After taking the curl of (\[eq:dfirstorder\]) we can use the Maxwell equation $\nabla \times \mathbf{E} = -\dot{\mathbf{B}}$ and express $\dot{\mathbf{B}}$, with the help of (\[eq:bfirstorder\]), in terms of $\dot{\mathbf{H}}$ and $\dot{\mathbf{E}}$. If we assume $\mathbf{v}$ to be constant (see \[app:acceleration\]), as in Player’s paper (Player, 1976) this yields $$\label{eq:vsteady} \nabla \times \mathbf{D} = - \epsilon(\omega') \dot{\mathbf{H}} + \epsilon(\omega') [\epsilon(\omega') -1] \mathbf{v} \times \dot{\mathbf{E}} + [\epsilon(\omega') -1] \nabla \times ( \mathbf{v} \times \mathbf{H}).$$ It follows from (\[eq:dfirstorder\]) that $\epsilon(\omega') \mathbf{v} \times \mathbf{E} = \mathbf{v} \times \mathbf{D}$, to the first order in $\mathbf{v}$, and so we can rewrite (\[eq:vsteady\]) as: $$\label{eq:curld} \nabla \times \mathbf{D} = - \epsilon(\omega') \dot{\mathbf{H}} + [\epsilon(\omega') -1] \mathbf{v} \times \dot{\mathbf{D}} + [\epsilon(\omega') -1] \nabla \times ( \mathbf{v} \times \mathbf{H}).$$ We can now take the curl of (\[eq:curld\]) to obtain a wave equation for $\mathbf{D}$, as $\nabla \times \nabla \times \mathbf{D} = - \nabla^2 \mathbf{D}$ for $\nabla \cdot \mathbf{D} = 0$, and the curl of $\dot{\mathbf{H}}$ is given by $\nabla \times \dot{\mathbf{H}} = \ddot{\mathbf{D}}$. In order to express the curl of the vector products we use the identity $\nabla \times (\mathbf{a} \times \mathbf{b}) = \partial_i b_i \mathbf{a} - \partial_i a_i \mathbf{b}$, where the doubly occurring index denotes a summation over the Cartesian components. The operator $\partial_i$ represents differentiation with respect to the $i$th component and acts on the whole product which gives rise to terms containing the divergences of $\mathbf{v}, \mathbf{D}$ and $\mathbf{H}$. 
These terms are either zero, because $\nabla \cdot \mathbf{v} = 0$ and $\nabla \cdot \mathbf{D} = 0$, or they lead to terms which are of second order in $\mathbf{v}$ and therefore negligible. The wave equation for $\mathbf{D}$ is thus given by $$\label{eq:waveeq} \begin{split} -\nabla^2 \mathbf{D} & = - \epsilon(\omega') \ddot{\mathbf{D}} + [\epsilon(\omega') - 1] \left[ (\dot{\mathbf{D}} \cdot \nabla) \mathbf{v} - (\mathbf{v} \cdot \nabla) \dot{\mathbf{D}} \right] \\ & + [\epsilon(\omega') - 1] \nabla \times \left[ (\mathbf{H} \cdot \nabla) \mathbf{v} - (\mathbf{v} \cdot \nabla) \mathbf{H} \right]. \end{split}$$ For a rotation $\mathbf{v} = \mathbf{\Omega} \times \mathbf{r}$ we can specify terms of the form $(\mathbf{a} \cdot \nabla) \mathbf{v}$ by expressing the components of the velocity $\mathbf{v}$ using the Levi-Civita symbol $\varepsilon_{ijk}$ as $v_i = \varepsilon_{ijk} \Omega_j r_k$. The components of $(\mathbf{a} \cdot \nabla) \mathbf{v}$ are thus given by $$\label{eq:directderiv} \left[ (\mathbf{a} \cdot \nabla) \mathbf{v} \right]_i = a_l \partial_l \varepsilon_{ijk} \Omega_j r_k = a_l \varepsilon_{ijk} \Omega_j \delta_{lk} = \left[ \mathbf{\Omega} \times \mathbf{a} \right]_i.$$ If we use the results from (\[eq:directderiv\]) in (\[eq:waveeq\]), we find for $\nabla^2 \mathbf{D}$: $$\begin{split} \label{eq:waveeqrot} -\nabla^2 \mathbf{D} & = - \epsilon(\omega') \ddot{\mathbf{D}} + [\epsilon(\omega') - 1] \left[ \mathbf{\Omega} \times \dot{\mathbf{D}} - (\mathbf{v} \cdot \nabla) \dot{\mathbf{D}} \right] \\ & + [\epsilon(\omega') - 1] \nabla \times \left[ \mathbf{\Omega} \times \mathbf{H} - (\mathbf{v} \cdot \nabla) \mathbf{H} \right]. \end{split}$$ The curl of the last bracket requires some additional calculations. 
The first term is given by: $$\nabla \times \left( \mathbf{\Omega} \times \mathbf{H} \right) = \left( \nabla \cdot \mathbf{H} \right) \mathbf{\Omega} - \left( \mathbf{\Omega} \cdot \nabla \right) \mathbf{H},$$ and the second term can be written as: $$\nabla \times \left( \mathbf{v} \cdot \nabla \right) \mathbf{H} = \mathbf{\Omega} \left( \nabla \cdot \mathbf{H} \right) - \nabla \left( \mathbf{\Omega} \cdot \mathbf{H} \right) + \left( \mathbf{v} \cdot \nabla \right) \dot{\mathbf{D}},$$ where the last term originates from $\nabla \times \mathbf{H}$. The terms containing the divergence of $\mathbf{H}$ cancel and the term $\left( \mathbf{v} \cdot \nabla \right) \dot{\mathbf{D}}$ can be added to the second term in (\[eq:waveeqrot\]). The two remaining terms $-\left( \mathbf{\Omega} \cdot \nabla \right) \mathbf{H}$ and $\nabla \left( \mathbf{\Omega} \cdot \mathbf{H} \right)$ together give $\mathbf{\Omega} \times \dot{\mathbf{D}}$: $$\mathbf{\Omega} \times \dot{\mathbf{D}} = \mathbf{\Omega} \times \left( \nabla \times \mathbf{H} \right) = \nabla \left( \mathbf{\Omega} \cdot \mathbf{H} \right) - \left( \mathbf{\Omega} \cdot \nabla \right) \mathbf{H}.$$ This concludes the derivation of the wave equation (\[eq:finalwaveeq\]). It is possible to derive the same wave equation for $\mathbf{B}$ using similar methods. For a rotation around the $z$ axis with constant angular velocity $\mathbf{\Omega} = \Omega \mathbf{e}_z$, the directional derivative $\mathbf{v} \cdot \nabla$ is proportional to an azimuthal derivative, as $\mathbf{v} \cdot \nabla = \mathbf{\Omega} \times \mathbf{r} \cdot \nabla = \Omega \partial_\phi$. 
This allows us to identify the two terms $\mathbf{\Omega} \times \dot{\mathbf{D}}$ and $\Omega \partial_\phi \dot{\mathbf{D}}$ in the wave equation $$\label{eq:phiwaveeq} -\nabla^2 \mathbf{D} = - \epsilon(\omega') \ddot{\mathbf{D}} + 2 [\epsilon(\omega') - 1] \left[ \mathbf{\Omega} \times \dot{\mathbf{D}} - \Omega \partial_\phi \dot{\mathbf{D}} \right]$$ as the polarisation rotation and rotary Fresnel drag terms, respectively. Player’s derivation does not contain the term proportional to $\partial_\phi \dot{\mathbf{D}}$ because he treated only the case of a plane wave propagating in the $z$-direction and for such fields $\mathbf{D}$ is independent of $\phi$. On substituting a monochromatic ansatz of the form $\mathbf{D} = \mathbf{D}_0 \exp(-{\mathrm{i}}\omega t)$ into (\[eq:phiwaveeq\]), where $\omega$ is the optical angular frequency in the rest frame, we obtain: $$\label{eq:monowaveeq} -\nabla^2 \mathbf{D}_0 = \epsilon(\omega') \omega^2 \mathbf{D}_0 - 2 [ \epsilon(\omega') - 1 ] \omega \Omega \left[ {\mathrm{i}}\mathbf{e}_z \times \mathbf{D}_0 - {\mathrm{i}}\partial_\phi \mathbf{D}_0 \right].$$ If we make an ansatz for $\mathbf{D}_0$ with a general polarisation given by the complex numbers $\alpha$ and $\beta$ (with $|\alpha|^2 + |\beta|^2 = 1$) in the form of $\mathbf{D}_0 = (\alpha \mathbf{e}_x + \beta \mathbf{e}_y) \mathcal{D} + \mathcal{D}_z \mathbf{e}_z$, we find that the $x$ and $y$ components of the wave equation (\[eq:monowaveeq\]) decouple if $\beta = \pm {\mathrm{i}}\alpha$ corresponding to left- and right-circularly polarised light respectively. If we restrict the solutions to these two cases we can write the wave equation as: $$\nabla^2 \mathcal{D}= - \epsilon(\omega') \omega^2 \mathcal{D}+ 2 [ \epsilon(\omega') -1 ] \omega \Omega \left( \pm 1 - {\mathrm{i}}\partial_\phi \right) \mathcal{D},$$ where the plus sign refers to left-circular polarisation and the minus sign to right-circular polarisation. 
We can then identify $\pm 1$ as the extreme values of the variable $\sigma$, which corresponds to the circular polarisation or SAM of the light beam. Similarly, we can identify $-{\mathrm{i}}\partial_\phi = L_z$ as the OAM operator, so that the wave equation contains a term which depends on the total angular momentum $\sigma + L_z$: $$\label{eq:transversewaveeq} \nabla^2 \mathcal{D}= - \epsilon(\omega') \omega^2 \mathcal{D}+ 2 [ \epsilon(\omega') -1 ] \omega \Omega \left( \sigma + L_z \right) \mathcal{D}.$$ We shall see that it is the dependence on the optical angular momentum that is responsible for the rotation of both the polarisation and of a transmitted image. Specific rotary power {#sec:polrot} ===================== The rotation of the polarisation arises from the difference in the refractive indices for left- and right-circularly polarised light. The angle per unit length by which the polarisation is rotated is called the specific rotary power. For an optically active medium at rest the specific rotary power is characteristic of a given material, but from (\[eq:transversewaveeq\]) it can be seen that light propagates differently in a rotating medium, depending on whether the circular polarisation turns in the same rotation sense as the dielectric or in the opposite sense. This phenomenon is described by the effective specific rotary power (Jones 1976; Player 1976). The specific rotary power, defined as (Fowles 1975): $$\label{eq:rotarypow} \delta_{\mathrm{pol}}(\omega) = \left( n_r(\omega) - n_l(\omega) \right) \frac{\pi}{\lambda} = \left( n_r(\omega) - n_l(\omega) \right) \frac{1}{2} \frac{\omega}{c},$$ is the angle of rotation per unit length of the plane of polarisation in an optically active medium. Here, the indices $r$ and $l$ refer to right- and left-circularly polarised light. It was convenient to set $c=1$ for our derivation in section \[sec:waveeq\], but we reintroduce it here to facilitate the calculation of measurable quantities. 
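The eigenvalue property behind the identification $-{\mathrm{i}}\partial_\phi = L_z$ is elementary but worth making concrete. A short numerical sketch (standard-library Python; the function names are ours and purely illustrative) confirms that an azimuthal mode $\exp({\mathrm{i}}m\phi)$ is an eigenfunction of $-{\mathrm{i}}\partial_\phi$ with eigenvalue $m$, so that $(\sigma + L_z)$ in (\[eq:transversewaveeq\]) acts on such a mode simply as the number $\sigma + m$:

```python
import cmath

def mode(m, phi):
    """Azimuthal factor exp(i*m*phi) of a Bessel beam (radial part omitted)."""
    return cmath.exp(1j * m * phi)

def L_z(f, phi, h=1e-6):
    """Apply L_z = -i d/dphi via a central finite difference."""
    return -1j * (f(phi + h) - f(phi - h)) / (2 * h)

m, phi0 = 3, 0.7
eigval = L_z(lambda p: mode(m, p), phi0) / mode(m, phi0)
print(round(eigval.real, 6), round(eigval.imag, 6))  # 3.0 0.0
```

The same check works at any sample angle $\phi_0$, since the eigenvalue relation holds pointwise.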
In order to illustrate the effect of the OAM of light we choose a Bessel beam as an ansatz for the electrical displacement in the $x-y$ plane: $$\mathcal{D}= J_m(\kappa \rho) \exp({\mathrm{i}}m \phi) \exp({\mathrm{i}}k_z z),$$ where $\kappa$ and $k_z$ are the transverse and longitudinal components of the wavevector. Bessel beams of this form carry OAM of $m\hbar$ per photon (Allen *et al.* 1992, 1999, 2003). Substituting the Bessel beam ansatz in the wave equation (\[eq:transversewaveeq\]) yields the following result for the overall wavenumber $k = \sqrt{\kappa^2 + k_z^2}$: $$k_{l/r}^2(\omega) = \epsilon(\omega') \frac{\omega^2}{c^2} - 2[\epsilon(\omega') - 1] \frac{\Omega \omega}{c^2} ( \sigma + m).$$ The indices $l$ and $r$ denoting the circular polarisation correspond respectively to $\sigma = 1$ and $\sigma = -1$. With the help of the relations $\epsilon(\omega') = n^2(\omega')$ and $k(\omega) = n(\omega) \omega / c$ we can turn the equation for the wavenumbers into an equation for the effective refractive indices for left- and right-circularly polarised light: $$\label{eq:refindex} n^2_{l/r}(\omega) = n^2(\omega') - 2[n^2(\omega') - 1] \frac{\Omega}{\omega} ( \sigma + m).$$ Following Player (1976) we assume that $\Omega \ll \omega$ and we can therefore approximate the square root for the refractive indices $n_{l/r}$ by a small parameter expansion to the first order in $\Omega/\omega$: $$n_{l/r}(\omega) \simeq n(\omega') - \left[ n(\omega') - \frac{1}{n(\omega')} \right] \frac{\Omega}{\omega} \left( \sigma + m \right).$$ The frequency in the moving frame $\omega'$ is different for left- and right-circularly polarised light (Garetz 1981) and, more generally, the azimuthal or rotational Doppler shift is proportional to the total angular momentum $(\sigma + m)$ (Allen *et al.* 1994; Bialynicki-Birula & Bialynicka-Birula 1997; Courtial *et al.* 1998; Allen *et al.* 2003). 
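The step from (\[eq:refindex\]) to the first-order expression discards terms of order $(\Omega/\omega)^2$. A minimal numerical sketch (illustrative values only, not taken from any experiment) shows that at realistic ratios $\Omega/\omega$ the first-order expansion is indistinguishable from the exact square root:

```python
import math

def n_exact(n0, ratio, j):
    """Exact root of the dispersion relation; j = sigma + m, ratio = Omega/omega."""
    return math.sqrt(n0**2 - 2 * (n0**2 - 1) * ratio * j)

def n_first_order(n0, ratio, j):
    """First-order expansion n0 - (n0 - 1/n0)*(Omega/omega)*(sigma + m)."""
    return n0 - (n0 - 1 / n0) * ratio * j

n0, ratio, j = 1.5, 1e-9, 4   # e.g. sigma = 1, m = 3; Omega/omega ~ 1e-9
err = abs(n_exact(n0, ratio, j) - n_first_order(n0, ratio, j))
print(err < 1e-12)  # True
```

The residual is of order $(\Omega/\omega)^2$ and sits far below double precision for mechanical rotation rates and optical frequencies.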
For left-circularly polarised light with $\sigma=1$ the frequency is thus $\omega' = \omega - \Omega(1+m)$, and for right-circularly polarised light with $\sigma=-1$ the frequency changes to $\omega' = \omega - \Omega(-1+m)$. Following Player (1976) we expand the refractive index of the dielectric in a Taylor series to calculate the difference $n_r - n_l$: \[eq:refindices\] $$\begin{aligned} n_l(\omega) & \simeq n(\omega) - \frac{d n}{d \omega} \Omega (1+m) - \left[ n(\omega) - \frac{1}{n(\omega)} \right] \frac{\Omega}{\omega} \left(1 + m \right), \\ n_r(\omega) & \simeq n(\omega) - \frac{d n}{d \omega} \Omega (-1+m) - \left[ n(\omega) - \frac{1}{n(\omega)} \right] \frac{\Omega}{\omega} \left(-1 + m \right).\end{aligned}$$ Higher order derivatives of $n$ become comparable in magnitude if $n'(\omega) \Omega \simeq n(\omega)$. This will only be the case for a strongly dispersive medium, such as atomic or molecular gases, near a resonance. For such gaseous media the dielectric response in a rotating medium has to be examined more closely (Nienhuis *et al.* 1992). For solid materials, such as a rotating glass rod, and for optical frequencies this condition is not fulfilled and we can neglect higher order derivatives in the expansion (\[eq:refindices\]). Within Player’s assumption that the refractive index is independent of the motion of the medium, we find for the effective specific rotary power: $$\delta_{\mathrm{pol}}(\omega) = \left( \omega n'(\omega) + n(\omega) - \frac{1}{n(\omega)} \right) \frac{\Omega}{c}.$$ On introducing the group refractive index $n_g(\omega) = n(\omega) + \omega n'(\omega)$ and the phase refractive index $n_\varphi(\omega) = n(\omega)$, we can rewrite the rotary power as $$\label{eq:playerrotpow} \delta_{\mathrm{pol}}(\omega) = \left( n_g(\omega) - n_\varphi^{-1}(\omega) \right) (\Omega/c),$$ which is identical to Player’s (1976) expression. 
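To attach an order of magnitude to (\[eq:playerrotpow\]): for a weakly dispersive glass rod, taking the illustrative values $n_g \approx n_\varphi \approx 1.5$ and a rotation rate of 100 revolutions per second (our choice of numbers, not data from the experiments cited above), the rotary power comes out at the microradian-per-metre level, which suggests why measurements of this effect are so delicate:

```python
import math

c = 2.998e8                 # speed of light in m/s
n_phase = 1.5               # phase refractive index of a glass rod (illustrative)
n_group = 1.5               # dispersion neglected, so n_g ~ n_phi
Omega = 2 * math.pi * 100   # 100 revolutions per second, in rad/s

delta_pol = (n_group - 1 / n_phase) * Omega / c   # rotation in rad per metre
print(f"{delta_pol:.2e} rad/m")  # 1.75e-06 rad/m
```

The angle accumulated over a rod of length $L$ is simply $\delta_{\mathrm{pol}} L$.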
In this form the specific rotary power (\[eq:playerrotpow\]) can be used directly with experimental data in the SI unit system. In the next section we look at image rotation caused by a difference in the effective refractive indices for different values of $m$. Image rotation {#sec:imgrot} ============== The specific rotary power describes the rotation of the polarisation, but we can define, analogously, a rotary power of image rotation. The image can simply be created by the superposition of two light beams carrying different values of OAM, which leads to an azimuthal variation of the intensity pattern. In particular we consider an incident superposition of two similarly circularly polarised Bessel beams with opposite OAM values of the form $$\label{eq:superposition} \begin{split} \mathcal{D}& = \mathcal{D}_+ + \mathcal{D}_- \\ & = J_m(\kappa \rho) \exp({\mathrm{i}}m \phi) \exp({\mathrm{i}}k_z z) + J_{-m}(\kappa \rho) \exp(-{\mathrm{i}}m \phi) \exp({\mathrm{i}}k_z z). \end{split}$$ Outside the medium the superposition can be written as one Bessel beam with a trigonometric modulation $$\mathcal{D}= J_m(\kappa \rho) \left( \exp({\mathrm{i}}m \phi) + (-1)^m \exp(-{\mathrm{i}}m \phi) \right) \exp({\mathrm{i}}k_z z),$$ but inside the medium the effective refractive index is different for the two components of the superposition (Allen & Padgett 2007). On propagation this leads to a phase difference which causes a rotation of the image (see figure \[fig:petalimage\]). We define $$\label{eq:imgrot} \delta_{\mathrm{img}}(\omega) = \left( n_-(\omega) - n_+(\omega) \right) \frac{1}{2 m} \frac{\omega}{c},$$ which is the angle per unit length by which the image is rotated. The factor $m$ in the expression for $\delta_{\mathrm{img}}$ appears because of the $\exp({\mathrm{i}}m \phi)$ and $\exp(-{\mathrm{i}}m \phi)$ phase structure of the interfering beams and the resulting $2m$-fold symmetry of the created image (Padgett *et al.* 2006). 
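The origin of the factor $2m$ can be seen in a toy calculation (our own illustration): giving the $\exp(-{\mathrm{i}}m\phi)$ component an extra phase $\Delta$ relative to the $\exp({\mathrm{i}}m\phi)$ component produces the intensity $2 + 2\cos(2m\phi - \Delta)$, whose $2m$-lobed pattern is rotated by $\Delta/(2m)$:

```python
import math

def petal_intensity(phi, m, delta):
    """|exp(i*m*phi) + exp(-i*m*phi + i*delta)|^2 = 2 + 2*cos(2*m*phi - delta)."""
    return 2 + 2 * math.cos(2 * m * phi - delta)

m, delta = 3, 0.3
step = 1e-4
grid = [k * step for k in range(int(math.pi / m / step))]  # one petal period
phi_max = max(grid, key=lambda p: petal_intensity(p, m, delta))
print(abs(phi_max - delta / (2 * m)) < step)  # True
```

With $\Delta$ growing linearly in the propagation distance, dividing the accumulated phase by $2m$ reproduces the angle per unit length in (\[eq:imgrot\]).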
![\[fig:petalimage\]Image rotation](petals1 "fig:"){width="49.00000%"} ![\[fig:petalimage\]Image rotation](petals2 "fig:"){width="49.00000%"} The different effective refractive indices for the components of the superposition (\[eq:superposition\]) are given by: $$n^2_{+/-}(\omega) = n^2(\omega') - 2[n^2(\omega') - 1] \frac{\Omega}{\omega} ( \sigma \pm m).$$ Here, $\sigma$ is fixed in contrast to (\[eq:refindex\]). The roles of $\sigma$ and $m$ are reversed for the image rotation and the refractive indices for positive and negative OAM are given by: \[eq:rotdiffrefind\] $$\begin{aligned} n_+(\omega) & \simeq n(\omega) - \frac{d n}{d \omega} \Omega (\sigma+m) - \left[ n(\omega) - \frac{1}{n(\omega)} \right] \frac{\Omega}{\omega} \left(\sigma + m \right), \\ n_-(\omega) & \simeq n(\omega) - \frac{d n}{d \omega} \Omega (\sigma-m) - \left[ n(\omega) - \frac{1}{n(\omega)} \right] \frac{\Omega}{\omega} \left(\sigma - m \right).\end{aligned}$$ On substituting (\[eq:rotdiffrefind\]) into (\[eq:imgrot\]) we find: $$\delta_{\mathrm{img}} (\omega) = \left( \omega \frac{d n}{d \omega} + n(\omega) - \frac{1}{n(\omega)} \right) \frac{\Omega}{c},$$ which can be written in terms of the group and phase refractive indices as: $$\label{eq:imgrotpow} \delta_{\mathrm{img}} (\omega) = \left( n_g(\omega) - n_\varphi^{-1}(\omega) \right) (\Omega/c).$$ This verifies the reasoning of Padgett *et al.* (2006) that the polarisation and the image are turned by the same amount when passing through a rotating medium. It is the total angular momentum that determines the phase shifts and a linearly polarised image will undergo rotations of both the plane of polarisation and the intensity pattern or image. Conclusion ========== We have extended a theoretical study by Player (1976) on the propagation of light through a rotating medium to include general electromagnetic fields. 
In the original analysis, Player (1976) showed that the rotation of the polarisation inside a rotating medium can be understood in terms of a difference in the propagation for left- and right-circularly polarised light. Player’s (1976) analysis was thus concerned solely with the spin angular momentum (SAM) of light. Our treatment has shown that the general wave equation has an additional term, which is of the same form as the Fresnel drag term for a uniform motion. In the context of rotating motion, however, this term is connected to the orbital angular momentum (OAM) of the light. By extending the theoretical analysis to include OAM we have been able to attribute polarisation rotation and image rotation to SAM and OAM, respectively. We have shown that a superposition of Bessel beams with the same OAM but opposite SAM states leads to the rotation of the polarisation, whereas a superposition of Bessel beams with the same SAM and opposite OAM values gives rise to a rotation of the transmitted image. We have obtained quantitative expressions for the rotation of the polarisation and of the transmitted image and have verified that both are turned through the same angle, as recently suggested by Padgett *et al.* (2006). Player (1976) remarked that the derivation by Fermi (1923) appears to be in error. The mistake in Fermi’s treatment seems to lie in omitting the transformation of the magnetic fields. Whereas the change in the electric fields induced by the motion of the medium is explicitly given in terms of the electric polarisation $\mathbf{P}$[^1], a similar transformation for the magnetic field is missing. In terms of our derivation this would mean that (\[eq:bfirstorder\]) changes to $\mathbf{B} = \mathbf{H}$ in the rest frame. This in turn would cause the term $\mathbf{v} \times \dot{\mathbf{D}}$ to be missing from (\[eq:vsteady\]). 
This term and the term $\nabla \times (\mathbf{v} \times \mathbf{H})$ contribute equally to the wave equation (\[eq:finalwaveeq\]), which explains why Fermi’s result for the specific rotary power is smaller than Player’s and ours by a factor of two. As pointed out by Player (1976) this missing factor is cancelled by an additional factor of two in Fermi’s definition of the specific rotary power. We would like to thank Amanda Wright and Jonathan Leach whose experiments on this problem motivated our work. This work was supported by the UK Engineering and Physical Sciences Research Council. \[app:acceleration\] The assumption that $\mathbf{v} = \mathbf{\Omega} \times \mathbf{r}$ is steady is problematic for a rotating motion; if we assume $\Omega$ to be constant over time, then $\dot{\mathbf{v}} = (\mathbf{\Omega} \cdot \mathbf{r} ) \mathbf{\Omega} - \Omega^2 \mathbf{r}$. In principle this would invalidate our initial considerations for the transformation of the electromagnetic fields (\[eq:transforms\]) which strictly hold only for uniform motion. Including the time-derivative of $\mathbf{v}$ would lead to additional terms in (\[eq:vsteady\]) of the form $\epsilon(\omega')[\epsilon(\omega') -1] \dot{\mathbf{v}} \times \mathbf{E}$. If we proceed in taking the curl of this vector product we produce four terms which either can be neglected because they are second order in $v/c$, or they do not contain the time derivative of an optical field. The latter are smaller than terms that do contain a time derivative by $\sim \Omega/\omega$. For our assumption $\Omega \ll \omega$ all such terms are negligible. Allen, L., Beijersbergen, M. W., Spreeuw, R. J. C. & Woerdman, J. P. 1992 Orbital angular momentum of light and the transformation of Laguerre-Gaussian modes. *Phys. Rev. A* **45**, 8185–8190. Allen, L., Babiker, M. & Power, W. L. 1994 Azimuthal Doppler-shift in light-beams with orbital angular-momentum. *Opt. Commun.* **112**, 141–144. Allen, L., Padgett, M. J. 
& Babiker, M. 1999 The Orbital Angular Momentum of Light. *Prog. Opt.* **39**, 291–372. Allen, L., Barnett, S. M. & Padgett, M. J. 2003 *Optical Angular Momentum*. Bristol: Institute of Physics Publishing. Allen, L. & Padgett, M. J. 2007 Equivalent geometric transformation for spin and orbital angular momentum of light. *J. Mod. Opt.* **54**, 487–491. Baranova, N. B. & Zel’dovich, B. Ya. 1979 Coriolis contribution to the rotary ether drag. *Proc. R. Soc. Lond. A* **368**, 591–592. Barton, G. 1999 *Introduction to the Relativity Principle*. Chichester: John Wiley & Sons. Bialynicki-Birula, I. & Bialynicka-Birula, Z. 1997 Rotational Frequency Shift. *Phys. Rev. Lett.* **78**, 2539–2542. Courtial, J., Robertson, D. A., Dholakia, K., Allen, L. & Padgett, M. J. 1998 Rotational Frequency Shift of a Light Beam. *Phys. Rev. Lett.* **81**, 4828–4830. Faraday, M. 1846 Experimental Researches in Electricity. Nineteenth Series. *Philos. Trans. R. Soc. Lond.* **136**, 1–20. Fermi, E. 1923 Sul trascinamento del piano di polarizzazione da parte di un mezzo rotante. *Rend. Lincei* **32**, 115–118. Reprinted in: Fermi, E. 1962 *Collected Papers*, vol. 1. Chicago: University of Chicago Press. Fowles, G. R. 1975 *Introduction to Modern Optics*, 2nd edn. New York: Dover Publications. Garetz, B. A. 1981 Angular Doppler-effect. *J. Opt. Soc. Am.* **71**, 609–611. Jackson, J. D. 1999 *Classical Electrodynamics*, 3rd edn. New York: John Wiley & Sons. Jones, R. V. 1976 Rotary ‘aether drag’. *Proc. R. Soc. Lond. A* **349**, 423–439. Landau, L. D. & Lifshitz, E. M. 1975 *The Classical Theory of Fields*, 4th edn. Oxford: Elsevier Butterworth-Heinemann. McCrea, W. H. 1954 *Relativity Physics*, 4th edn. London: Methuen Publishing. Nienhuis, G., Woerdman, J. P. & Kuščer, I. 1992 Magnetic and mechanical Faraday effects. *Phys. Rev. A* **46** (11), 7079–7092. Padgett, M., Whyte, G., Girkin, J., Wright, A., Allen, L., Öhberg, P. & Barnett, S. M. 
2006 Polarization and image rotation induced by a rotating dielectric rod: an optical angular momentum interpretation. *Opt. Lett.* **31** (14), 2205–2207. Player, M. A. 1976 On the dragging of the plane of polarisation of light propagating in a rotating medium. *Proc. R. Soc. Lond. A* **349**, 441–445. Rindler, W. 2001 *Relativity*. Oxford: Oxford University Press. Stratton, J. A. 1941 *Electromagnetic Theory*. New York: McGraw-Hill. [^1]: Fermi (1923) denotes the electric polarisation by $\mathbf{S}$
---
abstract: 'With about 12 fb$^{-1}$ of collected XYZ data sets, BESIII continues the exploration of the exotic charmonium-like states. In this talk, recent results on the spin-parity determination of the $Z_{\rm c}(3900)$, as well as on the line-shapes of $e^+e^- \rightarrow J/\psi\,\pi\pi, h_{\rm c}\pi\pi, \psi(2S)\,\pi^0\pi^0/\pi^+\pi^-$, and $\pi^+ D^0 D^{*-}$ from open charm are discussed. Also, the recent observation of $e^+ e^- \rightarrow \phi \chi_{c1/2}$ at $\sqrt{s}=4.6$GeV is reported.'
address: |
    Institut für Kernphysik, Goethe Universität Frankfurt,\
    and GSI Darmstadt, Germany
author: |
    F. Nerling[^1],\
    on behalf of the BESIII Collaboration
title: 'Recent results on charmonium-like (exotic) XYZ states at the BESIII/BEPCII experiment in Beijing/China [^2]'
---

Introduction
============

In the charmonium region, the $c\bar{c}$ charmonium states can successfully be described using potential models. Beneath the open-charm threshold, all the predicted states have been observed with the expected properties, and excellent agreement is achieved between theory and experiment. Above the open-charm threshold, however, there are still many predicted states that have not yet been discovered, and, surprisingly, several unexpected states have been observed since 2003. Interesting examples of these so-called (exotic) charmonium-like “XYZ” states are the $X(3872)$ observed by Belle [@X3872_belle], the vector states $Y(4260)$ and $Y(4360)$, both discovered by BaBar using initial state radiation (ISR) [@Y4260_barbar; @Y4360_barbar], and the charged state $Z_{\rm c}(3900)^\pm$ discovered by BESIII [@Zc3900_besiii] and confirmed shortly after by Belle [@Zc3900_belle], which is a manifestly exotic state; for a recent overview see e.g. [@reviewMitchel_etal_2016]. 
The Beijing Spectrometer (BES) at the Beijing Electron-Positron Collider (BEPC) in China started initially in 1989, and the BESIII/BEPCII experiment [@besiii] is the latest incarnation, which began operation in March 2008 after major upgrades were finalised. The multi-purpose detector allows for coverage of a broad hadron physics programme, including not only charmonium and open-charm spectroscopy but also electromagnetic form factor as well as $R$ scan measurements and many others. We have collected the world’s largest data sets in the $\tau$-charm mass region. Among those are unique high-luminosity data sets, amounting in total to more than 5fb$^{-1}$ accumulated above 3.8GeV, to explore the still unexplained $XYZ$ states.

Recent major results on charmonium-like exotic XYZ states
=========================================================

With BESIII/BEPCII, conventional as well as charmonium-like (exotic) $XYZ$ states can be studied. In $e^+e^-$ annihilation, we have direct access to vector $Y$ states ($J^{PC}$=$1^{--}$), which are produced at unprecedented statistics. Also, we can study charged as well as neutral $Z_{\rm c}$ states, which are produced indirectly (together with recoil particles), whereas $X$ states are accessible via radiative decays, see [*e.g.*]{} [@nerlingMorionds2017].

Spin-parity determination of the $Z_{\rm c}(3900)$ and $Z_{\rm c}(3885)$
------------------------------------------------------------------------

Owing to its charge in combination with its mass, the discovery of the $Z_{\rm c}(3900)^\pm$ state is a strong hint of the first four-quark state being observed. After the observation of the neutral partner $Z_{\rm c}(3900)^0\rightarrow J/\psi\pi^0$ [@Zc3900neutral], confirming earlier evidence reported by CLEO-c [@cleo-c_ZcNeutral], a $Z_{\rm c}(3900)^{\pm,0}$ isospin triplet seems to be established. 
Furthermore, a second isospin triplet $Z_{\rm c}(4020)^{\pm,0}$ has meanwhile been established in the BESIII data [@Zc4020; @Zc4020_neutral_BESIII], also consistent with other measurements [@Zc4020_1_CLEO; @Zc4020_b_Belle]. Despite this remarkable progress, the nature of these states is still unclear, and the question is whether the different decays in which the $Z_{\rm c}$ states have been observed (hidden versus open charm) are decay modes of the same state. Therefore, the spin-parity $J^P$ of $Z_{\rm c}(3885)^\pm\rightarrow D\bar{D^*}^\pm$ has been studied in terms of the angular distribution $|\cos\theta_{\pi}|$ of the detection-efficiency corrected signal event yields, where $\theta_\pi$ is the angle between the bachelor pion and the beam axis (Fig.\[ZcSpinParity\_a\]). Based on a single $D$-tag analysis (Fig.\[ZcSpinParity\_a\], left) of data taken at 4.26GeV, $J^P=1^+$ was determined [@Zc3885_AD_a_BESIII], and later re-confirmed at higher significance in a double $D$-tag analysis based on data at 4.23GeV and 4.26GeV (Fig.\[ZcSpinParity\_a\], right) [@Zc3885_AD_b_BESIII]. Recently, also the spin-parity of the $Z_{\rm c}(3900)^\pm\rightarrow J/\psi\pi^\pm$ system has been studied in an amplitude analysis (Fig.\[ZcSpinParity\_b\]), including a simultaneous fit to the data sets at 4.23GeV and 4.26GeV [@Zc3900_PWA_BESIII]. Not only is the $J^P=1^+$ assignment for this state significantly favoured by the data, but also the pole mass of $(3881.2 \pm 4.2_{\rm stat.} \pm 52.7_{\rm syst.})$MeV/$c^2$ is found to be consistent with that of the $Z_{\rm c}(3885)^\pm$ from the open charm channel [@Zc3885_AD_b_BESIII], and this holds also for the ratio ${\cal B}(Z_{\rm c}^\pm \rightarrow D\bar{D^*}^\pm) / \,{\cal B}(Z_{\rm c}^\pm \rightarrow J/\psi\pi^\pm)$ of about $6.2\pm2.9$. In conclusion, it seems that these two $Z_{\rm c}$ states at 3.9GeV/$c^2$ are indeed the same object observed in different decay modes. 
Clearly, further decay channels via charmonia other than $J/\psi$ and $h_{\rm c}$, [*e.g.*]{} $\eta_c$ [@FNerling_etal], need to be investigated, and possible multiplets need to be completed also by high-spin states, which can only be accessed by future experiments such as PANDA/FAIR [@panda].

Line-shapes of $J/\psi\pi^+\pi^-$, $h_{\rm c}\pi\pi$, $\psi(2S)\,\pi^0\pi^0/\pi^+\pi^-$ and $\pi^+ D^0 D^{*-}$
--------------------------------------------------------------------------------------------------------------

The $Y(4260)$ and the $Y(4360)$ were discovered decaying to $J/\psi\pi^+\pi^-$ and $\psi(2S)\pi^+\pi^-$, respectively [@Y4260_barbar; @Y4360_barbar]. Based on increased statistics, the $Y(4260)$ appears with a somewhat asymmetric shape [@Y4260_barbar_update]. The Belle experiment confirmed the decay $Y(4260) \rightarrow J/\psi\pi^+\pi^-$ but, in contradiction to the BaBar result, claimed that a lower mass peak, the “$Y(4008)$”, is needed in addition in order to describe the data [@Y4008_belle]. A simultaneous fit to the BESIII high-precision energy-dependent cross-section measurements for $\sigma(e^+e^- \rightarrow J/\psi\pi^+\pi^-)$ of the high-luminosity “XYZ” (8.2fb$^{-1}$) and low-luminosity “$R$-scan” (0.8fb$^{-1}$) data sets [@Ycrosssection_besiii] resolves two resonance structures at high statistical significance ($>$$7\sigma$) in the $Y(4260)$ region, whereas a $Y(4008)$ appears not to be present. It should be emphasised that, while the $Y(4260)$ is observed with a significantly smaller width ($\Gamma$=$44.1\pm 3.8$MeV/$c^2$) at smaller mass ($m$=4220MeV/$c^2$), the second resonance (with $m$=$4326.8\pm$10.0MeV/$c^2$, $\Gamma$=$98.2^{+25.4}_{-19.6}$MeV/$c^2$) is (within errors) consistent with the $Y(4360)$, which is observed here for the first time in the decay to $J/\psi\pi^+\pi^-$. Previously, it was only seen in $\psi(2S)\pi^+\pi^-$ [@Y4360_barbar; @psi2Spipi_belle]. 
Before coming back to $e^+e^-$ production of $\psi(2S)\pi^+\pi^-$, the recent BESIII result on $h_{\rm c}\pi^+\pi^-$ production [@hcpipi_besiii] should be mentioned, as it provides evidence for two resonant structures at 4.22GeV/$c^2$ ($m$=$(4218.4 \pm4.0 \pm 0.9)$MeV/$c^2$ and $\Gamma$=$(66.0 \pm 0.9 \pm 0.4)$MeV/$c^2$) and at 4.39GeV/$c^2$ ($m$=$(4391.6 \pm 6.3 \pm 1.0)$MeV/$c^2$, $\Gamma$=$(139.5 \pm 16.1 \pm 0.6)$MeV/$c^2$), which we call “$Y(4220)$” and “$Y(4390)$”. They are observed at a statistical significance of more than 10$\sigma$ over the one-resonance assumption. The Belle result on $e^+e^-$ production of $\psi(2S)\pi^+\pi^-$ (Fig.\[Ystates\_psi2Spipi\_bes3\], left) shows clear indication of the $Y(4360)$ and $Y(4660)$, both decaying to $\psi(2S)\pi^+\pi^-$ [@psi2Spipi_belle]. However, no evidence for the $Y(4260)$ is found in the data ($<$3$\sigma$), and it was thus omitted from their best fit. The new $\psi(2S)\pi^+\pi^-$ production cross-section result by BESIII [@psi2Spipi_besiii] is compared to the ones from BaBar and Belle in Fig.\[Ystates\_psi2Spipi\_bes3\] (right). The BESIII measurement confirms the $Y(4360)$ line shape reported previously, and from our fit with three coherent Breit-Wigner functions, we observe for the first time $Y(4220)\rightarrow\psi(2S)\pi^+\pi^-$ and again the $Y(4390)$, which are both consistent in the resonance parameters $(m,\Gamma)$ with the two structures that we observe in $h_{\rm c}\pi^+\pi^-$ [@hcpipi_besiii]. In the region of both of these states, we also studied possible intermediate states using an unbinned maximum-likelihood fit to the Dalitz plots (Fig.\[DPanaBes3psi2Spipi\]), in which the parameterisation comprises two coherent sums of resonant and non-resonant production. 
At 4.42GeV, including an intermediate state of $(m,\Gamma)$=$(4032.1 \pm 2.4, 26.1 \pm 5.3)$MeV/$c^2$ significantly (9.2$\sigma$) improves the fit description of the data, consistent with a clearly visible narrow structure in $m(\psi(2S)\pi)$ at this $E_{\rm cms}$ (Fig.\[DPanaBes3psi2Spipi\], top/right). At 4.36GeV, no obvious structure is visible apart from a cluster of events at low $m(\pi\pi)$. At the two lower $E_{\rm cms}$ (Fig.\[DPanaBes3psi2Spipi\], top/left), [*i.e.*]{} in the region of the $Y(4220)$, two accumulations of events at about 3.9 and 4.03GeV$/c^2$, respectively, are visible at $4.26$GeV, whereas at $4.23$GeV, no structure is clearly seen and the $m(\pi\pi)$ distribution appears very different from that at $4.26$GeV. It should be noted that possible intermediate states of 3.9 and 4.03GeV$/c^2$ at $4.26$GeV would have kinematic reflections at each other’s masses, and at $4.23$GeV, a possible $Z_{\rm c}(4030)$ state would be rather close to the kinematical border, so that no obvious distinct structure would be expected to be visible here. In the fits at 4.36 and 4.26GeV, the resonance parameters of the intermediate $Z_{\rm c}$-like state were fixed to those obtained at 4.42GeV, resulting in a statistical significance of 3.6$\sigma$ and 9.6$\sigma$, respectively. Even though the data at 4.42GeV is also not described sufficiently, the confidence level improves to about 50% when applying an additional cut of $m(\pi\pi)> 0.3$GeV/$c^2$. Even though there are still unresolved discrepancies (model [*vs.*]{} data), we might have observed a $Z_{\rm c}$-like intermediate state with a mass of about $m$=$4030$MeV/$c^2$. A similar analysis of the neutral counterpart, $e^+e^-\rightarrow \psi(2S)\pi^0\pi^0$, yields similar structures and similar results in the corresponding Dalitz plot analysis [@psi2Spi0pi0_besiii]. 
Higher-statistics data and theoretical input are needed to improve and sort out the present discrepancies in describing the significant sub-structures in the $\psi(2S)\pi\pi$ system. \[Ystates\_Bes3\] In conclusion for the vector $Y$ states, we observe two structures, “$Y(4220)$” and “$Y(4390)$”, consistently in the decays to $c\bar{c}\pi\pi$ involving the three charmonia $J/\psi$, $h_c$ and $\psi(2S)$, and interestingly, we observe these two $Y$ states with consistent resonance parameters also in the preliminary open charm analysis of $e^+e^-\rightarrow D^0D^{*}\pi$ (Fig.\[Ystates\_Bes3\], bottom).

Conclusions and outlook
=======================

The BESIII/BEPCII experiment has been operating successfully since 2008. Given the world’s largest data set in the $\tau$-charm region, it offers unique possibilities for investigations of the $XYZ$ spectrum. We have the first two $Z_{\rm c}$ isospin triplets established, the $X(3872)$ observed for the first time in radiative decays [@X3872_besiii], and we have recently published precision measurements of production cross-sections in the $Y$ energy range, resolving for the first time structures overlooked in previous measurements. Similarly to the $X$ and $Z_{\rm c}$ states, we find these $Y$ states also decaying to $DD^*\pi$. As an outlook, BESIII continues to collect data, which will help to further resolve the $XYZ$ puzzle.\ \ [**Acknowledgement:**]{} This work is supported by the DFG Grant “FOR2359”. [99]{} S.K. Choi [*et al.*]{} (Belle Collaboration), Phys. Rev. Lett. 91, 262001 (2003). B. Aubert [*et al.*]{} (BaBar Collaboration), Phys. Rev. Lett. 95, 142001 (2005). B. Aubert [*et al.*]{} (BaBar Collaboration), Phys. Rev. Lett. 98, 212001 (2007). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 110, 252001 (2013). Z.Q. Liu [*et al.*]{} (Belle Collaboration), Phys. Rev. Lett. 110, 252002 (2013). R.F. Lebed, R.E. Mitchell and E.S. Swanson, arXiv:1610.04528v2\[hep-ex\]. M. 
Ablikim [*et al.*]{} (BESIII Collaboration), Nucl. Instrum. Methods Phys. Res., Sect. A 614, 345 (2010). F. Nerling, on behalf of the BESIII Collaboration, Proc. Moriond-QCD 2017; arXiv:1805.12450\[hep-ex\]. M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 115, 112003 (2015). T. Xiao, S. Dobbs, A. Tomaradze and K.K. Seth, Phys. Lett. B 727, 366 (2013). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 111, 242001 (2013). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 113, 212002 (2014). T. Xiao [*et al.*]{} (CLEO Collaboration), Phys. Lett. B 727, 366 (2013). K. Chilikin [*et al.*]{} (Belle Collaboration), Phys. Rev. D 90, 112009 (2014). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 112, 022001 (2014). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. D 92, 092006 (2015). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 119, 072001 (2017). K. Götzen, R. Kliemt, F. Nerling and K. Peters (BESIII Collaboration), work in progress (2018). W. Erni [*et al.*]{} (PANDA Collaboration), PANDA Performance Report (2009),\ arXiv:0903.3905\[hep-ex\]. B. Aubert [*et al.*]{} (BaBar Collaboration), Phys. Rev. D 86, 051102 (2012). Z.Q. Liu [*et al.*]{} (Belle Collaboration), Phys. Rev. Lett. 110, 252002 (2013). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 118, 092001 (2017). Z.Q. Liu [*et al.*]{} (Belle Collaboration), Phys. Rev. D 91, 112007 (2015). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 118, 092002 (2017). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. D 96, 032004 (2017). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. D 97, 052001 (2018). M. Ablikim [*et al.*]{} (BESIII Collaboration), Phys. Rev. Lett. 112, 092001 (2014). [^1]: Email: [email protected] [^2]: Presented at the $10^{\rm th}$ international workshop Excited QCD 2018
---
abstract: 'Active learning is an important technique to reduce the number of labeled examples in supervised learning. Active learning for binary classification has been well addressed in machine learning. However, active learning of the reject option classifier remains unaddressed. In this paper, we propose novel algorithms for active learning of reject option classifiers. We develop an active learning algorithm using the double ramp loss function. We provide mistake bounds for this algorithm. We also propose a new loss function for the reject option, called the double sigmoid loss function, and a corresponding active learning algorithm. We offer a convergence guarantee for this algorithm. We provide extensive experimental results to show the effectiveness of the proposed algorithms. The proposed algorithms efficiently reduce the number of labeled examples required.'
author: |
    Kulin Shah, Naresh Manwani\
    Machine Learning Lab, KCIS, IIIT Hyderabad, India\
    [email protected], [email protected]
bibliography:
- 'Final-Biblography.bib'
title: Online Active Learning of Reject Option Classifiers
---

Introduction
============

In standard binary classification problems, algorithms return a prediction for every example. For any misprediction, the algorithms incur a cost. Many real-life applications involve very high misclassification costs. Thus, for some confusing examples, not predicting anything may be less costly than a misclassification. The choice of not predicting anything for an example is called the [*reject option*]{} in the machine learning literature. Such classifiers are called reject option classifiers. Reject option classification is very useful in many applications. Consider a doctor diagnosing a patient based on the observed symptoms and a preliminary diagnosis. If there is ambiguity in the observations and the preliminary diagnosis, the doctor can hold the decision on the treatment. 
She can recommend advanced tests or consultation with a specialist to avoid the risk of misdiagnosing the patient. The holding response of the doctor is the same as the reject option for the specific patient [@Rocha2011]. On the other hand, the doctor’s misprediction can cost a huge amount of money in further treatment, or even the life of the patient. In another example, a banker can use the reject option while looking at the loan application of a customer [@Rosowsky2013]. A banker may choose not to decide based on the information available because of the high misclassification cost, and ask for further recommendations or a credit bureau score from the stakeholders. Applications of reject option classifiers include healthcare [@btn349; @Rocha2011], text categorization [@1234113], crowdsourcing [@Qunwei2017], etc. Let $\X \subset \R^d$ be the feature space and $\{+1,-1\}$ be the label space. Examples of the form $({\mathbf{x}},y)$ are generated from an unknown fixed distribution on $\X \times \{+1,-1\}$. A reject option classifier can be described with the help of a function $f:\X \rightarrow \R$ and a rejection width parameter $\rho \in \R_+$ as below. $$\begin{aligned} \label{eq:reject_classifier} h_\rho(f({\mathbf{x}})) = 1.\I_{\{f({\mathbf{x}})>\rho\}} -1.\I_{\{f({\mathbf{x}})<-\rho\}}-0.\I_{\{ | f({\mathbf{x}}) | \leq \rho\}}\end{aligned}$$ The goal is to learn $f(.)$ and $\rho$ simultaneously. For a given example $({\mathbf{x}},y)$, the performance of the reject option classifier $h_\rho(f(.))$ is measured using the following loss function. $$\begin{aligned} \label{eq:0-d-1} L_d(yf({\mathbf{x}}),\rho) = \I_{\{yf({\mathbf{x}}) \leq -\rho\}} + d\I_{\{|f({\mathbf{x}})|\leq \rho\}} \end{aligned}$$ where $d\in (0,0.5)$ is the cost of rejection. A reject option classifier is learnt by minimizing the risk (expectation of the loss) under $L_d$. As $L_d$ is not continuous, optimization of the empirical risk under $L_d$ is difficult. 
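To make the decision rule of eq.(\[eq:reject\_classifier\]) and the loss of eq.(\[eq:0-d-1\]) concrete, both can be transcribed directly into code. The following is a minimal illustration; the function names are ours, not from any implementation accompanying the paper.

```python
def reject_option_predict(fx, rho):
    # Decision rule h_rho: predict +1 or -1, or return 0 meaning "reject".
    if fx > rho:
        return 1
    if fx < -rho:
        return -1
    return 0  # rejection region: |f(x)| <= rho


def loss_0_d_1(y, fx, rho, d):
    # L_d: indicator of misclassification plus d times the indicator
    # of rejection, exactly as the two indicator terms in eq. (0-d-1).
    return float(y * fx <= -rho) + d * float(abs(fx) <= rho)
```

For instance, with $\rho=0.5$ and $d=0.2$, an example with $f(\mathbf{x})=0.3$ is rejected and incurs loss $0.2$, while a confidently misclassified example incurs loss $1$.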
@Bartlett:2008 [@wegkamp2011] propose a convex surrogate of $L_d$ called the generalized hinge loss. They learn the reject option classifier using risk minimization algorithms based on the generalized hinge loss. @Grandvalet2008 propose another convex surrogate of $L_d$ called the double hinge loss and a corresponding risk minimization approach for reject option classification. @Manwani15 [@shah2019sparse] propose double ramp loss based approaches for reject option classification. The double ramp loss is a non-convex bounded loss function. All these approaches assume that plenty of labeled data is available. In general, classifiers learned with a large amount of training data can give better generalization on test data. However, in many real-life applications, it can be costly and difficult to get a large amount of labeled data. Thus, in many cases, it is desirable to ask for the labels of the examples selectively. This motivates the idea of active learning. Active learning selects more informative examples and queries the labels of those examples. Active learning of standard binary classifiers has been well studied [@Dasgupta:2009; @Bachrach:1999; @Tong:2002]. In @El-Yaniv:2012, the authors reduce active learning for the usual binary classification problem to learning a reject option classifier to achieve faster convergence rates. However, active learning of reject option classifiers has remained an unaddressed problem. In this paper, we propose online active learning algorithms for reject option classification. Let us reconsider the example where the banker uses the reject option classifier for screening loan applications. Consider a loan application that satisfies the basic requirements. The banker is thus not certain about using the hold option. On the other hand, she is also not sure enough to approve the application. Such cases are instrumental in defining the separation rule between accepting the loan application and holding it for further investigation. 
This motivates us to think that one can use active learning to ask for the labels of such selective examples while learning the reject option classifier. A broad class of active learning algorithms is inspired by the concept of a margin between the two categories. An example that falls in the margin area of the current classifier carries more information about the decision boundary. On the other hand, examples that are correctly classified with a good margin, or misclassified by a good margin, give less information about the decision boundary. Margin examples can bring more changes to the existing classifier. Thus, querying the label of margin examples is more desirable than that of the other two kinds of examples. A reject option classifier can be viewed as two parallel surfaces with the rejection area in between. Thus, active learning of the reject option classifier becomes active learning of two surfaces in parallel with a shared objective. This shared objective is nothing but to minimize the sum of $L_d$ losses over a sequence of examples. In [@Manwani15], the authors propose a risk minimization approach based on the double ramp loss ($L_{dr}$) for learning the reject option classifier. In [@Manwani15], it is shown that at optimality, the two surfaces can be represented using only those examples which are close to them. Examples that are far from the two surfaces do not participate in the representation of the surfaces. This motivates us to use the double ramp loss for developing an active learning approach for reject option classifiers.

Our Contributions
-----------------

We make the following contributions in this paper.

- We propose an active learning algorithm based on the double ramp loss $L_{dr}$ to learn linear and non-linear classifiers. We give bounds on the number of rejected examples and on the misclassification rates for un-rejected examples.

- We propose a smooth non-convex loss called the double sigmoid loss ($L_{ds}$) for reject option classification. 
- We propose an active learning algorithm based on $L_{ds}$ to learn both linear and non-linear classifiers. We also give convergence guarantees for the proposed algorithm.

- We present extensive simulation results for both proposed active learning algorithms for linear as well as non-linear classification boundaries.

Proposed Approach: Active Learning Inspired by Double Ramp Loss {#section:dr_active}
===============================================================

An active learning algorithm does not ask for the label in every trial. We denote the instance presented to the algorithm at trial $t$ by ${\mathbf{x}}_{t}$. Each ${\mathbf{x}}_{t} \in \mathcal{X}$ is associated with a unique label $y_{t} \in \{-1, 1\}$. The algorithm calculates $f_{t}({\mathbf{x}}_{t})$ and outputs the decision using eq.(\[eq:reject\_classifier\]). Based on $f_{t}({\mathbf{x}}_{t})$, the active learning algorithm decides whether to ask for the label or not. [@pmlr-v5-guillory09a] shows that online active learning algorithms can be viewed as stochastic gradient descent on a non-convex loss function; therefore, we use the non-convex [*double ramp loss*]{} $L_{dr}$ [@Manwani15] to derive our first active learning approach. $L_{dr}$ is defined as follows. $$\begin{aligned} L_{dr}(yf({\mathbf{x}}), {\rho}) = d \Big{[}\big{[}1-yf({\mathbf{x}}) +\rho\big{]}_+ - \big{[}-1-yf({\mathbf{x}})+\rho\big{]}_+\Big{]}\\ +(1-d) \; \Big{[}\big{[}1 -yf({\mathbf{x}})-\rho\big{]}_+ - \big{[}-1-yf({\mathbf{x}})-\rho\big{]}_+\Big{]} \end{aligned}$$ ![Double Ramp Loss with $\rho=2$[]{data-label="fig:my_label"}](ld-ldr.png){width="0.7\columnwidth"} Here $[a]_{+} = \max(0, a)$ and $d$ is the cost of rejection. Figure \[fig:my\_label\] shows the plot of the double ramp loss for $\rho=2$. We first consider developing an active learning algorithm for linear classifiers (i.e. $f({\mathbf{x}}) = {\mathbf{w}}\cdot {\mathbf{x}}$). We use stochastic gradient descent (SGD) to derive the double ramp loss based active learning algorithm. 
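For concreteness, $L_{dr}$ can be evaluated directly from its definition. The sketch below (our naming, not the authors' code) reproduces the shape in Figure \[fig:my\_label\]: the loss vanishes for margins above $\rho+1$, plateaus at $2d$ inside the rejection band, and saturates at $2$ for strongly misclassified examples.

```python
def double_ramp_loss(margin, rho, d):
    # L_dr evaluated at margin = y * f(x); pos(a) implements [a]_+ = max(a, 0).
    pos = lambda a: max(a, 0.0)
    return (d * (pos(1.0 - margin + rho) - pos(-1.0 - margin + rho))
            + (1.0 - d) * (pos(1.0 - margin - rho) - pos(-1.0 - margin - rho)))
```

With $\rho=2$ and $d=0.2$, for example, the loss is $0$ at margin $4$, $2d=0.4$ at margin $0$, and $2$ at margin $-4$.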
The parameter update equations using SGD are as follows. $$\begin{aligned} &\;\;\;{\mathbf{w}}_{t+1} = {\mathbf{w}}_t - \eta \nabla_{{\mathbf{w}}_t} L_{dr}(y_tf({\mathbf{x}}_t), \rho_{t})\\ =&\begin{cases} {\mathbf{w}}_{t} + \eta dy_t{\mathbf{x}}_t, & \rho_t - 1 \leq y_tf({\mathbf{x}}_t) \leq \rho_t + 1 \\ {\mathbf{w}}_{t} + \eta (1-d)y_t{\mathbf{x}}_t, & -\rho_t - 1 \leq y_tf({\mathbf{x}}_t) \leq -\rho_t+1 \\ {\mathbf{w}}_{t} & \text{otherwise} \end{cases}\\ {\rho}_{t+1}& = \rho_t - \eta \nabla_{\rho_t} L_{dr}(y_tf({\mathbf{x}}_t), \rho_{t})\\ &= \begin{cases} {\rho}_{t} - \eta d, & \rho_t - 1 \leq y_tf({\mathbf{x}}_t) \leq \rho_t + 1 \\ {\rho}_{t} + \eta (1-d), & -\rho_t - 1 \leq y_tf({\mathbf{x}}_t) \leq -\rho_t + 1 \\ {\rho}_{t} & \text{otherwise} \end{cases}\end{aligned}$$ where $\eta$ is the step size. We see that the parameters are updated only when $|f_t({\mathbf{x}}_t)| \in [\rho_t - 1, \rho_t + 1]$. For the rest of the regions, the gradient of the loss $L_{dr}$ is zero; therefore, there won’t be any update when an example ${\mathbf{x}}_t$ is such that $|f_t({\mathbf{x}}_t)| \notin [\rho_t - 1, \rho_t + 1]$. Thus, there is no need to query the label when $|f_t({\mathbf{x}}_t)| \notin [\rho_t - 1, \rho_t + 1]$. We only query the labels when $|f_t({\mathbf{x}}_t)| \in [\rho_t - 1, \rho_t + 1]$. Thus, we ask for the label of the current example only if it falls in the linear region of the loss $L_{dr}$. This is the same way any margin-based active learning approach updates the parameters. If the algorithm does not query the label $y_t$, the parameters (${\mathbf{w}}, \rho$) are not updated. Thus, we define the query function $Q_t$ as follows. $$Q_t= \begin{cases} 1 & \text{if} \; \rho_t - 1 \leq |f({\mathbf{x}}_t)| \leq \rho_t + 1 \\ 0 & \text{otherwise} \end{cases}$$ The detailed algorithm is given in Algorithm \[algo:double-ramp-active-learning\]. We call it DRAL (double ramp loss based active learning). 
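The update equations and the query function $Q_t$ above can be combined into a single online pass. The following is a minimal sketch for the linear case; function and variable names are ours, not from the paper's implementation.

```python
import numpy as np

def dral_linear(stream, d=0.2, eta=0.1, dim=2):
    # One pass of the DRAL update scheme: the label y_t is consumed only
    # when |f_t(x_t)| lies in the query band [rho_t - 1, rho_t + 1].
    w, rho = np.zeros(dim), 1.0
    n_queries = 0
    for x, y in stream:
        f = float(w @ x)
        if rho - 1.0 <= abs(f) <= rho + 1.0:    # query condition Q_t = 1
            n_queries += 1                      # only now is y_t revealed
            m = y * f
            if rho - 1.0 <= m <= rho + 1.0:     # band on the correct side
                w = w + eta * d * y * x
                rho = rho - eta * d
            elif -rho - 1.0 <= m <= -rho + 1.0: # band on the wrong side
                w = w + eta * (1.0 - d) * y * x
                rho = rho + eta * (1.0 - d)
        # otherwise (w, rho) are left unchanged and no label is queried
    return w, rho, n_queries
```

On examples well outside the band, the pass makes no query and no update, which is exactly the label-saving behaviour described above.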
DRAL can be easily extended for learning nonlinear classifiers using the kernel trick, as described in Appendix \[app-sec:kernelized-DRAL\].

**Algorithm \[algo:double-ramp-active-learning\] (DRAL).** Input: $d \in (0, 0.5)$, step size $\eta$. Output: weight vector ${\mathbf{w}}$, rejection width $\rho$.

- Initialize ${\mathbf{w}}_1=\mathbf{0}$, $\rho_{1}=1$.
- At each trial $t$: sample ${\mathbf{x}}_{t} \in S$ and set $f_{t}({\mathbf{x}}_{t})={\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t}$.
- If $\rho_t - 1 \leq |f_{t}({\mathbf{x}}_{t})| \leq \rho_t + 1$, set $Q_t=1$ and query the label $y_{t}$ of ${\mathbf{x}}_t$:
    - if $\rho_t - 1 \leq y_{t}f_{t}({\mathbf{x}}_{t}) \leq \rho_t + 1$, set ${\mathbf{w}}_{t+1} = {\mathbf{w}}_{t} + \eta dy_{t}{\mathbf{x}}_{t}$ and $\rho_{t+1} = \rho_{t} - \eta d$;
    - if $-\rho_t - 1 \leq y_{t}f_{t}({\mathbf{x}}_{t}) \leq -\rho_t + 1$, set ${\mathbf{w}}_{t+1} = {\mathbf{w}}_{t} + \eta (1-d)y_{t}{\mathbf{x}}_{t}$ and $\rho_{t+1} = \rho_{t} + \eta (1-d)$.
- Otherwise, set ${\mathbf{w}}_{t+1} = {\mathbf{w}}_{t}$ and $\rho_{t+1} = \rho_{t}$.

Mistake Bounds for DRAL
-----------------------

In this section, we derive the mistake bounds of DRAL. Before presenting the mistake bounds, we begin with a lemma which facilitates the mistake bound proofs that follow. Let $f_t({\mathbf{x}}_t)={\mathbf{w}}_t \cdot {\mathbf{x}}_t$. We define the following.[^1] $$\begin{aligned} \label{define-ct-rt-mt} \begin{cases} C_{t} = \mathbb{I}_{ \{ {\rho}_{t} \leq y_{t}f_t({\mathbf{x}}_{t}) \leq {\rho}_{t} + 1 \} }&R_{1t} = \mathbb{I}_{ \{ {\rho}_{t} - 1 \leq y_{t}f_t({\mathbf{x}}_{t}) \leq {\rho}_{t} \} } \\ R_{2t} = \mathbb{I}_{ \{ -{\rho}_{t} \leq y_{t}f_t({\mathbf{x}}_{t}) \leq -{\rho}_{t} + 1 \} }&M_{t} = \mathbb{I}_{ \{ -{\rho}_{t} - 1 \leq y_{t}f_t({\mathbf{x}}_{t}) \leq -{\rho}_{t} \} } \end{cases}\end{aligned}$$ 
$$\begin{aligned} &{\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 + \frac{ 2 \alpha \eta }{ m } \sum_{t=1}^T L_{dr}(y_tf({\mathbf{x}}_{t}), \rho) \geq \\ &\sum\limits_{t=1}^{T} [{ \thinspace C_{t} + R_{1t} \thinspace }]\big{[} 2{\alpha} \eta d + 2 \eta (L_{dr}(y_tf_t({\mathbf{x}}_{t}), \rho_t) - d ) \\ -& \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \big{]}+ \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }]\big{[} \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } \\ +& 2 \eta (L_{dr}(y_tf_t({\mathbf{x}}_{t}), \rho_t) - d- 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \big{]} \end{aligned}$$ where $f({\mathbf{x}}_t)={\mathbf{w}}\cdot {\mathbf{x}}_t$ and $f_t({\mathbf{x}}_t)={\mathbf{w}}_t \cdot {\mathbf{x}}_t$. The proof is given in Appendix \[app-sec:lemma-1\]. Now, we will find the bounds on rejection rate and mis-classification rate. Let $({\mathbf{x}}_{1}, y_{1}), \dotso , ({\mathbf{x}}_{T}, y_{T})$ be a sequence of input instances, where ${\mathbf{x}}_{t} \in \mathcal{X}$ and $y_{t} \in \{-1, 1\}$ and $\| {\mathbf{x}}_{t}\| \leq R$ for all $t\in[T]$. Assume that there exists a $f({\mathbf{x}})={\mathbf{w}}\cdot {\mathbf{x}}$ and $\rho$ such that $\| {\mathbf{w}}\| \leq {\mathbf{W}}$ and $L_{dr}(y_tf({\mathbf{x}}_t), \rho) = 0$ for all $t\in [T]$. 1. Number of examples rejected by DRAL (Algorithm \[algo:double-ramp-active-learning\]) among those for which the label was asked in this sequence is upper bounded as follows. $$\sum \limits_{t:Q_t=1} [R_{1t} + R_{2t}] \leq \alpha^2 \| {\mathbf{w}}\|^2 + (1 - \alpha \rho)^2$$ where $\alpha = \max \Big{(} \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d} ,\frac{m_{22} (1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) )}{2 m_{21} \eta (1+d)} \Big{)}$. 2. Number of examples mis-classified by DRAL (Algorithm \[algo:double-ramp-active-learning\]) among those for which the label was asked in this sequence is upper bounded as follows. 
$$\sum\limits_{t:Q_t=1}M_{t} \leq {\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2$$ where $\alpha = \max \Big{(} \frac{ \eta d(R^2 + 1) + 2}{2} ,\frac{ m_{22} ( 1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) )}{2 m_{21} \eta (1+d)} \Big{)}$. The proof is given in Appendix \[app-sec-theorem-2\]. The above theorem assumes that there exist $f({\mathbf{x}})={\mathbf{w}}\cdot {\mathbf{x}}$ and $\rho$ such that $L_{dr}(y_tf({\mathbf{x}}_t), {\rho})=0$ for all $t \in [T]$. This means that the data is linearly separable. In such a case, the number of mistakes made by the algorithm on unrejected examples, as well as the number of rejected examples, is upper bounded by a complexity term independent of $T$. Now, we derive the bounds when the assumption $L_{dr}(y_tf({\mathbf{x}}_t), {\rho})=0,\;t\in[T]$ does not hold for any $f({\mathbf{x}})={\mathbf{w}}\cdot {\mathbf{x}}$ and $\rho$. Let $({\mathbf{x}}_{1}, y_{1}), ({\mathbf{x}}_{2}, y_{2}), \dotso , ({\mathbf{x}}_{T}, y_{T})$ be a sequence of input instances, where ${\mathbf{x}}_{t} \in \mathcal{X}$ and $y_{t} \in \{-1, 1\}$ and $\| {\mathbf{x}}_{t}\| \leq R$ for all $t\in [T]$. Then, for any given $f({\mathbf{x}})={\mathbf{w}}\cdot {\mathbf{x}}$ ($\| {\mathbf{w}}\| \leq {\mathbf{W}}$) and $\rho$, we observe the following.

1.  The number of examples rejected by DRAL (Algorithm \[algo:double-ramp-active-learning\]) among those for which the label was queried in this sequence is upper bounded as follows. $$\sum \limits_{t:Q_t=1} [R_{1t} + R_{2t}] \leq \alpha^2 \| {\mathbf{w}}\|^2 + (1 - \alpha \rho)^2 + \sum_{t=1}^T \frac{ 2\eta \alpha }{ m } L_{dr}(y_tf({\mathbf{x}}_{t}), {\rho})$$ where $\alpha = \max \Big{(} \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d}, \frac{m_{22}(1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d))}{2 m_{21} \eta (1+d)} \Big{)}$.

2.  The number of examples misclassified by DRAL (Algorithm \[algo:double-ramp-active-learning\]) is upper bounded as follows.
$$\begin{aligned} \sum \limits_{t:Q_t=1} M_{t} \leq \alpha^2 \| {\mathbf{w}}\|^2 + (1 - \alpha \rho)^2 + \sum_{t=1}^T \frac{ 2\eta \alpha }{ m } L_{dr}(y_tf({\mathbf{x}}_{t}), {\rho})\end{aligned}$$ where $\alpha = \max \Big{(} \frac{ \eta d(R^2 + 1) + 2}{2}, \frac{m_{22}(1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d))}{2 m_{21} \eta (1+d)} \Big{)}$. The proof is given in Appendix \[app-sec:theorem-3\]. We see that when the data is not linearly separable, the number of mistakes made by the algorithm is upper bounded by the sum of a complexity term and the cumulative loss of a fixed classifier.

Active Learning Using Double Sigmoid Loss Function {#section:ds_active}
==================================================

We observe that the double ramp loss is not smooth. Moreover, $L_{dr}$ is constant whenever $yf({\mathbf{x}}) \in [\rho+1,\infty)\cup (-\infty, -\rho-1] \cup [-\rho+1 , \rho-1]$. Thus, when $yf({\mathbf{x}})$ for an example ${\mathbf{x}}$ falls in any of these three regions, the gradient of the loss is zero. A zero gradient causes no update, so there is no benefit in asking for the label when an example falls in one of these regions. However, we do not want to ignore these regions completely. To capture the information in these regions, we need to change the loss function in such a way that its gradient does not vanish completely there. To that end, we propose a new loss function.

Double Sigmoid Loss
-------------------

We propose a new loss function for reject option classification by combining two sigmoids as follows. We call it the [*double sigmoid loss*]{} function $L_{ds}$. $$\begin{aligned} L_{ds}(yf({\mathbf{x}}), \rho) = 2d \sigma (yf({\mathbf{x}}) - \rho) + 2(1-d) \sigma (y f({\mathbf{x}}) + \rho)\end{aligned}$$ where $\sigma(a) = \left(1+e^{ \gamma a}\right)^{-1}$ is the sigmoid function ($\gamma >0$). Figure \[fig:DSL\] shows the double sigmoid loss function.
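To make the definition concrete, the double sigmoid loss can be sketched in a few lines of NumPy (an illustrative sketch of ours, not reference code; the function names and default parameter values are our own):

```python
import numpy as np

def sigmoid(a, gamma=2.0):
    """Decreasing sigmoid from the text: sigma(a) = 1 / (1 + exp(gamma * a))."""
    return 1.0 / (1.0 + np.exp(gamma * a))

def double_sigmoid_loss(margin, rho, d=0.25, gamma=2.0):
    """L_ds(y f(x), rho) = 2 d sigma(yf - rho) + 2 (1 - d) sigma(yf + rho).

    `margin` stands for y * f(x).  The loss decreases smoothly from about 2
    (confident misclassification) to about 0 (confident correct
    classification), with no exactly flat region.
    """
    return (2.0 * d * sigmoid(margin - rho, gamma)
            + 2.0 * (1.0 - d) * sigmoid(margin + rho, gamma))
```

Since $\sigma$ has strictly negative slope everywhere, the gradient of $L_{ds}$ never vanishes exactly; this is the property exploited below.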
$L_{ds}$ is a smooth non-convex surrogate of the loss $L_d$ (see eq.(\[eq:0-d-1\])). We also see that for the double sigmoid loss, the gradient in the regions $yf({\mathbf{x}}) \in [\rho+1,\infty)\cup (-\infty, -\rho-1] \cup [-\rho+1 , \rho-1]$ does not vanish, unlike for the double ramp loss.

![Double sigmoid loss with $\gamma = 2$.[]{data-label="fig:DSL"}](ld-lds.png){width="0.7\columnwidth"}

Below we establish that the loss $L_{ds}$ is $\beta$-smooth.[^3] \[lemma:smoothness\] Assuming $\| {\mathbf{x}}\| \leq \RR$, the double sigmoid loss $L_{ds}(yf({\mathbf{x}}), \rho)$ is $\beta$-smooth with constant $\beta = \frac{\gamma^2}{5} \big[ \RR^2 + 1 \big] $. The proof is given in Appendix \[app-sec:lemma-4\].

Query Probability Function
--------------------------

In the case of DRAL, we saw that the gradient of $L_{dr}$ is nonzero only when $yf({\mathbf{x}}) \in [\rho-1,\rho +1] \cup [-\rho-1,-\rho+1]$, i.e., when $|f({\mathbf{x}})|$ lies in $[\rho-1,\rho+1]$. So, we ask for the label only when an example falls in this region. However, in the case of the double sigmoid loss, the gradient does not vanish. Thus, to perform active learning using $L_{ds}$, we need to ask for labels selectively. We propose a query probability function to set the label query probability at trial $t$. The query probability function should have the following properties. In the loss $L_d$ (see eq.(\[eq:0-d-1\])), we see two transitions. One is at $yf({\mathbf{x}})=\rho$ (transition between correct classification and rejection) and the other at $yf({\mathbf{x}})=-\rho$ (transition between rejection and misclassification). An example falling close to one of these transitions carries more information about them. We therefore want the query probability function to give higher probabilities near these transitions. Examples that are correctly classified with a good margin, examples misclassified with a considerable margin, and examples in the middle of the reject region do not carry much information. Such examples are also situated away from the transition regions.
Thus, the query probability should decrease as we move away from these transitions, so that labels in these regions are asked for with low probability. Considering these desirable properties, we propose the following query probability function. $$\begin{aligned} \label{eq:proper-bimodal-probability} p_{t} = 4 \; \sigma ( | f_{t}( {\mathbf{x}}_{t} ) | - \rho_{t} )\left( 1 - \sigma( | f_{t} ({\mathbf{x}}_{t}) | - \rho_{t}) \right)\end{aligned}$$ where $f_{t}({\mathbf{x}}_t) = {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t$. Figure \[fig:Query-probability\] shows the graph of the query probability function. We see that the probability function has two peaks. One peak is at $yf({\mathbf{x}})=\rho$ (transition between correct classification and rejection) and the other at $yf({\mathbf{x}})=-\rho$ (transition between rejection and misclassification).

![Query Probability Function[]{data-label="fig:Query-probability"}](ld-lds-p.png){width="0.65\columnwidth"}

Double Sigmoid Based Parameter Updates
--------------------------------------

The parameter update equations using $L_{ds}$ are as follows.
$$\begin{aligned} \nonumber & {\mathbf{w}}_{t+1}={\mathbf{w}}_t -\eta \nabla_{{\mathbf{w}}_t} L_{ds}(y_tf({\mathbf{x}}_t), \rho_t)\\ \nonumber &= {\mathbf{w}}_t+2y_{t} \alpha {\mathbf{x}}_{t} \Big{[}d\sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t}) \right)\\ &+ (1 - d)\sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t}) \right) \Big{]} \label{eq:proper-bimodal-w-update} \\ \nonumber & \rho_{t+1}=\rho_t - \eta \nabla_{\rho_t} L_{ds}(y_tf({\mathbf{x}}_t), \rho_t)\\ \nonumber &= \rho_t- 2\alpha \Big{[}d \sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t}) \right)\\ &\;\;\;\;\;- (1 - d) \sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t}) \right) \Big{]} \label{eq:proper-bimodal-rho-update}\end{aligned}$$ Here $\alpha = \eta \gamma$, which follows from $\sigma'(a) = -\gamma\, \sigma(a)\left(1-\sigma(a)\right)$. Now, we explain the update equations for ${\mathbf{w}}$ and $\rho$.

1.  When an example is correctly classified with a good margin (i.e. $y_{t}f_{t}({\mathbf{x}}_{t}) \gg 0$), the active learning algorithm updates ${\mathbf{w}}$ by a small factor of $y_{t} {\mathbf{x}}_{t}$ and reduces the rejection width $(\rho)$, because for $y_{t}f_{t}({\mathbf{x}}_{t}) \gg 0$, $ d \sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t}) \right) > (1-d) \sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t}) \right)$.

2.  When an example is misclassified with a good margin (i.e.
$y_{t}f_{t}({\mathbf{x}}_{t}) \ll 0$), the active learning algorithm updates ${\mathbf{w}}$ by a large factor of $y_{t} {\mathbf{x}}_{t}$ and increases the rejection width $(\rho)$, because for $y_{t}f_{t}({\mathbf{x}}_{t}) \ll 0$, $d \sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t}) \right) < (1-d) \sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t}) \right)$.

We use the acronym DSAL for double sigmoid based active learning. DSAL is described in Algorithm \[algo:double-sigmoid-active-learning\].

**Algorithm DSAL.** Input: $d \in (0, 0.5)$, step size $\eta$. Output: weight vector ${\mathbf{w}}$, rejection width $\rho$. Initialize ${\mathbf{w}}_{1}, \rho_{1}$. At each trial $t$:

-   Sample ${\mathbf{x}}_{t} \in \mathbb{R}^{d}$ and set $f_{t}({\mathbf{x}}_t)={\mathbf{w}}_{t} \cdot {\mathbf{x}}_t$.

-   Set $p_{t}=4 \sigma ( | f_{t}( {\mathbf{x}}_{t} ) | - \rho_{t} )\left( 1 - \sigma( | f_{t} ({\mathbf{x}}_{t}) | - \rho_{t}) \right)$ and randomly sample $z_{t} \in \{0, 1\}$ from Bernoulli($p_{t}$).

-   If $z_{t}=1$, query the label $y_{t}$ of ${\mathbf{x}}_t$, find ${\mathbf{w}}_{t+1}$ using eq.(\[eq:proper-bimodal-w-update\]) and $\rho_{t+1}$ using eq.(\[eq:proper-bimodal-rho-update\]).

-   Otherwise, set ${\mathbf{w}}_{t+1} = {\mathbf{w}}_{t}$ and $\rho_{t+1} = \rho_{t}$.

Convergence of DSAL
-------------------

In the case of DRAL, the mistake bound analysis was possible because $L_{dr}$ increases linearly in the regions where its gradient is nonzero. We do not have similar behavior for the double sigmoid loss $L_{ds}$, so we cannot carry out the same analysis here. Instead, we show the convergence of DSAL to a local minimum, borrowing techniques from online non-convex optimization. In online non-convex optimization, it is challenging to converge towards a global minimizer.
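Before turning to the convergence analysis, the per-trial step of DSAL (query probability $p_t$, Bernoulli draw $z_t$, and a gradient-descent step on $L_{ds}$) can be sketched in NumPy. This is our own minimal sketch, not the authors' code: `label_oracle` is a hypothetical stand-in for querying $y_t$, and the update is written with the factor $2\eta\gamma$ made explicit, using $\sigma'(a) = -\gamma\sigma(a)(1-\sigma(a))$.

```python
import numpy as np

def sigmoid(a, gamma=2.0):
    # Decreasing sigmoid from the text: sigma(a) = 1 / (1 + exp(gamma * a)).
    return 1.0 / (1.0 + np.exp(gamma * a))

def dsal_trial(w, rho, x, label_oracle, eta=0.1, d=0.25, gamma=2.0, rng=None):
    """One DSAL trial: draw z_t ~ Bernoulli(p_t); if z_t = 1, query the
    label and take one gradient-descent step on L_ds w.r.t. w and rho."""
    rng = rng if rng is not None else np.random.default_rng()
    f = float(w @ x)
    s = sigmoid(abs(f) - rho, gamma)
    p = 4.0 * s * (1.0 - s)               # query probability, maximal at |f| = rho
    if rng.random() < p:                  # z_t = 1: ask for the label
        y = label_oracle(x)
        sm = sigmoid(y * f - rho, gamma)  # sigmoid at the upper transition
        sp = sigmoid(y * f + rho, gamma)  # sigmoid at the lower transition
        g = d * sm * (1.0 - sm)
        h = (1.0 - d) * sp * (1.0 - sp)
        # sigma'(a) = -gamma * sigma(a) * (1 - sigma(a)), hence:
        w = w + 2.0 * eta * gamma * y * (g + h) * x   # w   <- w   - eta * grad_w   L_ds
        rho = rho - 2.0 * eta * gamma * (g - h)       # rho <- rho - eta * grad_rho L_ds
    return w, rho
```

Because $p_t = 4\sigma(1-\sigma)$ equals $1$ exactly at $|f_t({\mathbf{x}}_t)| = \rho_t$, examples sitting on either transition are always queried, while examples far from both transitions are queried rarely.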
It is a common practice to state the convergence guarantee of an online non-convex optimization algorithm by showing its convergence towards an $\epsilon$-approximate stationary point. In our case, this means that for some $t$, $\| \nabla L_{ds} (y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_t) \|^2 \leq \epsilon$. To prove the convergence of DSAL, we use the notion of local regret defined in [@DBLP:journals/corr/abs-1708-00075]. The local regret of an online algorithm is $$\mathcal{R} ( T ) = \sum_{t=1}^T \| \nabla L_{ds} (y_{t}f_{t}({\mathbf{x}}_{t}) , \rho_{t}) \|^2,$$ where $T$ is the total number of trials. Thus, in each trial, we incur a regret equal to the squared norm of the gradient of the loss. When we reach a stationary point, the gradient vanishes and hence so does the regret. Note that this convergence framework requires the objective function to be $\beta$-smooth; $L_{ds}$ has this property, as shown in Lemma \[lemma:smoothness\]. Thus, we can use the convergence approach proposed in [@DBLP:journals/corr/abs-1708-00075].[^4] \[theorem:local-regret-bound\] If we choose $\eta = \frac{5}{ \gamma^2 \big[ \RR^2 + 1 \big] }$, then using the smoothness of $L_{ds}(yf({\mathbf{x}}), \rho)$, the local regret of the DSAL algorithm is bounded as follows. $$\mathcal{R}(T) \leq \frac{4 \gamma^2}{5} \left( \RR^2 + 1 \right) \left( T + 1 \right)$$ The proof is given in Appendix \[app-sec:theorem-6\]. To prove that DSAL reaches an $\epsilon$-stationary point in expectation over the iterates, we use the following result of [@DBLP:journals/corr/abs-1708-00075].
$$\begin{aligned} \label{eq:expected-gradient-bound} \mathop{\mathbb{E}}_{t \sim \text{Unif} [T] }\left[ \| \nabla L_{ds}(y f({\mathbf{x}}), \rho) \|^2 \right] \leq \frac{\mathcal{R}(T)}{T} \end{aligned}$$

*(Figure \[fig:gisette-results\]: results on the Gisette dataset for $d \in \{0.1, 0.25, 0.4\}$; the five columns show label-wise risk, average risk, fraction of queried labels, misclassification rate and rejection rate.)*

\[corollary-thm\] For the DSAL algorithm, $$\begin{aligned} \label{eq:corollary} \mathop{\mathbb{E}}_{ t \sim \text{Unif} [T] }\left[ \| \nabla L_{ds}(y f({\mathbf{x}}),\rho) \|^2 \right] \leq \frac{4 \gamma^2}{5} \left( \RR^2 + 1
\right) \left( 1 + \frac{1}{T} \right)\end{aligned}$$ Using Theorem \[theorem:local-regret-bound\] and eq. (\[eq:expected-gradient-bound\]), we obtain the required result of the corollary. From the corollary, we see that the upper bound on the expected squared gradient norm decreases as the total number of trials $T$ increases (its excess over the constant term is proportional to $1/T$). This means that the probability that DSAL reaches an $\epsilon$-stationary point increases as $T$ increases.

Experiments {#section:exp}
===========

We show the effectiveness of the proposed active learning approaches on the Gisette, Phishing and Guide datasets available in the UCI ML repository [@Lichman:2013].

Experimental Setup
------------------

We evaluate the performance of our approaches for learning linear classifiers. In all our simulations, we initialize the step size with a small value, and after every trial the step size decreases by a small constant. The parameter $\alpha$ in the double sigmoid loss function is chosen to minimize the average risk and the average fraction of queried labels (averaged over 100 runs). We need to show that the proposed active learning algorithms effectively reduce the number of labeled examples required while achieving the same accuracy as online learning. Thus, we compare the active learning approaches with an online algorithm that updates the parameters using gradient descent on the double sigmoid loss at every trial. We call this online algorithm DSOL (double sigmoid loss based online learning).

Simulation Results
------------------

We report results for three different values of $d \in \{ 0.1, 0.25, 0.4 \}$. The results provided here are based on 100 repetitions with the total number of trials ($T$) equal to 10000. For every value of $d$, we compute the average of the risk, the fraction of queried labels, the fraction of misclassified examples, and the fraction of rejected examples over the 100 repetitions.
We plot the average of each quantity (e.g., risk, fraction of queried labels, etc.) as a function of $t \in [T]$; the standard deviation of each quantity is shown by error bars in the figures. Figures \[fig:gisette-results\], \[fig:phishing-results\] and \[fig:guidesw-results\] show the experimental results for the Gisette, Phishing and Guide datasets. We observe the following.

*(Figure \[fig:phishing-results\]: results on the Phishing dataset for $d \in \{0.1, 0.25, 0.4\}$; the five columns show label-wise risk, average risk, fraction of queried labels, misclassification rate and rejection rate.)*
*(Figure \[fig:guidesw-results\]: results on the Guide dataset for $d \in \{0.1, 0.25, 0.4\}$; the five columns show label-wise risk, average risk, fraction of queried labels, misclassification rate and rejection rate.)*

-   [**Label Complexity Versus Risk:**]{} The first column in each figure shows how the risk goes down with the number of queried labels.
For the Gisette and Phishing datasets, given the number of queried labels, both DSAL and DRAL achieve lower risk compared to DSOL. For the Guide dataset, DSAL always achieves lower risk than DSOL for a given number of queried labels. For the Gisette and Guide datasets, DSAL achieves lower risk than DRAL with the same number of label queries. For the Phishing dataset, DSAL and DRAL perform comparably.

-   [**Average Risk:**]{} The second column in all the figures shows how the average risk (average of $L_d$) goes down with the number of steps ($t$). In all cases, we see that the risk increases with increasing $d$. The average risk of DSAL is higher than that of DRAL for the Gisette and Phishing datasets for all values of $d$. For the Guide dataset, DSAL always achieves lower risk than DRAL. For the Gisette and Guide datasets, DSAL achieves risk similar to DSOL. For the Phishing dataset, DSOL performs marginally better than DSAL and DRAL. DRAL does better risk minimization than DSOL on the Phishing dataset. For the Guide dataset, DRAL performs comparably to DSOL as $t$ becomes larger, except for $d=0.1$.

-   [**Average Fraction of Asked Labels:**]{} The third column in all the figures shows the fraction of labels queried up to a given time step $t$. We observe that the fraction of queried labels decreases with increasing $d$. For the Gisette and Phishing datasets, DSAL asks for significantly fewer labels than DRAL. This happens because DRAL always asks for labels in a specific region and completely ignores the other regions, whereas DSAL asks for labels in every region with some probability. For the Guide dataset, the fraction of queried labels becomes the same for both DSAL and DRAL as $t$ becomes larger.

-   [**Average Fraction of Misclassified Examples:**]{} The fourth column of all the figures shows how the average fraction of misclassified examples goes down with $t$. We observe that the misclassification rate goes up with increasing $d$.
We see that DRAL achieves the lowest average misclassification rate in all cases compared to DSOL and DSAL, except for the Guide dataset with $d=0.1$. For the Gisette and Phishing datasets, DSAL achieves an average misclassification rate comparable to DSOL in all cases. For the Guide dataset, DSAL achieves a lower misclassification rate than DSOL, except for $d=0.1$.

-   [**Average Fraction of Rejected Examples:**]{} The fifth column in each figure shows how the rejection rate goes down with the number of steps $t$. We see that the average fraction of rejected examples is higher for DRAL than for DSAL and DSOL. Also, the rejection rate decreases with increasing $d$.

Thus, we see that the proposed active learning algorithms DRAL and DSAL effectively reduce the number of labels required for learning the reject option classifier and perform better compared to online learning.

Conclusion {#section:conc}
==========

In this paper, we have proposed the novel active learning algorithms DRAL and DSAL. We presented mistake bounds for DRAL and convergence results for DSAL. We experimentally showed that the proposed active learning algorithms reduce the number of labels required while maintaining performance similar to online learning.

Kernelized Active Learning algorithm using Double Ramp Loss {#app-sec:kernelized-DRAL}
===========================================================

In this section, we describe the kernelized version of the active learning algorithm proposed in Algorithm 1 for learning nonlinear classifiers. We use the usual kernel trick to determine the classifier parameters. For every example presented to the algorithm, we maintain a coefficient $a_t$ and save it. The classifier after trial $t$ is represented as $f_{t}(\cdot) = \underset{s\leq t:Q_s=1}{\sum} a_s {\cal K}({\mathbf{x}}_s,\cdot)$. Thus, the algorithm has to maintain the $a_t$ values for all the examples for which the label was queried. The detailed algorithm is described as follows.
**Algorithm Kernelized DRAL.** Input: $d \in (0, 0.5)$, step size $\eta$, training set $S$. Output: coefficients $\{a_t\}$ and rejection width $\rho$ defining the kernel classifier. Initialize $a_{0}, \rho_{0}$. At each trial $t$:

-   Sample ${\mathbf{x}}_{t}$ from the training set $S$ and set $f_{t-1}({\mathbf{x}}) = \sum_{i=1}^{t-1} a_{i} \K ({\mathbf{x}}_{i}, {\mathbf{x}})$.

-   If $|f_{t-1}({\mathbf{x}}_{t})| \in [\rho_{t-1}-1, \rho_{t-1}+1]$, set $Q_t=1$ and query the label $y_{t}$ of ${\mathbf{x}}_t$. Then,

    -   if $\rho_{t-1}-1 \leq y_{t}f_{t-1}({\mathbf{x}}_{t}) \leq \rho_{t-1}+1$, set $a_{t} = \eta d y_{t}$ and $\rho_{t} = \rho_{t-1} - \eta d $;

    -   if $-\rho_{t-1}-1 \leq y_{t}f_{t-1}({\mathbf{x}}_{t}) \leq -\rho_{t-1}+1$, set $a_{t} = \eta (1 - d) y_{t}$ and $\rho_{t} = \rho_{t-1} + \eta (1-d)$.

-   Otherwise, set $a_{t} = 0$ and $\rho_{t} = \rho_{t-1}$.

Proof of Lemma 1 {#app-sec:lemma-1}
================

To prove Lemma 1, we first prove Lemma 8 and Lemma 9.

Assuming $\| {\mathbf{w}}\| \leq {\mathbf{W}}$ and $\| {\mathbf{x}}\| \leq \RR$, the double ramp loss $L_{dr}$ satisfies the following inequality. $$\begin{aligned} L_{dr}(y ( {\mathbf{w}}\cdot {\mathbf{x}}), \rho ) \geq m_{11} d + m_{12} d \left( \rho - y ( {\mathbf{w}}\cdot {\mathbf{x}}) \right)\end{aligned}$$ where $m_{11} = m_{12} = m_{1} = \min \left( \frac{1}{ \rho }, \frac{2}{ d( 1 + \rho + {\mathbf{W}}\RR ) } \right)$. Assume $$\begin{aligned} \label{eq:ldr-lower-bound1} L_{dr}(y ( {\mathbf{w}}\cdot {\mathbf{x}}), \rho ) \geq m_{11} d + m_{12} d \left( \rho - y ( {\mathbf{w}}\cdot {\mathbf{x}}) \right)\end{aligned}$$ for some $m_{11}$ and $m_{12}$. We will show that for $m_{11} = m_{12} = m_{1}$, eq.(\[eq:ldr-lower-bound1\]) holds for all values of $y({\mathbf{w}}\cdot {\mathbf{x}})$ and $\rho$ under consideration. It is easy to show that if eq.(\[eq:ldr-lower-bound1\]) holds at $y( {\mathbf{w}}\cdot {\mathbf{x}}) = \rho + 1 $, $y( {\mathbf{w}}\cdot {\mathbf{x}}) = - \rho + 1$ and $y( {\mathbf{w}}\cdot {\mathbf{x}}) = - {\mathbf{W}}\RR$, then it holds for all $ y( {\mathbf{w}}\cdot {\mathbf{x}}) $ for which $ \| {\mathbf{w}}\| \leq {\mathbf{W}}$ and $\| {\mathbf{x}}\| \leq \RR$.
At $ y( {\mathbf{w}}\cdot {\mathbf{x}}) = \rho + 1 $, eq.(\[eq:ldr-lower-bound1\]) becomes $$\begin{aligned} \label{eq:lr-lower-bound11} \nonumber 0 &\geq m_{11} d + m_{12} d (-1) \\ m_{12} &\geq m_{11}\end{aligned}$$ At $ y( {\mathbf{w}}\cdot {\mathbf{x}}) = -\rho + 1 $, eq.(\[eq:ldr-lower-bound1\]) becomes $$\begin{aligned} \label{eq:lr-lower-bound12} 2d \geq m_{11} d + m_{12} d (2 \rho - 1)\end{aligned}$$ At $ y( {\mathbf{w}}\cdot {\mathbf{x}}) = - {\mathbf{W}}\RR $, eq.(\[eq:ldr-lower-bound1\]) becomes $$\begin{aligned} \label{eq:lr-lower-bound13} 2 \geq m_{11} d + m_{12} d ( \rho + {\mathbf{W}}\RR )\end{aligned}$$ One can check that $m_{11} = m_{12} = m_{1} = \min \left( \frac{1}{ \rho }, \frac{2}{ d( 1 + \rho + {\mathbf{W}}\RR ) } \right) $ satisfies all three inequalities (i.e. eq.(\[eq:lr-lower-bound11\]), eq.(\[eq:lr-lower-bound12\]) and eq.(\[eq:lr-lower-bound13\])).

Assuming $\| {\mathbf{w}}\| \leq {\mathbf{W}}$, $\| {\mathbf{x}}\| \leq \RR$ and ${\mathbf{W}}\RR > \rho$, the double ramp loss $L_{dr}$ satisfies the following inequality. $$\begin{aligned} L_{dr}(y ( {\mathbf{w}}\cdot {\mathbf{x}}), \rho ) \geq m_{21}(1 + d) - m_{22} (1 - d) \left( \rho + y ( {\mathbf{w}}\cdot {\mathbf{x}}) \right)\end{aligned}$$ where $m_{21} = \min \left( \frac{2 (2 \rho + 1) }{ (1+d)({\mathbf{W}}\RR + \rho + 1) }, \frac{1 + d({\mathbf{W}}\RR - \rho)}{ (1+d)( {\mathbf{W}}\RR - \rho + 1 ) } \right)$ and $m_{22} = \max \left( \frac{2}{ ({\mathbf{W}}\RR + \rho + 1) (1 - d) }, \frac{ (2-d)({\mathbf{W}}\RR - \rho) + 1 }{ ({\mathbf{W}}\RR - \rho + 1)({\mathbf{W}}\RR - \rho)(1 - d) } \right) $. Assume $$\begin{aligned} \label{eq:ldr-lower-bound2} L_{dr}(y ( {\mathbf{w}}\cdot {\mathbf{x}}), \rho ) \geq m_{21}(1 + d) - m_{22} (1 - d) \left( \rho + y ( {\mathbf{w}}\cdot {\mathbf{x}}) \right)\end{aligned}$$ for some $m_{21}$ and $m_{22}$.
It is easy to show that if eq.(\[eq:ldr-lower-bound2\]) holds at $y( {\mathbf{w}}\cdot {\mathbf{x}}) = \rho + 1 $, $y( {\mathbf{w}}\cdot {\mathbf{x}}) = - \rho + 1$ and $y( {\mathbf{w}}\cdot {\mathbf{x}}) = - {\mathbf{W}}\RR$, then it holds for all values of $ y( {\mathbf{w}}\cdot {\mathbf{x}}) $ and $\rho$ under consideration. At $ y( {\mathbf{w}}\cdot {\mathbf{x}}) = \rho + 1 $, eq.(\[eq:ldr-lower-bound2\]) becomes $$\begin{aligned} \label{eq:lr-lower-bound21} \nonumber 0 \geq & \; m_{21} (1 + d) - m_{22} (1 - d) (2\rho + 1) \\[5pt] m_{22}& \geq \frac{m_{21} (1 + d) }{ (1 - d) (2 \rho + 1) }\end{aligned}$$ At $ y( {\mathbf{w}}\cdot {\mathbf{x}}) = -\rho + 1 $, eq.(\[eq:ldr-lower-bound2\]) becomes $$\begin{aligned} \label{eq:lr-lower-bound22} \nonumber 2d &\geq m_{21} (1 + d) - m_{22} (1 - d) \\[5pt] m_{22} & \geq \frac{ m_{21} (1 + d) - 2d}{ (1 - d) }\end{aligned}$$ At $ y( {\mathbf{w}}\cdot {\mathbf{x}}) = - {\mathbf{W}}\RR $, eq.(\[eq:ldr-lower-bound2\]) becomes $$\begin{aligned} \label{eq:lr-lower-bound23} \nonumber 2 &\geq m_{21} (1 + d) - m_{22} (\rho - {\mathbf{W}}\RR) (1 - d) \\[5pt] m_{22} &\leq \frac{ 2 - m_{21} (1 + d) }{ ({\mathbf{W}}\RR - \rho)(1 - d) }\end{aligned}$$ One can check that $m_{21} = \min \left( \frac{ 2(2 \rho + 1) }{ (1 + d)( {\mathbf{W}}\RR + \rho +1 ) }, \frac{ 1 + d({\mathbf{W}}\RR - \rho) }{ (1 + d)( {\mathbf{W}}\RR - \rho + 1 ) } \right)$ and $m_{22} = \max \left( \frac{2}{ ({\mathbf{W}}\RR + \rho + 1) (1 - d) }, \frac{ (2-d)({\mathbf{W}}\RR - \rho) + 1 }{ ({\mathbf{W}}\RR - \rho + 1)({\mathbf{W}}\RR - \rho)(1 - d) } \right) $ satisfy eq.(\[eq:lr-lower-bound21\]), eq.(\[eq:lr-lower-bound22\]) and eq.(\[eq:lr-lower-bound23\]). Now, we prove Lemma 1 using Lemma 8 and Lemma 9.
$$\begin{aligned} &\| {\mathbf{w}}_{t} - {\alpha}{\mathbf{w}}\|^{2} - {\| {\mathbf{w}}_{t+1} - {\alpha}{\mathbf{w}}\|}^{2} ={\| {\mathbf{w}}_{t} - {\alpha}{\mathbf{w}}\|}^{2}\\ &- {\| {\mathbf{w}}_{t} + {\eta dy_{t}{\mathbf{x}}_{t}}[{ \thinspace C_{t} + R_{1t} \thinspace }]} {+ {\eta (1-d)y_{t}{\mathbf{x}}_{t}}[{ \thinspace R_{2t} + M_{t} \thinspace }] - {\alpha}{\mathbf{w}}\|}^{2} \end{aligned}$$ Note that at most one of the four indicators $C_{t}, R_{1t}, R_{2t}, M_{t}$ can be $1$ at any time $t$; therefore, the following identities hold. $$\begin{aligned} &[{ \thinspace C_{t} + R_{1t} \thinspace }]^2 = [{ \thinspace C_{t} + R_{1t} \thinspace }]\\ &[{ \thinspace R_{2t} + M_{t} \thinspace }]^2 = [{ \thinspace R_{2t} + M_{t} \thinspace }]\\ &[{ \thinspace C_{t} + R_{1t} \thinspace }][{ \thinspace R_{2t} + M_{t} \thinspace }] = 0\\ \end{aligned}$$ Using the above facts, $$\begin{aligned} &\| {\mathbf{w}}_{t} - {\alpha}{\mathbf{w}}\|^{2} - {\| {\mathbf{w}}_{t+1} - {\alpha}{\mathbf{w}}\|}^{2}\\ = \;\; & {\| {\mathbf{w}}_{t} \|}^{2} + {\alpha}^{2}{\| {\mathbf{w}}\|}^{2} - 2{\alpha}({{\mathbf{w}}\cdot {\mathbf{w}}_{t}}) - \Big{[} { \| {\mathbf{w}}_{t} \|}^{2} \\ &+ {\eta^2 {d}^2 {y_{t}}^2}{ \| {\mathbf{x}}_{t} \| }^{2}[{ \thinspace C_{t} + R_{1t} \thinspace }] \\ &+ {{ \eta^2 (1-d)}^2 {y_{t}}^2 }{ \| {\mathbf{x}}_{t} \| }^{2}[{ \thinspace R_{2t} + M_{t} \thinspace }] \\ &+ {\alpha}^{2}{ \| {\mathbf{w}}\| }^{2} + {2 \eta dy_{t}}({{\mathbf{w}}_{t}} \cdot {\mathbf{x}}_{t})[{ \thinspace C_{t} + R_{1t} \thinspace }] \\ &+ {2 \eta (1-d)y_{t}}({{\mathbf{w}}_{t}} \cdot {\mathbf{x}}_{t})[{ \thinspace R_{2t} + M_{t} \thinspace }] - 2\alpha({\mathbf{w}}\cdot {\mathbf{w}}_{t}) \\ & - {2{\alpha} \eta dy_{t}}({\mathbf{w}}\cdot {\mathbf{x}}_{t})[{ \thinspace C_{t} + R_{1t} \thinspace }] \\ &- {2{\alpha} \eta (1-d)y_{t}}({\mathbf{w}}\cdot {\mathbf{x}}_{t})[{ \thinspace R_{2t} + M_{t} \thinspace }] \Big{]} \\[6pt] =\;\;& {2{\alpha} \eta dy_{t}}({\mathbf{w}}\cdot {\mathbf{x}}_{t})[{ \thinspace C_{t} + R_{1t} \thinspace }] \\ &+ {2{\alpha} \eta (1-d)y_{t}}({\mathbf{w}}\cdot {\mathbf{x}}_{t})[{ \thinspace R_{2t} + M_{t} \thinspace }] \\ &-{2 \eta dy_{t}}({\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t})[{ \thinspace C_{t} + R_{1t} \thinspace }] \\ &- {2 \eta (1-d)y_{t}}({\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t})[{ \thinspace R_{2t} + M_{t} \thinspace }] \\ &-{\eta^2 d^2{\| {\mathbf{x}}_{t} \|}^2}[{ \thinspace C_{t} + R_{1t} \thinspace }] -{\eta^2 (1-d)^2{\| {\mathbf{x}}_{t} \|}^2}[{ \thinspace R_{2t} + M_{t} \thinspace }] \end{aligned}$$ Collecting the coefficients of $[{ \thinspace C_{t} + R_{1t} \thinspace }] \text{ and } [{ \thinspace R_{2t} + M_{t} \thinspace }]$, $$\begin{aligned} \| &{\mathbf{w}}_{t}- {\alpha}{\mathbf{w}}\|^{2} - {\| {\mathbf{w}}_{t+1} - {\alpha}{\mathbf{w}}\|}^{2} = {2{\alpha} \eta dy_{t}}({\mathbf{w}}\cdot {\mathbf{x}}_{t})[ { \thinspace C_{t} + R_{1t} \thinspace }] \\ &+ {2{\alpha} \eta (1-d)y_{t}}({\mathbf{w}}\cdot {\mathbf{x}}_{t})[ { \thinspace R_{2t} + M_{t} \thinspace }] - {2 \eta dy_{t}}({\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t})[ { \thinspace C_{t} + R_{1t} \thinspace }]\\ &- {2 \eta (1-d)y_{t}}({\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t})[ { \thinspace R_{2t} + M_{t} \thinspace }] -{ \eta^2 d^2{\| {\mathbf{x}}_{t} \|}^2}[{ \thinspace C_{t} + R_{1t} \thinspace }]\\ &-{ \eta^2 (1-d)^2{\| {\mathbf{x}}_{t} \|}^2}[ { \thinspace R_{2t} + M_{t} \thinspace }] \\[6pt] = \; & \; [ { \thinspace C_{t} + R_{1t} \thinspace }]\big{[} {2{\alpha} \eta dy_{t}}({\mathbf{w}}\cdot {\mathbf{x}}_{t}) -{2 \eta dy_{t}}({\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t}) -{ \eta^2 d^2{\| {\mathbf{x}}_{t} \|}^2} \big{]} \\ &+ [ { \thinspace R_{2t} + M_{t} \thinspace }]\big{[} {2{\alpha} \eta (1-d)y_{t}}({\mathbf{w}}\cdot {\mathbf{x}}_{t}) \\ &-{2 \eta (1-d)y_{t}}({\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t}) -{ \eta^2 (1-d)^2{\| {\mathbf{x}}_{t} \|}^2} \big{]} \end{aligned}$$ Repeating a similar procedure for $\rho$, we get $$\begin{aligned} &( {\rho}_{t} - {\alpha}\rho )^{2} - {(
{\rho}_{t+1} - {\alpha}\rho )}^{2} ={( {\rho}_{t} - {\alpha}\rho )}^{2} \\ &- ( {\rho}_{t} - { \eta d}[ { \thinspace C_{t} + R_{1t} \thinspace }] + {\eta(1-d)}[ { \thinspace R_{2t} + M_{t} \thinspace }] - {\alpha}\rho )^{2} \\[6pt] = \; & \; { {\rho}_{t} }^{2} + {\alpha}^2{\rho}^{2} - 2{\alpha}{\rho}{\rho}_{t} - \Big{[} { {\rho}_{t} }^{2} + \eta^2 {d^2}[ { \thinspace C_{t} + R_{1t} \thinspace }] \\ &+ {\eta^2 (1-d)^2}[ { \thinspace R_{2t} + M_{t} \thinspace }] + {\alpha}^2{\rho}^{2} - {2 \eta d {\rho}_{t}}[{ \thinspace C_{t} + R_{1t} \thinspace }] \\ &+ {2 \eta (1-d) {\rho}_{t}}[{ \thinspace R_{2t} + M_{t} \thinspace }] - 2{\alpha}{\rho}{\rho}_{t} + {2 {\alpha} \eta d \rho }[{ \thinspace C_{t} + R_{1t} \thinspace }] \\ &- {2{\alpha} \eta (1-d) \rho}[{ \thinspace R_{2t} + M_{t} \thinspace }] \Big]\\[6pt] =\; & \; [{ \thinspace C_{t} + R_{1t} \thinspace }] \thinspace \big{[} - \eta^2 d^2 + 2 \eta d{\rho}_{t} - 2 {\alpha} \eta d{\rho} \thinspace \big{]} \\ &+ [{ \thinspace R_{2t} + M_{t} \thinspace }]\big{[} - \eta^2 (1-d)^2 \\ &- 2 \eta (1-d) {\rho}_{t} + 2{\alpha} \eta (1-d)\rho \big{]} \end{aligned}$$ Adding $\| {\mathbf{w}}_{t} - {\alpha}{\mathbf{w}}\|^{2} - \| {\mathbf{w}}_{t+1} - {\alpha}{\mathbf{w}}\|^{2}$ and ${( {\rho}_{t} - {\alpha}\rho ) }^{2} - {( {\rho}_{t+1} - {\alpha}\rho )}^{2}$, we get the following. 
$$\begin{aligned} &\| {\mathbf{w}}_{t} - {\alpha}{\mathbf{w}}\|^{2} - \| {\mathbf{w}}_{t+1} - {\alpha}{\mathbf{w}}\|^{2} + {( {\rho}_{t} - {\alpha}\rho ) }^{2} - {( {\rho}_{t+1} - {\alpha}\rho )}^{2} \\ = \; &\ \; [{ \thinspace C_{t} + R_{1t} \thinspace }] \big{[} 2{\alpha} \eta d(y_{t}( {\mathbf{w}}\cdot {\mathbf{x}}_{t} ) - \rho) \\ &- 2 \eta d(y_{t}({\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t}) - {\rho}_{t}) - \eta^2 {d^2}( {\| {\mathbf{x}}_{t} \|}^2 + 1) \big{]} \\ &+ [{ \thinspace R_{2t} + M_{t} \thinspace }] \big{[} 2{\alpha} \eta (1-d)(y_{t}( {\mathbf{w}}\cdot {\mathbf{x}}_{t} ) + \rho )\\ &- 2 \eta (1-d)(y_{t}( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_{t} ) + {\rho}_{t}) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1) \big{]} \end{aligned}$$ If ${ \thinspace C_{t} + R_{1t} \thinspace }= 1$, then $L_{dr}( y_t ({\mathbf{w}}_t \cdot {\mathbf{x}}_t), \rho_t)=d[ \thinspace {\rho}_t + 1 - y_{t}({\mathbf{w}}_t \cdot {\mathbf{x}}_{t}) \thinspace ]$. If ${ \thinspace R_{2t} + M_{t} \thinspace }= 1$, then $L_{dr}( y_t ({\mathbf{w}}_t \cdot {\mathbf{x}}_t), \rho_t)=2d + (1-d)[1 - y_{t}( {\mathbf{w}}_t \cdot {\mathbf{x}}_{t} ) - \rho_t]$. We use these facts together with Lemmas 8 and 9 to get the following. 
$$\begin{aligned} &\| {\mathbf{w}}_{t}- {\alpha}{\mathbf{w}}\|^{2} - {\| {\mathbf{w}}_{t+1} -} { {\alpha}{\mathbf{w}}\|}^{2} + {( {\rho}_{t} - {\alpha}\rho ) }^{2} - {( {\rho}_{t+1} - {\alpha}\rho )}^{2} \\ \geq & \; [{ \thinspace C_{t} + R_{1t} \thinspace }] \Bigg{[} \frac{ 2\alpha \eta }{m_{1}} ( m_{1}d - L_{dr}(y_t ( {\mathbf{w}}\cdot {\mathbf{x}}_t),\rho) ) \\ &+ 2 \eta (L_{dr}( y_t ({\mathbf{w}}_t \cdot {\mathbf{x}}_t),\rho_{t}) - d) - \eta^2 d^2({\| {\mathbf{x}}_{t} \|}^2 + 1) \Bigg{]}\\ &+ [{ \thinspace R_{2t} + M_{t} \thinspace }] \Bigg{[} \frac{2 \alpha \eta}{ m_{22} } ( m_{21}(1 + d) - L_{dr}( y_t({\mathbf{w}}\cdot {\mathbf{x}}_t) ,\rho) )\\ &+ 2 \eta ( L_{dr}( y_t({\mathbf{w}}_t \cdot {\mathbf{x}}_t), \rho_t) - d - 1) - \eta^2 (1-d)^2({\| {\mathbf{x}}_{t} \|}^2 + 1) \Bigg{]} \end{aligned}$$ Summing the above inequality over $t=1,2,\ldots,T$, we get the following. $$\begin{aligned} &\sum\limits_{t=1}^{T} [{ \thinspace C_{t} + R_{1t} \thinspace }] \Bigg{[} \thinspace \frac{ 2{\alpha} \eta }{m_1} ( m_1 d - L_{dr}( y_t({\mathbf{w}}\cdot {\mathbf{x}}_t),\rho))\\ &+ 2 \eta (L_{dr}(y_t ({\mathbf{w}}_t \cdot {\mathbf{x}}_t), \rho_t) - d ) - \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace \Bigg{]} \\ &+ \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }]\Bigg{[} \thinspace \frac{2{\alpha} \eta }{ m_{22} } \big( m_{21} (1 + d) - L_{dr}( y_{t}({\mathbf{w}}\cdot {\mathbf{x}}_t), \rho) \big) \\ &+ 2 \eta (L_{dr}( y_t ( {\mathbf{w}}_t \cdot {\mathbf{x}}_t ), \rho_t) - d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace \Bigg{]} \\[6pt] &\leq {\| {\mathbf{w}}_{1} - {\alpha}{\mathbf{w}}\|}^2 - {\| {\mathbf{w}}_{T+1} - {\alpha}{\mathbf{w}}\|}^2 + {( {\rho}_{1} - {\alpha}\rho )}^2 - {( {\rho}_{T+1} - {\alpha}\rho )}^2 \\ &\leq {\| {\mathbf{w}}_{1} - {\alpha}{\mathbf{w}}\|}^2 + {( {\rho}_{1} - {\alpha}\rho )}^2 \\ &= {\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 \end{aligned}$$ Here, we used the fact that we initialize with 
${\mathbf{w}}_1=\mathbf{0}$ and $\rho_1=1$. Rearranging terms, we get the required inequality. $$\begin{aligned} & \sum\limits_{t=1}^{T} [{ \thinspace C_{t} + R_{1t} \thinspace }]\big{[} \thinspace 2{\alpha} \eta d + 2 \eta (L_{dr}( y_t ({\mathbf{w}}_t \cdot {\mathbf{x}}_t),\rho_t) - d ) \\ &- \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace\big{]} + \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }]\Bigg{[} \thinspace \frac{ 2{\alpha} \eta (1 + d) m_{21}}{ m_{22} } \\ &+ 2 \eta (L_{dr}( y_t ( {\mathbf{w}}_t \cdot {\mathbf{x}}_t), \rho_t) - d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace \Bigg{]} \\[6pt] & \leq {\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 + \sum_{t=1}^T \frac{ 2 \alpha \eta }{ m_{1} } L_{dr}( y_{t} ({\mathbf{w}}\cdot {\mathbf{x}}_t), \rho) [{ \thinspace C_{t} + R_{1t} \thinspace }] \\ &+ \sum_{t=1}^T \frac{2 \alpha \eta}{ m_{22} } L_{dr}( y_{t} ( {\mathbf{w}}\cdot {\mathbf{x}}_t ), \rho) [{ \thinspace R_{2t} + M_{t} \thinspace }] \\[6pt] & \leq {\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 + \sum_{t=1}^T \frac{ 2 \alpha \eta }{ m } L_{dr}( y_{t} ({\mathbf{w}}\cdot {\mathbf{x}}_t), \rho)\end{aligned}$$ where $m = \min( m_{1}, m_{22} )$. Proof of Theorem 2 {#app-sec-theorem-2} ================== 1. 
Putting $L_{dr}( y_{t} ( {\mathbf{w}}\cdot {\mathbf{x}}_t ),\rho) = 0$ in the result of Lemma 1, we get $$\label{eq-theorem2-loss-zero} \begin{aligned} &\alpha^2 \| {\mathbf{w}}\|^2 + (1 - \alpha \rho)^2 \\[6pt] &\geq \sum\limits_{t=1}^{T} [{ \thinspace C_{t} + R_{1t} \thinspace }] \big[ \; 2{\alpha} \eta d + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - d ) \\ &- \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \; \big] + \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }] \Bigg[ \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } \\ &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ),\rho_t) - d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \Bigg] \end{aligned}$$ We choose the following value of $\alpha$. $$\label{eq-alpha-val-loss-zero-reject} \begin{aligned} \alpha = \max \begin{cases} \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d} \\ \frac{ m_{22} \left(1 + \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) \right) }{2 m_{21} \eta (1+d)} \end{cases} \end{aligned}$$ This implies that $\alpha \geq \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d}$. 
Using this inequality in the expression for the coefficient of ${ \thinspace C_{t} + R_{1t} \thinspace }$ in eq.(\[eq-theorem2-loss-zero\]), $$\label{eq-alpha-value-proof21} \begin{aligned} 2{\alpha} \eta d & + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d ) - \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\[6pt] \geq & \;\; 2 \left( \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d} \right) \eta d \\[3pt] &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d ) - \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\[6pt] \geq & \; 1 + \eta^2 d^2 (R^2 - \| {\mathbf{x}}_{t} \|^2) + 2 \eta L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) \\[6pt] \geq & \; 1 \end{aligned}$$ Moreover, from eq.(\[eq-alpha-val-loss-zero-reject\]), we can say that $\alpha \geq \frac{ m_{22} \left( 1+ \eta^2 (1-d)^2 (R^2 + 1) + 2 \eta (1-d) \right) }{2 m_{21} \eta (1+d)}$. Using this inequality in the coefficient of ${ \thinspace R_{2t} + M_{t} \thinspace }$ in eq.(\[eq-theorem2-loss-zero\]), $$\begin{aligned} & \frac{ 2{\alpha} \eta (1 + d) m_{21}}{ m_{22} } + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d - 1) \\[3pt] & \;\;\;\; - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\[6pt] \geq & \; 2 \left( \frac{ 1 + \eta^2 (1-d)^2 (R^2 + 1) + 2 \eta (1-d) }{2 \eta (1+d)} \right) \eta (1+d) \\[5pt] & \;\; + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\[6pt] \geq & \; \eta^2(1-d)^2 (R^2 - \| {\mathbf{x}}_{t} \|^2) \\[5pt] &+ 2 \eta ( L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - 2d ) + 1 \end{aligned}$$ When ${ \thinspace R_{2t} + M_{t} \thinspace }=1$, then $L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) \geq 2d$. 
Using this inequality, $$\begin{aligned} \label{eq-alpha-value-proof22} \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d - 1) \\[4pt] &- \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \geq 1 \end{aligned}$$ Using eq.(\[eq-alpha-value-proof21\]) and eq.(\[eq-alpha-value-proof22\]), for the value of $\alpha$ given in eq.(\[eq-alpha-val-loss-zero-reject\]), $$\begin{aligned} \sum_{t=1}^T [ R_{1t} + R_{2t} ] &\leq \sum_{t=1}^T \; [ C_{t} + R_{1t} ] + \sum_{t=1}^T \; [ R_{2t} + M_{t} ] \\[6pt] &\leq \alpha^2 \| {\mathbf{w}}\|^2 + (1 - \alpha \rho)^2 \\\end{aligned}$$ Here $\alpha = \max \Bigg{(} \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d} ,\frac{ m_{22} \left( 1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) \right) }{2 \eta m_{21} (1+d)} \Bigg{)}$. 2. Putting $L_{dr}( y_{t} ( {\mathbf{w}}\cdot {\mathbf{x}}_t ), \rho) = 0,\;\forall t\in[T]$ in Lemma 1, we get $$\label{eq-theorem1-zero-loss} \begin{aligned} &{\alpha}^2 { \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 \\ &\geq \sum\limits_{t=1}^{T} [{ \thinspace C_{t} + R_{1t} \thinspace }][ \thinspace 2{\alpha} \eta d + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d ) \\ &- \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace] + \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }] \Bigg[ \thinspace \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } \\ &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace \Bigg] \\[6pt] \end{aligned}$$ Now, take $$\label{eq-alpha-val-loss-zero-mistake} \alpha = \max \begin{cases} \frac{ \eta d(R^2 + 1) + 2}{2 } \\ \frac{ m_{22} \left( 1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) \right) }{2 \eta m_{21} (1+d)} \end{cases}$$ This implies that $\alpha \geq \frac{ \eta d(R^2 + 1) + 2}{2}$. 
Using this inequality in the expression for the coefficient of ${ \thinspace C_{t} + R_{1t} \thinspace }$ in eq.(\[eq-theorem1-zero-loss\]), $$\label{eq-coeff1-geq-zero} \begin{aligned} &2{\alpha} \eta d + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d ) - \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\ &\geq 2 \left( \frac{ \eta d(R^2 + 1) + 2}{2} \right) \eta d + 2 \eta (L_{dr}( y_{t} ({\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - d) \\ &- \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 )\\ = & \; \eta^2 d^2( R^2 - {\| {\mathbf{x}}_{t} \|}^2 ) + 2 \eta L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) \\ & \geq 0, \;\forall t \in [T] \end{aligned}$$ The value of $\alpha$ in eq.(\[eq-alpha-val-loss-zero-mistake\]) also implies that $\alpha \geq \frac{ m_{22} \left( 1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) \right) }{2 m_{21} \eta(1+d)}$. Using this inequality in the expression for the coefficient of ${ \thinspace R_{2t} + M_{t} \thinspace }$ in eq.(\[eq-theorem1-zero-loss\]), $$\label{eq-coeff2-geq-one} \begin{aligned} & \frac{ 2\alpha \eta (1+d) m_{21} }{ m_{22} } + 2 \eta ( L_{dr}( y_{t} ({\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - d - 1 ) \\[5pt] &- \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\[4pt] &\geq 2\left( \frac{1+ \eta^2 (1-d)^2(R^2 + 1) + 2\eta(1-d)}{2 \eta (1+d)} \right) \eta (1+d) \\ &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d -1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\ &= 1 + \eta^2 (1-d)^2( R^2 - {\| {\mathbf{x}}_{t} \|}^2 ) \\ &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - 2d) \geq 1 \end{aligned}$$ From eq.(\[eq-coeff1-geq-zero\]) and (\[eq-coeff2-geq-one\]), we can say that the value of $\alpha$ given in eq.(\[eq-alpha-val-loss-zero-mistake\]) makes the coefficient of ${ \thinspace C_{t} + R_{1t} \thinspace }$ greater than or equal to 0 and the coefficient of ${ \thinspace R_{2t} + M_{t} \thinspace }$ greater than or equal to 
1. $$\begin{aligned} &\sum\limits_{t=1}^{T} M_{t} \leq \sum\limits_{t=1}^{T}[{ \thinspace R_{2t} + M_{t} \thinspace }]\\ &\leq \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }] \Bigg[ \thinspace \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) \\ &- d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace \Bigg] \\ &\leq {\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 \end{aligned}$$ Proof of Theorem 3 {#app-sec:theorem-3} ================== 1. According to Lemma 1, $$\begin{aligned} \label{eq-theorem2-loss-nonzero} &\sum\limits_{t=1}^{T} [{ \thinspace C_{t} + R_{1t} \thinspace }]\big{[} \thinspace 2{\alpha} \eta d + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d ) \\ &- \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace\big{]} + \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }]\Bigg{[} \thinspace \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } \\ &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d - 1) \\[5pt] &- \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace \Bigg{]} \\ &\leq {\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 + \sum_{t=1}^T \frac{ 2 \alpha \eta }{ m } L_{dr}( y_{t} ( {\mathbf{w}}\cdot {\mathbf{x}}_t ), \rho) \end{aligned}$$ Now, taking the value of $\alpha$ as $$\begin{aligned} \label{eq-alpha-val-loss-nonzero-reject} \alpha = \max \begin{cases} \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d} \\ \frac{ m_{22} \left( 1 + \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) \right) }{2 \eta (1+d) m_{21} } \end{cases} \end{aligned}$$ This implies that $\alpha \geq \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d}$. 
Using this inequality in the expression for the coefficient of ${ \thinspace C_{t} + R_{1t} \thinspace }$ in eq.(\[eq-theorem2-loss-nonzero\]), $$\label{eq-value-proof21} \begin{aligned} &2{\alpha} \eta d + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d ) - \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\ &\geq 2 \left( \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d} \right) \eta d \\ &+ 2 \eta (L_{dr}( y_t ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - d ) - \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\ &\geq 1 + \eta^2 d^2 (R^2 - \| {\mathbf{x}}_{t} \|^2) + 2 \eta L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) \\ &\geq 1 \end{aligned}$$ Moreover, from eq.(\[eq-alpha-val-loss-nonzero-reject\]), we can say that $\alpha \geq \frac{ m_{22} (1+ \eta^2 (1-d)^2 (R^2 + 1) + 2 \eta (1-d)) }{2 \eta (1+d) m_{21} }$. Using this inequality in the coefficient of ${ \thinspace R_{2t} + M_{t} \thinspace }$ in eq.(\[eq-theorem2-loss-nonzero\]), $$\begin{aligned} & \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } + 2 \eta (L_{dr}( y_{t}({\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - d - 1) \\[5pt] &- \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\ & \geq 2 \left( \frac{ 1 + \eta^2 (1-d)^2 (R^2 + 1) + 2 \eta (1-d) }{2 \eta (1+d)} \right) \eta (1+d) \\[5pt] & + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\[5pt] & \geq \eta^2(1-d)^2 (R^2 - \| {\mathbf{x}}_{t} \|^2) + 2 \eta ( L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - 2d ) + 1\\[6pt]\end{aligned}$$ When ${ \thinspace R_{2t} + M_{t} \thinspace }=1$, then $L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) \geq 2d$. 
Using this inequality, $$\begin{aligned} \label{eq-alpha-value-proof23} & \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d - 1) \\[3pt] &- \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \geq 1 \end{aligned}$$ Using eq.(\[eq-value-proof21\]) and eq.(\[eq-alpha-value-proof23\]), for the value of $\alpha$ given in eq.(\[eq-alpha-val-loss-nonzero-reject\]), $$\begin{aligned} &\sum_{t=1}^T [ R_{1t} + R_{2t} ] \leq \sum_{t=1}^T \; [ C_{t} + R_{1t} ] + \sum_{t=1}^T \; [ R_{2t} + M_{t} ] \\ & \leq \alpha^2 \| {\mathbf{w}}\|^2 + (1 - \alpha \rho)^2 + \sum_{t=1}^T \frac{ 2\eta \alpha }{ m } L_{dr} ( y_{t} ( {\mathbf{w}}\cdot {\mathbf{x}}_{t} ), \rho) \\\end{aligned}$$ Here, we used the following value of $\alpha$. $$\begin{aligned} \alpha = \max \begin{cases} \frac{1 + \eta^2 d^2(R^2 + 1) + 2 \eta d}{2 \eta d} \\ \frac{ m_{22} \left( 1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) \right) }{2 \eta (1+d) m_{21} } \end{cases}\end{aligned}$$ 2. 
According to Lemma 1, $$\label{eq-theorem1-nonzero-loss} \begin{aligned} &\sum\limits_{t=1}^{T} [{ \thinspace C_{t} + R_{1t} \thinspace }]\big{[} \thinspace 2{\alpha} \eta d + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d ) \\ &- \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace \big{]} + \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }]\Bigg{[} \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } \\ &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \Bigg{]} \\ &\leq {\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 + \sum_{t=1}^T \frac{ 2 \alpha \eta }{ m } L_{dr}( y_{t} ( {\mathbf{w}}\cdot {\mathbf{x}}_t), \rho) \end{aligned}$$ Now, take $$\label{eq-alpha-val-loss-nonzero-mistake} \alpha = \max \begin{cases} \frac{ \eta d(R^2 + 1) + 2}{2 } \\ \frac{ m_{22} ( 1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) ) }{2 \eta m_{21} (1+d)} \end{cases}$$ This implies that $\alpha \geq \frac{ \eta d(R^2 + 1) + 2}{2}$. Using this inequality in the expression for the coefficient of ${ \thinspace C_{t} + R_{1t} \thinspace }$ in eq.(\[eq-theorem1-nonzero-loss\]), $$\label{eq-second-coeff1-geq-zero} \begin{aligned} &2{\alpha} \eta d + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - d ) - \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\ & \geq 2 \Big{(} \frac{ \eta d(R^2 + 1) + 2}{2} \Big{)} \eta d + 2 \eta (L_{dr}( y_{t} ({\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - d) \\ &- \eta^2 d^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 )\\ &= \eta^2 d^2( R^2 - {\| {\mathbf{x}}_{t} \|}^2 ) + 2 \eta L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) \\ & \geq 0, \; \forall t \in [T] \end{aligned}$$ The value of $\alpha$ in eq.(\[eq-alpha-val-loss-nonzero-mistake\]) also implies that $\alpha \geq \frac{ m_{22} \left( 1+ \eta^2 (1-d)^2(R^2 + 1) + 2 \eta (1-d) \right) }{2 \eta (1+d) m_{21} }$. 
Using this inequality in the expression for the coefficient of ${ \thinspace R_{2t} + M_{t} \thinspace }$ in eq.(\[eq-theorem1-nonzero-loss\]), $$\label{eq-second-coeff2-geq-one} \begin{aligned} & \frac{ 2\alpha \eta (1+d) m_{21} }{ m_{22} } + 2 \eta ( L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d - 1 ) \\ &- \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\ &\geq 2\Bigg{(} \frac{1+ \eta^2 (1-d)^2(R^2 + 1) + 2\eta(1-d)}{2 \eta (1+d)} \Bigg{)} \eta (1+d) \\ &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) - d -1) \\ &- \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \\ &= 1 + \eta^2 (1-d)^2( R^2 - {\| {\mathbf{x}}_{t} \|}^2 ) \\ &+ 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t), \rho_t) - 2d) \\[4pt] & \geq 1 \end{aligned}$$ From eq.(\[eq-second-coeff1-geq-zero\]) and (\[eq-second-coeff2-geq-one\]), we can say that the value of $\alpha$ given in eq.(\[eq-alpha-val-loss-nonzero-mistake\]) makes the coefficient of ${ \thinspace C_{t} + R_{1t} \thinspace }$ greater than or equal to 0 and the coefficient of ${ \thinspace R_{2t} + M_{t} \thinspace }$ greater than or equal to 1. $$\begin{aligned} \sum\limits_{t=1}^{T} & M_{t} \leq \sum\limits_{t=1}^{T}[{ \thinspace R_{2t} + M_{t} \thinspace }] \\ \leq & \sum\limits_{t=1}^{T} [{ \thinspace R_{2t} + M_{t} \thinspace }] \Bigg[ \thinspace \frac{ 2{\alpha} \eta (1 + d) m_{21} }{ m_{22} } + 2 \eta (L_{dr}( y_{t} ( {\mathbf{w}}_{t} \cdot {\mathbf{x}}_t ), \rho_t) \\ &- d - 1) - \eta^2 (1-d)^2( {\| {\mathbf{x}}_{t} \|}^2 + 1 ) \thinspace \Bigg] \\ \leq & \; {\alpha}^2{ \| {\mathbf{w}}\| }^2 + (1 - {\alpha}\rho)^2 + \sum_{t=1}^T \frac{ 2\eta \alpha }{ m } L_{dr}( y_{t} ( {\mathbf{w}}\cdot {\mathbf{x}}_{t} ), \rho) \end{aligned}$$ Proof of Lemma 4 {#app-sec:lemma-4} ================ To prove the $\beta$-smoothness of $L_{ds}(yf({\mathbf{x}}), \rho)$, we first compute its Hessian matrix $\nabla^2 L_{ds}(yf({\mathbf{x}}), \rho)$. 
$$\label{eq:def-yfx} \begin{aligned} &\frac{ \partial L_{ds}(yf({\mathbf{x}}), \rho ) }{ \partial {\mathbf{w}}} = -2d\gamma y {\mathbf{x}}\big[ \ms ( 1 - \ms ) \big] \\[3pt] &\;\;\;-2(1-d)\gamma y {\mathbf{x}}\big[ \ps ( 1 - \ps ) \big] \end{aligned}$$ $$\label{eq:def-rho} \begin{aligned} &\frac{ \partial L_{ds}(yf({\mathbf{x}}), \rho) }{ \partial \rho } = 2d\gamma \big[ \ms (1 - \ms) \big] \\[3pt] &\;\;\;-2(1-d)\gamma \big[ \ps (1 - \ps) \big] \end{aligned}$$ Now, taking the second derivatives of $L_{ds} ( y f({\mathbf{x}}), \rho)$, $$\begin{aligned} \nonumber &\frac{ \partial^2 L_{ds} }{ \partial {\mathbf{w}}^2 } = 2 d \gamma^2 {\mathbf{x}}{\mathbf{x}}^{T} \Big[ \ms ( 1 - \ms ) \\ \nonumber & -2 \sigma^2( yf({\mathbf{x}}) - \rho ) (1 - \ms) \Big] \\ \nonumber & +2 (1-d) \gamma^2 {\mathbf{x}}{\mathbf{x}}^{T} \Big[ \ps (1 - \ps) \\ & - 2 \sigma^2 ( yf({\mathbf{x}}) + \rho)(1 - \ps) \Big] \label{eq:double-def-yfx-yfx} \end{aligned}$$ $$\label{eq:double-def-yfx-rho} \begin{aligned} \frac{ \partial^2 L_{ds} }{ \partial {\mathbf{w}}\; \partial \rho } &= 2 d \gamma^2 y {\mathbf{x}}\Big[ - \ms ( 1 - \ms ) \\ & +2 \sigma^2( yf({\mathbf{x}}) - \rho ) (1 - \ms) \Big] \\ & +2 (1-d) \gamma^2 y {\mathbf{x}}\Big[ \ps (1 - \ps) \\ & - 2\sigma^2 ( yf({\mathbf{x}}) + \rho ) (1 - \ps) \Big] \end{aligned}$$ $$\label{eq:double-def-rho-yfx} \begin{aligned} \frac{ \partial^2 L_{ds} }{ \partial \rho \; \partial {\mathbf{w}}} &= 2 d \gamma^2 y {\mathbf{x}}\Big[ -\ms ( 1 - \ms ) \\ & +2 \sigma^2( yf({\mathbf{x}}) - \rho ) (1 - \ms) \Big] \\ & +2 (1-d) \gamma^2 y {\mathbf{x}}\Big[ \ps (1 - \ps) \\ & - 2\sigma^2 ( yf({\mathbf{x}}) + \rho ) (1 - \ps) \Big] \end{aligned}$$ $$\label{eq:double-def-rho-rho} \begin{aligned} \frac{ \partial^2 L_{ds} }{ \partial \rho^2 } =& 2 d \gamma^2 \Big[ \ms ( 1 - \ms ) \\ & -2 \sigma^2( yf({\mathbf{x}}) - \rho ) (1 - \ms) \Big] \\ & +2 (1-d) \gamma^2 \Big[ \ps (1 - \ps) \\ & - 2\sigma^2 ( yf({\mathbf{x}}) + \rho ) (1 - \ps) \Big] \end{aligned}$$ Using 
eq.(\[eq:double-def-yfx-yfx\]), (\[eq:double-def-yfx-rho\]), (\[eq:double-def-rho-yfx\]) and (\[eq:double-def-rho-rho\]), we can construct the Hessian matrix $\nabla^2 L_{ds} (yf({\mathbf{x}}), \rho)$. $$\begin{aligned} \nabla^2 L_{ds} ( yf({\mathbf{x}}), \rho ) = \begin{pmatrix} \frac{ \partial^2 L_{ds}(yf({\mathbf{x}}), \rho) }{ \partial {\mathbf{w}}^2 } & \frac{ \partial^2 L_{ds}(yf({\mathbf{x}}), \rho) }{ \partial {\mathbf{w}}\; \partial \rho }\\[7pt] \frac{ \partial^2 L_{ds}(yf({\mathbf{x}}), \rho) }{ \partial \rho \; \partial {\mathbf{w}}} & \frac{ \partial^2 L_{ds}(yf({\mathbf{x}}), \rho) }{ \partial \rho^2 } \end{pmatrix} \end{aligned}$$ We know that an upper bound on the spectral norm of the Hessian matrix $\nabla^2 L_{ds} ( yf({\mathbf{x}}), \rho )$ gives a smoothness constant $\beta$ of $L_{ds} ( yf({\mathbf{x}}), \rho )$. To upper bound the spectral norm of the Hessian matrix, we use the following inequality. $$\label{eq:l2-frob-norm-inequality} \| \nabla^2 L_{ds} ( yf({\mathbf{x}}), \rho ) \|_{2} \leq \| \nabla^2 L_{ds} ( yf({\mathbf{x}}), \rho ) \|_{F}$$ where $\|.\|_{2}$ stands for the spectral norm and $\|.\|_{F}$ for the Frobenius norm. We can view $\ms ( 1 - \ms ) - 2 \sigma^2( yf({\mathbf{x}}) - \rho ) (1 - \ms)$ as a cubic polynomial in $\ms$. Using the fact that $\ms \in [0, 1]$, the range of this polynomial is contained in $[-0.1, 0.1]$. In the same manner, the range of $\ps ( 1 - \ps ) - 2 \sigma^2( yf({\mathbf{x}}) + \rho ) (1 - \ps)$ is contained in $[-0.1, 0.1]$. Therefore, we can use $| \ms ( 1 - \ms ) - 2 \sigma^2( yf({\mathbf{x}}) - \rho ) (1 - \ms) | \leq 0.1 $ and $| \ps ( 1 - \ps ) - 2 \sigma^2( yf({\mathbf{x}}) + \rho ) (1 - \ps) | \leq 0.1 $. 
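The claimed range can be verified directly. The sketch below is our own check, not part of the proof; writing $s$ for $\ms$ (so that $\sigma(yf({\mathbf{x}})-\rho) = s$), the expression becomes the cubic $s(1-s) - 2s^{2}(1-s)$:

```python
def cubic(s):
    # s(1 - s) - 2 s^2 (1 - s): the bracketed Hessian factor
    # ms(1 - ms) - 2 sigma^2(yf(x) - rho)(1 - ms) viewed as a
    # cubic polynomial in s = ms.
    return s * (1.0 - s) - 2.0 * s * s * (1.0 - s)

def max_abs_on_unit_interval(f, steps=100000):
    # Dense grid search over [0, 1]; enough to confirm the 0.1 bound.
    return max(abs(f(i / steps)) for i in range(steps + 1))
```

The extrema lie at $s = 1/2 \pm \sqrt{3}/6$, where the polynomial takes the values $\mp\sqrt{3}/18 \approx \mp 0.096$, so the stated bound $0.1$ holds with a small margin.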
Using $\| {\mathbf{x}}\| \leq \RR$, we get that $$\label{eq:frob-norm-bound} \begin{aligned} \| \nabla^2 L_{ds} ( yf({\mathbf{x}}), \rho ) \|_{F} \leq \frac{\gamma^2}{5} \big[ \RR^2 + 1 \big] \end{aligned}$$ Using eq.(\[eq:l2-frob-norm-inequality\]) and eq.(\[eq:frob-norm-bound\]), we can say that $L_{ds} ( yf({\mathbf{x}}) , \rho)$ is $\beta$-smooth with smoothness constant $\beta = \frac{ \gamma^2 }{5} \big[ \RR^2 + 1 \big] $. Proof of Theorem 6 {#app-sec:theorem-6} ================== Let $\Theta = [{\mathbf{w}}\; \rho]$. Using the smoothness property of $L_{ds}(y_{t}f_{t}({\mathbf{x}}_{t}), \rho_{t} )$, $$\begin{aligned} L_{ds}& ( y_{t}f_{t+1}( {\mathbf{x}}_{t} ), \rho_{t}) - L_{ds}( y_{t}f_{t}( {\mathbf{x}}_{t} ), \rho_{t} ) \\ &\leq \Big( \nabla_{\Theta} L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) \cdot \big( \Theta_{t+1} - \Theta_{t} \big) \Big) \\ &+ \frac{\beta}{2} \| \Theta_{t+1} - \Theta_{t} \|^2 \end{aligned}$$ We know that $\Theta_{t+1} - \Theta_{t} = - \eta z_{t} \nabla_{\Theta} L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} )$ where $\eta$ is the step size. As $z_{t} \in \{0, 1\}$, we use the fact that $z_{t}^2 = z_{t}$. $$\begin{aligned} & L_{ds} ( y_{t} f_{t+1} ({\mathbf{x}}_{t}), \rho_{t} ) - L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) \\[3pt] & \;\; \leq \; - \eta z_{t} \| \nabla L_{ds}(y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t}) \|^2 + \frac{\beta z_{t} \eta^2 }{2} \| \nabla L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) \|^2 \end{aligned}$$ Taking $\E_{z}$ on both sides, we get the following. 
$$\begin{aligned} & \E_{z} \Big[ L_{ds} ( y_{t} f_{t+1} ({\mathbf{x}}_{t}), \rho_{t} ) - L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) \Big] \\[3pt] & \;\; \leq \; \Big( - \eta + \frac{\beta \eta^2 }{2} \Big) \| \nabla L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) \|^2 \E_{z}[ z_{t} ] \end{aligned}$$ Using $\E_{z}[ z_{t} ] = p_{t} \leq 1$, $$\begin{aligned} & \E_{z} \Big[ L_{ds} ( y_{t} f_{t+1} ({\mathbf{x}}_{t}), \rho_{t} ) - L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) \Big] \\[3pt] & \;\; \leq \; \Big( - \eta + \frac{\beta \eta^2 }{2} \Big) \| \nabla L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) \|^2 \end{aligned}$$ Multiplying both sides of the above inequality by $-1$, $$\begin{aligned} & \Big( \eta - \frac{\beta \eta^2 }{2} \Big) \| \nabla L_{ds}(y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t}) \|^2 \\[5pt] & \;\; \leq \; \E_{z} \Big[ L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) - L_{ds} ( y_{t} f_{t+1} ({\mathbf{x}}_{t}), \rho_{t} ) \Big] \\[3pt] & \;\;\ = \; \E_{z} \Big[ L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) - L_{ds} ( y_{t+1} f_{t+1} ({\mathbf{x}}_{t+1}), \rho_{t+1} ) \\[4pt] &+ L_{ds} ( y_{t+1} f_{t+1} ({\mathbf{x}}_{t+1}), \rho_{t+1} ) - L_{ds} ( y_{t} f_{t+1} ({\mathbf{x}}_{t}), \rho_{t} ) \Big] \\[4pt] & \;\; \leq \; \E_{z} \Big[ L_{ds}( y_{t} f_{t} ({\mathbf{x}}_{t}), \rho_{t} ) - L_{ds} ( y_{t+1} f_{t+1} ({\mathbf{x}}_{t+1}), \rho_{t+1} ) + 2 \Big] \end{aligned}$$ Summing over $t=1,\ldots,T$ and telescoping, we get $$\sum_{t=1}^T \| \nabla L_{ds}( y_{t}f_{t}({\mathbf{x}}_{t}), \rho_{t} ) \|^2 = \mathcal{R}(T) \leq \frac{2T + 2}{ \eta - \frac{\beta \eta^2}{2} }$$ Taking $\eta = \frac{1}{\beta}$, we get $$\sum_{t=1}^T \| \nabla L_{ds}( y_{t}f_{t}({\mathbf{x}}_{t}), \rho_{t} ) \|^2 = \mathcal{R}(T) \leq 4 \beta ( T + 1 )$$ Using $\beta = \frac{\gamma^2}{5} (\RR^2 + 1)$, we get the following local regret. 
$$\mathcal{R}(T) \leq \frac{4 \gamma^2}{5} (\RR^2 +1)(T + 1)$$ Active Learning of Non-Linear Reject Option Classifiers Based on Double Sigmoid Loss {#app-sec:DSAL-kernel} ==================================================================================== The query probability function and the $\rho$-update equation of the non-linear double sigmoid active learning algorithm are the same as those of the linear algorithm. They are as follows. $$\begin{aligned} \label{eq:proper-bimodal-probability} \text{probability } p_{t} = 4 \; \sigma ( | f_{t}( {\mathbf{x}}_{t} ) | - \rho_{t} )\left( 1 - \sigma( | f_{t} ({\mathbf{x}}_{t}) | - \rho_{t}) \right)\end{aligned}$$ $$\begin{aligned} \label{eq:proper-bimodal-rho-update} \nonumber \text{Update}(\rho) &= - 2\alpha \Big{[}d \sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) - \rho_{t}) \right)\\ & \;\;\;\;\;- (1 - d) \sigma ( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t} )\left( 1 - \sigma( y_{t} f_{t}( {\mathbf{x}}_{t} ) + \rho_{t}) \right) \Big{]}\end{aligned}$$ The algorithm proceeds as follows. **Input:** $d \in (0, 0.5)$, step size $\eta$. **Output:** weight vector ${\mathbf{w}}$, rejection width $\rho$. **Initialize** ${\mathbf{w}}_{1}, \rho_{1}$. For each round $t$: receive a sample ${\mathbf{x}}_{t} \in \mathbb{R}^{d}$; set $f_{t-1}({\mathbf{x}}) = \sum_{i=1}^{t-1} a_{i} \K({\mathbf{x}}_{i}, {\mathbf{x}}) $; set $p_{t}$ using eq.(\[eq:proper-bimodal-probability\]); draw a Bernoulli random variable $z_{t} \in \{0, 1\}$ with parameter $p_{t}$. If $z_{t} = 1$, query the correct label $y_{t}$ of ${\mathbf{x}}_t$, set $a_{t} = 2y_{t} \alpha \big[ d\sigma ( y_{t} f_{t-1}( {\mathbf{x}}_{t} ) - \rho_{t} )\left( 1 - \sigma( y_{t} f_{t-1} ( {\mathbf{x}}_{t} ) - \rho_{t}) \right) + (1 - d)\sigma ( y_{t} f_{t-1} ( {\mathbf{x}}_{t} ) + \rho_{t} )\left( 1 - \sigma( y_{t} f_{t-1} ( {\mathbf{x}}_{t} ) + \rho_{t}) \right) \big] $, and update $\rho_{t}$ using eq.(\[eq:proper-bimodal-rho-update\]) ($\rho_{t+1} = \rho_{t} + \text{Update}(\rho)$). Otherwise, set $a_{t} = 0$ and $\rho_{t+1} = \rho_{t}$. 
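For concreteness, the query step shared by the linear and kernel variants, eq.(\[eq:proper-bimodal-probability\]) together with the Bernoulli draw, can be sketched in a few lines. This is our own illustrative code with hypothetical function names, not code from the paper:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def query_probability(f_x, rho):
    """p_t = 4*s*(1 - s) with s = sigmoid(|f_t(x_t)| - rho_t).

    The probability peaks at 1 exactly on the rejection boundary
    |f_t(x_t)| = rho_t and decays away from it on either side.
    """
    s = sigmoid(abs(f_x) - rho)
    return 4.0 * s * (1.0 - s)

def should_query(f_x, rho, rng=random):
    # Draw the Bernoulli variable z_t with parameter p_t.
    return rng.random() < query_probability(f_x, rho)
```

By construction the algorithm queries most aggressively near the accept/reject boundary, where the current classifier is least certain.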
[^1]: $\mathbb{I}_{\{A\}}$ takes value 1 when $A$ is true and 0 otherwise. [^2]: Here, $[T]$ denotes the sequence $1,\ldots, T$. [^3]: A function $f$ is $\beta$-smooth if for all $x, y \in$ Domain($f$), $$\| \nabla f(x) - \nabla f(y) \| \leq \beta \| x - y \|.$$ [^4]: $L_{dr}$ does not have sufficient smoothness properties required in [@DBLP:journals/corr/abs-1708-00075]. Thus, we do not present these convergence results for DRAL.
--- abstract: 'We show that in the continuum limit watersheds dividing drainage basins are Schramm-Loewner Evolution (SLE) curves, being described by one single parameter $\kappa$. Several numerical evaluations are applied to ascertain this. All calculations are consistent with SLE$_\kappa$, with $\mbox{$\kappa=1.734\pm0.005$}$, being the only known physical example of an SLE with $\kappa<2$. This lies outside the well-known duality conjecture, bringing up new questions regarding the existence and reversibility of dual models. Furthermore it constitutes a strong indication for conformal invariance in random landscapes and suggests that watersheds likely correspond to a logarithmic Conformal Field Theory (CFT) with central charge $\mbox{$c\approx-7/2$}$.' author: - 'E. Daryaei' - 'N. A. M. Araújo' - 'K. J. Schrenk' - 'S. Rouhani' - 'H. J. Herrmann' bibliography: - 'sle.bib' title: 'Watersheds are Schramm-Loewner Evolution curves' --- The possibility of statistically describing the properties of random curves with a single parameter fascinates physicists and mathematicians alike. This capability is provided by the theory of Schramm-Loewner Evolution (SLE), where random curves can be generated from a Brownian motion with diffusivity $\kappa$ [@Schramm00]. Once $\kappa$ is identified, several geometrical properties of the curve are known (e.g. fractal dimension, winding angle, and left-passage probability) [@Cardy05; @Bauer06]. Among the examples of such curves, we find self-avoiding walks [@Kennedy02] and the contours of critical clusters in percolation [@Smirnov01], $Q$-state Potts model [@Rohde05], and spin glasses [@Bernard07], as well as in turbulence [@Bernard06]. Establishing SLE for such systems has provided valuable information on the underlying symmetries and paved the way to some exact results [@Lawler01; @Smirnov01; @Smirnov06]. 
In fact, SLE is not a general property of non-self-crossing walks since many curves have been shown not to be SLE as, for example, the interface of solid-on-solid models [@Schwarz09], the domain walls of bimodal spin glasses [@Risau-Gusman08], and the contours of negative-weight percolation [@Norrenbrock12]. Recently, the watershed (WS) of random landscapes [@Schrenk12; @Cieplak94; @Fehr11; @*Fehr11b], with a fractal dimension $d_f\approx1.22$, was shown to be related to a family of curves appearing in different contexts such as, e.g., polymers in strongly disordered media [@Porto97], bridge percolation [@Schrenk12], and optimal path cracks [@Andrade09]. In the present Letter, we show that this universal curve has the properties of SLE, with $\kappa=1.734\pm0.005$. $\kappa<2$ is a special limit since, up to now, all known examples of SLE found in Nature and statistical physics models have $2\leq\kappa\leq8$, corresponding to fractal dimensions $d_f$ between $1.25$ and $2$. Scale invariance and, consequently, the appearance of fractal dimensions have always motivated the application of concepts from conformal invariance to shed light on critical systems. Archetypes of self-similarity are the contours of critical clusters in lattice models. Already back in 1923, Loewner proposed an expression for the evolution of an analytic function which conformally maps the region bounded by these curves into a standard domain [@Loewner23]. According to the theory, such an evolution should depend only on a continuous function of a real parameter, known as the *driving function*. Recently, Schramm argued that to guarantee conformal invariance and the domain Markov property, the continuous function needs to be a one-dimensional Brownian motion [@Schramm00], thrusting into motion numerous studies in what is today known as the Schramm-Loewner, or Stochastic-Loewner, Evolution (SLE). 
Such one-dimensional Brownian motion has zero mean value and is solely characterized by its diffusivity $\kappa$, which relates with the fractal dimension $d_f$ as [@Duplantier03; @Beffara04], $$\label{eq::fractal.dimension} d_f=\min\{1+\kappa/8,2\} \ \ .$$ Although it is believed that SLE should hold for the entire class of equilibrium O($n$) systems, it has only been rigorously proven for a few cases [@Smirnov01; @Smirnov06]. Nevertheless, numerical correspondence has been shown for a large number of models as mentioned above. It has been argued that SLE can be applied to models exhibiting non-self-crossing paths on a lattice, showing self-similarity, not only in equilibrium but also out of equilibrium as the example discussed here [@Amoruso06; @Bernard06; @Saberi09]. In random discretized landscapes each site is characterized by a real number such as, e.g., the height in an elevation map, the intensity in a pixelated image, or the energy in an energy landscape [@Schrenk12]. If sites are occupied from the lowest to the highest, clusters of adjacent occupied sites can be defined and, at a certain fraction of occupied sites, a spanning cluster emerges connecting opposite borders (e.g. from left to right). In the example of the elevation map, this procedure corresponds to filling the landscape with water until a giant lake emerges at the threshold, which drains to the borders [@Knecht11]. When we now suppress spanning by imposing the constraint that sites merging two clusters touching the opposite borders are never occupied, a line emerges delineating the boundaries between two clusters: one connected to the left and the other to the right border [@Schrenk12]. This line is the watershed line (WS) separating two hydrological basins [@Fehr09] and has a fractal dimension $d_f=1.2168\pm0.0005$ [@Fehr11c]. We show in this Letter that in the scaling limit its statistics converges to an SLE$_\kappa$, consistent with $\kappa=1.734\pm0.005$. 
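The landscape-filling construction just described amounts to a sorted union-find sweep in which any merge that would join the left- and right-border clusters is suppressed; the suppressed sites trace out the watershed. A minimal Python sketch of this idea, with an i.i.d. uniform toy landscape, no periodic boundaries, and illustrative names of our own (this is not the code used in this Letter):

```python
import random

def watershed_sites(L, seed=0):
    """Occupy sites from lowest to highest height; suppress any occupation
    that would merge the left- and right-border clusters.  The suppressed
    sites form the watershed separating the two basins."""
    rng = random.Random(seed)
    heights = {(x, y): rng.random() for x in range(L) for y in range(L)}
    parent = {s: s for s in heights}
    parent["LEFT"] = "LEFT"   # virtual root for the left-border cluster
    parent["RIGHT"] = "RIGHT" # virtual root for the right-border cluster

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    occupied, suppressed = set(), []
    for site in sorted(heights, key=heights.get):  # lowest height first
        x, y = site
        left, right = find("LEFT"), find("RIGHT")
        roots = set()
        if x == 0:
            roots.add(left)
        if x == L - 1:
            roots.add(right)
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in occupied:
                roots.add(find(nb))
        if left in roots and right in roots:
            suppressed.append(site)  # would span: part of the watershed
            continue
        occupied.add(site)
        for r in roots:
            parent[r] = find(site)   # merge neighbouring clusters into this site
    return suppressed

ws = watershed_sites(32)
print(len(ws))  # every row must contain at least one watershed site
```

Measuring $d_f$ or $\kappa$ would of course require extracting the full curve and a large-lattice scaling analysis, which this toy omits.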
To study the scaling limit of WS and compare it to SLE$_\kappa$, we numerically generated ensembles of curves and carried out three different statistical evaluations, namely, the variance of the *winding angle* (quantifying the angular distribution of the curves) [@Duplantier88; @Wieland03], the *left-passage probability* [@Schramm00; @Schramm01], and the characterization of the driving function (*direct SLE*) [@Bernard06]. We show here that the values of $\kappa$ independently obtained for each analysis are numerically consistent and in line with the fractal dimension of the WS. For all cases, simulations have been performed on both square lattices of square shape ($L_x=L_y$) and strip geometries ($L_x>L_y$), all with free boundary conditions in the horizontal and periodic boundary conditions in the vertical direction. $L_x$ is the size of the horizontal boundary, while $L_y$ is the length of the vertical one. Hereafter, we discuss each analysis separately. ![ (color online) Defining $\alpha_i$ as the turning angle between two adjacent edges, $i$ and $i+1$, their winding angles $\theta_i$ and $\theta_{i+1}$ are related by $\theta_{i+1}=\theta_i+\alpha_i$, with $\theta_1=0$, as illustrated in the main figure. We performed simulations for 18 different lattice sizes, with $L_y=2^{4+n}$ and $3 \times 2^{3+n}$ for $n=1,2,\dots,9$. Results are averages over $10^6$ samples for the smallest system sizes and $3\times10^3$ for the largest one. **Inset**: Dependence of the variance of the winding angle on the lateral size of the lattice $L_y$. Statistical error bars are smaller than the symbols. The slope in the linear-log plot corresponds to $\kappa/4=0.432\pm0.002$. \[fig::winding.angle\] ](fig1.pdf){width="\columnwidth"} *Winding angle*. Using conformal invariance and Coulomb-gas techniques, Duplantier and Saleur [@Duplantier88] have found the dependence of the distribution of the winding angle on the system size and the Coulomb-gas parameter. 
Given the correspondence of the Coulomb-gas parameter to $\kappa$, the relation for the winding angle can be extended to SLE [@Wieland03]. To analyze the winding angle $\theta_i$ at edge $i$, we set $\theta_1=0$ and define $\alpha_i$ as the turning angle between the edges $i$ and $i+1$ (see Fig. \[fig::winding.angle\]). The winding angle of each edge is then computed iteratively as $\theta_{i+1}=\theta_i+\alpha_i$. For SLE$_\kappa$, the variance of the winding angle over all edges in the curve scales as $\langle\theta^2\rangle=b+(\kappa/4)\ln L_y$, where $b$ is a constant and $L_y$ the lateral size of the lattice [^1]. Figure \[fig::winding.angle\] shows the variance as a function of the lateral size $L_y$ for the WS, with a slope $0.432\pm0.002$ in a linear-log plot. This slope corresponds to $\kappa=1.728\pm0.008$, which is in good agreement with the one predicted by Eq. (\[eq::fractal.dimension\]) from the WS fractal dimension. ![ (color online) Original (left) and mapped (right) watershed. The Schwarz-Christoffel transformation (SCT) has been applied to map from the square lattice (left) to the upper half-plane (right). In this way dipolar curves are turned into chordal curves. \[fig::sct\] ](fig2.pdf){width="\columnwidth"} *From dipolar to chordal representation.* In the original setup, WS are *dipolar* curves which start at one point on the lower boundary and end when they touch the upper boundary for the first time. For the left-passage probability and direct SLE evaluations, exact results are however known for *chordal* curves [@Schramm01], which start at the same point but go to infinity. Therefore, to proceed with these evaluations, we map the dipolar WS curves into chordal ones in the upper half-plane $\mathbb{H}$ (see Fig. \[fig::sct\]). For such mapping, as suggested in Refs. [@Chatelain12; @Driscoll96], we used the inverse Schwarz-Christoffel transformation [^2]. *Left-passage probability*. 
For SLE curves in the upper half-plane $\mathbb{H}$, starting at the origin, the probability that a point $R e^{i\phi}$ is at the right side of the curve (see Fig. \[fig::left.passage\](a)) solely depends on $\phi$ and $\kappa$ and is given by Schramm’s formula [@Schramm01], $$\label{eq::left.passage} P_{\kappa}(\phi)=\frac{1}{2}+\frac{\Gamma\left(\frac{4}{\kappa}\right)}{\sqrt{\pi} \Gamma\left(\frac{8-\kappa}{2\kappa}\right)}\cot(\phi)\,{}_2F_1 \left(\frac{1}{2},\frac{4}{\kappa};\frac{3}{2};-\cot^{2}(\phi)\right) \ \ ,$$ where $_2F_1$ is the Gaussian hypergeometric function and $\Gamma$ is the Gamma function. Figure \[fig::left.passage\](b) shows the difference between the numerically measured probability $P(\phi,R)$ and the one predicted by Schramm’s formula, Eq. (\[eq::left.passage\]), for the chordal curve. It is shown that $P(\phi,R)$ is independent of $R$. To estimate $\kappa$ we plot, in Fig. \[fig::left.passage\](c), the mean square deviation $Q(\kappa)$ defined as, $$\label{eq::deviation} Q(\kappa)=\frac{1}{M}\sum_R\sum_\phi\left[P(\phi,R)-P_\kappa(\phi)\right]^2 \ \ ,$$ where the outer sum goes over values of $0.05\leq R\leq 1.2$, in steps of $0.05$, and the inner one over values of $0\leq\phi\leq\pi$, in steps of $\pi/15$. $M$ is the total number of considered points $R e^{i\phi}$. To reduce the statistical noise we used the relation $P(\phi,R)+P(\pi-\phi,R)=1$. The minimum in the plot corresponds to the value of $\kappa$ that best fits the left-passage probability, giving $\kappa=1.73\pm0.01$, in line with the prediction based on the fractal dimension of WS, given by Eq. (\[eq::fractal.dimension\]). ![ (color online) (a) Schematic representation of the left-passage definition (details in the text). (b) $P(\phi,R)-P_{1.725}(\phi)$ for the chordal watershed at different distances from the origin $R=\{0.05,0.5,1.2\}$, where $P_{1.725}(\phi)$ is the left-passage probability for $\kappa=1.725$ given by Schramm’s formula, Eq. (\[eq::left.passage\]). 
(c) Mean square difference $Q(\kappa)$ between the numerical data and Schramm’s formula (Eq. (\[eq::deviation\])) for different values of $\kappa$, exhibiting a minimum at $\kappa=1.73\pm0.01$. In both cases, results are averages over $10^5$ curves on square lattices with $L_y=512$. \[fig::left.passage\] ](fig3.pdf){width="\columnwidth"} *Direct SLE*. Consider a chordal SLE curve $\gamma(t)$ which starts at a point on the real axis and grows to infinity inside the region of the upper half-plane $\mathbb{H}$, parametrized by a dimensionless parameter $t$, typically called Loewner time. To compute its driving function $\xi(t)$ one needs to find the sequence of maps $g_t(z)$ which at each time $t$ map the upper half-plane $\mathbb{H}$ into $\mathbb{H}$ itself and satisfy the Loewner equation [@Loewner23]. This map is unique and can be approximately obtained by considering the driving function to be constant within an interval $\delta t$, obtaining the slit map, $$\label{eq::slit.map} g_t(z)=\xi(t)+\sqrt{\left(z-\xi(t)\right)^2+4\delta t} \ \ ,$$ where $z$ is a point in $\mathbb{H}$ and $\delta t$ also depends on $t$. This map converges to the exact one for vanishing $\delta t$ [@Bauer03]. Initially, we set $t=0$ and $\xi(0)=0$ and we proceed iteratively through all points $z_i$ of the chordal curve. At each iteration $j$, we map the point $z_j$ to the real axis, by setting $\delta t_j=\left(\text{Im } z_j\right)^2/4$ and the driving function $\xi(t_j)=\text{Re } z_j$ (Re and Im being the real and imaginary parts, respectively). We also compute the Loewner time $t_j=t_{j-1}+\delta t_j$. As noted above, in SLE, the driving function is related to a Brownian motion $B(t)$, with vanishing mean value and unit dispersion, such that $\xi(t)=\sqrt{\kappa}B(t)$ [@Schramm01]. ![ (color online) Dependence of the second moment of the driving function $\langle\xi^2(t)\rangle$ on the Loewner time, for the chordal watershed. The slope corresponds to $\kappa=1.69\pm0.05$. 
**Inset**: Probability distribution of the driving function at two different Loewner times for chordal watersheds. The rescaled parameter $X$ is defined as $X=\xi(t)/\sqrt{\kappa t}$, where we have taken $\kappa=1.69$. The solid line is the normal distribution of vanishing mean value and unit dispersion. Results are averages over $4\times10^4$ realizations on a square lattice with $L_y=1024$. \[fig::direct.sle\] ](fig4.pdf){width="\columnwidth"} Figure \[fig::direct.sle\] shows the second moment of the driving function for the chordal WS. The inset displays the probability distribution for the rescaled driving function for two different times for the chordal WS. All results are consistent with a Brownian motion with vanishing mean value and unit dispersion, when $\kappa=1.69\pm0.05$, in good agreement with the results discussed above. The direct SLE analysis is characterized by larger error bars than the other two methods (winding angle and left-passage probability) due to strong discretization effects in the slit mapping [@Bauer03]. *Discussion*. Our detailed numerical analysis shows that watersheds are likely to be SLE curves with $\kappa=1.734\pm0.005$. This is the first documented case of a physical model with $\kappa<2$, lying outside the well-known duality conjecture range $2\leq\kappa\leq8$, giving $\kappa'=16/\kappa$, where $\kappa'$ is the diffusivity of the dual model [@Dubedat05]. It has been proven that SLE$_\kappa$ with $\kappa>8$ is not reversible [@Zhan08], therefore if a dual model exists which respects reversibility, then it cannot be SLE$_{\kappa'}$ with ${\kappa'}>8$. In the context of SLE, duality implies that two apparently different fractal dimensions might actually stem from the same curve. Geometrically, this corresponds to a relation between the fractal dimension of the accessible external perimeter and the one of the curve. Our work shows that watersheds are non-local SLE curves. 
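For reference, the slit-map ("direct SLE") extraction of $\xi(t)$ described above condenses into a few lines. The sketch below is our own illustrative implementation of the discrete map of Eq. (\[eq::slit.map\]), assuming the curve is supplied as complex points in $\mathbb{H}$ (it is not the analysis code used in this Letter):

```python
import cmath

def driving_function(curve):
    """Extract the Loewner driving function xi(t) from a chordal curve
    (complex points in the upper half-plane) with the discrete slit map
    g_t(z) = xi + sqrt((z - xi)^2 + 4*dt)."""
    pts = list(curve)
    ts, xis = [0.0], [0.0]
    t = 0.0
    while pts:
        z0 = pts.pop(0)
        dt = z0.imag ** 2 / 4.0  # chosen so that g_t sends z0 to the real axis
        xi = z0.real             # driving-function value at this step
        t += dt
        ts.append(t)
        xis.append(xi)
        mapped = []
        for z in pts:            # map the remaining points with the slit map
            w = cmath.sqrt((z - xi) ** 2 + 4.0 * dt)
            if w.imag < 0:
                w = -w           # pick the root lying in the upper half-plane
            mapped.append(xi + w)
        pts = mapped
    return ts, xis

# Sanity check: a straight vertical slit corresponds to xi(t) = 0 (kappa = 0).
vertical = [0.1j * k for k in range(1, 21)]
ts, xis = driving_function(vertical)
print(max(abs(x) for x in xis))
```

For an ensemble of curves one would then estimate $\kappa$ from the slope of $\langle\xi^2(t)\rangle=\kappa t$, as done in the figure above.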
Although a connection with SLE is a strong indication of conformal invariance, it cannot be interpreted as a proof. Nevertheless, if such invariance is established, it becomes possible to develop a field theory for this new universality class. CFT has helped to classify continuous critical behavior in two-dimensional equilibrium phenomena [@DiFrancesco97; @Henkel12]. A well-established relation between the diffusivity $\kappa$ and the central charge $c$ of minimal CFT models which have a second level null vector in their Verma module is $c=(3\kappa-8)(6-\kappa)/(2\kappa)$ [@Bauer03]. If the watershed is conformally invariant it likely corresponds to a logarithmic CFT (LCFT) with central charge $c\approx-7/2$. A series of LCFT’s corresponding to loop models have been suggested in Ref. [@Provencher11], which thus seem to be related to watersheds. It is also noteworthy that negative central charges have been reported in different contexts like, e.g., stochastic growth models, $2D$ turbulence, and quantum gravity [@Duplantier92; @Flohr96; @Lipatov99]. In particular, the loop erased random walk is believed to have $\kappa=2$ which corresponds to $c=-2$. Besides, since the watershed of a landscape is based on the distribution of heights, the configurational space grows with $N!$, where $N$ is the number of sites, being a promising candidate to develop a field theory with quenched disorder. The connection between SLE and statistical properties of the watershed opens up new possibilities. Since the latter are related to fractal curves emerging in several different contexts, our work paves the way to bridge between connectivity in disordered media and optimization problems where the same $\kappa$, and its corresponding central charge, are observed. Besides, a systematic study of the $\kappa$ dependence on correlations in the landscape might provide the required information to find SLE curves on natural landscapes. 
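The two scalar relations invoked here, $d_f=\min\{1+\kappa/8,2\}$ and $c=(3\kappa-8)(6-\kappa)/2\kappa$, are straightforward to evaluate; the trivial snippet below (our own, purely for orientation) checks the quoted numbers:

```python
def fractal_dimension(kappa):
    # d_f = min{1 + kappa/8, 2}, Eq. (fractal.dimension)
    return min(1 + kappa / 8, 2)

def central_charge(kappa):
    # c = (3*kappa - 8)(6 - kappa) / (2*kappa), for minimal CFT models
    return (3 * kappa - 8) * (6 - kappa) / (2 * kappa)

kappa = 1.734
print(fractal_dimension(kappa))  # ≈ 1.2168, matching d_f = 1.2168(5)
print(central_charge(kappa))     # ≈ -3.44, i.e. c ≈ -7/2
```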
The possibility of a multifractal spectrum for watersheds is also an open question. We acknowledge financial support from the ETH Risk Center. We also acknowledge the Brazilian institute INCT-SC. ED also acknowledges financial support from Sharif University of Technology during his visit to ETH as well as useful discussions with C. Chatelain, E. Dashti, and M. Rajabpour. NAMA thanks J. P. Miller and B. Duplantier for some helpful discussions. [^1]: There is some discussion on the proper way to measure the winding angle [@Wieland03]. In this work we follow the definition described in the text. [^2]: We used the algorithm described in Ref. [@Driscoll96]. Since with the Schwarz-Christoffel transformation the vertices of the square lattice are mapped to the real axis in $\mathbb{H}$, to prevent the mapped curve from returning to the real axis, we mapped a square domain $[-1,1]\times[0,2]$ with the curve constrained to the domain $[-0.5,0.5]\times[0,1]$.
[**Eta Photoproduction on the Neutron at GRAAL:\ Measurement of the Differential Cross Section**]{} *[$^1$ IN2P3, Laboratory for Subatomic Physics and Cosmology, 38026 Grenoble, France\ $^2$ INFN sezione di Roma II and Università di Roma “Tor Vergata", 00133 Roma, Italy\ $^3$ INFN sezione di Catania and Università di Catania, 95100 Catania, Italy\ $^4$ INFN sezione di Genova and Università di Genova, 16146 Genova, Italy\ $^5$ IN2P3, Institut de Physique Nucléaire, 91406 Orsay, France\ $^6$ INFN sezione di Torino and Università di Torino, 10125 Torino, Italy\ $^7$ INFN sezione di Roma I and Istituto Superiore di Sanità, 00161 Roma, Italy\ $^8$ Institute for Nuclear Research, 117312 Moscow, Russia\ $^9$ INFN Laboratori Nazionali di Frascati, 00044 Frascati, Italy\ $^{10}$ RRC “Kurchatov Institute", 123182 Moscow, Russia]{}* [**Abstract**]{} - In this contribution, we will present our first preliminary measurement of the differential cross section for the reaction $\gamma n \rightarrow \eta n$. Comparison of the reactions $\gamma p \rightarrow \eta p$ for the free and bound proton (D$_2$ target) will also be discussed. Introduction ============ In our attempt to extract the properties of excited states of the nucleon, meson photoproduction on the neutron is of utmost importance in bringing complementary information, hence additional constraints on the theoretical interpretations. After having studied extensively meson photoproduction on a proton target, we have recently started looking at reactions on the neutron using data taken with a deuteron target. As for the proton [@aja98; @ren02], one of our main goals was to study eta photoproduction by measuring both the differential cross section and the beam asymmetry $\Sigma$. This combination will allow us not only to better fix the parameters of the dominant S$_{11}$(1535) but also to explore the nature of the other contributions and test the validity of the Moorhouse selection rule. 
Moreover, this channel has recently drawn much attention in connection with the pentaquark $\theta^+$. This state, first observed in 2003 by the LEPS collaboration [@nak03], still awaits a definite confirmation, as discussed during the first session of this conference. The chiral soliton model $\chi$SM [@dya97], which is at the origin of this discovery, actually predicts the existence of an entire new anti-decuplet, the $\theta^+$ being its lightest element. Observation of these other states would provide decisive information for the validity of this model. Of particular interest to us is the second element of the anti-decuplet, identified as the P$_{11}$(1710) resonance in the original version of the model. This non-strange resonance should have a mass around 1700 MeV and a width $\sim$10 MeV, strikingly narrower than usual resonances. This state has been predicted by Polyakov and Rathke [@pol03] to couple preferentially to the neutron. Besides, eta as well as kaon photoproduction have been suggested as particularly sensitive channels, both of them being accessible to the GRAAL facility. K$^0$ and K$^+$ photoproduction on the neutron are two reactions with a very low cross section combined with an intricate final state for our set-up. Even by summing up all available data, we have limited statistics and cannot make any definitive statement for the time being. By contrast, eta photoproduction on the neutron is a rather “easy” channel for GRAAL for which the measurement of the differential cross section is a realistic objective. This information will allow us to investigate the nature of new contributions, exotic or not, through Partial Wave Analysis. The GRAAL facility ================== The GRAAL facility uses a tagged and polarized $\gamma$-ray beam produced by Compton scattering of laser light off the 6.03 GeV electrons circulating in the storage ring of the ESRF (Grenoble, France). The tagged energy spectrum ranges from 600 to 1500 MeV. 
Data discussed hereafter have been obtained with highly polarized linear photons (P$_l \geq$50% over the whole range). The non-magnetic 4$\pi$ detector LAGRANGE has the nice property of detecting all neutral and charged particles over almost the full angular acceptance. It is composed of three layers: MWPC’s for the tracking of charged particles; thin plastic scintillators for charged particle identification; and a third layer for calorimetry with a BGO ball, made up of 480 crystals, for the detection of $\gamma$-rays in the central region ($\theta \geq 25^0$) and a shower wall to cover the forward region. This latter detector possesses a high efficiency and a good angular resolution for photons but provides no energy measurement. Both detectors can detect neutrons with a good efficiency (respectively $\sim$40% and $\sim$20%), the shower wall giving in addition n/$\gamma$ identification thanks to its ToF measurement. Analysis procedure ================== When going from a free nucleon to a nucleon bound in a deuteron target, one has to take into account nuclear effects: the Fermi motion of the struck nucleon and possible final-state interactions. Hence, in order to extract any meaningful information on a reaction that occurred on a bound neutron, one has to first evaluate these nuclear effects on the proton, by comparing the free and bound proton differential cross sections. At a later stage, one should be able with the collaboration of theoreticians to extract the free proton cross section from the bound one, and apply similar corrections on the bound neutron. In this work, our goal was to extract simultaneously the differential cross sections for the reactions $\gamma p \rightarrow \eta p$ and $\gamma n \rightarrow \eta n$, both from the deuteron target. The analysis procedure we have followed was identical to the one used for the free proton with, in addition, a veto on the recoiling spectator nucleon. 
Selection of the reaction was obtained by means of the $\eta$ invariant mass and using the two-body kinematics, assuming the struck nucleon at rest. For this first attempt, and in order to limit uncertainties arising from neutron efficiency and identification, we have restricted the detection of neutrons to forward angles ($\theta \leq 25^0$). The shower wall response has been simulated for neutrons and its efficiency is estimated to be around 22%. The BGO efficiency is under study and could be as high as 40-50%. All distributions look very similar to the free case and, because of Fermi motion, are slightly broadened. Despite the fact that we are measuring all kinematical variables (angles and energy) of outgoing particles (2 $\gamma$ and n or p), we cannot precisely measure the Fermi momentum of the struck nucleon, our resolution being of the same order of magnitude. Simulation studies have shown that the application of narrow cuts based on the two-body kinematics can eliminate the largest Fermi momenta and therefore slightly reduce the broadening effect ($\sim -30$%). Nevertheless, such cuts make it difficult to control the analysis efficiency and hence to get reliable cross sections. As an illustration, the $\eta$ invariant mass is displayed in Fig. 1 for both reactions. As can be readily seen from the tails of these distributions, the level of continuous background is very low ($\leq$2%). An evaluation from a Monte-Carlo simulation of contamination from neighbouring reactions gives similar results ($\eta$N$\leq$1%, $\eta \pi$N$\leq$1-2%). Furthermore, cross sections have been checked to remain stable when changing the width of cuts, confirming the absence of any significant background. 
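The $\eta$ selection above rests on the standard two-photon invariant mass $m^2=(E_1+E_2)^2-|\vec p_1+\vec p_2|^2$. A minimal sketch of this computation (massless photons, energies in GeV, angles and names of our own choosing; this is not the GRAAL analysis code):

```python
import math

def invariant_mass(photons):
    """Invariant mass of a set of massless photons given as (E, theta, phi)."""
    E = px = py = pz = 0.0
    for e, theta, phi in photons:
        E += e
        px += e * math.sin(theta) * math.cos(phi)
        py += e * math.sin(theta) * math.sin(phi)
        pz += e * math.cos(theta)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Back-to-back decay photons in the eta rest frame: m should equal m_eta.
m_eta = 0.5479  # GeV
two_gammas = [(m_eta / 2, math.pi / 3, 0.0),
              (m_eta / 2, math.pi - math.pi / 3, math.pi)]
print(invariant_mass(two_gammas))  # ≈ 0.548 GeV, the eta mass
```

In the real analysis the photon momenta are of course boosted, and the selection window around $m_\eta$ must be tuned against the Fermi-motion broadening discussed in the text.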
![Eta invariant mass for the reaction $\gamma n \rightarrow \eta n$ (left) and $\gamma p \rightarrow \eta p$ (right) from a deuteron target.](rebreyend_fig1.eps){width="8cm"} Comparison between free and bound proton ======================================== Differential cross sections for the reaction $\gamma p \rightarrow \eta p$ for the free and bound proton are compared in Fig. 2 at four CM $\eta$ angles versus E$_{\gamma}$. These results are preliminary and error bars are only statistical. Because of the selection of the neutron/proton at forward angles in the present analysis, the $\eta$ angular range is limited to angles larger than 90$^0$. The curves represent the SAID FA02 solution with (dashed line) and without (solid line) Fermi motion convolution. At backward angles (cos($\theta^{\eta}_{CM}$)=-0.95 and -0.75), the moderate difference between the free and the bound proton is fairly consistent with the Fermi motion broadening of the S$_{11}$(1535) peak. This is not the case for the most forward bin (cos($\theta^{\eta}_{CM}$)=-0.35, $\theta^{\eta}_{CM}$=110$^0$), where a slight discrepancy is seen between the free and the bound proton, incompatible with Fermi motion. In any case, the good overall agreement is a further indication that no sizeable background is present. Extension of this measurement over the full angular range is under way and will help to understand whether this is the signature of some nuclear effect or some analysis bias. ![Differential cross section for the reaction $\gamma p \rightarrow \eta p$ at four $\eta$ CM angles versus E$_{\gamma}$. Comparison between the free (open squares) and bound (closed circles) proton. 
The curves represent the SAID FA02 solution with (dashed line) and without (solid line) Fermi motion convolution.](rebreyend_fig2.eps){width="8cm"} Differential cross section for the reaction $\gamma n \rightarrow \eta n$ ========================================================================= ![Differential cross section for the reaction $\gamma n \rightarrow \eta n$ at four CM $\eta$ angles versus E$_{\gamma}$ (closed circles). For comparison, the renormalized cross section for $\gamma p \rightarrow \eta p$ on the bound proton (stars) is also plotted.](rebreyend_fig3.eps){width="8cm"} Fig. 3 displays the differential cross section of the $\gamma n \rightarrow \eta n$ reaction at four CM $\eta$ angles versus E$_{\gamma}$. It is compared to the $\gamma p \rightarrow \eta p$ cross section for the [*bound*]{} proton, normalized to the neutron cross section in the S$_{11}$(1535) region. These two results have been obtained using identical procedures and it is likely that the bound neutron cross section suffers the same “effect" as the bound proton close to 90$^0$. The shapes of both cross sections are identical below 900 MeV, indicating the same dominance of the S$_{11}$(1535) resonance on the neutron as for the proton close to threshold. The measured ratio $\sigma_n/\sigma_p$ is around 0.6, in fair agreement with previous results [@hof97]. Above 900 MeV, whereas the proton cross section falls off rapidly, a clear structure appears on the neutron with a marked angular dependence, evolving from a shoulder at backward angles (top-left) to a peak-like structure close to 90$^0$ (bottom-right) at E$_{\gamma}\approx$1 GeV, i.e. W$\approx$1.7 GeV. Another interesting and complementary comparison is displayed in Fig. 4. The cross sections ($\eta$n and normalized $\eta$p) of Fig. 
3, integrated over the angular range where the free and bound proton cross sections are consistent with Fermi motion widening, are displayed on the left-hand side, whereas the $\eta$N invariant mass is plotted on the right-hand side. This latter variable is calculated using only final state information and is therefore “free" of Fermi motion. In other words, the broadening of a narrow structure would be only due to the resolution of our apparatus. The two distributions exhibit a similar behaviour with a resonant-like structure around W=1.7 GeV. In the context of the search for the non-strange member of the $\chi$SM anti-decuplet discussed above, this seemingly resonant structure is of great interest. Yet, it is not a narrow one, which would certainly be the signature of something “exotic", be it a pentaquark or not. As mentioned previously, because of Fermi motion, our choice to extract the cross section makes it impossible to observe such a narrow state in the present analysis. The simulation tells us that a 10 MeV broad resonance would become $\sim$130 MeV wide in E$_{\gamma}$ or $\sim$70 MeV in ($\eta$n) invariant mass. Even by applying stringent cuts, one may only moderately reduce the width and we therefore want to explore further the possibility to minimize the effect of Fermi motion by using alternative analysis methods. On the other hand, the observed structure could be compatible with a usual broad resonance, several candidates being in this energy range: S$_{11}$(1650), D$_{13}$(1700) or D$_{15}$(1675). As for the proton case, the beam asymmetry $\Sigma$ will bring valuable information to better constrain PWA and to discriminate among these various possibilities. We have now obtained preliminary values for $\Sigma$ over the full angular range [@rac04]. Most surprisingly, the overall shape is rather similar between the proton and the neutron, in sharp contrast with the cross sections. 
Conclusions =========== ![Left: Differential cross section integrated over the covered angular range (113-180$^0$); neutron (closed circles) compared to renormalized proton (stars). Right: Same as left for the invariant mass of ($\eta$n, p) calculated from final-state information.](rebreyend_fig4.eps){width="8cm"} In summary, we have presented preliminary results for the differential cross sections of $\gamma n \rightarrow \eta n$ using data taken with a deuteron target. Results for the reaction $\gamma p \rightarrow \eta p$ have also been extracted from the same set of data. We confirm that the reaction on the neutron is dominated by the S$_{11}$(1535) close to threshold. By contrast, a resonant-like structure is observed on the neutron around W=1.7 GeV, not seen on the proton. [0]{} J. Ajaka [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 1797 (1998). F. Renard [*et al.*]{}, Phys. Lett. B [**528**]{}, 215 (2002). T. Nakano [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 012002 (2003). D. Dyakonov, V. Petrov and M. Polyakov, Z. Phys. A [**359**]{}, 305 (1997). M. Polyakov and A. Rathke, Eur. Phys. J. A [**18**]{}, 691 (2003). P. Hoffman-Rothe [*et al.*]{}, Phys. Rev. Lett. [**78**]{}, 4967 (1997). R. di Salvo for the GRAAL collaboration, this conference.
--- author: - 'Michela D’Onofrio' - Aleksi Kurkela - 'Guy D. Moore' title: Renormalization of Null Wilson Lines in EQCD --- Introduction {#sec:intro} ============ The quark gluon plasma created in the laboratory [@EXPT] appears to be strongly coupled, in that its description requires quick thermalization and a small viscosity [@HYDRO]. This is presumably because the temperature, and therefore the energy scale for most of the physics, is not far above the QCD transition temperature, where the coupling is large. But some of the most important probes of the medium involve the interaction of very high energy particles with the medium – hard probes, the most prominent of which is jet modification [@WHO]. Even if the medium is strongly coupled, the high energy of the jet introduces a large energy scale at which QCD is weakly coupled. Therefore there is hope that jet modification can be understood perturbatively. More likely, it can be understood by treating the jet constituents and their evolution (particularly, particle splitting) perturbatively, but treating the interaction of jet constituents with the medium nonperturbatively. The propagation of a sufficiently high energy excitation through the medium can be described in terms of a null Wilson line, and the transverse momentum exchange with the medium is related to the falloff with distance of a parallel pair of such lines [@Wilson1; @Wilson2]. 
Specifically, the probability per unit length to exchange transverse momentum $\Delta p_\perp$ is given by $C(\Delta p_\perp)$, where $$\label{defC} C(p_\perp) = \int d^2 x_\perp\, e^{i p_\perp \cdot x_\perp}\, C(x_\perp) \,,$$ and $C(x_\perp)$ is determined by a Wilson loop with two null segments of length $l$ and two transverse spatial segments of length $x_\perp$: $$\begin{aligned} \label{CandW} C(x_\perp) &= \lim_{l\to\infty} -\frac{1}{l}\, \ln \big\langle \Tr\, W_{l\times x_\perp} \big\rangle \,,\\ W_{l\times x_\perp} &= U_{(0,0,0);(l,0,l)}\; U_{(l,0,l);(l,x_\perp,l)}\; U_{(l,x_\perp,l);(0,x_\perp,0)}\; U_{(0,x_\perp,0);(0,0,0)} \,,\end{aligned}$$ where $U_{x^\mu;y^\mu}$ are straight Wilson lines from $x^\mu$ to $y^\mu$, and the three entries are the time, transverse, and longitudinal coordinates. The Wilson loop is to be evaluated in the density matrix describing the collision, which is presumably a thermal density matrix. Knowledge of $C(p_\perp)$, or equivalently $C(x_\perp)$, is a key input into models of medium-induced jet energy loss and jet modification [@Wilson2; @Arnold:2008iy]. The leading-order perturbative form of $C(p_\perp)$ is fully known [@Arnold:2008vd], and for momentum transfers of order the temperature or higher, $p_\perp \gtrsim T$, the corrections to the leading-order result are suppressed by $\mathcal{O}(g^2)$. For a soft momentum transfer $p_\perp \sim g T$, however, the introduction of a soft scale forces one to use resummed perturbation theory, and the next-to-leading-order (NLO) correction arises already at $\mathcal{O}(g^3)$, making the physics of soft momentum transfers significantly more complicated. However, in a remarkable paper [@Simon], Caron-Huot has shown that for soft momentum transfers, to NLO, the Wilson loop $W_{l\times x_\perp}$ above can be replaced by a Wilson loop in the much simpler theory of EQCD, that is, Quantum Chromodynamics dimensionally reduced to three Euclidean dimensions, with the $A^0$ field converted into an adjoint scalar field which we will call $\Phi$ (roughly speaking $\g\Phi = i A^0$ and $\g^2 \sim g^2 T$).
Specifically, $$\label{defW} \tilde W_{l\times x_\perp} \equiv \tilde U_{(0,0);(0,l)}\; U_{(0,l);(x_\perp,l)}\; \tilde U_{(x_\perp,l);(x_\perp,0)}\; U_{(x_\perp,0);(0,0)} \,.$$ There is now no time coordinate, only the transverse and $z$ coordinates. The complication is that the Wilson lines which replace the null lines in the 4-D version of $W$ are modified, still containing the descendant of the $A^0$ field, which enters in the definition of $\tilde{U}$: $$\label{utilde} \tilde U_{(0,0);(0,l)} = \mathcal{P} \exp \int_0^l dz\; T_a \left( i A^a_z + \g\, \Phi^a \right) .$$ The representation matrices $T_a$ should be in the same representation as the propagating particle, which we will label $R$ (typically the fundamental or adjoint representation). The relative factor of $\g$ is because we absorbed a $\g$ factor in defining $\Phi$. The relative phase – $A_z$ enters with an $i$ and $\Phi$ does not – is because $\Phi$ is a Euclidean continuation of $A^0$ and the $i$ factor is absorbed in the Wick rotation. The overall sign is reversed in $\tilde U_{(x_\perp,l);(x_\perp,0)}$. We will call this modified Wilson line the null Wilson line of EQCD. Perturbation theory fails near the QCD crossover because the theory is genuinely strongly coupled there. But it is possible that the failure of perturbation theory at a few times the crossover temperature arises because the 3D theory is strongly coupled, while the short-distance physics involved in dimensional reduction is not [@Mikko]. In this case, a nonperturbative treatment of the 3D theory may still give useful information about QCD at the highest temperatures achieved in heavy ion collisions. If true, then the nonperturbative nature of the interaction of a jet parton with the medium is captured by the EQCD value of $C(p_\perp)$, which can be measured on the lattice. With this motivation, there has recently been an upswing of interest in studying the Wilson loop and $C(x_\perp)$ in EQCD on the lattice [@PaneroRummukainen].
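Since everything that follows goes back and forth between $C(x_\perp)$ and $C(p_\perp)$, it may help to see the two-dimensional Fourier relation (\[defC\]) in action. The snippet below is an illustrative numerical check of ours (not from the paper): it transforms the screened propagator $1/(p_\perp^2+m_D^2)$, which will appear in the leading-order soft contribution, and compares against the known two-dimensional result $K_0(m_D x_\perp)/2\pi$. The grid parameters are arbitrary demo choices.

```python
import numpy as np

# 2-D Fourier check:  1/(p^2 + mD^2)  <-->  K_0(mD * x) / (2*pi)
mD = 1.0
N, L = 512, 40.0                       # FFT grid: N^2 points, box of size L
dx = L / N
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
p2 = p[:, None]**2 + p[None, :]**2

Cp = 1.0 / (p2 + mD**2)                # momentum-space screened propagator
# discrete version of  int d^2p/(2pi)^2 exp(i p.x) C(p):
Cx = np.fft.ifft2(Cp).real / dx**2

def K0(z):
    """Modified Bessel K_0 via K_0(z) = int_0^inf exp(-z cosh t) dt."""
    t = np.linspace(0.0, 20.0, 4001)
    f = np.exp(-z * np.cosh(t))
    return (t[1] - t[0]) * (f.sum() - 0.5 * (f[0] + f[-1]))

j = 26                                  # probe separation x_perp = 26*dx ~ 2.03
ratio = Cx[j, 0] / (K0(mD * j * dx) / (2 * np.pi))
```

Agreement at the percent level (`ratio` close to 1) confirms the transform conventions; the unscreened $1/p_\perp^2$ piece would add an infrared-sensitive logarithm in $x_\perp$ and is omitted from this check.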
The relation between continuum thermal QCD and continuum EQCD is known to high perturbative order [@Mikko; @DimRed1; @DimRed2; @DimRed3], and the matching of the action, and some operators, between continuum and lattice EQCD is known to order $\g^2 a$ [@Oa2][^1][^2]. But the Wilson line in (\[utilde\]) is a new operator, and its lattice implementation has not been studied beyond tree level. In practice it is challenging to make lattice studies quantitatively reliable without a calculation of the $\OO(\g^2 a)$ renormalization of the null Wilson line operator. This is true even if the lattice spacing is taken very small, if one is simultaneously interested in $C(x_\perp)$ at short distances; as we will argue below, the corrections arising from the Wilson line operator scale as the *larger* of $\g^2 a$ and $a/x_\perp$. Indeed, the first efforts to numerically determine $C(p_{\perp})$, by Panero, Rummukainen, and Schäfer [@PaneroRummukainen], show that it is challenging to make contact with perturbation theory at $p_\perp \gg g T$, corresponding to small spatial separations. Therefore we believe that a study of $\OO(a)$ corrections to the null Wilson line operator is essential to the success of this program. We carry out this calculation in the remainder of the paper. In the next section we set up the problem by writing the Lagrangian of EQCD and an expression for the Wilson line in the continuum and on the lattice, highlighting what is needed in an NLO matching calculation. The section also shows why the $\OO(a)$ correction becomes more important at small $x_\perp$. The body of the calculation appears in Section \[Sec:details\], which explains how to handle lattice diagrams with null Wilson lines, and tabulates the (Feynman gauge) contribution of each relevant diagram. We close with a brief discussion.
Statement of the Problem {#Sec:Lagrangian} ======================== Lattice and continuum action, Wilson line {#subsec:lattcontin} ----------------------------------------- EQCD is the theory of a 3-dimensional SU($N$) gauge field $A^i$ with field strength $F^{ij} \equiv F^{ij}_a T^a$, together with an adjoint scalar $\Phi\equiv \Phi^a T_a$ \[with $T_a$ the fundamental representation group generators normalized such that $\Tr T_a T_b = \delta_{ab}/2$\]. Writing the path integral as $\int \mathcal{D}[A,\Phi] \exp(-S_{\rm EQCD})$, the most general super-renormalizable action[^3] in the continuum is $$\label{Scontin} S_{\rm EQCD,c} = \int d^3 x \left( \frac{1}{2\g^2} \Tr F^{ij} F^{ij} + \Tr D^i\Phi\, D^i\Phi + \mD^2 \Tr \Phi^2 + \lambda_1 \left(\Tr \Phi^2\right)^2 + \lambda_2 \Tr \Phi^4 \right) ,$$ where we have not shown the counterterm which subtracts UV divergences from the $\Tr \Phi^2$ term[^4]. The three-dimensional theory corresponds to the dimensionally reduced four-dimensional QCD along a *matching curve*, specifying the values of the parameters of EQCD as a function of the four-dimensional parameters: $g$, $T$, and $N$, and the number and masses of quark species $N_f$ and $m_i$. For explicit expressions see for example Eqs. (5.2)–(5.5) of [@Kajantie:2002wa]. For quark mass dependence, see [@Laine:2006cp]. It is customary to introduce dimensionless versions of the mass and scalar coupling terms, by defining[^5] $$\label{xandy} y \equiv \frac{\mD^2}{\g^4} \,, \qquad x_1 = \frac{\lambda_1}{\g^2} \,, \qquad x_2 = \frac{\lambda_2}{\g^2} \,.$$ The corresponding lattice theory, with lattice spacing $a$, is defined in terms of the link matrices $U_i(x)=U_{x;x+a\hat{i}}$ and the lattice scalar field $\PhiL$. The lattice action is $$\begin{aligned} \label{Slatt} S_{\rm EQCD,L} & = \frac{2N Z_g}{\g^2 a} \sum_{x,\,i>j} \left(1 - \frac{1}{N} \Re \Tr\, \Box_{x,ij} \right) \nonumber\\ & \quad {}+2 Z_\Phi \sum_{x,\,i} \Tr \left( \PhiL^2(x) - \PhiL(x)\, U_i(x)\, \PhiL(x+a\hat{i})\, U^\dagger_i(x) \right) \nonumber\\ & \quad {}+ \sum_x \left[ Z_{4} \left( (x_1{+}\delta x_1) \left(\Tr \PhiL^2\right)^2 + (x_2{+}\delta x_2)\, \Tr \PhiL^4 \right) + Z_2\, (y {+} \delta y)\, \Tr \PhiL^2 \right] ,\\ \label{defbox} \Box_{x,ij} & \equiv U_i(x)\, U_j(x{+}a\hat{i})\, U^\dagger_i(x{+}a\hat{j})\, U^\dagger_j(x) \,,\end{aligned}$$ and the lattice implementation of $\tilde U$ is[^6] $$\label{lattW} \tilde U_{(0,0);(0,na)} = \prod_{m=0}^{n-1} \exp\!\left( Z\, T_{R}^a\, \PhiL^a(ma) \right) U_{z,R}(ma) \,,$$ for a Wilson line in the $R$ representation.
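To make the lattice objects concrete, here is a small illustrative toy of ours (not from the paper) that builds random SU(2) links and an adjoint scalar, forms a plaquette $\Box_{x,ij}$, and assembles one factor $\exp(Z\PhiL)\,U_z$ of the null Wilson line; the numerical spreads and the value of $Z$ are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
paulis = [np.array(m, dtype=complex) for m in
          ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def herm_expo(H, z):
    """exp(z*H) for Hermitian H, via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(z * w)) @ V.conj().T

def link(eps=0.4):
    """Random SU(2) link U = exp(i a A), with A = A^a sigma^a / 2."""
    A = sum(rng.normal(scale=eps) * s / 2 for s in paulis)
    return herm_expo(A, 1j)

def scalar(eps=0.3):
    """Hermitian traceless Phi_L = Phi^a sigma^a / 2."""
    return sum(rng.normal(scale=eps) * s / 2 for s in paulis)

# plaquette Box = U_i(x) U_j(x+i) U_i(x+j)^dag U_j(x)^dag : a unitary matrix
Ui, Uj, Ui2, Uj2 = link(), link(), link(), link()
box = Ui @ Uj @ Ui2.conj().T @ Uj2.conj().T

# one factor of the null Wilson line: exp(Z*Phi_L) U_z -- deliberately NOT
# unitary, since the scalar enters with a real exponential (no factor of i)
Z = 0.25                                 # stand-in value for the renormalization
segment = herm_expo(scalar(), Z) @ link()
```

The plaquette stays in the group, while the Wilson-line factor does not, which is the structural point the next paragraph makes.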
Note that there is no factor of $i$ in $\exp(Z \PhiL)$, which is not a unitary matrix. The value of the scalar field wave function normalization $Z_\Phi$ is actually a free choice in implementing the lattice theory, corresponding to the normalization choice for the lattice scalar field. For instance, Panero, Rummukainen, and Schäfer [@PaneroRummukainen] choose $Z_\Phi = \g^2 a Z_g$ (which they call $6/\beta$). Another sensible choice would be $Z_\Phi = 1/(\g^2 a Z_g)$, so the lattice spacing enters the action as a common multiplicative factor. We will not choose a specific prescription in this paper. Instead, we focus on the combinations $Z_g$, $Z^2/Z_\Phi$, $Z_2/Z_\Phi$, and $Z_{4}/Z_\Phi^2$, which are invariant under this normalization freedom. At tree level we would have $Z_g = 1$, $Z^2/Z_\Phi = \g^2 a=Z_{4}/Z_\Phi^2$ and $Z_2/Z_\Phi = \g^4 a^2$. The coefficients $Z_g$, $Z_2/Z_\Phi$, $Z_4/Z_\Phi^2$, $\delta x_{1,2}$ and $\delta y$ are already known. For completeness we list their values in Appendix \[App1\]. Our goal is to determine the remaining unknown parameter $Z^2/Z_\Phi$, which controls the renormalization of the null Wilson line of EQCD. Sensitivity of Wilson loop to Renormalization {#subsec:sensitive} --------------------------------------------- Since we are interested in the $l$-dependence of $\Tr W$ when $l$ is large, we can ignore contributions from the ends and corners of the Wilson loop and focus on correlations between the long edges. We are also only interested in the $x_\perp$ dependence of $C(x_\perp)$, since any $x_\perp$-independent piece does not enter in $C(p_\perp)$. Therefore we need only consider diagrams with at least one line connecting the null Wilson lines. At lowest order there are two, involving the exchange of an $A_z$ or a $\Phi$ line, as illustrated in Figure \[fig:LO\]. Because the $A_z$ fields attach with factors of $i,-i$ while the $\Phi$ fields attach with factors of $1,-1$, the contributions are of opposite sign. 
In the continuum they give $$\label{Cx_contin} C_{\rm LO}(x_\perp) = \crr \int_{-\infty}^{\infty} dz \Big( \big\langle A_z(x_\perp,z)\, A_z(0) \big\rangle - \g^2 \big\langle \Phi(x_\perp,z)\, \Phi(0) \big\rangle \Big) \;\Longrightarrow\; C_{\rm LO}(p_\perp) = \g^2 \crr \left( \frac{1}{p_\perp^2} - \frac{1}{p_\perp^2 + \mD^2} \right) ,$$ while on the lattice we find (defining, as usual, $U_i(x) = \exp\big(ia A_i(x+a \hat{i}/2)\big)$) $$\label{Cx_latt} C_{\rm LO}(x_\perp) = \crr \sum_n \Big( a^2 \big\langle A_z(x_\perp,na)\, A_z(0) \big\rangle - Z^2 \big\langle \PhiL(x_\perp,na)\, \PhiL(0) \big\rangle \Big) \;\Longrightarrow\; C_{\rm LO}(p_\perp) = \crr \left( \frac{\g^2}{\tilde p_\perp^2} - \frac{Z^2}{aZ_\Phi}\, \frac{1}{\tilde p_\perp^2 + \mD^2} \right) .$$ Here $\tilde p_x^2 \equiv \sin^2(p_x a/2)/(a/2)^2$ is the lattice momentum squared, which replaces $p_x^2$ in the lattice propagator, and $\crr$ is the quadratic Casimir in the representation $R$ of the Wilson loop. The important feature of (\[Cx\_latt\]) is that the two terms approximately cancel at large $p_\perp$, up to subleading $\mD^2/p_\perp^4$ corrections. The presence of the lattice propagator in (\[Cx\_latt\]) does not change this cancellation. Of course this cancellation does not persist at higher loop order; but because the theory is super-renormalizable, each loop order gives weaker large-$p_\perp$ behavior. Indeed, at NLO the large-$p_\perp$ behavior is $\OO(\g^4/p_\perp^3)$ [@Simon]. The problem is that a renormalization of $Z$ which is not taken into account in a lattice calculation will spoil the cancellation in (\[Cx\_latt\]), giving rise to uncanceled $1/p_\perp^2$ large-$p_\perp$ behavior – specifically, a contribution of $(1 - Z^2_{\rm used}/Z^2_{\rm correct})\,\g^2/p_\perp^2$. Therefore the short-distance or large-$p_\perp$ behavior is especially sensitive to errors in the Wilson line renormalization constant $Z$. Assuming $Z^2_{\rm used}/Z^2_{\rm correct} = 1 + \OO(\g^2 a)$, the relative error is of order $(\g^4 a/p_\perp^2)/(\g^4/p_\perp^3) \sim a p_\perp$, corresponding to an $a/x_\perp$ relative error in $C(x_\perp)$. Therefore the need to renormalize the Wilson line operator increases at small separation, scaling as the inverse separation of the Wilson lines in lattice units. For instance, if the Wilson lines are separated by $N$ lattice spacings in the transverse direction, the $\OO(a)$ corrections are $\OO(1/N)$, no matter how small the lattice spacing may be.
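The sensitivity argument can be checked with a few lines of arithmetic (an illustration of ours, with arbitrary numbers): evaluating the leading-order combination $1/p_\perp^2 - 1/(p_\perp^2+m_D^2)$ with a 1% error in $Z^2$, mimicking an unrenormalized Wilson line, spoils the cancellation by exactly $\epsilon\,p_\perp^2/m_D^2$ in relative terms, growing with $p_\perp$ as claimed. It also checks that $\tilde p^2 = \sin^2(pa/2)/(a/2)^2$ reduces to $p^2$ as $a \to 0$.

```python
import math

def ptilde2(p, a):
    """Lattice momentum squared, sin^2(p a / 2) / (a / 2)^2."""
    return (math.sin(p * a / 2) / (a / 2))**2

mD, eps = 1.0, 0.01          # screening mass; fractional error in Z^2
rel_errs = []
for p in (2.0, 5.0, 10.0):
    exact = 1 / p**2 - 1 / (p**2 + mD**2)        # ~ mD^2 / p^4 at large p
    spoiled = 1 / p**2 - (1 - eps) / (p**2 + mD**2)
    rel_errs.append((spoiled - exact) / exact)   # = eps * p^2 / mD^2
```

A 1% mismatch in $Z^2$ is a 100% error in the combination already at $p_\perp = 10\,m_D$, which is why the short-distance regime is the dangerous one.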
Finding the $\OO(a)$ correction to $Z$ will improve this behavior to $1/N^2$, an important correction for realistic values $N\sim 5$. Calculation Details {#Sec:details} =================== Strategy {#subsec:strategy} -------- The matching calculation consists of computing $C(p_\perp)/C_R$ at NLO within continuum and lattice EQCD, and fixing the coefficients of the lattice theory such that the calculations agree to all orders in $\g$ and $\lambda_i$ and up to the desired order in $a$, here $\mathcal{O}(a)$. As usual, once the coefficients are fixed at one order, the infrared behavior is automatically the same at the next order, since the infrared behaviors of the theories coincide by construction. Then it is the difference in the ultraviolet region of any loops which must be calculated. As usual, such behavior can be understood in terms of a renormalization of the parameters of the theory appearing in diagrams of lower order. Again we only need diagrams with at least one line running between the null Wilson lines. There are a number of NLO diagrams, see Figure \[fig:NLO\]. Fortunately, both propagators in diagrams $A$ and $B$ must be infrared (since they connect spatially well-separated Wilson lines), so they do not contribute to UV renormalization. In Feynman gauge (which we will use throughout), diagrams $E$, $H$, and $O$ are zero. Diagrams $J,L,Q,S$ have no continuum analog; they arise because $U_z=\exp(iaA_z)$ and $\exp(Z\Phi)$ are nonlinear in $A_z$ and $\Phi$. But the form of the lattice Wilson line, (\[lattW\]), does not contain anything which would introduce mixed $A_z,\Phi$ vertices on the Wilson line, so there are no mixed-field analogs of diagrams $H,J,L,O,Q,S$. Our strategy will be the following. Since only the UV behavior of diagrams is relevant, we can ignore $\mD$ and take the propagators to be $$\label{latt_props} \big\langle A_z A_z \big\rangle(p) = \frac{\g^2}{\tilde p^2} \,, \qquad \big\langle \PhiL\, \PhiL \big\rangle(p) = \frac{1}{a Z_\Phi\, \tilde p^2} \,.$$
In this case, for a soft momentum $p_\perp \ll 1/a$ running between the Wilson lines, we extract all $1/p_\perp^2$ contributions, and choose the value of $Z^2/Z_\Phi$ such that they cancel, as they do in the continuum according to (\[Cx\_contin\]). Self-energies {#subsec:self} ------------- Diagrams $C$ and $D$ have been calculated [@Oa2]. Indeed, diagram $C$ makes up the principal contribution to $Z_g$, the gauge action renormalization. For this reason, we reproduce here the expression, from [@Oa2], for $Z_g$. Ref. [@Oa2] finds: $$\begin{aligned} \label{Zg} Z_g^{-1} &= 1 + 2(Z_\Phi-1)+ 2(V_{A,L}-V_{A,c})+\frac{\left( \pi_{A,L}- \pi_{A,c} \right)}{p^2},\end{aligned}$$ where $V_{A,L}$ and $V_{A,c}$ are the one-loop contributions to the three-point gauge-scalar vertex on the lattice and in the continuum, respectively, while $\pi_{A,L}$ and $\pi_{A,c}$ are the corresponding gauge self-energies. While $Z_g^{-1}$ itself is gauge invariant, the individual contribution of each term to $Z_g^{-1}$ is gauge dependent. Nevertheless, we only need the answer in Feynman gauge, in which they read: $$\label{eq:diffV} 2(Z_\Phi- 1) + 2(V_{A,L}-V_{A,c}) + \frac{\pi_{A,L}- \pi_{A,c}}{p^2} = \frac{\g^2 a}{4\pi} \left( \left\{ \cdots - \cdots \right\} + \left[ \,\cdots\, \right] \right) .$$ We have written out color factors in terms of the fundamental Casimir $\cf$ and dimension $\df$, and the adjoint Casimir $\ca$ and dimension $\da$: in SU($N$) theory these are $\df = N$, $\cf = (N^2{-}1)/(2N)$, $\ca = N$ and $\da = (N^2{-}1)$. The constants appearing here are $$\label{eq:diffPi} \frac{\Sigma}{4\pi a} \equiv \int_{-\pi/a}^{\pi/a} \frac{d^3 p}{(2\pi)^3}\, \frac{1}{\tilde p^2} \,, \qquad \frac{\xi\, a}{4\pi} \equiv \int_{-\pi/a}^{\pi/a} \frac{d^3 p}{(2\pi)^3}\, \frac{1}{(\tilde p^2)^2} - \int_{-\infty}^{\infty} \frac{d^3 p}{(2\pi)^3}\, \frac{1}{(p^2)^2} \,,$$ where in the latter expression we implicitly IR regulate both integrals in the same way; numerically $\xi = 0.152859324966101$ and $\Sigma = 3.17591153562522$. These are the only constants we will need in the remainder of the calculation. The terms in curly brackets in Eq. (\[eq:diffV\]) arise from gauge self-energy diagrams with $\Phi$ running in the loops, and are, in fact, independent of the gauge parameter.
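The constant $\Sigma$ just quoted is easy to reproduce numerically. The sketch below (ours) estimates the Brillouin-zone integral with a midpoint rule in units $a=1$; the shifted grid avoids the integrable $1/\tilde p^2$ singularity at the origin, and even a coarse grid lands close to $\Sigma = 3.17591\ldots$

```python
import math

# Midpoint-rule estimate of Sigma, defined (with a = 1) by
#   Sigma / (4*pi) = int_{-pi}^{pi} d^3p / (2*pi)^3  1 / ptilde^2 ,
# where ptilde^2 = 4 * sum_i sin^2(p_i / 2).
n = 64
h = 2 * math.pi / n
s = [math.sin((-math.pi + (k + 0.5) * h) / 2) ** 2 for k in range(n)]
total = sum(1.0 / (4 * (si + sj + sk)) for si in s for sj in s for sk in s)
sigma_est = 4 * math.pi * total * h**3 / (2 * math.pi) ** 3
```

The residual discrepancy is dominated by the cells nearest the origin and shrinks roughly linearly with the grid spacing.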
The terms in square brackets arise from pure gauge self-energy diagrams[^7] and, in a general gauge, would acquire a dependence on the gauge parameter. In Feynman gauge, the contribution from diagram $C$ to $C(p_\perp)/C_R$ reads simply $$\frac{C_C(p_\perp)}{\crr} = -\g^2\, \frac{\pi_{A,L}- \pi_{A,c}}{(\tilde p_\perp^2)^2} \,,$$ so that if we consider the sum of the self-energy diagram and the leading-order $A_z$ exchange contribution, the self-energy contributions cancel and leave only the parts of $Z_g$ which arise from other sources: $$\label{diagramC} \frac{C_{A_z,{\rm LO}}(p_\perp) + C_C(p_\perp)}{\crr} = \frac{\g^2}{\tilde p_\perp^2} \left( 1 - 8\, \cdots \right) .$$ The scalar self-energy is also computed in Ref. [@Oa2], where it is responsible for the quantity called $Z_\Phi$ there. Re-computing to convert from Landau to Feynman gauge, using the result for the self-energy in Ref. [@Oa1], we find the sum of the tree-level and self-energy $\Phi$ exchange diagrams to give $$\label{diagramD} \frac{C_{\Phi,{\rm LO}}(p_\perp) + C_D(p_\perp)}{\crr} = - \frac{Z^2}{a Z_\Phi\, \tilde p_\perp^2} \left( 1 + \cdots \right) .$$ Vertex corrections ------------------ Next consider diagrams $M,N$. Here it is relevant that (\[CandW\]) involves $\ln\Tr W$, not just its trace. In an abelian theory the Wilson loop is the exponential of all 1-propagator corrections[^8], which means that the abelian parts of diagrams $I-N$ and $P-U$ are absorbed when we take the log. Only the nonabelian parts of these diagrams contribute. The group theory factor in diagram $N$ is $T^a T^a T^b T^b = \crr^2$, the product of the group factors for each line. Therefore $N$ is abelian. Diagram $M$ involves $T^a T^b T^a T^b = \crr(\crr-\ca/2)$; the $\crr^2$ piece is the abelian part, the $\crr\ca$ piece is the nonabelian part we need. Label the momenta on the $\PhiL$ and $A_z$ lines $p$ and $q$ respectively. The $\PhiL$ line can attach anywhere on each Wilson line. The sum over locations on the lower Wilson line gives a factor $\ell/a$, which cancels the $a$ in the propagator and gives the $\ell$ which should be canceled in (\[CandW\]).
The sum over the location on the upper Wilson line gives a factor $\delta(a p_z)$ which ensures that $p$ is purely transverse; the $\Phi$ line then gives rise to the $Z^2/(aZ_\Phi\, \tilde{p}_\perp^2)$ term found in (\[Cx\_latt\]). Next we sum over the attachment positions of the $A_z$ propagator. It is most convenient to consider the link matrix $U_z(x)$ to “live” at the center of the link, $x+a\hat{z}/2$. In this case, for a line momentum $q$, the sum over attachment locations gives $$\left( a\sum_{n=0}^{\infty} e^{iq_z (n+1/2) a} \right) + \left( a\sum_{m=0}^{\infty} e^{-iq_z (m+1/2) a} \right) ,$$ where the first (second) term sums over the attachment of the front (back) vertex, relative to where the $\PhiL$ attaches. The sum is easily performed by splitting off the first term and shifting the remaining terms: $$\begin{aligned} \label{sum1} a\sum_{n=0}^{\infty} e^{iq_z (n+1/2) a} &= a e^{iq_z a/2} + e^{iq_z a}\, a\sum_{n=0}^{\infty} e^{iq_z (n+1/2) a} \,,\nonumber\\ a\sum_{n=0}^{\infty} e^{iq_z (n+1/2) a} &= \frac{a\, e^{iq_z a/2}}{1-e^{iq_z a}} = \frac{i}{\tilde q_z} \,.\end{aligned}$$ This term will always arise when summing over the location of an $A_z$ attachment which must be to the right of a $\PhiL$ attachment on the Wilson line. Therefore it makes sense to define it as the Feynman rule for the propagator of the Wilson line between a $\PhiL$ and an $A_z$ attachment. The corresponding continuum expression is $i/ q_z$. The group theoretical issues in treating diagrams $I,J,K,L$ are similar. Each involves a factor $\crr^2$ and a factor $-\crr \ca$, with coefficient $-1/2$, $-1/4$, $0$, and $-1/6$ for $I$, $J$, $K$, and $L$ respectively. The sum over the attachment points in diagram $I$ is similar to that in diagram $M$, except that the attachments must be separated by an integer distance. They therefore involve the sum $$\label{sum2} a \sum_{n=1}^{\infty} e^{iq_z n a} = e^{iq_z a} \left( a + a \sum_{n=1}^{\infty} e^{iq_z n a} \right) = \frac{a\, e^{iq_z a}}{1-e^{iq_z a}} = \frac{i \cos(q_z a/2)}{\tilde q_z} - \frac{a}{2} \,.$$ The sum of the nonabelian contributions from diagrams $I,J,K,L$ is therefore $$\begin{aligned} \label{IJL} I+J+K+L &= -\g^2\, \cdots \int_{-\pi/a}^{\pi/a} \frac{d^3 q}{(2\pi)^3}\, \cdots \left( \Big[\cdots\Big]^2_{I} - ia\, \Big[\cdots\Big]_{J} - \Big[\cdots\Big]_{L} \right) \nonumber\\ &= (-\g^2)\, \cdots \int_{-\pi/a}^{\pi/a} \frac{d^3 q}{(2\pi)^3} \left( \cdots - \cdots \right) .\end{aligned}$$
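The two Wilson-line sums just derived have simple closed forms, which can be verified directly by giving $q_z$ a small positive imaginary part so the geometric series converge (an illustrative check of ours, truncated at a finite number of terms):

```python
import cmath

a = 1.0
q = 0.7 + 0.05j                 # Im q > 0 makes both series convergent

half = a * sum(cmath.exp(1j * q * (n + 0.5) * a) for n in range(4000))
whole = a * sum(cmath.exp(1j * q * n * a) for n in range(1, 4000))

qt = cmath.sin(q * a / 2) / (a / 2)          # lattice momentum \tilde q_z
closed_half = 1j / qt                        # Phi <-> A_z attachments
closed_whole = 1j * cmath.cos(q * a / 2) / qt - a / 2   # like-type attachments
```

In the $a \to 0$ limit both reduce to the continuum $i/q_z$ rule quoted in the text; the extra $\cos(q_z a/2)$ and $-a/2$ pieces are the ultraviolet difference between like-type and opposite-type attachments that the calculation tracks.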
We can rewrite $$\label{work_utilde} \cos^2(q_z a/2) = 1 - \frac{a^2 \tilde q_z^2}{4}$$ and therefore $$\label{IJL_final} I+J+L = -\g^2\, \cdots \int_{-\pi/a}^{\pi/a} \frac{d^3 q}{(2\pi)^3} \left( \cdots - \cdots \right) .$$ The calculation of $P,Q,R,S$ proceeds similarly, and the result is in fact identical except for a factor of $(-Z^2/aZ_\Phi)^2$, which is 1 at the level of precision needed in the current calculation. On the other hand, diagrams $M$ and $T$ each give $$\label{MandT} M = T = +\g^2\, \cdots \int_{-\pi/a}^{\pi/a} \frac{d^3 q}{(2\pi)^3}\, \cdots \,.$$ These cancel the like factors from $I,J,L,P,Q,S$, so that all vertex correction diagrams add to $$\label{all_vertex} \frac{C_{\rm vertex}(p_\perp)}{\crr} = \g^2\, \cdots \int_{-\pi/a}^{\pi/a} \frac{d^3 q}{(2\pi)^3}\, \cdots = \cdots \,.$$ Y-diagrams {#subsec:y} ---------- Finally we consider diagrams $E,F,G$. In Coulomb gauge the vertex appearing in $E$ connects three $A_z$ propagators. Labeling the lower momentum $p$ and the upper left and right momenta $q$ and $p+q$, we find that $p_z=0$ automatically. Applying the vertex Feynman rule (see Ref. [@Rothe], page 383), the three-$A_z$ vertex vanishes identically at $p_z=0$. This is not surprising; for instance, there is certainly no $A_z^3$ continuum vertex, since $F_{ij}^2$ always involves two distinct labels each appearing twice. Diagrams $F$ and $G$ can be computed in a straightforward way using the Feynman rules we have already found for the attachment of lines to the Wilson line, and we find $$\begin{aligned} \label{F_is} F &= (-2\g^2)\, \cdots \int_{-\pi/a}^{\pi/a} \frac{d^3 q}{(2\pi)^3}\, \cdots\, q_z\, \cdots \,,\\ \label{G_is} G &= 2\g^2\, \cdots \int_{-\pi/a}^{\pi/a} \frac{d^3 q}{(2\pi)^3}\, \cdots\, (q_z \cdots q_z) \,,\\ \label{F_and_G} F+G &= 2\g^2\, \cdots \int_{-\pi/a}^{\pi/a} \frac{d^3 q}{(2\pi)^3}\, \cdots \left(1 - \cdots\, q_z^2 \right) = \cdots \left( \cdots - \cdots \right) .\end{aligned}$$ Summing it up ------------- Summing the leading-order and subleading-order contributions of the form $1/p_\perp^2$, that is, (\[diagramC\]), (\[diagramD\]), (\[all\_vertex\]), and (\[F\_and\_G\]), and requiring that the cancellation of $1/p_\perp^2$ terms should occur, we find $$\begin{aligned} \label{main_result} 0 &= \frac{\g^2}{\tilde p_\perp^2} \left( 1 - \frac{Z^2}{a \g^2 Z_\Phi} + \frac{\g^2 a}{4\pi} \left\{ -8\,\cdots - 8\,\cdots - \cdots + \cdots - \cdots \right\} \right) , \nonumber\\ \frac{Z^2}{a \g^2 Z_\Phi} &= 1 + \frac{\g^2 a}{4\pi} \left( \cdots - 16\,\cdots \right) .\end{aligned}$$ This constitutes our main result. Discussion {#Sec:discussion} ========== We have found the 1-loop renormalization factor which should be included in the lattice implementation of the EQCD null Wilson line.
Specifically, given the definition of the lattice action found in (\[Slatt\]) and of the Wilson line operator in (\[lattW\]), the ratio of the normalization of the lattice scalar field $\PhiL$ appearing in the Wilson line to its normalization in the action is given in (\[main\_result\]), which we repeat for convenience: $$\label{main_again} \frac{Z^2}{a \g^2 Z_\Phi} = 1 + \frac{\g^2 a}{4\pi} \left( \cdots - 16\,\cdots \right) .$$ Using this renormalization in the Wilson line will facilitate faster and more accurate lattice calculations of the infrared contribution to $\hat{q}$ and $C(p_\perp)$. In particular, it eliminates the last source of error (except for $\delta y$, see Appendix \[App1\]) which obstructs a quick and accurate continuum extrapolation in the lattice determination of $C(x_\perp)$. Structurally, the most interesting feature of the calculation is the tendency for diagrams to nearly cancel when one sums over lines being $A_z$ and $\PhiL$ (the descendant of $iA^0$). This cancellation is broken in the UV because the Wilson line is built out of $\PhiL$ fields appearing at integer sites and $A_z$ fields appearing at half-integer links. Therefore the Wilson-line propagator between two like-type fields differs in the UV from that between opposite-type fields. There are a few other physically interesting quantities which can be computed with the same methodology as the calculation performed here. It is pointed out in Ref. [@NLOphotons] that $\hat{q}$ and its semi-collinear analogue $\hat{q}(\delta E)$ can both be computed as correlation functions of operators separated by adjoint null Wilson lines. The renormalization of the Wilson line found here can be adopted in that problem, though a rather high-loop calculation of UV contributions to the correlator will also be necessary. We leave this to be considered in future work. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank Jacopo Ghiglieri, Kari Rummukainen, and Urs Wiedemann for useful discussions. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. M.D.
was supported by the Magnus Ehrnrooth Foundation of Finland. Scalar Mass and Self-Coupling Renormalization {#App1} ============================================= Here we write the known 1- and 2-loop renormalizations of the scalar self-couplings and mass in the notation of this paper, and we discuss what would be involved in a full $\OO(a)$ (3-loop) determination of $\delta y$, or how it could be avoided by using the lattice to measure the requisite corrections. The 1-loop renormalization $Z_g$ appears already in (\[eq:diffV\]). The remaining one-loop renormalizations can be found in Ref. [@Oa2], and are:[^9] $$\begin{aligned} \frac{Z_4}{Z_\Phi^2} &= \g^2 a \left( 1 - \cdots\, \g^2 a \right),\nonumber\\ \frac{Z_2}{Z_\Phi} &= \g^4 a^2 \left( 1 + \cdots\, \g^2 a \right),\nonumber\\ \delta x_1 &= \cdots\, \g^2 a \left( 3\,\cdots + (N^2{+}7)\, x_1^2 + 2\, x_1 x_2 + \left( 3\,\cdots + \cdots \right) x_2^2 \right) ,\nonumber\\ \delta x_2 &= \cdots\, \g^2 a \left( N\,\cdots + 2\, x_2^2 + 12\, x_1 x_2 \right) ,\nonumber\\ \delta y_{\rm 1loop} &= -\frac{\Sigma}{4\pi\, \g^2 a} \left( 2 N + (N^2{+}1)\, x_1 + \cdots\, x_2 \right) .\end{aligned}$$ Here we have departed from our previous pattern of writing everything in terms of Casimirs and dimensions and have specialized to SU($N$) gauge theory, because it is not obvious to us that the form of the quartic interaction we have used can be considered without modification in more general groups. The renormalization of $y$ is known to two loops. Besides the factor $Z_2$ included above, one needs $$\begin{aligned} -16\pi^2\, \delta y_{\rm 2loop} &= \left( (N^2{+}1)\, x_1 + \cdots\, x_2 \right) \left( \cdots + N\,\cdots - 2 N\,\cdots \right) \nonumber\\ & \quad {}+ N^2 \left( \cdots - \cdots + \cdots + 2\kappa_1 - \kappa_4 - 4\rho\,\cdots - 4\,\cdots \right) \nonumber\\ & \quad {}+ \Big( (N^2{+}1)\, x_1 (2N{-}2x_1) + \cdots\, x_2 (2N{-}4x_1) - \cdots\, x_2^2 \Big) \left( \cdots + \cdots - 3\,\cdots \right) .\end{aligned}$$ Here $\zeta$, $\delta$, and $4\rho-2\kappa_1+\kappa_4$ are additional constants which are defined in Ref. [@MikkoPhi]; specifically $\zeta=0.08849$, $\delta=1.942130$, and $4\rho-2\kappa_1+\kappa_4 = -1.968325$. Note that the 1-loop contribution is parametrically $1/a$ and the two-loop contribution is of order $a^0,\ \ln(\g^2 a)$. Therefore, in a complete $\OO(a)$-corrected study, one should also establish the three-loop $\OO(a)$ correction to $\delta y$, which will be parametrically of the form $$\delta y_{\rm 3loop} \sim \g^2 a \left( C_3 x^3 + C_2 x^2 + C_1 x + C_0 \right) .$$
\[y\_3loop\] (Really $C_3 x^3 = C_{30}\, x_1^3\, x_2^0 + C_{21}\, x_1^2 x_2 + \ldots$, so there are 10 coefficients in all.) The diagrammatic computation of this correction, and particularly of $C_0$, appears rather difficult. However, there is an alternative to a diagrammatic computation which could be attempted. The key is that the SU($N$) theory with $N>2$ has a phase transition at some $y_{\rm crit}(x_1,x_2)$ for all values of $x_1,x_2$. A lattice study, at fixed $x$, can find the critical value of $y$ at a given lattice spacing, $y_{\rm crit}(a,x)$. One then repeats for several values of $a$, and examines the extrapolation to small $a$. Since all other parameters are known up to $\OO(a^2)$ corrections, the only source of $\OO(a)$ dependence in $y_{\rm crit}(a,x)$ is the unknown ($x$-dependent) $\OO(a)$ correction to $\delta y$. If the lattice determination of $y_{\rm crit}$ is accurate enough to determine the linear-in-$a$ behavior with precision, this constitutes an evaluation of the terms in (\[y\_3loop\]), at a given value of $x$. By repeating for several $x$ values, one can reconstruct all terms. In particular, one can determine $C_3$ by studying the theory with the gauge fields switched off, and only scalar fields with quartic interactions. Since much more powerful algorithms exist to study this theory (cluster, worm, multigrid), an accurate determination of $C_3$ should be straightforward. [39]{} K. Aamodt [*et al.*]{} \[ALICE Collaboration\], Phys. Rev. Lett.  [**105**]{} (2010) 252302 \[arXiv:1011.3914 \[nucl-ex\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Rev. C [**86**]{} (2012) 014907 \[arXiv:1203.3087 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Rev. C [**87**]{} (2013) 014902 \[arXiv:1204.1409 \[nucl-ex\]\]. M. Luzum and P. Romatschke, Phys. Rev. C [**78**]{} (2008) 034915 \[Erratum-ibid. C [**79**]{} (2009) 039903\] \[arXiv:0804.4015 \[nucl-th\]\]; B. Schenke, S. Jeon and C. Gale, Phys. Lett.
B [**702**]{} (2011) 59 \[arXiv:1102.0575 \[hep-ph\]\]; H. Song, S. Bass and U. W. Heinz, arXiv:1311.0157 \[nucl-th\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**718**]{} (2013) 773 \[arXiv:1205.0206 \[nucl-ex\]\]; S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**712**]{} (2012) 176 \[arXiv:1202.5022 \[nucl-ex\]\]; G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**719**]{} (2013) 220 \[arXiv:1208.1967 \[hep-ex\]\]. B. G. Zakharov, JETP Lett.  [**63**]{}, 952 (1996) \[hep-ph/9607440\]; JETP Lett.  [**65**]{}, 615 (1997) \[hep-ph/9704255\]. R. Baier, Y. L. Dokshitzer, S. Peigne and D. Schiff, Phys. Lett. B [**345**]{}, 277 (1995) \[hep-ph/9411409\]. R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne and D. Schiff, Nucl. Phys. B [**483**]{}, 291 (1997) \[hep-ph/9607355\]. P. B. Arnold, Phys. Rev. D [**79**]{} (2009) 065025 \[arXiv:0808.2767 \[hep-ph\]\]. P. B. Arnold and W. Xiao, Phys. Rev. D [**78**]{} (2008) 125008 \[arXiv:0810.1026 \[hep-ph\]\]. S. Caron-Huot, Phys. Rev. D [**79**]{}, 065039 (2009) \[arXiv:0811.1603 \[hep-ph\]\]. M. Laine and Y. Schroder, JHEP [**0503**]{} (2005) 067 \[hep-ph/0503061\]. M. Panero, K. Rummukainen and A. Schäfer, arXiv:1307.5850 \[hep-ph\]. S. Nadkarni, Phys. Rev. D [**38**]{}, 3287 (1988); N. P. Landsman, Nucl. Phys. B [**322**]{}, 498 (1989). E. Braaten and A. Nieto, Phys. Rev. D [**53**]{}, 3421 (1996) \[hep-ph/9510408\]. K. Kajantie, M. Laine, K. Rummukainen and M. E. Shaposhnikov, Nucl. Phys. B [**458**]{}, 90 (1996) \[hep-ph/9508379\]; G. D. Moore, Nucl. Phys. B [**523**]{}, 569 (1998) \[hep-lat/9709053\]. M. Laine, Nucl. Phys. B [**451**]{}, 484 (1995) \[hep-lat/9504001\]. G. D. Moore, Nucl. Phys. B [**493**]{}, 439 (1997) \[hep-lat/9610013\]. K. Kajantie, M. Laine, K. Rummukainen and M. E. Shaposhnikov, Nucl. Phys. B [**503**]{} (1997) 357 \[hep-ph/9704416\]. K. Kajantie, M. Laine, K. Rummukainen and Y. Schroder, Phys. Rev. D [**67**]{} (2003) 105008 \[hep-ph/0211321\]. 
M. Laine and Y. Schroder, Phys. Rev. D [**73**]{} (2006) 085009 \[hep-ph/0603048\]. H. J. Rothe, World Sci. Lect. Notes Phys.  [**43**]{}, 1 (1992) \[World Sci. Lect. Notes Phys.  [**59**]{}, 1 (1997)\] \[World Sci. Lect. Notes Phys.  [**74**]{}, 1 (2005)\] \[World Sci. Lect. Notes Phys.  [**82**]{}, 1 (2012)\]. J. Ghiglieri, J. Hong, A. Kurkela, E. Lu, G. D. Moore and D. Teaney, JHEP [**1305**]{}, 010 (2013) \[arXiv:1302.5970 \[hep-ph\]\]. [^1]: \[footdim\] In the 3-D theory the gauge coupling $g^2$ is dimensionful, carrying units of energy or inverse length; so $g^2 a$ is a dimensionless quantity. [^2]: \[footmass\] Actually the $\OO(g^2 a)$ matching between continuum and lattice EQCD is incomplete; the mass parameter of the $\Phi$ field is only known to two-loop order [@MikkoPhi], which is $\OO(a^0)$. Improving this parameter to $\OO(g^2 a)$ will require a 3-loop calculation, though there is a way to determine the matching within numerical EQCD simulations, which we outline in Appendix \[App1\]. [^3]: The Lagrangian could in addition contain a $\Tr \Phi^3$ term, but at zero baryon number chemical potential it is forbidden by the charge conjugation symmetry. [^4]: For the exact form of the counterterm see, e.g., Eq.(2.8) of [@Kajantie:1997tt]. [^5]: \[footx\]Note that, for SU(2) or SU(3), the $\Tr \Phi^4$ and $(\Tr \Phi^2)^2$ terms are not independent, as $\Tr \Phi^4 = (\Tr \Phi^2)^2/2$ for these groups. So in these cases one of the scalar terms can be eliminated in favor of the other. [^6]: \[footexp\] Actually the implementation shown here is not unique; for instance, one can also replace $\exp(Z \PhiL) \to Z_0 + Z_1 \PhiL$, avoiding the need to exponentiate. But there are advantages to the exponential choice; for instance, $Z_0,Z_1$ are already nontrivial in an abelian theory, for which $Z$ takes its tree-level value. We will only consider the exponential choice here. 
[^7]: It may look strange that the pure-glue self-energy contains a rather large term proportional to $\cf$. This is a tadpole-type contribution, which is dependent on the choice to implement the lattice link operators as fundamental-representation matrices. [^8]: Here it is essential that both the $A$ and $\Phi$ field attachments are implemented via exponentials in . In the implementation suggested in Footnote \[footexp\], exponentiation would fail for the $\PhiL$ field. [^9]: $Z_2/\g^4 a^2 Z_\Phi$ was called $Z_m$ there, and the notation for $x_1$, $x_2$, as well as the division between $x$ and $Z_4$, was slightly different.\[foot:notation\]
One of the most striking properties of liquid $^4$He(II) is its ability to mimic the behavior of a solid body when subjected to uniform rotation. Since the superfluid velocity field ${\bf v}_s$ is irrotational ($\nabla\times{\bf v}_s=0$), the superfluid component of $^4$He(II) might be expected to remain at rest while the normal component rotates with the container. In fact, for sufficiently large values of the rotation frequency $\Omega$, the entire fluid is found to rotate like a classical liquid at all temperatures [@Osborne]. The paradox may be resolved by assuming that the superfluid is threaded by quantized vortices. These are singularities of ${\bf v}_s$, around which the phase of the superfluid order parameter increases by 2$\pi$. Although the mechanisms for the spin-up of the superfluid are not fully understood, at equilibrium the vortices must flow with the normal velocity due to the mutual friction between superfluid and normal components [@Khalatnikov]. In addition to considerable indirect evidence for this hypothesis, small numbers of vortices have been imaged directly in rotating superfluid $^4$He [@Donnelly]. In this work we show how the presence of quantized vortices can allow a Bose-Einstein condensate (BEC) to mimic a classical fluid under rotation, as has been suggested by recent experiments at JILA [@Haljan]. In these experiments, a trapped gas of ultracold $^{87}$Rb atoms is spun up, and then cooled through the Bose-Einstein condensation transition. For small values of $\Omega$, the condensate density is found to assume its usual non-rotating shape, while the thermal cloud bulges outward. This corroborates previous evidence that the condensate behaves as an irrotational superfluid [@Marago; @Onofrio]. The condensate density profile undergoes a sudden change at a value of $\Omega$ that is comparable to the thermodynamic critical frequency for the stability of a single vortex [@Fetter].
As $\Omega$ increases further, the shape of the condensate gradually approaches that of the thermal cloud. This suggests that for any given value of $\Omega$ and temperature, the condensate contains the appropriate number and distribution of vortices for thermodynamic equilibrium. In contrast, when no appreciable thermal fraction is present, higher rotation frequencies are generally required to nucleate vortices in BECs [@Madison; @Abo-Shaeer; @Hodby]. We present two key results, which also bear on issues raised by recent experiments at MIT [@Abo-Shaeer]. First, the vortices are arranged in extremely regular triangular arrays: even near the condensate surface, little circular distortion [@Campbell] is found. Second, the number of vortices is consistently lower than that required to ensure solid-body rotation throughout the condensate. To make explicit comparison with the recent JILA experiment, we consider the case of $N=200,000$ atoms of $^{87}$Rb, confined in a cylindrically symmetric trap with radial frequency $\omega_{\rho}/2\pi=8$ Hz, and anisotropy $\lambda\equiv\omega_z/\omega_{\rho}={5\over 8}$. Unless stated explicitly, our units of energy, angular frequency, length, and time are given by $\hbar\omega_{\rho}$, $\omega_{\rho}$, $d_{\rho}=\sqrt{\hbar/M\omega_{\rho}}\approx 3.845$ $\mu$m, and $\omega_{\rho}^{-1}$, respectively, where $M$ is the atomic mass and $\hbar$ is Planck’s constant $h$ divided by $2\pi$. We work in a frame that rotates with angular frequency $\Omega$ about the $z$ axis. 
The time-dependent Gross-Pitaevskii (GP) equation [@GP], which governs the dynamics of the condensate wavefunction $\psi$ of a dilute BEC at zero temperature, is then given by: $$i\partial_t\psi({\bf r},t) =\left[T+V_{\rm trap}+V_{\rm H}-\Omega L_z\right]\psi({\bf r},t), \label{gp}$$ with kinetic energy $T=-\tfrac{1}{2}\nabla^2$, trap potential $V_{\rm trap}=\tfrac{1}{2}\left(\rho^2+\lambda^2z^2\right)$, and angular momentum component $L_z=i\left(y\partial_x-x\partial_y\right)$. The effects of atomic interactions are included in the nonlinear term $V_{\rm H}=4\pi\eta|\psi|^2$, $\eta=Na/d_{\rho}$, where $a=5.29$ nm is the scattering length for $^{87}$Rb collisions [@Eite]. We use the normalization condition $\int d{\bf r}|\psi({\bf r},t)|^2=1$. In equilibrium in the rotating frame, $\psi({\bf r},t)=e^{-i\mu t}\psi({\bf r})$, where $\mu$ is the chemical potential. To estimate the properties of a rotating condensate, such as the aspect ratio and the number of vortices, we consider two tractable cases: a single vortex applicable for small $\Omega$, and multiple vortices relevant to high $\Omega$ where the condensate is expected to behave essentially as a rigid body. With one vortex at the center of the trap, $\psi = |\psi| e^{i\phi}$, where $\phi$ is the polar angle. In the large-$N$ or Thomas-Fermi (TF) limit, the condensate density is $|\psi|^2=\left(\mu-{\rho^2\over 2}-{\lambda^2z^2\over 2} -{1\over 2\rho^2}+\Omega\right)/4\pi\eta$ when that quantity is positive, and is zero elsewhere [@Fetter]. The inner cutoff defines the vortex core size or the healing length $\xi$; for $\mu\gg 1$, one obtains $\xi\approx 1/\sqrt{2\mu}\sim 1/R_0$, where $R_0=(15\eta\lambda)^{1/5}$ is the TF radius along $\hat{\rho}$ in the absence of a vortex.
A straightforward calculation shows that, for large $R_0$, the TF radius for an isolated vortex is $R_{\rho}\approx R_0[1+(3/2R_0^4)\ln(2R_0/\xi)]$ and the condensate aspect ratio is $\lambda_{\rm TF}'\equiv R_\rho/R_z\approx\lambda[1+(1/2R_0^2)].$ Assuming this result depends only weakly on vortex position, and is additive with respect to the number of vortices $N_v$, then in explicit units $\lambda_{\rm TF}'\approx\lambda[1+{1\over 2}N_v(d_{\rho}/R_0)^2]$ at larger $\Omega$. For large $N_v$, as we will show below, the condensate rotates almost as a solid body, so the rotating-frame velocity operator ${\bf v}_r = -i \nabla - \Omega\hat{z}\times{\bf r}$ can be neglected. Since $T-\Omega L_z ={1\over 2}{\bf v}_r ^2 -{1\over 2}\Omega^2\rho^2$, rotation effectively softens the radial potential, $V_{\rm trap}\to{1\over 2}(1-\Omega^2)\rho^2+{1\over 2}\lambda^2z^2$. In this case, $R_{\rho}=R_0/(1-\Omega^2)^{3/10}$ and $\lambda_{\rm sb}'=\lambda/ \sqrt{1-\Omega^2}$. The number of vortices is the line integral of the phase gradient around the cloud perimeter; assuming the solid-body value of the tangential velocity $\Omega R_{\rho}$, then the areal vortex density is $n_v=N_v/\pi R_{\rho}^2=\Omega/\pi$, and $N_v^{\rm sb}=\Omega R_0^2 /(1-\Omega^2)^{3/5}$. If the vortices form a regular array at large $\Omega$, then the lattice constant $b$ should be comparable to the average separation between vortices $n_v^{-1/2}\sim\sqrt{\pi/\Omega}$. For a triangular array centered at the origin [@Tkachenko; @Ho], the vortices arrange themselves in concentric hexagonal rings labelled by ring index $r$, such that $N_v=1+3r(r+1)$. Assuming the superfluid velocity exactly matches the solid-body value midway between nearest-neighbor vortices (where the two nearest rotational fields exactly cancel), then $N_v=\Omega b^2(r+{1\over 2})^2$ and $b\approx\sqrt{3/\Omega}$ for large $r$. 
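For concreteness, these estimates can be evaluated for the parameters adopted here ($N=200{,}000$, $a=5.29$ nm, $d_{\rho}=3.845$ $\mu$m, $\lambda=5/8$). The short Python sketch below is our own illustration rather than part of the original analysis; it takes $R_0$ from the TF formula instead of a fit to the GP solution, and all variable names are ours.

```python
# Sanity check of the solid-body estimates in the text, for the stated
# JILA-like parameters.  A sketch only: R0 is taken from the TF formula
# (15*eta*lambda)^(1/5), not from a fit, and all names here are our own.
import math

N, a = 200_000, 5.29e-9          # atom number, s-wave scattering length (m)
d_rho = 3.845e-6                 # radial oscillator length (m)
lam = 5.0 / 8.0                  # trap anisotropy omega_z / omega_rho

eta = N * a / d_rho              # dimensionless interaction strength N a / d_rho
R0 = (15.0 * eta * lam) ** 0.2   # vortex-free TF radius, oscillator units

Omega = 0.95
R_rho = R0 / (1.0 - Omega**2) ** 0.3             # TF radius with softened trap
lam_sb = lam / math.sqrt(1.0 - Omega**2)         # solid-body aspect ratio
Nv_sb = Omega * R0**2 / (1.0 - Omega**2) ** 0.6  # solid-body vortex number
b = math.sqrt(3.0 / Omega)                       # triangular lattice constant

def lam_tf(Nv):
    # Multi-vortex TF aspect ratio lambda * [1 + Nv/(2 R0^2)], with d_rho = 1.
    return lam * (1.0 + 0.5 * Nv / R0**2)
```

For $\Omega=0.95$ this gives $N_v^{\rm sb}\approx 89$, $\lambda'_{\rm sb}\approx 2.00$, $b\approx 1.78$, and $\lambda'_{\rm TF}\approx 1.50$ for $N_v=65$ (or $1.83$ for $N_v=89$), consistent with the values quoted later in the text.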
Since $\Omega_{\max}=1$ in a harmonic trap, the smallest vortex separation is $b_{\rm min}\approx\sqrt{3}d_{\rho}$ in explicit units. When the vortex cores begin to overlap significantly ($b\sim\xi$), the system might undergo a phase transition, possibly into a state akin to a quantum Hall insulator [@Ho]; since $\xi(\rho=0,z=0)=1/\sqrt{2\mu} =1/R_0(1-\Omega^2)^{1/5}$ in the TF limit, the value of $\Omega$ for this to occur must become extremely close to unity: $1-\Omega\sim R_0^{-5}$. The stationary solutions of the GP equation in the rotating frame, defined as local minima of the free energy $\langle E\rangle=\mu N-{1\over 2} \langle V_H\rangle$, are found numerically by norm-preserving imaginary time propagation using an adaptive stepsize Runge-Kutta integrator. The wavefunction is solved on a three-dimensional Cartesian mesh within a discrete-variable representation [@Feder] based on Gauss-Hermite quadrature, and is assumed to be even under reflection in the $z=0$ plane. The initial state is taken to be the TF wavefunction with a phase $\Phi(x,y)=\sum_{x_0,y_0}\tan^{-1}[(y-y_0)/(x-x_0)]$, where $(x_0,y_0)$ are vortex positions in a regular array centered at the origin. The GP equation for a given value of $\Omega$ is propagated in imaginary time until the fluctuations in both $\mu$ and the norm become smaller than $10^{-11}$. The condensate densities integrated down $\hat{x}$ and $\hat{z}$ are then fit to a TF profile using a nonlinear least-squares analysis, where densities lower than $0.1\%$ of the maximum value are discarded. For the vortex-free condensate, this yields an aspect ratio of $0.645$, which is $3\%$ larger than the TF value of ${5\over 8}$. The resulting equilibrium configurations are sensitive to the initial vortex distributions. Fig. \[45fig\] shows three different solutions of the GP equation (\[gp\]) for $\Omega=0.45$. 
These were obtained using seed arrays with rhombohedral (left), square (center), and triangular (right) symmetries, respectively. Though observables such as the energy, angular momentum, and cloud aspect ratio are all comparable, they each have different vortex numbers and arrangements. Though a complete survey of possible configurations is beyond the scope of the present work, for all cases considered the initial rhombohedral vortex distribution is found to yield the final state with lowest energy; for larger $\Omega$ this symmetry gives rise to equilibrium arrays that are generally triangular (see below). The central results of the present work are shown in Fig. \[pix\], which depicts equilibrium solutions for $0.25\leq\Omega\leq 0.95$. A single vortex at the origin has appeared by $\Omega=0.35$; the thermodynamic critical frequency (the energy difference between states with zero and one vortex, divided by $\hbar$) is $\Omega_c=0.30$. This value is slightly lower than the experimental value $0.32<\Omega_c<0.38$; since $\Omega_c\sim N^{-2/5}$, perhaps there are fewer atoms in the condensate at vortex nucleation. (The dynamic critical frequency, at which the first collective mode becomes negative, is somewhat higher: $\Omega_{\nu}=0.46$). With a vortex, the cloud aspect ratio changes to $\lambda=0.663$; using the fitted values for the nonrotating cloud $\lambda=0.645$ and $R_0=4.86$, the TF prediction is $\lambda_{\rm TF}'=0.659$. As $\Omega$ continues to increase, so does the aspect ratio; the cloud becomes spherical for $0.75<\Omega<0.8$ (consistent with the experimental results) and highly oblate for $\Omega=0.95$, at which $\lambda'=1.8$. As shown in Fig. \[sbfig\], the solid-body estimate of the cloud anisotropy $\lambda_{\rm sb}'$ tracks (but consistently exceeds) our numerical values; in contrast, $\lambda_{\rm TF}'$ is always too small. For example, when $\Omega=0.95$ one obtains $\lambda_{\rm sb}'=2.00$ and $\lambda_{\rm TF}'=1.50$ with $N_v=65$ (see Fig. 
\[Nvfig\]). The number of enclosed vortices is not known [*a priori*]{}, however; using the solid-body estimate $N_v^{\rm sb}=89$ for $\Omega=0.95$ yields the much improved $\lambda_{\rm TF}'=1.83$. Another indication that the condensate is behaving classically at large $\Omega$ is the moment of inertia $I$ (inset of Fig. \[sbfig\]). The effective value $I=\langle L_z\rangle/\Omega$ is always lower than the solid-body $I=\langle x^2+y^2\rangle$, but is within $4\%$ by $\Omega=0.95$. The number of vortices at equilibrium is always considerably lower than the solid-body prediction, as in previous experimental observations [@Abo-Shaeer]. Since the numerical solutions are stationary in the rotating frame, this discrepancy cannot be explained by positing that the vortex array rotates more slowly than the trap. Consider the cases $\Omega=0.55$, $0.8$, $0.9$, and $0.95$ shown in Fig. \[pix\], which approximate centered triangular arrays with $N_v=1+3r(r+1)$, $r=1-4$, respectively. The average vortex spacing is found to follow the prediction $b=\sqrt{3/\Omega}$ to within $3\%$. An additional hexagonal ring of vortices could therefore fit comfortably within the cloud. For $\Omega=0.95$, $5b=8.89$ is smaller than the radius $R_{\rho}=9.41$, and $r=5$ corresponds to $N_v=91$ which is close to the solid-body prediction $N_v^{\rm sb}=89$. For $N_v=169$ ($r=7$), which is comparable to the largest array in experiments at MIT [@Abo-Shaeer], the missing $n_r=8$ ring implies that the equilibrium number of vortices is of order $20\%$ lower than the solid-body prediction. The absence of the last ring might be due to the fact that vortices in this low-density region would significantly overlap because of their large core size. Assuming that the vortex diameter is twice the local healing length, then with $\xi(\rho,z=0)=1/R_{\rho}\sqrt{(1-\Omega^2)(1-\rho^2/R_{\rho}^2)}$ one obtains a critical vortex displacement $\rho_c\sim 9$ for $\Omega=0.95$. 
In fact, the energy of a uniform array of vortices in a rotating cylinder is also minimized if there exists a ‘vortex-free strip’ the size of approximately one ring near the edge of the vessel [@Hall], i.e. $N_v=2\pi R^2\Omega/\kappa-\delta$, where $\delta\sim N_v^{-1}$. This correction is due to the contribution to the energy of strictly irrotational flow in the region between the last vortex and the superfluid surface. The existence of a vortex-free region in trapped condensates is confirmed by evaluating the change in condensate phase around a contour in the $xy$-plane of increasing radius $R$ from the origin. This is accomplished by calculating the spatial derivatives of the numerical data in order to determine ${\bf v}\equiv\nabla\Phi$, interpolating the results onto a one-dimensional azimuthal grid with 2000 points, and evaluating the line integral $\oint{\bf v}\cdot d{\bf l}$ numerically using a trapezoidal rule. The results for $\Omega=0.75$ and $0.95$ are given in Fig. \[Nvfig\]. On average, the number of vortices follows the solid-body expression $\Omega R^2$ for small rings, but begins to lag noticeably as $R\to R_{\rho}$ even before the vortex-free strip is reached. The velocity field for the $\Omega=0.95$ case, shown in Fig. \[velocityfig\], is small in the rotating frame everywhere except for the rotational currents near the vortex cores and the irrotational flow near the surface. In order to further explore this issue, consider a model wavefunction with constant amplitude and phase given by $\Phi(x,y)=\sum_{x_0,y_0}\tan^{-1} [(y-y_0)/(x-x_0)]$, where $(x_0,y_0)$ are vortex positions in a centered triangular array with lattice constant $b$. For $N_v=61$ ($r=4$), the vortex velocities $v=|\nabla\Phi|$ on successive hexagonal rings $n_r$ are $v={1\over b}\{3.63,\,7.23,\,10.69,\,13.57\}$. Since $v(n_r=4)<4v(n_r=1)$ by $7\%$, the angular velocity of the last ring cannot attain the solid-body value for any choice of $b$.
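The phase-winding diagnostic just described can be sketched in a few lines for the model phase $\Phi(x,y)=\sum\tan^{-1}[(y-y_0)/(x-x_0)]$. This is our own illustration: it applies the line integral to the analytic seed phase rather than to interpolated GP data, sums wrapped phase differences instead of a true trapezoidal rule, and the function names are ours.

```python
# Count the vortices enclosed by a circle of given radius from the winding
# of the model phase Phi(x, y) = sum_i atan2(y - y_i, x - x_i).
import numpy as np

def seed_phase(x, y, vortices):
    """Multi-vortex phase: one 2*pi winding about each vortex position."""
    return sum(np.arctan2(y - y0, x - x0) for (x0, y0) in vortices)

def enclosed_vortices(radius, vortices, npts=2000):
    """Winding number: (1/2pi) times the closed line integral of grad(Phi)."""
    theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    phi = seed_phase(radius * np.cos(theta), radius * np.sin(theta), vortices)
    dphi = np.diff(np.append(phi, phi[0]))
    # Wrap each step into [-pi, pi); valid as long as no vortex sits on the
    # contour and neighbouring samples differ in phase by less than pi.
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
    return int(round(dphi.sum() / (2.0 * np.pi)))
```

Only vortices inside the contour contribute to the integral, so scanning the radius reveals a vortex-free region as a plateau in the count.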
For large arrays, this mismatch in velocities varies as $(R/R_{\rho})^5$, which is why significant distortion of the vortex array from triangular is expected near the superfluid surface [@Campbell]. The question that immediately arises is: why are the vortex arrays observed in confined condensates so perfectly triangular, even very near the surface? One possible explanation is that a displaced vortex will precess around the origin even in the absence of other vortices, due to the inhomogeneous external potential. Neglecting vortex curvature (which from Fig. \[pix\] is evidently negligible at large $\Omega$), the additional contribution to the velocity is $v=[R/(R_{\rho}^2-R^2)]\ln(\xi/R_{\rho})$ in the TF limit [@Fetter]. Let us return to the case considered above, with $r=4$, and choose $\Omega=0.95$ for concreteness. Assuming $R_{\rho}=R_0/(1-\Omega^2)^{3/10}$ and imposing $3.63/b+v(R=b)\equiv\Omega b$, one obtains $b=1.98$ and $v=\{1.88,\,3.76,\,5.62,\,7.37\}$. Thus, including the effect of precession, the solid-body value $v=4\times 1.88$ at $R=b$ now exceeds the velocity of the last ring $R=4b$ by only $2\%$. In conclusion, we have explored the crossover of a confined Bose-Einstein condensate from an irrotational superfluid to a solid body with increasing rotation. The external potential is shown to strongly influence the density and arrangement of the resulting vortices. Many related issues remain unresolved, however, among them the spin-up of the superfluid by the thermal cloud, the upper critical frequency, and the approach to a quantum Hall state; these will be the subject of future work. We are grateful to E. A. Cornell and P. C. Haljan for numerous fruitful discussions. This work was supported by the U.S. Office of Naval Research.

D. V. Osborne, Proc. Phys. Soc. London [**A63**]{}, 909 (1950).

I. M. Khalatnikov, [*An Introduction to the Theory of Superfluidity*]{} (W. A. Benjamin, New York, 1965).

R. J. Donnelly, [*Quantized Vortices in Helium II*]{} (Cambridge University Press, Cambridge, 1991).

P. C. Haljan, I. Coddington, P. Engels, and E. A. Cornell, e-print: cond-mat/0106362.

O. M. Marago [*et al.*]{}, Phys. Rev. Lett. [**84**]{}, 2056 (2000).

R. Onofrio [*et al.*]{}, Phys. Rev. Lett. [**85**]{}, 2228 (2000); C. Raman [*et al.*]{}, J. Low Temp. Phys. [**122**]{}, 99 (2001).

A. L. Fetter and A. A. Svidzinsky, J. Phys. Cond. Mat. [**13**]{}, R135 (2001).

K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. [**84**]{}, 806 (2000).

J. R. Abo-Shaeer, C. Raman, J. M. Vogels, and W. Ketterle, Science [**292**]{}, 476 (2001); C. Raman [*et al.*]{}, e-print: cond-mat/0106235.

E. Hodby [*et al.*]{}, e-print: cond-mat/0106262.

L. J. Campbell and R. M. Ziff, Phys. Rev. B [**20**]{}, 1886 (1979).

E. P. Gross, Nuovo Cimento [**20**]{}, 454 (1961); L. P. Pitaevskii, Zh. Eksp. Teor. Fiz. [**40**]{}, 646 (1961) \[Sov. Phys. JETP [**13**]{}, 451 (1961)\].

P. S. Julienne, F. H. Mies, E. Tiesinga, and C. J. Williams, Phys. Rev. Lett. [**78**]{}, 1880 (1997).

V. K. Tkachenko, Sov. Phys. JETP [**22**]{}, 1282 (1966).

T.-L. Ho, e-print: cond-mat/0104522.

B. I. Schneider and D. L. Feder, Phys. Rev. A [**59**]{}, 2232 (1999).

H. E. Hall and W. F. Vinen, Proc. Roy. Soc. [**A238**]{}, 215 (1956).
--- author: - 'N. E. D. Noël, C. Gallart, E. Costa, R. A. Méndez' title: 'Old Main-Sequence Turnoff Photometry in the SMC' --- Introduction {#sec:intro} ============ Besides the intrinsic interest in studying the SMC stellar populations, which is key to understanding our local environment, this galaxy has been largely neglected in favour of its closest neighbour, the LMC. There is a disagreement in the published SFHs of the SMC: some authors (e.g. Harris & Zaritsky 2004) argue that there was a quiescent period at intermediate ages in which the galaxy formed few or no stars, while others (e.g. Dolphin et al. 2001) do not find a gap in the star formation rate of the SMC. It is important to stress that the SMC regions analysed by these authors are different. In an attempt to address this conflict, here we present a discussion based on deep CMDs with theoretical isochrones overlaid. SMC Stellar Content {#sec:stellar} =================== The interpretation of composite stellar population CMDs strongly relies on the stellar evolution models adopted (see Gallart et al. 2005). For our purpose we used the Teramo stellar evolution models (Pietrinferni et al. 2004) as shown in Figure \[fig:fig1\] (Eastern SMC fields in the upper panel and Western SMC fields in the lower panel) and in Figure \[fig:fig2\] (Southern SMC fields), using metallicities suitable for the SMC stellar populations. Our analysis shows the presence of spatial variations in the stellar content as a function of the position angle and strong gradients in the stellar population as a function of the galactocentric distance. The fact that the SMC CMDs analysed here do not show a blue horizontal branch and that none of them is dominated by a completely old population indicates that, at $\sim$4$\arcdeg$ from the SMC center, we do not reach an old halo similar to that of the Milky Way (Noël et al. 2006).

Dolphin, A. E., Walker, A. R., Hodge, P. W., Mateo, M., et al. 2001, ApJ, 562, 303

Gallart, C., Zoccali, M., & Aparicio, A. 2005, ARA&A, 43, 10

Harris, J., & Zaritsky, D. 2004, AJ, 127, 1531

Noël, N. E. D., Gallart, C., Costa, E., & Méndez, R. A. 2006, AJ, submitted

Pietrinferni, A., Cassisi, S., Salaris, M., & Castelli, F. 2004, ApJ, 612, 168
--- abstract: 'In this paper we initiate a study on Gauss factorials of polynomials over finite fields, which are the analogues of Gauss factorials of positive integers.' address: - 'School of Mathematical Sciences, Qufu Normal University, Qufu Shandong, 273165, China' - 'School of Mathematics and Statistics, University of New South Wales, Sydney, NSW 2052, Australia' author: - Xiumei Li - Min Sha title: Gauss factorials of polynomials over finite fields --- Introduction ============ A well-known result in number theory, called Wilson’s theorem, says that for any prime number $p$, we have $$\label{eq:Wilson} (p-1)! \equiv -1 \pmod p.$$ If we replace $p$ by a composite integer, the congruence above no longer holds. Gauss proved a composite analogue of Wilson’s theorem: for any integer $n>1$, $$\label{eq:Gauss} \begin{split} \prod_{\substack{1\le j \le n-1 \\ \gcd(j,n)=1}} j & \equiv \left\{\begin{array}{ll} -1 \pmod n & \textrm{for $n=2,4,p^k,$ or $2p^k$,}\\ \\ 1 \pmod n &\textrm{otherwise}, \end{array} \right. \end{split}$$ where $p$ is an odd prime and $k$ is a positive integer. In [@CD2008], Cosgrave and Dilcher called the product $$\label{eq:CD} \prod_{\substack{1\le j \le N \\ \gcd(j,n)=1}} j$$ a *Gauss factorial*, where $N$ and $n$ are positive integers. We refer to [@CD2011] for a very good survey on Gauss factorials of integers, and [@CD2010; @CD2011a; @CD2014; @CD2015] for recent work. In particular, when $n\ge 3$ is odd and $N=(n-1)/2$, the multiplicative order of the Gauss factorial modulo $n$ has been determined in [@CD2008 Theorem 2]. Throughout the paper, the letter $p$ always denotes a prime, and $q=p^s$ for some integer $s\ge 1$. Let $\F_q$ be the finite field of $q$ elements. We denote by $\F_q[X]$ the polynomial ring over $\F_q$. In the sequel, for simplicity let $\A=\F_q[X]$. It is known that $\A$ has many properties in common with the integers $\Z$, and thus many number theoretic problems about $\Z$ have their analogues for $\A$.
For any polynomial $f(X)\in \A$ of degree $\deg f \ge 1$, we define the *Gauss factorial* (denoted by $G(f)$) of $f$ as follows: $$G(f) = \prod_{\substack{g\in \A \\ 0 \le \deg g < \deg f \\ \gcd(g,f)=1}} g.$$ That is, $G(f)$ is the product of all *non-zero* polynomials of degree less than $\deg f$ and coprime to $f$. By convention, if $f\in \F_q$, we define $G(f)=1$. Note that if $f$ is irreducible, the definition of $G(f)$ only depends on the degree of $f$. Moreover, given an integer $n\ge 1$ and a polynomial $f\in \A$, we define $$G(n,f) = \prod_{\substack{g\in \A \\ 0 \le \deg g \le n \\ \gcd(g,f)=1}} g,$$ which is also called a *Gauss factorial*. In particular, $G(f)=G(\deg f - 1, f)$. In this paper, we initiate the study of Gauss factorials of polynomials over finite fields by generalizing Gauss’s congruence above and the main result in [@CD2008]. Certainly, there are many other things remaining to be explored. We want to remark that one can similarly define *factorials* of polynomials over finite fields, and consider generalizing related classical results (some of them are listed in [@Bhargava]). Preliminaries ============= In this section, for the convenience of the reader we recall some basic results about polynomials over finite fields, which can be found in [@Rosen Chapter 1 and Chapter 3]. For $f\in \A$, set $|f|=q^{\deg f}$ if $f\ne 0$, and $|f|=0$ otherwise. For a non-constant polynomial $f\in \A$, write its prime factorization as $$\label{eq:factor} f= a P_1^{e_1} \cdots P_t^{e_t},$$ where $a \in \F_q^*$, integer $e_j \ge 1$ ($1\le j \le t$), and each $P_j$ ($1\le j \le t$) is a monic irreducible polynomial. Here, a monic irreducible polynomial in $\A$ is said to be a *prime polynomial*, and so in the factorization above each $P_j$ $(1\le j \le t)$ is a *prime divisor* of $f$. Given a non-zero polynomial $f\in \A$, denote by $\A/f\A$ the residue class ring of $\A$ modulo $f$ and by $(\A/f\A)^*$ its unit group. \[lem:structure1\] Let $f\in \A$ be a non-constant polynomial with prime factorization as above.
Then, we have $$(\A/f\A)^* \cong (\A/P_1^{e_1}\A)^* \times \cdots \times (\A/P_t^{e_t}\A)^*.$$ \[lem:structure2\] Let $P\in \A$ be an irreducible polynomial, and $e$ a positive integer. Then, we have $$(\A/P^e\A)^* \cong (\A/P\A)^* \times (\A/P^e\A)^{(1)},$$ where $(\A/P\A)^*$ is a cyclic group of order $|P|-1$, and $(\A/P^e\A)^{(1)}$ is a $p$-group of order $|P|^{e-1}$. For any non-zero polynomial $f\in \A$, let $\Phi(f)= |(\A / f\A)^*|$, which is the so-called *Euler totient function* of $\A$. \[lem:Euler\] For any non-zero polynomial $f\in \A$, we have $$\Phi(f) = |f| \prod_{P\mid f} \big( 1- 1/|P| \big),$$ where the product runs through all the prime divisors of $f$. Let $P\in \A$ be an irreducible polynomial. When $q$ is odd, given a non-zero polynomial $g \in \A$ with $\gcd(g,P)=1$, as usual we define the *Legendre symbol* $\left(\frac{g}{P}\right) = \pm 1$ such that $$\left(\frac{g}{P}\right) \equiv g^{\frac{|P|-1}{2}} \pmod P.$$ Let $f\in \A$ be a non-constant polynomial with the prime factorization as in . Given a non-zero polynomial $g \in \A$ with $\gcd(g,f)=1$, the *Kronecker symbol* $\left(\frac{g}{f}\right) = \pm 1$ is defined as $$\left(\frac{g}{f}\right) = \prod_{j=1}^{t} \left(\frac{g}{P_j}\right)^{e_j}.$$ As usual, for two non-zero polynomials $g_1, g_2 \in \A$ with $\gcd(g_1g_2, f)=1$, we have $$\left(\frac{g_1g_2}{f}\right) = \left(\frac{g_1}{f}\right) \left(\frac{g_2}{f}\right).$$ \[lem:reciprocity\] Let $f,g \in \A$ be relatively prime non-zero polynomials. Assume that $q$ is odd, and both $f$ and $g$ are monic. Then, we have $$\left(\frac{g}{f}\right) = (-1)^{\frac{q-1}{2}\deg f \deg g} \left(\frac{f}{g}\right).$$ The following lemma is essentially a special case of a result due to Artin [@Artin Section 18, Equation (10)]. \[lem:Artin\] Assume that $q$ is odd. Let $P\in \A$ be a prime polynomial of odd degree. Denote by $h(-P)$ the class number of the quadratic function field $\F_q(X, \sqrt{-P})$. 
Then, we have $$h(-P)= \sum_{\substack{\textrm{$g$ monic} \\ 0 \le \deg g < \deg P}} \left(\frac{g}{P}\right).$$ Following Artin [@Artin] the quadratic function field $\F_q(X, \sqrt{-P})$ is imaginary (see also [@Rosen Proposition 14.6] and the discussions therein). So, using [@Artin Section 18, Equation (10)] and [@Artin Section 17, Equation (4)] directly (alternatively, applying arguments similar to those in the proof of [@Rosen Theorem 16.8]), we obtain $$h(-P)= \sum_{\substack{\textrm{$g$ monic} \\ 0 \le \deg g < \deg P}} \left(\frac{-P}{g}\right).$$ Here one should note that since $-P$ can be viewed as a monic polynomial with respect to $-X$, we can apply the results in [@Artin]. For each summation term in the above formula, using the reciprocity law (Lemma \[lem:reciprocity\]) and noticing that $\deg P$ is odd, we deduce that $$\left(\frac{-P}{g}\right) = (-1)^{\frac{(q-1)\deg g}{2}} \left(\frac{P}{g}\right) = \left(\frac{g}{P}\right),$$ which implies the desired result. Finally, we reproduce a useful result due to Miller [@Miller Section 1], which can yield a simple proof of . \[lem:Miller\] Let $A$ be an abelian group. Then, the product $\prod_{a \in A} a$ is the identity element if $A$ either has no element of order two or has more than one element of order two, otherwise the product is equal to the element of order two. Congruence formulas =================== In this section, our main objective is to generalize to $G(f)$ by following the approach in [@Miller]. One can see that we have two quite different cases depending on the characteristic of $\A$ (that is, $p$). We first remark that it is known that for any irreducible polynomial $P\in \A$, we have (for instance see [@Rosen Chapter 1, Corollary 2]) $$G(P) \equiv -1 \pmod P,$$ which is an analogue of . 
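As a concrete illustration, the reciprocity law of Lemma \[lem:reciprocity\] can be spot-checked by direct computation. The Python sketch below is ours (it assumes $q=p$ prime, so that $s=1$ and field arithmetic is plain integer arithmetic modulo $p$); polynomials are stored as little-endian coefficient tuples, and the Legendre symbol is computed via Euler's criterion.

```python
# Spot-check of the reciprocity law (g/f) = (-1)^{((q-1)/2) deg f deg g} (f/g)
# for monic irreducible f, g over F_q with q = p prime.  All helper names
# here are our own.
p = 3

def trim(f):
    """Normalize a little-endian coefficient list: reduce mod p, drop zeros."""
    f = [c % p for c in f]
    while f and f[-1] == 0:
        f.pop()
    return tuple(f)

def polmul(f, g):
    r = [0] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] += a * b
    return trim(r)

def polmod(f, m):
    f = list(f)
    inv = pow(m[-1], p - 2, p)  # inverse of the leading coefficient of m
    while len(f) >= len(m):
        c = (f[-1] * inv) % p
        s = len(f) - len(m)
        for i, a in enumerate(m):
            f[s + i] -= c * a
        f = list(trim(f))
    return trim(f)

def polpowmod(g, e, m):
    r, g = (1,), polmod(g, m)
    while e:
        if e & 1:
            r = polmod(polmul(r, g), m)
        g = polmod(polmul(g, g), m)
        e >>= 1
    return r

def legendre(g, P):
    """(g/P) for irreducible P, via Euler's criterion g^((|P|-1)/2) mod P."""
    e = (p ** (len(P) - 1) - 1) // 2
    return 1 if polpowmod(g, e, P) == (1,) else -1

P = (2, 2, 0, 1)   # X^3 + 2X + 2, monic irreducible over F_3, odd degree
P2 = (1, 0, 1)     # X^2 + 1, monic irreducible over F_3, even degree
Q = (1, 1)         # X + 1
```

For instance, over $\F_3$ one can take $P=X^3+2X+2$ and $g=X+1$, for which the sign factor is $(-1)^{1\cdot 3\cdot 1}=-1$, or the even-degree $X^2+1$, for which it is $+1$.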
Besides, it is easy to see that if non-zero $f\in \A$ is reducible and square-free, then we have $$\prod_{\substack{g\in \A \\ 0\le \deg g < \deg f}} g \equiv 0 \pmod f.$$ \[thm:con1\] Assume that $p$ is odd. Then, for any polynomial $f \in \A$ of degree $\deg f \ge 1$, we have $$\begin{split} G(f) & \equiv \left\{\begin{array}{ll} -1 \pmod f & \textrm{if $f$ has only one prime divisor,}\\ \\ 1 \pmod f &\textrm{otherwise}. \end{array} \right. \end{split}$$ Note that $p$ is odd. Then, for any irreducible polynomial $P\in \A$ and integer $e\ge 1$, by Lemma \[lem:structure2\] the group $(\A/P^e\A)^*$ has only one element of order two (that is, $-1$). So, applying Lemma \[lem:structure1\], it is easy to see that the abelian group $(\A/f\A)^*$ has only one element of order two if and only if $f$ has only one prime divisor. Then, the desired result now follows from Lemma \[lem:Miller\]. \[thm:con2\] Assume that $p=2$. For any polynomial $f\in \A$ of degree $\deg f \ge 1$ with the prime factorization as in , we have $$\begin{split} G(f) & \equiv \left\{\begin{array}{ll} f/P_1 +1 \pmod f & \textrm{if $q=2, \deg P_1=1, 2\le e_1 \le 3$, and all the }\\ & \quad \textrm{other exponents $e_j=1$ if they exist, }\\ 1 \pmod f &\textrm{otherwise}. \end{array} \right. \end{split}$$ We first remark that $-1$ is not an element of order two, because $p=2$. We also note that for any irreducible polynomial $P\in \A$, the order of $(\A/P\A)^*$ is an odd number, and thus the group $(\A/P\A)^*$ has no element of order two. Let $P\in \A$ be an irreducible polynomial, and $e$ a positive integer. If $h$ is an element of order two of $(\A/P^e\A)^*$, then $P^e \mid (h+1)^2$. So, it is easy to see that when $e$ is even, the set of elements of order two of $(\A/P^e\A)^*$ is $$\Big\{gP^{e/2} +1:\, g\in \A, 0\le \deg g < \frac{e}{2}\deg P \Big\},$$ whose cardinality is $q^{\frac{1}{2}e\deg P}-1$. 
Similarly, if $e$ is odd and greater than 1, then the set of elements of order two of $(\A/P^e\A)^*$ is $$\Big\{gP^{(e+1)/2} +1:\, g\in \A, 0\le \deg g < \frac{e-1}{2}\deg P \Big\},$$ whose cardinality is $q^{\frac{1}{2}(e-1)\deg P}-1$. Thus, $(\A/P^e\A)^*$ has only one element of order two if and only if $\deg P=1, 2\le e \le 3$, and $q=2$. Hence, the desired result follows by using Lemma \[lem:structure1\] and Lemma \[lem:Miller\], and noticing that $f/P_1 +1$ is indeed an element of order two of $(\A/f\A)^*$ if $e_1 \ge 2$. Furthermore, we can get a congruence identity for $G(n,f)$ when $n \ge \deg f$. This can also be viewed as an analogue of . \[thm:con3\] For any polynomial $f\in \A$ of degree $\deg f \ge 1$, if the integer $n$ satisfies $n\ge \deg f$, we have $$\begin{split} G(n,f) & \equiv \left\{\begin{array}{ll} -1 \pmod f & \textrm{if $p$ is odd and $f$ has only one prime divisor,}\\ \\ 1 \pmod f &\textrm{otherwise}. \end{array} \right. \end{split}$$ Fix an arbitrary integer $m \ge \deg f$. For any polynomial $g\in \A$ with $\deg g < \deg f$ and $\gcd(g,f)=1$, it is easy to see that the set of polynomials in $\A$ of degree $m$ and congruent to $g$ modulo $f$ is $$\Big\{g+fr: r\in \A, \deg r = m - \deg f \Big\},$$ whose cardinality is $(q-1)q^{m- \deg f}$. Thus, we obtain $$\begin{aligned} G(n,f) & \equiv G(f)\prod_{m= \deg f}^{n} G(f)^{(q-1)q^{m- \deg f}} \\ & \equiv G(f)^{q^{n+1-\deg f}} \pmod f.\end{aligned}$$ We then conclude the proof by using Theorem \[thm:con1\] and Theorem \[thm:con2\]. Multiplicative orders ===================== As mentioned before, when the integer $n\ge 3$ is odd and $N=(n-1)/2$, Cosgrave and Dilcher have determined the multiplicative order of the Gauss factorial modulo $n$ in [@CD2008 Theorem 2]. That is, they only considered *half* of the positive integers less than $n$. In this section, assume that $q$ is odd, then our aim is to get some similar results concerning a special divisor of $G(f)$. 
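Before turning to this divisor, we note that the congruence formulas of Theorems \[thm:con1\] and \[thm:con2\] are easy to confirm by brute force for small moduli. The Python sketch below is our own check (not part of the paper); it again assumes $q=p$ prime and represents polynomials as little-endian coefficient tuples.

```python
# Brute-force check of the congruence formulas for G(f) on small examples
# over F_p with p prime.  All helper names here are our own.
from itertools import product as iproduct

def trim(f, p):
    f = [c % p for c in f]
    while f and f[-1] == 0:
        f.pop()
    return tuple(f)

def polmul(f, g, p):
    r = [0] * (len(f) + len(g) - 1) if f and g else []
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] += a * b
    return trim(r, p)

def polmod(f, m, p):
    f = list(f)
    inv = pow(m[-1], p - 2, p)  # inverse of the leading coefficient of m
    while len(f) >= len(m):
        c = (f[-1] * inv) % p
        s = len(f) - len(m)
        for i, a in enumerate(m):
            f[s + i] -= c * a
        f = list(trim(f, p))
    return trim(f, p)

def polgcd(f, g, p):
    while g:
        f, g = g, polmod(f, g, p)
    return f

def gauss_factorial(f, p):
    """G(f): product of all non-zero g with deg g < deg f and gcd(g, f) = 1."""
    G = (1,)
    for coeffs in iproduct(range(p), repeat=len(f) - 1):
        g = trim(list(coeffs), p)
        if g and len(polgcd(f, g, p)) == 1:  # gcd is a non-zero constant
            G = polmod(polmul(G, g, p), f, p)
    return G
```

With $p=3$, $f=X^2+1$ or $f=(X+1)^2$ (one prime divisor) gives $G(f)\equiv -1$, while $f=X(X+1)$ gives $G(f)\equiv 1$; with $p=2$, the moduli $X^2$, $X^3$ and $X^2(X+1)$ return $f/P_1+1$, as Theorem \[thm:con2\] predicts.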
For this divisor, we only consider *half* of the polynomials of degree less than $\deg f$ and coprime to $f$. For any non-zero $g \in \A$, we denote by $\sgn(g)$ the leading coefficient of $g$, which is called the *sign* of $g$. Let $S$ be a subset of $\F_q^*$ such that $|S|= (q-1)/2$ and for any $\alpha \in \F_q^*$ if $\alpha \in S$ then $-\alpha \not\in S$. Obviously, the set $S$ has $2^{(q-1)/2}$ choices. Define $\delta(S) = \prod_{a \in S} a.$ Notice that $$\label{eq:delta} \delta(S) ^ 2 = (-1)^{\frac{q-1}{2}} \prod_{a\in \F_q^*} a = (-1)^{\frac{q+1}{2}}.$$ So, if $q\equiv 3 \pmod 4$, we have $\delta(S) ^ 2 = 1$, and then $\delta(S)=\pm 1$. Note that for any non-zero $g \in \A$, $\sgn(g) \in S$ if and only if $\sgn(-g) \not \in S$. So, for any polynomial $f \in \A$ of degree $\deg f \ge 1$, we easily have $$G(f) = (-1)^{\Phi(f)/2} G(f,S)^2,$$ where $$G(f,S) = \prod_{\substack{g\in \A, \sgn(g) \in S \\ 0 \le \deg g < \deg f \\ \gcd(g,f)=1}} g.$$ Thus, by Theorem \[thm:con1\] we obtain $$\begin{split} G(f,S)^2 & \equiv \left\{\begin{array}{ll} -(-1)^{\Phi(f)/2} \pmod f & \textrm{if $f$ has only one prime divisor,}\\ \\ (-1)^{\Phi(f)/2} \pmod f &\textrm{otherwise}. \end{array} \right. \end{split}$$ This implies that the multiplicative order of $G(f,S)$ modulo $f$ can only be 1, 2, or 4. Here, we want to determine the multiplicative order of $G(f,S)$ modulo $f$, which is denoted by $\ord_f G(f,S)$. The main result is as follows. \[thm:extension\] Assume that $q$ is odd, and let $f \in \A$ be a polynomial having the prime factorization as in . Then 1. $\ord_f G(f,S)=4$ when $t=1$, and either $q \equiv 1 \pmod 4$ or $\deg P_1$ is even. 2. $\ord_f G(f,S)=2$ when 1. $t=1$, $q \equiv 3 \pmod 4$, $\deg P_1$ is odd, $\delta(S)=1$, and $e_1 + \frac{1}{2}(h(-P_1)-3) \equiv 1 \pmod 2$, or 2. $t=1$, $q \equiv 3 \pmod 4$, $\deg P_1$ is odd, $\delta(S)=-1$, and $e_1 + \frac{1}{2}(h(-P_1)-3) \equiv 0 \pmod 2$, or 3.
$t=2$, $q \equiv 1 \pmod 4$, and $P_1$ is not a quadratic residue modulo $P_2$, or 4. $t=2$, $q \equiv 3 \pmod 4$, both $\deg P_1$ and $\deg P_2$ are even, and $P_1$ is not a quadratic residue modulo $P_2$, or 5. $t=2$, $q \equiv 3 \pmod 4$, and either $\deg P_1$ or $\deg P_2$ is odd; 3. $\ord_f G(f,S)=1$ in all other cases. From Theorem \[thm:extension\] one can see that in many cases $\ord_f G(f,S)$ is independent of the choice of $S$. In particular, some of the results are related to class numbers of function fields. Actually, we do more in the paper: the value of $G(f,S)$ modulo $f$ is either explicitly given or easily computable. Before proving Theorem \[thm:extension\], we present an example confirming some of the results of Theorem \[thm:extension\], using the computer algebra system PARI/GP [@Pari]. Choose $q=3$ and $P=X^3+2X+2$; then by Lemma \[lem:Artin\] we have $h(-P)=7$. If furthermore we choose $S=\{1\}$, we have $\delta(S)=1$ and $G(P,S)=-1$, and thus $\ord_P G(P,S)=2$, which is compatible with Theorem \[thm:extension\] (2); if instead we choose $S=\{-1\}$, we get $\ord_P G(P,S)=1$, which is also consistent with Theorem \[thm:extension\] (3). In the following, we divide the proof of Theorem \[thm:extension\] into several cases. One prime divisor ----------------- We continue the general discussion about $G(f,S)$. Suppose that $f$ has the prime factorization as in . Then, by Lemma \[lem:Euler\] we get $$\Phi(f) = \prod_{i=1}^{t} q^{(e_i-1)\deg P_i} (q^{\deg P_i}-1).$$ Note that $q$ is odd, so $$(-1)^{\frac{1}{2}\Phi(f)}= (-1)^{\frac{1}{2} \prod_{i=1}^{t} (q^{\deg P_i}-1)}.$$ If $t\ge 2$, that is, $f$ has at least two distinct prime divisors, then we have $(-1)^{\frac{1}{2}\Phi(f)}=1$, and thus $$G(f,S)^2 \equiv 1 \pmod f.$$ Now assume that $t=1$, that is, $f$ has only one prime divisor $P_1$.
Then it is easy to see that $$\begin{split} G(f,S)^2 & \equiv \left\{\begin{array}{ll} -1 & \textrm{if $q\equiv 1 \pmod 4$, or $\deg P_1$ is even,}\\ \\ 1 &\textrm{otherwise}. \end{array} \right. \end{split}$$ Thus, for any $f\in \A$ of positive degree and with the prime factorization as in , we have $$\label{eq:G2} \begin{split} G(f,S)^2 & \equiv \left\{\begin{array}{ll} -1 \pmod f & \textrm{if $t=1$, and either $q\equiv 1$ (mod 4) or $\deg P_1$ is even,}\\ \\ 1 \pmod f &\textrm{otherwise}. \end{array} \right. \end{split}$$ The above equation immediately gives the following partial result: \[thm:order4\] Assume that $q$ is odd. If $f$ has only one prime divisor $P$, and either $q \equiv 1 \pmod 4$ or $P$ has even degree, then $\ord_f G(f,S)=4$. Otherwise, $\ord_f G(f,S)=1$ or $2$. For further deductions, we need to get a new expression of $G(f,S)$. Fix an arbitrary polynomial $f\in \A$ of degree $\deg f \ge 1$. Let $i_{n}$ be the cardinality of the set $\{g\in \A: \, g \ \textrm{monic},\deg g =n, \gcd(g,f)=1\}$. Note that $$\sum_{n=0}^{\deg f-1} i_n = \frac{\Phi(f)}{q-1}.$$ Denote $$M(f) = \Big( \prod_{\substack{g \textrm{ monic} \\ 0 \le \deg g < \deg f, \,\gcd(g,f)=1}} g \Big)^{(q-1)/2},$$ which is also a polynomial in $\A$. Now, we deduce that $$\label{eq:GM} \begin{split} G(f,S) & = \prod_{a \in S} \Big( \prod_{n=0}^{\deg f -1 } a^{i_{n}} \prod_{\substack{ \textrm{$g$ monic} \\ \deg g =n, \, \gcd(g,f)=1}} g \Big) \\ & = \Big( \prod_{a\in S} a \Big)^{\frac{\Phi(f)}{q-1}} \Big( \prod_{\substack{g \textrm{ monic} \\ 0 \le \deg g < \deg f, \,\gcd(g,f)=1}} g \Big)^{\frac{q-1}{2}}\\ & = \delta(S)^{\frac{\Phi(f)}{q-1}}M(f). \end{split}$$ So, our problem is reduced to studying $M(f)$ modulo $f$. 
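The factorization $G(f,S)=\delta(S)^{\Phi(f)/(q-1)}M(f)$ just obtained can be cross-checked against the PARI/GP example given before ($q=3$, $P=X^3+2X+2$): there $\Phi(P)/(q-1)=13$ is odd, so $G(P,S)=\delta(S)M(P)$. The following sketch is an independent illustration only; polynomials over $\F_3$ are stored as coefficient lists, lowest degree first, a convention of our own.

```python
from itertools import product

q = 3
P = [2, 2, 0, 1]   # P = X^3 + 2X + 2, monic and irreducible over F_3

def polymod(a, m):
    # remainder of a modulo the monic polynomial m, coefficients in F_q
    a = a[:]
    while len(a) >= len(m):
        c, shift = a[-1], len(a) - len(m)
        for i, mi in enumerate(m):
            a[shift + i] = (a[shift + i] - c * mi) % q
        while a and a[-1] == 0:
            a.pop()
    return a

def polymul(a, b, m):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % q
    return polymod(r, m)

def G_S(S):
    # G(P,S): product of all g with sgn(g) in S and 0 <= deg g < deg P, mod P
    # (P is prime, so every such g is automatically coprime to P)
    r = [1]
    for n in range(len(P) - 1):
        for lead in S:
            for tail in product(range(q), repeat=n):
                r = polymul(r, list(tail) + [lead], P)
    return r

# S = {1}: delta(S) = 1, so G(P,S) = M(P) = -1, matching the PARI/GP example
assert G_S([1]) == [2]        # the constant polynomial 2 = -1 in F_3
# S = {-1} = {2}: delta(S) = -1, so G(P,S) = delta(S) M(P) = 1
assert G_S([2]) == [1]
```

The two outcomes differ exactly by the factor $\delta(S)=\pm 1$, as the reduction predicts.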
By definition we also rewrite $$\begin{aligned} G(f) = \prod_{\substack{0 \le \deg g < \deg f \\ \gcd(g,f)=1}} g & = \prod_{n=0}^{\deg f -1 } \Big( \prod_{\substack{ \deg g =n \\ \gcd(g,f)=1}} g \Big) \\ & = \prod_{n=0}^{\deg f -1 } \left( \big( \prod_{a\in \F_q^*} a \big)^{i_n} \Big(\prod_{\substack{ \textrm{$g$ monic} \\ \deg g =n, \, \gcd(g,f)=1}} g \Big)^{q-1}\right) \\ & = (-1)^{\frac{\Phi(f)}{q-1}} M(f)^2.\end{aligned}$$ Now assume that $f$ has the prime factorization as in . So, using Theorem \[thm:con1\] we obtain $$\label{eq:M2} \begin{split} M(f)^2 & \equiv \left\{\begin{array}{ll} -1 \pmod f & \textrm{if $t=1$ and $\deg P_1$ is even,}\\ \\ 1 \pmod f &\textrm{otherwise}. \end{array} \right. \end{split}$$ We first handle $M(f)$ in the case when $f$ is irreducible. \[lem:MP\] Assume that $q \equiv 3 \pmod 4$. If $f \in \A$ is an irreducible polynomial of odd degree of the form $f=aP$, where $a \in \F_q^*$ and $P\in \A$ is a prime polynomial, then $$M(f) \equiv (-1)^{\frac{1}{2}(h(-P)-1)} \pmod P.$$ We apply arguments similar to those in [@Mordell]. By definition, we directly have $M(f)=M(P)$. So, it is enough to determine the value of $M(P)$ modulo $P$ (or equivalently modulo $f$). Put $d= \deg P$. We first make some preparations. Let $N$ (resp. $R$) be the number of monic polynomials in $\A$ of degree less than $d$ which are quadratic non-residues (resp. quadratic residues) modulo $P$.
So, $$N+R= 1+ q + \cdots + q^{d -1},$$ which, together with Lemma \[lem:Artin\], implies that $$N = \frac{1}{2} \Big(1+ q + \cdots + q^{d -1} - h(-P) \Big) .$$ Note that $q \equiv 3 \pmod 4$ and $d$ is odd, so we have $$1+ q + \cdots + q^{d -1} \equiv 1 \pmod 4.$$ Thus, we get $$\label{eq:nonresidue} N \equiv \frac{1}{2} \big( h(-P) - 1 \big) \pmod 2.$$ Given a non-zero polynomial $g$ which is a quadratic residue modulo $P$, $bg$ is also a quadratic residue for any square element $b\in \F_q^*$ (equivalently, $b$ is a quadratic residue modulo $P$), and $$\prod_{b\in \F_q^*, \, \left(\frac{b}{P}\right)=1} bg =g^{(q-1)/2}.$$ Besides, note that for any non-zero polynomial $g\in \A$, we can write $g=cg_0$ for some $c\in \F_q^*$ and monic polynomial $g_0 \in \A$. Then, noticing $q \equiv 3 \pmod 4$, we get $$g^{(q-1)/2}= \pm g_0^{(q-1)/2} = (\pm g_0) ^{(q-1)/2}.$$ Based on the above observations and the fact that $-1$ is not a quadratic residue modulo $P$ and is the unique element of order two in $(\A/P\A)^*$, we deduce that $$\label{eq:identity} 1= \prod_{\substack{\textrm{$\left(\frac{g}{P}\right)=1$} \\ 0 \le \deg g < d}} g =\prod_{\substack{\textrm{$\sgn(g)=\pm 1$, $\left(\frac{g}{P}\right)=1$} \\ 0 \le \deg g < d}} g^{(q-1)/2},$$ where we also use the simple fact that the inverse of a quadratic residue is also a quadratic residue (modulo $P$). Using , we obtain $$\begin{split} M(P) & = \Big( \prod_{\substack{\textrm{$g$ monic, $\left(\frac{g}{P}\right)=1$} \\ 0 \le \deg g < d}} g \prod_{\substack{\textrm{$g$ monic, $\left(\frac{g}{P}\right)=-1$} \\ 0 \le \deg g < d}} g\Big)^{(q-1)/2} \\ & = (-1)^N \prod_{\substack{\textrm{$g$ monic, $\left(\frac{g}{P}\right)=1$} \\ 0 \le \deg g < d}} g^{(q-1)/2} \prod_{\substack{\textrm{$g$ monic, $\left(\frac{g}{P}\right)=-1$} \\ 0 \le \deg g < d}} (-g)^{(q-1)/2} \\ & = (-1)^N \prod_{\substack{\textrm{$\sgn(g)=\pm 1$, $\left(\frac{g}{P}\right)=1$} \\ 0 \le \deg g < d}} g^{(q-1)/2}\\ & \equiv (-1)^N \pmod P. 
\end{split}$$ Now, the desired result follows from . In the following, we extend the result in Lemma \[lem:MP\] to the case when $f$ is a power of some prime polynomial up to a constant. Actually, we can obtain a more general form. \[lem:MPe\] If $f \in \A$ is a polynomial of the form $f=aP^e$, where $a \in \F_q^{*}$, $P\in \A$ is a prime polynomial and $e$ is a positive integer, then $$M(f)\equiv (-1)^{\frac{(e-1)(q-1)}{2}\deg P }M(P) \pmod P.$$ Put $d= \deg P$. Note that for any non-zero polynomial $g\in \A$ of degree $\deg g < \deg f=de$, by the Euclidean division we can write $$g = g_1P+g_2 \quad \textrm{for some $g_1,g_2 \in \A,$}$$ where $g_1$ and $g_2$ satisfy $\deg g_1 <d(e-1)$ and $g_2=0$ or $0\leq \deg g_2 <d$. Then by definition, we have $$M(f) = \Big( Q_{0}Q_{1} \Big)^{(q-1)/2},$$ where $$Q_{0}=\prod_{\substack{g \textrm{ monic} \\ 0 \le \deg g < d}} g \quad \textrm{and} \quad Q_{1}=\prod_{\substack{g_2 \in \A \\ 0 \le \deg g_2 < d}}\prod_{\substack{g_1 \textrm{ monic} \\ 0 \le \deg g_1 < d(e-1),}} (g_1P+g_2).$$ Now for $Q_{1}$, we deduce that $$\begin{aligned} Q_{1} & \equiv \Big(\prod_{\substack{g \in \A \\ 0 \le \deg g < d}} g \Big)^{\frac{q^{d(e-1)}-1}{q-1}}\pmod P \\ & \equiv G(P)^{\frac{q^{d(e-1)}-1}{q-1}}\pmod P \\ & \equiv (-1)^{d(e-1)} \pmod P,\end{aligned}$$ where we have applied Theorem \[thm:con1\]. Therefore $$\begin{aligned} M(f) &= \Big( Q_{0}Q_{1} \Big)^{(q-1)/2}\\ &\equiv (-1)^{\frac{d(e-1)(q-1)}{2}}M(P) \pmod P,\end{aligned}$$ which completes the proof. We also need a simple but useful result, which is an analogue of [@CD2008 Lemma 1]. One can prove it in a straightforward manner; we therefore omit its proof. \[lem:lift\] Suppose that $q$ is odd. Let $f,g\in \A$ be two non-constant polynomials. Given an integer $e\ge 1$, assume that $f^2 \equiv 1 \pmod{g^e}$. Then, $f \equiv \pm 1 \pmod{g^e}$ if and only if $f \equiv \pm 1 \pmod g$, with the signs corresponding to each other. 
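Lemma \[lem:MPe\] also lends itself to a brute-force check for small cases over $\F_3$, where $(q-1)/2=1$ and $M$ is a plain product of monic polynomials. The sketch below is an illustration only; the polynomial encoding (coefficient lists, lowest degree first) is our own convention.

```python
from itertools import product

q = 3

def polymod(a, m):
    # remainder modulo the monic polynomial m; coefficient lists, low degree first
    a = a[:]
    while len(a) >= len(m):
        c, shift = a[-1], len(a) - len(m)
        for i, mi in enumerate(m):
            a[shift + i] = (a[shift + i] - c * mi) % q
        while a and a[-1] == 0:
            a.pop()
    return a

def mul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % q
    return r

def power(P, e):
    r = [1]
    for _ in range(e):
        r = mul(r, P)
    return r

def M(f, P):
    # M(f) for f = P^e: product of all monic g with deg g < deg f and
    # P not dividing g (equivalently gcd(g,f)=1), mod P; here (q-1)/2 = 1
    r = [1]
    for n in range(len(f) - 1):
        for tail in product(range(q), repeat=n):
            g = list(tail) + [1]
            if polymod(g, P):          # nonzero remainder: P does not divide g
                r = polymod(mul(r, g), P)
    return r

# Lemma [lem:MPe]: M(P^e) = (-1)^((e-1)(q-1)deg(P)/2) M(P) mod P, with q = 3
for P in ([1, 1], [1, 0, 1]):          # X + 1 and X^2 + 1, irreducible over F_3
    d = len(P) - 1
    for e in (2, 3):
        sign = (-1) ** ((e - 1) * d)   # (q - 1)/2 = 1
        assert M(power(P, e), P) == [(sign * c) % q for c in M(P, P)]
```

For instance, for $P=X+1$ and $e=2$ the left-hand side is $1\cdot X\cdot(X+2) \equiv 2 \pmod{X+1}$, matching $-M(P)=-1$.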
Now we are ready to obtain a partial result about the value of $G(f,S)$ modulo $f$. We use the product $\delta(S)$ defined just before . \[thm:GPe\] Assume that $q \equiv 3 \pmod 4$. If $f \in \A$ is a polynomial of the form $f=aP^e$, where $a \in \F_q^{*}$, $P\in \A$ is a prime polynomial of odd degree and $e$ is a positive integer, then $$G(f,S)\equiv (-1)^{e+\frac{1}{2}(h(-P)-3)}\delta(S) \pmod f.$$ By , we first note that $$G(f,S) = \delta(S)^{\frac{\Phi(f)}{q-1}}M(f) = \delta(S)M(f),$$ where we use the fact that $\delta(S)=\pm 1$ since $q \equiv 3 \pmod 4$, and that $\Phi(f)/(q-1)=q^{(e-1)\deg P}(1+q+\cdots +q^{\deg P-1})$ is odd since $\deg P$ is odd. From Lemma \[lem:MPe\], we have $$M(f)\equiv (-1)^{\frac{(e-1)(q-1)}{2}\deg P }M(P) \equiv (-1)^{e-1}M(P) \pmod P,$$ which, together with Lemma \[lem:MP\], gives $$\label{eq:MP} M(f)\equiv (-1)^{e-1+\frac{1}{2}(h(-P)-1)} \pmod P.$$ Since $M(f)^2 \equiv 1 \pmod{P^e}$ by , using Lemma \[lem:lift\] we know that $M(f) \equiv \pm 1 \pmod{P^e}$ if and only if $M(f) \equiv \pm 1 \pmod P$, with the signs corresponding to each other. Thus, by we obtain $$M(f)\equiv (-1)^{e+\frac{1}{2}(h(-P)-3)} \pmod{P^e}.$$ Now, the desired result follows. Two or more prime divisors -------------------------- In this section, we deal with the case when $f$ has more than one prime divisor. We start with a key lemma. \[lem:reduction\] If $f \in \A$ is a polynomial having the prime factorization as in , then $$\begin{split} M(f) & \equiv \left\{\begin{array}{ll} (-1)^{\frac{q-1}{2}\deg P_{2} }P_{2}^{-\frac{\Phi(P_{1}^{e_{1}})}{2}} \pmod{P_{1}^{e_{1}}} & \textrm{if $t=2$,}\\ \\ P_{t}^{-\frac{\Phi\big(P_{1}^{e_{1}}P_{2}^{e_{2}}\cdots P_{t-1}^{e_{t-1}}\big)}{2}} \pmod{P_{1}^{e_{1}}P_{2}^{e_{2}}\cdots P_{t-1}^{e_{t-1}}} &\textrm{if $t\geq 3$.} \end{array} \right. \end{split}$$ Put $\widetilde{f}=P_{1}^{e_{1}}P_{2}^{e_{2}}\cdots P_{t-1}^{e_{t-1}}$ and $d_{t}= \deg P_{t}$.
Note that for any non-zero polynomial $g\in \A$ of degree $\deg g < \deg f$, by the Euclidean division we can write $$g = g_1\widetilde{f}+g_2 \quad \textrm{for some $g_1,g_2 \in \A$,}$$ where $g_1$ and $g_2$ satisfy $\deg g_1 <d_{t}e_{t}$ and $g_2=0$ or $0\leq \deg g_2 <\deg \widetilde{f}$. Then by definition, we have $$\label{eq:M(f)} M(f) = \Big( Q_{0}Q_{1} \Big)^{(q-1)/2},$$ where $$Q_{0}=\prod_{\substack{g_{2} \textrm{ monic} \\ 0 \le \deg g_{2} < \deg \widetilde{f}\\ \gcd(g_{2},f)=1}} g_{2} \ \ \ \textrm{ and}\ \ \ Q_{1}=\prod_{\substack{g_2 \in \A \\ 0 \le \deg g_2 < \deg \widetilde{f}\\ \gcd(g_{2},f)=1}}\prod_{\substack{g_1 \textrm{ monic} \\ 0 \le \deg g_1 < d_{t}e_{t},}} (g_1\widetilde{f}+g_2).$$ Now, we define $$\overline{Q_{0}}=\prod_{\substack{g_{2} \textrm{ monic} \\ 0 \le \deg g_{2} < \deg \widetilde{f}\\ \gcd(g_{2},\widetilde{f})=1}} g_{2} \quad \textrm{and} \quad \overline{Q_{1}}=\prod_{\substack{g_2 \in \A \\ 0 \le \deg g_2 < \deg \widetilde{f}\\ \gcd(g_{2},\widetilde{f})=1}}\prod_{\substack{g_1 \textrm{ monic} \\ 0 \le \deg g_1 < d_{t}e_{t}}} (g_1\widetilde{f}+g_2),$$ and obtain $$\begin{aligned} \overline{Q_{1}} &\equiv \Big(\prod_{\substack{g_2 \in \A \\ 0 \le \deg g_2 < \deg \widetilde{f}\\ \gcd(g_{2},\widetilde{f})=1}}g_2\Big)^{\frac{q^{d_{t}e_{t}}-1}{q-1}} \pmod{\widetilde{f}} \\ &\equiv G(\widetilde{f})^{\frac{q^{d_{t}e_{t}}-1}{q-1}} \pmod{\widetilde{f}}.\end{aligned}$$ To relate $Q_j$ to $\overline{Q_{j}}$, we multiply the relevant multiples of $P_t$ back into $Q_0$ and $Q_1$.
More precisely, on the right-hand side of we multiply numerator and denominator by $$\Big(\prod_{\substack{g \textrm{ monic} \\ 0\leq \deg g < \deg \widetilde{f} +d_{t}(e_{t}-1)\\ \gcd(g,\widetilde{f})=1}} gP_{t}\Big)^{\frac{q-1}{2}},$$ which is equal to $$\begin{aligned} &\Big(\prod_{\substack{g \textrm{ monic} \\ 0 \le \deg g < \deg \widetilde{f}\\ \gcd(g,\widetilde{f})=1}}gP_{t}\Big)^{\frac{q-1}{2}}\Big(\prod_{\substack{g_2 \in \A \\ 0 \le \deg g_2 <\deg \widetilde{f}\\ \gcd(g_{2},\widetilde{f})=1}}\prod_{\substack{g_1 \textrm{ monic} \\ 0 \le \deg g_1 < d_{t}(e_{t}-1),}} (g_1\widetilde{f}+g_2)P_{t}\Big)^{\frac{q-1}{2}}\\ &\equiv M(\widetilde{f})P_{t}^{\frac{\Phi(\widetilde{f})}{2}}\left(\prod_{\substack{g_2 \in \A \\ 0 \le \deg g_2 <\deg \widetilde{f}\\ \gcd(g_{2},\widetilde{f})=1}}\Big(g_2P_{t}\Big)^{\frac{q^{d_{t}(e_{t}-1)}-1}{q-1}}\right)^{\frac{q-1}{2}} \pmod{\widetilde{f}}\\ &\equiv M(\widetilde{f})P_{t}^{\frac{\Phi(\widetilde{f})}{2}}G(\widetilde{f})^{\frac{q^{d_{t}(e_{t}-1)}-1}{2}} \pmod{\widetilde{f}}.\end{aligned}$$ Hence, from the above discussions, we deduce that $$\begin{aligned} M(f) & \equiv \frac{\Big(\overline{Q_{0}}\cdot\overline{Q_{1}}\Big)^{\frac{q-1}{2}}} {M(\widetilde{f})P_{t}^{\frac{\Phi(\widetilde{f})}{2}}G(\widetilde{f})^{\frac{q^{d_{t}(e_{t}-1)}-1}{2}}} \pmod{\widetilde{f}} \\ &\equiv\frac{M(\widetilde{f})\cdot G(\widetilde{f})^{\frac{q^{d_{t}e_{t}}-1}{2}}}{M(\widetilde{f})P_{t}^{\frac{\Phi(\widetilde{f})}{2}}G(\widetilde{f})^{\frac{q^{d_{t}(e_{t}-1)}-1}{2}}} \pmod{\widetilde{f}} \\ & \equiv\frac{ G(\widetilde{f})^{\frac{1}{2}q^{d_{t}(e_{t}-1)}(q^{d_{t}}-1)}}{P_{t}^{\frac{\Phi(\widetilde{f})}{2}}} \pmod{\widetilde{f}}.\end{aligned}$$ By Theorem \[thm:con1\], this completes the proof. Now, we first address the case when $f$ has exactly two prime divisors. \[thm:P1P2\] Assume that $q$ is odd, and $f \in \A$ is a polynomial having the prime factorization as in with $t=2$. Then, if one of the following two conditions holds: 1. 
$q \equiv 1 \pmod 4$, 2. $q \equiv 3 \pmod 4$ and both $\deg P_1$ and $\deg P_2$ are even, we have $$G(f,S) \equiv \Big( \frac{P_1}{P_2} \Big) \pmod{f};$$ otherwise, we have $$G(f,S) \not\equiv \pm 1 \pmod{f}.$$ First, since $q$ is odd and $t=2$, by and we have $$\label{eq:GM2} G(f,S)= \delta(S)^{\frac{\Phi(f)}{q-1}}M(f)=M(f).$$ Put $d_j = \deg P_j, j=1,2$. By Lemma \[lem:reduction\], we have $$\begin{aligned} M(f) & \equiv (-1)^{\frac{(q-1)d_2}{2} }P_{2}^{-\frac{\Phi(P_{1}^{e_{1}})}{2}} \pmod{P_1} \\ & \equiv (-1)^{\frac{(q-1)d_2}{2} } \Big( \frac{P_2}{P_1}\Big)^{-1} \pmod{P_1} \\ & \equiv (-1)^{\frac{(q-1)d_2}{2} } \Big( \frac{P_2}{P_1}\Big) \pmod{P_1}.\end{aligned}$$ From , we know that $M(f)^2 \equiv 1 \pmod{P_1^{e_1}}$. So, applying Lemma \[lem:lift\] we get $$\label{eq:modP1} M(f) \equiv (-1)^{\frac{(q-1)d_2}{2} } \Big( \frac{P_2}{P_1}\Big) \pmod{P_1^{e_1}}.$$ By symmetry, we have $$\label{eq:modP2} M(f) \equiv (-1)^{\frac{(q-1)d_1}{2} } \Big( \frac{P_1}{P_2}\Big) \pmod{P_2^{e_2}}.$$ Now, assume that either $q \equiv 1 \pmod 4$, or $q \equiv 3 \pmod 4$ and both $d_1$ and $d_2$ are even. Then, by the reciprocity law (Lemma \[lem:reciprocity\]) we obtain $$\Big( \frac{P_2}{P_1}\Big) = \Big( \frac{P_1}{P_2}\Big),$$ and thus $$M(f) \equiv \Big( \frac{P_1}{P_2} \Big) \qquad \textrm{(mod $P_1^{e_1}$) and (mod $P_2^{e_2}$)}.$$ Using the Chinese Remainder Theorem, we get $$M(f) \equiv \Big( \frac{P_1}{P_2} \Big) \quad \pmod f.$$ So, by we have $$G(f,S) \equiv \Big( \frac{P_1}{P_2} \Big) \quad \pmod f.$$ In all other cases, one can similarly see that the product of the right-hand sides of and is equal to $-1$. Hence, applying again the Chinese Remainder Theorem, we can conclude the proof. Finally, the case when $f$ has more than two prime divisors is much easier. \[thm:P1P2P3\] Assume that $q$ is odd. 
If $f \in \A$ is a polynomial having the prime factorization as in with $t\ge 3$, then we have $$G(f,S) \equiv 1 \pmod{f}.$$ By Lemma \[lem:reduction\], we have $$\begin{aligned} M(f) & \equiv P_{t}^{-\frac{\Phi\big(P_{1}^{e_{1}}P_{2}^{e_{2}}\cdots P_{t-1}^{e_{t-1}}\big)}{2}} \pmod{P_1} \\ & \equiv \Big( \frac{P_t}{P_1}\Big)^{-\Phi\big(P_{2}^{e_{2}}\cdots P_{t-1}^{e_{t-1}}\big)} \pmod{P_1} \\ & \equiv 1 \pmod{P_1}.\end{aligned}$$ From , we know that $M(f)^2 \equiv 1 \pmod{P_1^{e_1}}$. So, applying Lemma \[lem:lift\] we get $$M(f) \equiv 1 \pmod{P_1^{e_1}}.$$ By symmetry, we have $$M(f) \equiv 1 \pmod{P_j^{e_j}} \quad \textrm{for $2\le j \le t$.}$$ So, by the Chinese Remainder Theorem we obtain $$M(f) \equiv 1 \pmod f.$$ Noticing $G(f,S)= \delta(S)^{\frac{\Phi(f)}{q-1}}M(f)=M(f)$ (indeed, $\Phi(f)/(q-1)$ is divisible by $4$ when $t\ge 3$, so $\delta(S)^{\Phi(f)/(q-1)}=1$), we complete the proof. Acknowledgements {#acknowledgements .unnumbered} ================ The authors are very grateful to the referee for careful reading and useful comments. The research of the first author was supported by National Natural Science Foundation of China Grant No. 11526119, and the second author was supported by the Australian Research Council Grant DP130100237. [99]{} E. Artin, *Quadratische Körper im Gebiete der höheren Kongruenzen I, II*, Math. Zeit., **19** (1924), 153–246. M. Bhargava, *The factorial function and generalizations*, Amer. Math. Monthly, **107** (2000), 783–799. J. B. Cosgrave and K. Dilcher, *Extensions of the Gauss–Wilson theorem*, Integers, **8** (2008), A39, available at <http://www.integers-ejcnt.org/vol8.html>. J. B. Cosgrave and K. Dilcher, *Mod $p^3$ analogues of theorems of Gauss and Jacobi on binomial coefficients*, Acta Arith., **142** (2010), 103–118. J. B. Cosgrave and K. Dilcher, *The multiplicative orders of certain Gauss factorials*, Int. J. Number Theory, **7** (2011), 145–171. J. B. Cosgrave and K. Dilcher, *An introduction to Gauss factorials*, Amer. Math. Monthly, **118** (2011), 812–829. J. B. Cosgrave and K.
Dilcher, *The Gauss–Wilson theorem for quarter-intervals*, Acta Math. Hungar., **142** (2014), 199–230. J. B. Cosgrave and K. Dilcher, *A role for generalized Fermat numbers*, Math. Comp., DOI: 10.1090/mcom/3111. G. A. Miller, *A new proof of the generalized Wilson’s theorem*, Ann. Math., **4** (1903), 188–190. L. J. Mordell, *The congruence $((p-1)/2)! \equiv \pm 1 \pmod p$*, Amer. Math. Monthly, **68** (1961), 145–146. M. Rosen, *Number theory in function fields*, Springer-Verlag, New York, 2002. The PARI Group, PARI/GP version 2.7.5, Bordeaux, 2015, <http://pari.math.u-bordeaux.fr/>.
--- abstract: 'The aim of the note is to prove an obstruction theorem for $A_\infty$-structures over a commutative ring $R$. Given a $\Z$-graded $A_m$-algebra, with $m\geq 3$, we give conditions on the Hochschild cohomology of the associative algebra $H(A)$ so that the $A_{m-1}$-structure can be lifted to an $A_{m+1}$-structure. These conditions apply in case we start with an associative algebra up to homotopy and want to lift this structure to an $A_\infty$-structure. The hidden purpose of the note is to show that no assumptions are needed on the commutative ring $R$, nor boundedness assumptions on the complex $A$.' address: 'Université Paris 13, CNRS, UMR 7539 LAGA, 99 avenue Jean-Baptiste Clément, 93430 Villetaneuse, France' author: - Muriel Livernet title: 'Pre-Lie systems and obstruction to $A_\infty$-structures over a ring' --- Introduction {#introduction .unnumbered} ============ The purpose of this note is to fill a gap in the literature concerning $A_\infty$-structures in the category of differential $\Z$-graded $R$-modules when $R$ is a commutative ring. We are concerned with obstruction theory for the existence of an $A_\infty$-structure on a dgmodule $V$ endowed with a product which is associative up to homotopy. We answer the question of the existence of higher homotopies, in terms of the Hochschild cohomology of the associative algebra $H_*(V)$. In the context of $A_\infty$-spaces or $A_\infty$-spectra this question has been answered by Robinson in [@Rob89]. If one applies the zig-zag of equivalences between the category of modules over the Eilenberg Mac Lane ring spectrum $HR$ and the category of differential graded $R$-modules described by Shipley in [@Shipley07], one gets the result we want. Our purpose is to give a direct account of the method in the differential graded context. Note that the question has also been studied by Lefèvre-Hasegawa for minimal $A_\infty$-algebras over a field in [@Lefevre03]. We follow the lines of his approach.
We recall that $A_\infty$-structures were defined by Stasheff in [@Stasheff63] for spaces, in order to give a recognition principle for loop spaces. In order to do so, he built an operad based on associahedra. The simplicial chain complex of this operad is what is known as the $A_\infty$-operad in the category of differential graded modules. Algebras over this operad are called $A_\infty$-algebras. Kadeishvili studied in [@Kadei80] an obstruction theory for the uniqueness of $A_\infty$-structures, also in terms of the Hochschild cohomology of $H_*(A)$, when $A$ is an $A_\infty$-algebra. In this note we study the existence rather than the uniqueness of such a structure. In the process of building an obstruction theory for the existence of $A_\infty$-structures over a ring $R$ we encountered two assumptions commonly needed on $R$-modules. The first one is the assumption that every $R$-module considered should have no $2$-torsion. We discovered that this hypothesis is not needed if one takes a closer look at the Lie algebra structure usually used to solve obstruction issues. Indeed, in our context, the Lie algebra structure not only comes from a pre-Lie algebra structure but from a pre-Lie system as defined by Gerstenhaber in [@Ger63]. The first section of the note is concerned with pre-Lie systems. The second assumption is that every graded module over $R$ should be $\N$-graded. Again this hypothesis is not needed, and we prove in the second section that, under some projectivity conditions, there is an isomorphism between $H({\mathrm{Hom}}(C,D))$ and ${\mathrm{Hom}}(H(C),H(D))$ for any differential $\Z$-graded $R$-modules $C$ and $D$. The last section is devoted to obstruction theory. [**Notation.**]{} We work over a commutative ring $R$. We denote by ${{\sf dgmod}}$ the category of lower $\Z$-graded $R$-modules with a differential of degree $-1$. Objects in this category are called dgmodules for short.
- The category ${{\sf dgmod}}$ is symmetric monoidal for the tensor product $$(C\otimes D)_n=\oplus_{i+j=n} C_i\otimes_R D_j$$ with the differential given by $$\partial(c_i\otimes d_j)=\partial_C(c_i)\otimes d_j+(-1)^i c_i\otimes\partial_D(d_j),\quad\forall c_i\in C_i, d_j\in D_j.$$ - For $C$ a dgmodule, we denote by $sC$ its suspension, that is, $(sC)_i=C_{i-1}$ with differential $\partial(sc)=-s\partial_C(c)$. - Let $C$ be a dgmodule. For $c\in C_i$ and $d\in C_j$, we will use the notation $\epsilon_{c,d}=(-1)^{ij}.$ - Let $C$ and $D$ be dgmodules. We denote by ${\mathrm{Hom}}(C,D)$ the dgmodule $${\mathrm{Hom}}_i(C,D)=\prod_n {\mathrm{Hom}}_R(C_n,D_{n+i})$$ with differential $\partial: {\mathrm{Hom}}_i(C,D)\rightarrow{\mathrm{Hom}}_{i-1} (C,D)$ defined for $c\in C_n$ by $$(\partial f)_n(c)=\partial_D(f_n(c))-(-1)^i f_{n-1}(\partial_C c).$$ - We use the Koszul sign rule: let $C,C',D$ and $D'$ be dgmodules; for $f\in {\mathrm{Hom}}(C,D)$ and $g\in {\mathrm{Hom}}(C',D')$ the map $f\otimes g\in {\mathrm{Hom}}(C\otimes C',D\otimes D')$ is defined by $$\forall x\in C,\ y\in C', \quad (f\otimes g)(x\otimes y)=\epsilon_{x,g} f(x)\otimes g(y).$$ - The suspension map $s: C\rto (sC)$ has degree $+1$. The Koszul sign rule implies that $$(s^{-1})^{\otimes n}\circ s^{\otimes n}=(-1)^{\frac{n(n-1)}{2}}\; {\mathrm{id}}_{C^{\otimes n}}.$$ [**Acknowledgment.**]{} I am indebted to Benoit Fresse, Birgit Richter and Sarah Whitehouse for valuable discussions. pre-Lie systems and graded pre-Lie algebras {#sec:preLie} =========================================== Pre-Lie systems and pre-Lie algebras have been introduced by M. Gerstenhaber in [@Ger63], in order to understand the richer algebra structure on the complex computing the Hochschild cohomology of an associative algebra $A$, yielding the “Gerstenhaber structure” on the Hochschild cohomology of $A$. In this section we review some of the results of Gerstenhaber, together with variations on the gradings and signs involved.
Namely, different pre-Lie structures, as in Proposition \[P:unsignedcirc\] and in Proposition \[P:end\], are described from a given pre-Lie system, depending on the grading we choose. The main result of the section is the technical Lemma \[L:preLie\_formula\], allowing one to use pre-Lie systems over a ring with no assumption concerning $2$-torsion. It is one of the key ingredients in the proof of the obstruction Theorem \[T:main\]. Throughout the section we are given a $(\Z,\N)$-bigraded $R$-module $\bigoplus\limits_{n\in\N,i\in\Z}{\O}_i^n$. The examples we have in mind are - ${\mathrm{End}}^n_i(V)={\mathrm{Hom}}_i(V^{\otimes n},V)$, for a dgmodule $V$. - More generally ${\O(n)}_i$, for an operad $\O$, symmetric or not, see [*e.g.*]{} [@KapMan01]. - ${\mathrm{Hom}}(\cal C,\cal P)$ for given (non-symmetric) cooperad $\cal C$ and operad $\cal P$ or ${\mathrm{Hom}}_{\mathbb S}(\cal C,\cal P)$ for a cooperad $\cal C$ and an operad $\cal P$ where ${\mathrm{Hom}}_{\mathbb S}$ is the subset of ${\mathrm{Hom}}$ of maps invariant under the action of the symmetric group. This example is an application of the previous one, since ${\mathrm{Hom}}(\cal C,\cal P)$ forms an operad, the [*convolution operad*]{}, as defined by Berger and Moerdijk in [@BerMoe03]. The paper will deal with the first item. It may be understood as the toy model for obstruction theory for $\O_\infty$-algebras, where $\O$ is a Koszul operad. If one would like to extend the result of this paper to operads, one would use the third item, as suggested in the book in progress of Loday and Vallette [@LodVal]. Let $\cal O$ be a $(\Z,\N)$-bigraded $R$-module. For $x\in \cal O^n_i$, the integer $n$ is called the [*arity*]{} of $x$, the integer $i$ is called the [*degree*]{} of $x$ and the integer $i+n-1$ is called the [*weight*]{} of $x$ and denoted by $|x|$. For a fixed $n$, we consider the $\Z$-graded $R$-module $\cal O^n=\oplus_i \cal O^n_i$.
General case ------------ \[D:preLiesystem\] Let $\mathcal O$ be a $(\Z,\N)$-bigraded $R$-module. A [*graded pre-Lie system*]{} on $\O$ is a sequence of maps, called composition maps, $$\circ_k: \O^n_i\otimes \O^m_j \rightarrow \O^{n+m-1}_{i+j},\ \forall 1\leq k\leq n,$$ satisfying the relations: for every $f\in {\O}^n_i, g\in {\O}^m_j$ and $h\in {\O}^p_l$ $$\begin{aligned} f\circ_u (g\circ_v h)&=&(f\circ_u g)\circ_{v+u-1} h, \ \ \forall 1\leq u\leq n \text{ and } 1\leq v\leq m, \\ (f\circ_u g)\circ_{v+m-1} h&=&(-1)^{jl}(f\circ_v h)\circ _u g, \ \ \forall 1\leq u<v\leq n. \end{aligned}$$ We will denote by $(\cal O,\circ)$ a graded pre-Lie system. \[D:wpreLiesystem\] Let $\mathcal O$ be a $(\Z,\N)$-bigraded $R$-module. A [*weight graded pre-Lie system*]{} on $\O$ is a sequence of maps, called composition maps, $$\circ_k: \O^n_i\otimes \O^m_j \rightarrow \O^{n+m-1}_{i+j},\ \forall 1\leq k\leq n$$ satisfying the relations: for every $f\in {\O}^n_i, g\in {\O}^m_j$ and $h\in {\O}^p_l$ $$\begin{aligned} \label{E:syst1} f\circ_u (g\circ_v h)&=&(f\circ_u g)\circ_{v+u-1} h,\ \ \forall 1\leq u\leq n \text{ and } 1\leq v\leq m, \\ (f\circ_u g)\circ_{v+m-1} h&=&(-1)^{(j+m-1)(l+p-1)}(f\circ_v h)\circ _u g,\ \ \forall 1\leq u<v\leq n. \label{E:syst2}\end{aligned}$$ Note that the composition maps preserve the weight grading. A short computation proves the following Proposition. \[P:graded2weight\] Any graded pre-Lie system gives rise to a weight graded pre-Lie system and vice versa. Namely, if $(\cal O,\star)$ is a graded pre-Lie system, then the collection $$\circ_k :\O^n_i\otimes \O^m_j \rightarrow \O^{n+m-1}_{i+j},\ \forall 1\leq k\leq n$$ defined by $$f \circ_k g=(-1)^{(j+m-1)(n-1)+(m-1)(k-1)} f\star_k g$$ is a weight graded pre-Lie system. \[E:fund\] Given a graded operad $\cal O$, the definition of the axioms for partial composition coincides with the one for pre-Lie systems. Hence the collection $\O^n_i=\O(n)_i$ forms a pre-Lie system. In particular, let $V$ be a dgmodule. 
The collection of graded $R$-modules ${\mathrm{End}}^n(V):={\mathrm{Hom}}(V^{\otimes n}, V)$ forms an operad, hence a graded pre-Lie system. Recall that, for $1\leq k\leq n$, the [*insertion map at place $k$*]{}, $\circ_k: {\mathrm{End}}^n_i(V)\otimes {\mathrm{End}}^m_j(V)\rightarrow {\mathrm{End}}^{n+m-1}_{i+j}(V)$ is defined by $$f\circ_k g=f({\mathrm{id}}^{\otimes k-1}\otimes g\otimes {\mathrm{id}}^{\otimes n-k}).$$ Let $C\in{{\sf dgmod}}$. A [*graded pre-Lie algebra*]{} structure on $C$ is a graded $R$-bilinear map $\circ: C\otimes C\rightarrow C$ satisfying $$\label{E:preLie} \forall a,b,c\in C,\quad (a\circ b)\circ c-a\circ(b\circ c)=\epsilon_{b,c}\; (a\circ c)\circ b-\epsilon_{b,c}\; a\circ (c\circ b).$$ \[P:Jacobi\] Let $C$ be a graded pre-Lie algebra. The bracket defined by $$\forall c,d\in C, \quad [c,d]=c\circ d-\epsilon_{c,d}\; d\circ c,$$ endows $C$ with a graded Lie algebra structure. Namely, it satisfies the graded antisymmetry and graded Jacobi relations: $$[c,d]=-\epsilon_{c,d}\; [d,c],$$ $$\epsilon_{a,c}\; [a,[b,c]]+ \epsilon_{b,a}\; [b,[c,a]]+ \epsilon_{c,b}\; [c,[a,b]]=0.$$ The first equation is immediate.
The second one relies on the pre-Lie relation (\[E:preLie\]): $$\begin{gathered} \epsilon_{a,c}\; [a,[b,c]]+ \epsilon_{b,a}\; [b,[c,a]]+ \epsilon_{c,b}\; [c,[a,b]]= \epsilon_{a,c}\;(a\circ (b\circ c)-(a\circ b)\circ c-\epsilon_{b,c}\; a\circ(c\circ b)+ \epsilon_{b,c}\; (a\circ c)\circ b) + \\ \epsilon_{b,a}\; (b\circ (c\circ a)-(b\circ c)\circ a-\epsilon_{a,c}\; b\circ (a\circ c)+\epsilon_{a,c}\; (b\circ a)\circ c) +\epsilon_{c,b}\; (c\circ (a\circ b)-(c\circ a)\circ b-\epsilon_{a,b}\; c\circ (b\circ a)+\epsilon_{a,b}\; (c\circ b)\circ a)=0.\end{gathered}$$ \[P:unsignedcirc\] Any graded pre-Lie system $(\O,\circ)$ gives rise to a graded pre-Lie algebra $\O_L=\oplus_n \O^n$ with the pre-Lie product given by $$\begin{array}{cccc} \star:& \O^n\otimes \O^m\subset \O_L\otimes \O_L&\rto& \O^{n+m-1}\subset \O_L \\ &f\otimes g & \mapsto & f\star g=\sum_{k=1}^n f\circ_k g \end{array}$$ The associated graded Lie structure is denoted by $$\{f,g\}=f\star g-(-1)^{ij}g\star f,\ \text{ with } f\in \O^n_i,g\in \O^m_j.$$ Using Proposition \[P:graded2weight\], one gets the following Corollary. \[C:signcirc\] Any graded pre-Lie system $(\O,\circ)$ gives rise to a (weight) graded pre-Lie algebra $$(\O_{wL})_p=\bigoplus\limits_{i,n|i+n-1=p} \O^n_i$$ with the pre-Lie product given by: $\forall f\in \O^n_i, g\in \O^m_j$, $$f\circ g=(-1)^{|g|(n-1)}\sum_{k=1}^n (-1)^{(m-1)(k-1)} f\circ_k g,$$ with $|g|=m+j-1$. The associated (weight) graded Lie structure is denoted by $$[f,g]=f\circ g-(-1)^{|f||g|} g\circ f.$$ The next lemma is a technical result that will be useful in the sequel. We will see in the proof that this lemma is independent of the ring $R$ we consider, and that there is no assumption concerning the $2$-torsion of the $R$-modules considered. This is a new fact that can be of independent interest. \[L:preLie\_formula\] Let $(\cal O,\circ)$ be a graded pre-Lie system. - Let $g\in \O$ be an odd degree element.
Then $\forall f \in \O$ one has $$(f\star g) \star g=f\star (g\star g) \text{ and }$$ $$\{f,g\star g\}=-\{g,\{g,f\}\}=-\{g\star g,f\}.$$ - Let $g\in\O$ be an element of odd weight, [*i.e.*]{} $|g|$ is odd. Then $\forall f \in \O$, one has $$\label{E:J1} (f\circ g) \circ g=f\circ (g\circ g) \text{ and }$$ $$\label{E:J2} [f,g\circ g]=-[g,[g,f]]=-[g\circ g,f].$$ The proof is the same in the two cases. Let us focus on the weight graded case. Let $i$ be the weight of $f$. Relation (\[E:J2\]) is a consequence of Relation (\[E:J1\]), for $$\begin{gathered} -[g,[g,f]]=-g\circ (g\circ f)+(-1)^i g\circ (f\circ g)+(-1)^{i+1} (g\circ f)\circ g-(-1)^{i+1+i}(f\circ g)\circ g=\\ (-1)^i(\underbrace{ g\circ (f\circ g)- (g\circ f)\circ g-(-1)^i g\circ (g\circ f)+(-1)^i (g\circ g)\circ f}_{=0 \text { by (\ref{E:preLie})}}) -(g\circ g)\circ f+\underbrace{(f\circ g)\circ g}_{=f\circ (g\circ g)}=[f,g\circ g].\end{gathered}$$ Note that Relation (\[E:J1\]) is a consequence of the pre-Lie relation in case every $R$-module $\cal O_i^n$ has no 2-torsion, for if $g$ has odd weight, then $2(f\circ g) \circ g-2f\circ (g\circ g)=0$. This is still true without this assumption, if one looks closely at the definition of the (weight) graded pre-Lie structure of Corollary \[C:signcirc\]. The weight graded pre-Lie system relation (\[E:syst1\]) gives $$(f\circ g)\circ g =f\circ (g\circ g)+\sum_{u=1}^n\sum_{v=1}^{u-1} (f\circ_u g)\circ_v g+ \sum_{u=1}^n\sum_{v=u+m}^{n+m-1}(f\circ_u g)\circ_v g.$$ The weight graded pre-Lie system relation (\[E:syst2\]) implies that $$\sum_{u=1}^n\sum_{v=u+m}^{n+m-1}(f\circ_u g)\circ_v g=\sum_{u=1}^n\sum_{k=u+1}^n(f\circ_u g)\circ_{k+m-1} g=-\sum_{k=1}^n\sum_{u=1}^{k-1} (f\circ_k g)\circ_u g,$$ which ends the proof. Application to ${\mathrm{End}}(V)$ ---------------------------------- In the sequel we will be concerned with the $(\Z,\N)$-bigraded $R$-module ${\mathrm{End}}^n_i(V)={\mathrm{Hom}}_i(V^{\otimes n},V)$ where $V$ is a dgmodule.
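In degree $0$ all the Koszul signs equal $+1$, and Relations (\[E:syst1\]) and (\[E:syst2\]) for the insertion maps of ${\mathrm{End}}(V)$ are equalities of composites that hold for arbitrary (not necessarily linear) maps. They can therefore be tested directly on sample set maps; the operations below are arbitrary choices made only for this illustration.

```python
import random

def comp(f, g, k):
    # insertion at place k (1-based): f o_k g = f(id^{k-1} (x) g (x) id^{n-k});
    # operations are encoded as (arity, callable) pairs
    n, F = f
    m, G = g
    def H(*xs):
        return F(*(xs[:k - 1] + (G(*xs[k - 1:k - 1 + m]),) + xs[k - 1 + m:]))
    return (n + m - 1, H)

# sample (arity, map) pairs; the maps themselves are arbitrary choices
f = (3, lambda x, y, z: x * y - 2 * z)
g = (2, lambda x, y: x + y * y)
h = (2, lambda x, y: 3 * x * y + 1)

rng = random.Random(0)

def agree(a, b):
    # compare two operations of equal arity on a random sample point
    assert a[0] == b[0]
    xs = [rng.randrange(-5, 5) for _ in range(a[0])]
    return a[1](*xs) == b[1](*xs)

# Relation (E:syst1): f o_u (g o_v h) = (f o_u g) o_{v+u-1} h
for u in range(1, 4):          # 1 <= u <= arity(f) = 3
    for v in range(1, 3):      # 1 <= v <= arity(g) = 2
        assert agree(comp(f, comp(g, h, v), u), comp(comp(f, g, u), h, u + v - 1))

# Relation (E:syst2), sign-free in degree 0: (f o_u g) o_{v+m-1} h = (f o_v h) o_u g
m = 2                          # arity of g
for u in range(1, 4):
    for v in range(u + 1, 4):  # 1 <= u < v <= arity(f) = 3
        assert agree(comp(comp(f, g, u), h, v + m - 1), comp(comp(f, h, v), g, u))
```

For graded elements the same composites acquire the signs $(-1)^{jl}$ of Definition \[D:preLiesystem\], which of course the untyped sketch above does not see.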
Example \[E:fund\], Proposition \[P:unsignedcirc\] and Corollary \[C:signcirc\] assemble in the following Proposition. \[P:end\] Let $V$ be a dgmodule. The $(\Z,\N)$-bigraded $R$-module ${\mathrm{End}}(V)$ forms a graded pre-Lie system. Consequently, $\forall f\in{\mathrm{End}}^n(V)$, the product $$f\star g=\sum_{k=1}^{n} f({\mathrm{id}}^{k-1}\otimes g\otimes {\mathrm{id}}^{n-k})$$ endows ${\mathrm{End}}(V)$ with a structure of graded pre-Lie algebra and the product $$f\circ g=(-1)^{|g|(n-1)}\sum_{k=1}^n (-1)^{(m-1)(k-1)} f({\mathrm{id}}^{k-1}\otimes g\otimes {\mathrm{id}}^{n-k})$$ endows ${\mathrm{End}}(V)$ with a structure of (weight) graded pre-Lie algebra. The signs obtained in the equivalence between graded pre-Lie systems and weight graded pre-Lie systems in Proposition \[P:graded2weight\] come from a bijection between ${\mathrm{End}}(V)$ and ${\mathrm{End}}(sV)$. Let us consider the isomorphism $\Theta$ of Getzler and Jones in [@GetJon90] $$\label{E:Theta} \begin{array}{cccc} \Theta: & {\mathrm{Hom}}_i((sV)^{\otimes n},sV)&\rto& {\mathrm{Hom}}_{i+n-1}((V)^{\otimes n}, V) \\ &F & \mapsto & \Theta(F) \end{array}$$ defined by $$s\Theta(F)(s^{-1})^{\otimes n}=F.$$ For $F\in {\mathrm{End}}^n_i(sV)$ and $G\in {\mathrm{End}}^m_j(sV)$ one has, $$\begin{aligned} s\Theta(F\circ_k G)(s^{-1})^{\otimes n+m-1}&=&F({\mathrm{id}}^{\otimes k-1}\otimes G\otimes {\mathrm{id}}^{n-k})= s\Theta(F)(s^{-1})^{\otimes n}({\mathrm{id}}^{\otimes k-1}\otimes s\Theta(G)(s^{-1})^{\otimes m}\otimes {\mathrm{id}}^{\otimes n-k})\\ &=&(-1)^{j(n-k)}s\Theta(F)((s^{-1})^{\otimes k-1}\otimes \Theta(G)(s^{-1})^{\otimes m}\otimes (s^{-1})^{\otimes n-k})\\ &=&(-1)^{j(n-k)}(-1)^{(k-1)(j+m-1)} s\Theta(F)({\mathrm{id}}^{\otimes k-1}\otimes \Theta(G)\otimes {\mathrm{id}}^{n-k})(s^{-1})^{\otimes n+m-1},\end{aligned}$$ hence $(-1)^{(m-1)(k-1)+|\Theta(G)|(n-1)}\Theta(F)\circ_k\Theta(G)=\Theta(F\circ_k G)$. 
Consequently, $$\label{E:comparecirc} \Theta(F)\circ \Theta(G)=\Theta(F\star G).$$ In particular, Lemma \[L:preLie\_formula\] applies to the graded pre-Lie system ${\mathrm{End}}(V)$. The next proposition states that both pre-Lie products behave well with respect to the differential of the dgmodule $V$. \[P:partialcirc\] Let $V$ be a dgmodule with differential $m_1$. The induced differential $\partial$ on ${\mathrm{End}}(V)$ satisfies, $\forall f\in {\mathrm{End}}^n_i(V), \; \forall g\in {\mathrm{End}}(V),$ $$\begin{aligned} \partial f&=\{m_1, f\}, &\partial f&= [m_1,f]; \\ \partial(f\star g)&=\partial f\star g+(-1)^{i} f\star \partial g & \partial(f\circ g)&=\partial f\circ g+(-1)^{|f|} f\circ \partial g; \\ \partial\{f,g\}&=\{\partial f,g\}+(-1)^{i} \{f,\partial g\}, &\partial[f,g]&=[\partial f,g]+(-1)^{|f|} [f,\partial g].\end{aligned}$$ As a consequence ${\mathrm{End}}(V)$ is a differential graded Lie algebra and a differential (weight) graded Lie algebra. The differential $m_1$ is considered as an element of ${\mathrm{End}}_{-1}^1(V)$, hence of degree $-1$ and of weight $-1$. Recall that $\forall f\in{\mathrm{End}}^n_i(V),$ one has $$\partial f=m_1\circ_1 f-(-1)^i\sum_{k=1}^n f\circ_k m_1= \{m_1,f\} =(m_1\circ f-(-1)^{i+n-1}f\circ m_1)=[m_1,f].$$ The proof will be the same for the pre-Lie product $\star$ and the pre-Lie product $\circ$. Let us prove it for $\circ$. Using the pre-Lie relation (\[E:preLie\]), one gets $$\begin{gathered} \partial(f\circ g)-\partial f\circ g-(-1)^{|f|} f\circ \partial g= m_1\circ (f\circ g)-(-1)^{|f|+|g|}(f\circ g)\circ m_1 -(m_1\circ f)\circ g+\\ (-1)^{|f|}(f\circ m_1)\circ g -(-1)^{|f|}f\circ (m_1\circ g)+(-1)^{|f|+|g|}f\circ (g\circ m_1)= m_1\circ (f\circ g)-(m_1\circ f)\circ g.\end{gathered}$$ The last expression vanishes because of Relation (\[E:syst1\]). The relation $\partial[f,g]=[\partial f,g]+(-1)^{|f|} [f,\partial g]$ is immediate.
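In the degree-zero (even) part all Koszul signs disappear, and the pre-Lie property of $\star$ on ${\mathrm{End}}(V)$ reduces to the classical right-symmetry of the associator, which can be checked numerically. In the sketch below (the tensor encoding and the helper names `comp` and `star` are our illustrative choices), an $n$-ary operation on $V=\mathbb{R}^d$ is stored as a tensor with one output axis followed by $n$ input axes:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3  # dimension of V

def rand_op(n):
    # a degree-0 n-ary operation V^{(x)n} -> V, as a tensor of shape (d,)*(n+1):
    # axis 0 is the output, axes 1..n are the inputs
    return rng.standard_normal((d,) * (n + 1))

def comp(f, g, k):
    # partial composition f o_k g: plug the output of g into the k-th input of f
    n, m = f.ndim - 1, g.ndim - 1
    out = np.tensordot(f, g, axes=([k], [0]))
    perm = list(range(k)) + list(range(n, n + m)) + list(range(k, n))
    return out.transpose(perm)

def star(f, g):
    # f * g = sum_k f o_k g  (no signs in the degree-zero case)
    return sum(comp(f, g, k) for k in range(1, f.ndim))

# the pre-Lie relation: the associator (f*g)*h - f*(g*h) is symmetric in g and h
f, g, h = rand_op(2), rand_op(2), rand_op(3)
assoc = lambda a, b, c: star(star(a, b), c) - star(a, star(b, c))
assert np.allclose(assoc(f, g, h), assoc(f, h, g))
```

The `transpose` in `comp` only reorders axes so that the inputs of $g$ occupy the slot previously held by the $k$-th input of $f$.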
\[R:op\] Assume we are given an operad $\mathcal P$ in graded $R$-modules, that is, a collection $(\mathcal P(n))_{n\geq 1}$ where $\mathcal P(n)$ is a graded $R$-module $\mathcal P(n)=\oplus_{i\in\Z} \mathcal P^n_i$. The axioms for the operad are exactly the ones of Proposition \[P:unsignedcirc\], where $\circ_k$ denotes the partial composition. As a consequence $\star$ determines a graded pre-Lie structure on $\mathcal P$ and $\circ$ a weight graded pre-Lie structure on $\mathcal P$ where $f\in\mathcal P^n_i$ has weight $i+n-1$. Lemma \[L:preLie\_formula\] applies also in this case. The same is true for the convolution operad ${\mathrm{Hom}}(\cal C,\cal P)$ as noticed in the introduction of the section. In particular, the convolution operad forms a graded pre-Lie system. Homology of graded R-modules of homomorphisms {#sec:homology} ============================================= In this section, we give the conditions on the complexes $C$ and $D$ so that the map $$H({\mathrm{Hom}}(C,D))\rto {\mathrm{Hom}}(H(C),H(D))$$ is an isomorphism (Proposition \[P:lift\]) and that the map $$H({\mathrm{Hom}}(C^{\otimes n}, C))\rto {\mathrm{Hom}}(H(C)^{\otimes n},H(C))$$ is an isomorphism (Corollary \[C:lift\]). The last result is one of the key ingredients in the proof of the obstruction Theorem \[T:main\]. It might also be of independent interest. Let $C$ and $D$ be dgmodules. We denote by ${\mathrm{Hom}}(C,D)$ the dgmodule $${\mathrm{Hom}}_i(C,D)=\prod_n {\mathrm{Hom}}_R(C_n,D_{n+i})$$ with differential $\partial: {\mathrm{Hom}}_i(C,D)\rightarrow{\mathrm{Hom}}_{i-1} (C,D)$ defined for $c\in C_n$ by $$(\partial f)_n(c)=\partial_D(f_n(c))-(-1)^i f_{n-1}(\partial_C c).$$ The graded $R$-module of cycles in $C$ is $Z_i(C)={\mathrm{Ker}}(\partial_C: C_i\rto C_{i-1})$ and $B_i(C)={\mathrm{Im}}(\partial_C: C_{i+1}\rto C_{i})$ is the graded $R$-module of boundaries in $C$. The homology of $C$ is the graded $R$-module $H_i(C)=Z_i(C)/B_i(C)$.
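As a quick sanity check of the sign in the definition of $\partial$, one can verify $\partial^2=0$ numerically. The sketch below (the two nilpotent $2\times 2$ differentials and the dictionary encoding of the components $f_n$ are our illustrative choices) checks both parities of the degree of $f$:

```python
import numpy as np

rng = np.random.default_rng(1)
dC = np.array([[0., 1.], [0., 0.]])  # differential of C: dC @ dC = 0
dD = np.array([[0., 0.], [1., 0.]])  # differential of D: dD @ dD = 0
zero = np.zeros((2, 2))

def partial(f, i):
    # (partial f)_n = d_D f_n - (-1)^i f_{n-1} d_C, for f of degree i
    return {n: dD @ f[n] - (-1) ** i * f.get(n - 1, zero) @ dC for n in f}

for i in (0, 1):  # both parities of the degree of f
    f = {n: rng.standard_normal((2, 2)) for n in range(4)}
    ddf = partial(partial(f, i), i - 1)  # partial f has degree i - 1
    assert all(np.allclose(ddf[n], 0) for n in ddf)
```

The inner sign $(-1)^i$ and the outer sign $(-1)^{i-1}$ cancel the two cross terms, exactly as in the usual computation.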
One has $\partial f=0$ if and only if $f$ is a morphism of differential graded $R$-modules. In particular $f(Z(C))\subset Z(D)$ and $f(B(C))\subset B(D)$. As a consequence, if $f\in{\mathrm{Hom}}_i(C,D)$ and $\partial f=0$, then $f$ defines a map $\bar f\in {\mathrm{Hom}}_i(H(C),H(D))$ as $\bar f([c])=[f(c)]$. Moreover, if $f=\partial u$, then $f(Z(C))\subset B(D)$ and $\bar f=0$. Thus one has a well defined map $${\mathcal H}_{C,D}: H({\mathrm{Hom}}(C,D))\rightarrow {\mathrm{Hom}}(H(C),H(D)).$$ We say that a dgmodule $C$ satisfies [*assumption (A)*]{}, if the sequences $0\rto Z(C)\rto C \xrightarrow{\partial_C} B(C)\rto 0$ and $0\rto B(C)\rto Z(C)\rto H(C)\rto 0$ are split exact. \[P:lift\] Let $C$ and $D$ be dgmodules satisfying assumption (A). - Given $g\in {\mathrm{Hom}}_i(H(C),H(D))$, there exists $f\in {\mathrm{Hom}}_i(C,D)$ such that $\partial f=0$ and $\bar f=g$. - For $f\in {\mathrm{Hom}}_i(C,D)$ satisfying $\partial f=0$ and $\bar f=0\in {\mathrm{Hom}}_i(H(C),H(D))$, there exists $u\in {\mathrm{Hom}}_{i+1}(C,D)$ such that $\partial u=f$. Consequently the map ${\mathcal H}_{C,D}: H({\mathrm{Hom}}(C,D))\rto {\mathrm{Hom}}(H(C),H(D))$ is an isomorphism of graded $R$-modules and the dgmodule ${\mathrm{Hom}}(C,D)$ satisfies assumption (A). The short exact sequence $0\rto Z(C)\rto C\xrightarrow{\partial_C} B(C)\rto 0$ splits. Let $\tau_C:B_{n-1}(C)\rto C_n$ denote a splitting so that $C_n=Z_n(C)\oplus \tau_C(B_{n-1}(C))$. The short exact sequence $0\rto B(C)\rto Z(C)\rto H(C)\rto 0$ splits. Let $\sigma:H(C)\rto Z(C)$ denote a splitting so that $Z_n(C)=\sigma(H_n(C))\oplus B_n(C)$. Consequently $$C_n=\sigma(H_n(C))\oplus B_n(C)\oplus \tau_C(B_{n-1}(C))\text{ with } \partial_C(\sigma(h)+ y+\tau_C(z))=z\in B_{n-1}(C).$$ We use the same notation for $D$.
Part a) of the proposition is proved by building $f$ as $$\begin{array}{cccc} f_n:& C_n=\sigma(H_n(C))\oplus B_{n}(C)\oplus \tau_C(B_{n-1}(C))&\rightarrow& D_{n+i}=\sigma(H_{n+i}(D))\oplus B_{n+i}(D)\oplus \tau_D(B_{n+i-1}(D)) \\ & c=\sigma(h)+y+\tau_C(z) & \mapsto& \sigma(g_n(h)) \in \sigma(H_{n+i}(D)). \end{array}$$ The equality $\partial_D f_n(c)=0=(-1)^if_{n-1}(\partial_C c)$ implies that $\partial f=0$ and $\overline{f}=g$. Let us prove part b). Since $\bar f=0$, the map $f$ satisfies $f(\sigma(H(C)))\subset B(D)$. The map $u\in {\mathrm{Hom}}_{i+1}(C,D)$ defined by $$u_n(\sigma(h)+y+\tau_C(z))=(-1)^i f_{n+1}(\tau_C(y))+\tau_Df_n(\sigma(h)),$$ satisfies $\partial u=f$, for $(\partial u)_n(\sigma(h)+ y+\tau_C(z))=\partial_D\left((-1)^i f_{n+1}(\tau_C(y))+\tau_Df_n(\sigma(h))\right)-(-1)^{i+1}(-1)^if_n(\tau_C z)= f_n(\partial_C\tau_C y)+f_n(\sigma(h))+f_n(\tau_Cz)$ and $\partial_C\tau_C y=y$. As a consequence, the map ${\mathcal H}_{C,D}$ is an isomorphism. The proof of part a) builds an explicit splitting of the projection $Z({\mathrm{Hom}}(C,D))\rto H({\mathrm{Hom}}(C,D))\simeq {\mathrm{Hom}}(HC,HD)$ while the proof of part b) builds an explicit splitting of the map $\partial:{\mathrm{Hom}}(C,D)\rto B({\mathrm{Hom}}(C,D))$. Hence, if $C$ and $D$ satisfy assumption (A), so does ${\mathrm{Hom}}(C,D)$. \[C:lift\] Let $C$ be a dgmodule such that $Z(C)$ and $H(C)$ are projective graded $R$-modules. For every $n\geq 1$ the map $$H({\mathrm{Hom}}(C^{\otimes n}, C))\rto {\mathrm{Hom}}(H(C)^{\otimes n}, H(C))$$ is an isomorphism of graded $R$-modules. Note that if $H(C)$ is projective, then the short exact sequence $0\rto B(C)\rto Z(C) \rto H(C)\rto 0$ splits and $B(C)$ is projective because it is a direct summand of $Z(C)$ which is projective. Consequently, the short exact sequence $0\rto Z(C)\rto C\rto B(C)\rto 0$ splits and $C$ satisfies assumption (A). The proof is by induction on $n$, applying recursively Proposition \[P:lift\].
For $n=1$, the corollary amounts to the statement of Proposition \[P:lift\], with $D=C$. Moreover ${\mathrm{Hom}}(C,C)$ satisfies assumption (A). Because $C$ is a dgmodule such that $Z(C)$ and $H(C)$ are projective, the Künneth formula applies (see e.g. [@MacLane95]), that is, for every $n$ one has $H(C^{\otimes n})\simeq H(C)^{\otimes n}$. Let $n>1$. Assume that the map $H({\mathrm{Hom}}(C^{\otimes n-1},C))\rto {\mathrm{Hom}}(H(C^{\otimes n-1}),H(C))$ is an isomorphism and that ${\mathrm{Hom}}(C^{\otimes n-1},C)$ satisfies assumption (A). Then the following sequence of maps is a sequence of isomorphisms: $$\begin{gathered} H({\mathrm{Hom}}(C^{\otimes n}, C))\simeq H({\mathrm{Hom}}(C,{\mathrm{Hom}}(C^{\otimes n-1},C)))\rto {\mathrm{Hom}}(H(C),H({\mathrm{Hom}}(C^{\otimes n-1},C)))\rto \\ {\mathrm{Hom}}(H(C),{\mathrm{Hom}}(H(C^{\otimes n-1}),H(C)))\simeq {\mathrm{Hom}}(H(C),{\mathrm{Hom}}(H(C)^{\otimes n-1},H(C)))\simeq {\mathrm{Hom}}(H(C)^{\otimes n},H(C)))\end{gathered}$$ and ${\mathrm{Hom}}(C^{\otimes n},C)={\mathrm{Hom}}(C,{\mathrm{Hom}}(C^{\otimes n-1},C))$ satisfies assumption (A). Obstruction to $A_\infty$-structures {#sec:obstruction} ==================================== This section is devoted to the obstruction theorem. We first introduce $A_\infty$-algebras, $A_r$-algebras, Hochschild cohomology and prove Theorem \[T:main\]. $A_\infty$-algebras {#ssec:Ainfty} ------------------- There are mainly two equivalent definitions of $A_\infty$-algebras. \[D:Ainfty\] Let $V$ be a graded $R$-module. We denote by $T^c(sV)$ the free conilpotent coalgebra generated by the suspension of $V$. An [*$A_\infty$-algebra structure*]{} on $V$ is a degree $-1$ coderivation $\partial$ on $T^c(sV)$ of square $0$. 
Namely, the universal property of $T^c(sV)$ implies that $\partial$ is determined by the sequence $\partial_n: (sV)^{\otimes n}\rto sV$ for $n\geq 1$, obtained as the composite $$(sV)^{\otimes n}\hookrightarrow T^c(sV)\xrightarrow{\partial} T^c(sV){\twoheadrightarrow}sV.$$ Conversely, given a sequence $\partial_n\in{\mathrm{Hom}}_{-1}((sV)^{\otimes n}, sV)$, the unique coderivation on $T^c(sV)$ with components $\partial_n$ is given by $$\partial(sv_1\otimes\ldots\otimes sv_n)=\sum_{j=1}^n\sum_{k=1}^{n+1-j} (-1) ^{|sv_1|+\ldots+|sv_{k-1}|} sv_1\otimes\ldots\otimes sv_{k-1}\otimes\partial_j(sv_k\otimes\ldots\otimes sv_{k+j-1})\otimes\ldots\otimes sv_n.$$ Let $V$ be a graded $R$-module. The following definitions are equivalent. An $A_\infty$-algebra structure on $V$ is - a collection of elements $\partial_i\in{\mathrm{Hom}}_{-1}((sV)^{\otimes i},sV), i\geq 1,$ satisfying, with the notation of Proposition \[P:unsignedcirc\], $$\forall n\geq 1,\ \sum_{i+j=n+1} \partial_i\star \partial_j=0, \text{ or }$$ - a collection of elements $m_i\in{\mathrm{Hom}}_{i-2}((V)^{\otimes i},V), i\geq 1$ satisfying, with the notation of Proposition \[P:end\], $$\label{E:Ainfty} \forall n\geq 1,\ \sum_{i+j=n+1} m_i\circ m_j=\sum_{i+j=n+1} (-1)^{i-1}\sum_{k=1}^i(-1)^{(j-1)(k-1)}m_i\circ_k m_j=0.$$ To prove part a) of the proposition, it is enough to apply Definition \[D:Ainfty\] and compute $$\begin{gathered} (\partial^2)_n(sv_1\otimes\ldots\otimes sv_n)= \\ \sum_{i+j=n+1}\sum_{k=1}^{n+1-j} (-1) ^{|sv_1|+\ldots+|sv_{k-1}|} \partial_i(sv_1\otimes\ldots\otimes sv_{k-1}\otimes\partial_j(sv_k\otimes\ldots\otimes sv_{k+j-1})\otimes\ldots\otimes sv_n)=\\ \sum_{i+j=n+1}(\partial_i\star \partial_j)(sv_1\otimes\ldots\otimes sv_n).\end{gathered}$$ To prove part b) we use the isomorphism $\Theta$ defined in (\[E:Theta\]), setting $m_i$ to be $\Theta(\partial_i)$.
By relation (\[E:comparecirc\]), one gets $$\partial^2=0 \Longleftrightarrow \forall n, \sum_{i+j=n+1} m_i\circ m_j=0.$$ For the definition of $m_i\circ m_j$ we refer to Proposition \[P:end\]. Note that there exist different sign conventions for the definition of an $A_\infty$-algebra. Choosing the bijection $$\widetilde\Theta: {\mathrm{End}}(sV)\rightarrow {\mathrm{End}}(V)$$ defined by $\widetilde\Theta(F)=s^{-1}Fs^{\otimes n}$ and letting $\widetilde m_i=\widetilde\Theta(\partial_i)$ one gets the original definition of J. Stasheff in [@Stasheff63]: the collection of operations $\tilde m_i:A^{\otimes i}\rightarrow A$ of degree $i-2$ satisfies the relation $$\label{E:Ainftygeom} \forall n\geq 1,\sum_{i+j=n+1} (-1)^{jn}\sum_{k=1}^i(-1)^{k(j-1)}\tilde m_i\circ_k \tilde m_j=0.$$ This is equivalent to our definition, because $$m_i=(-1)^{\frac{i(i-1)}{2}} \widetilde m_i.$$ Let $r>0$ be an integer. A graded $R$-module $V$ is an [*$A_r$-algebra*]{} if there exists a collection of elements $m_i\in{\mathrm{Hom}}_{i-2}(V^{\otimes i},V),$ for $1\leq i\leq r$, such that $$\forall 1\leq n\leq r,\ \sum_{i+j=n+1} m_i\circ m_j=0.$$ Note that $V$ is an $A_1$-algebra if and only if $V$ is a dgmodule, with differential $m_1$ of degree $-1$. Recall from Proposition \[P:partialcirc\] that the induced differential $\partial$ on ${\mathrm{End}}(V)$ satisfies $\partial f=[m_1,f]$. The dgmodule $V$ is an $A_2$-algebra if and only if there exists an element $m_2\in{\mathrm{End}}^2_0(V)$ such that $\partial m_2=0$, that is, $m_2$ is a morphism of dgmodules. An $A_3$-algebra is an $A_2$-algebra such that $m_2$ is associative up to homotopy: there exists $m_3\in {\mathrm{End}}^3_1(V)$ such that $\partial m_3=-m_2\circ m_2.$ Since $m_2$ has odd weight and $\partial m_2=0$ one gets from section \[sec:homology\] that $m_2$ defines a map $\overline{m_2}\in{\mathrm{End}}(H(V))$ such that $[\overline{m_2},\overline{m_2}]=\overline{[m_2,m_2]}=2\,\overline{m_2\circ m_2}=0$.
Namely, the graded $R$-module $H(V)$ is a graded associative algebra. Hochschild cohomology --------------------- In this section we recall some facts concerning Hochschild cohomology of graded associative algebras. Let $(A,m_2)$ be a graded associative algebra. Recall that $m_2\in {\mathrm{End}}^2_0(A)$ is associative if and only if $m_2\circ m_2=0$. Let $(A,m_2)$ be a graded associative algebra. The map ${\mathrm d}=[m_2,-]: {\mathrm{End}}^n_i(A)\rto {\mathrm{End}}^{n+1}_i(A)$ raises the weight by $1$ and is a differential. The complex so obtained is the Hochschild cochain complex of $A$ and its cohomology is called the Hochschild cohomology of $A$. Note that the cohomology is bigraded and is denoted by $HH^n_i(A)$ when the grading needs to be specified. From relation (\[E:J2\]) one has $\dd^2(f)=[m_2,[m_2,f]]=-[f,m_2\circ m_2]=0$. When $A$ is an $A_2$-algebra, the map $\dd$ is still defined but does not satisfy $\dd^2=0$. Nevertheless, we have the following lemma: \[L:commute\] Let $A$ be an $A_2$-algebra, with structure maps $m_1$ and $m_2$. The maps $\partial=[m_1,-]:{\mathrm{End}}^n_i(A)\rto {\mathrm{End}}^n_{i-1}(A)$ and $\mathrm d=[m_2,-]:{\mathrm{End}}^n_i(A)\rto {\mathrm{End}}^{n+1}_i(A)$ satisfy $$\begin{cases} \partial^2=0, \\ \partial \dd=-\dd\partial. \end{cases}$$ Note that since $m_1$ has weight $-1$ and $m_2$ has weight $1$, they are both elements of odd weight. Hence, the equality $\partial^2=0$ is a consequence of $m_1\circ m_1=0$ and relation (\[E:J2\]). Let $f$ be an element of weight $i$ in ${\mathrm{End}}(A)$. Proposition \[P:Jacobi\] and relation $m_1\circ m_2+m_2\circ m_1=0=[m_1,m_2]$ imply that $\partial\dd(f)=[m_1,[m_2,f]]=(-1)^{i+1}\left(-[m_2,[f,m_1]]+ (-1)^i[f,[m_1,m_2]]\right)=-[m_2,[m_1,f]]=-\dd\partial(f).$ Obstruction theory {#ssec:obstruction} ------------------ \[T:main\]Let $r\geq 3$. Let $A$ be a dgmodule such that $H(A)$ and $Z(A)$ are graded projective $R$-modules.
Assume $A$ is an $A_r$-algebra, with structure maps $m_i\in{\mathrm{End}}^i_{i-2}(A)$ for $1\leq i\leq r$. The obstruction to lifting the $A_{r-1}$-structure of $A$ to an $A_{r+1}$-structure lies in $HH^{r+1}_{r-2}(H(A))$. By assumption, one has $\forall n\leq r,\ \sum_{i+j=n+1} m_i\circ m_j=0$, which can be rewritten as $$\forall n\leq r,\quad \partial m_n=-\sum\limits_{\substack{i+j=n+1,\\ i,j>1}} m_i\circ m_j.$$ The weight of $m_i$ is $i-2+i-1=2i-3$, thus odd. Let $$\mathcal O_{r+1}=\sum\limits_{\substack{i+j=r+2, \\ i,j>1}} m_i\circ m_j \in {\mathrm{Hom}}_{r-2}(A^{\otimes{r+1}},A).$$ Proposition \[P:partialcirc\] gives $$\partial \mathcal O_{r+1}=\sum\limits_{\substack{a+b+c=r+3, \\ a,b,c>1}} -(m_a\circ m_b)\circ m_c+m_a\circ (m_b\circ m_c). $$ The sum splits into the following sums: - If $a,b,c\in\{2,\ldots,r\}$ are distinct integers, one gets the twelve terms of the Jacobi relation, i.e. $$\sum\limits_{\substack{1< a<b<c\leq r\\ a+b+c=r+3}}[m_a,[m_b,m_c]]+[m_b,[m_c,m_a]]+[m_c,[m_a,m_b]]=0.$$ - Regrouping the terms where $a=b$ and $c\not=a$ or $a=c$ and $b\not=a$, one gets the four terms of the pre-Lie relation of the form, $$\sum\limits_{\substack{\alpha\not=\gamma, \alpha,\gamma>1\\ 2\alpha+\gamma=r+3}} -(m_\alpha\circ m_\alpha)\circ m_\gamma+ m_\alpha\circ (m_\alpha\circ m_\gamma)-(m_\alpha\circ m_\gamma)\circ m_\alpha+ m_\alpha\circ (m_\gamma\circ m_\alpha)=0$$ - If $b=c\in\{2,\ldots,r\}$, relation (\[E:J1\]) implies $$\sum\limits_{\substack {1< a, 1<b\leq r\\ a+2b=r+3}}-(m_a\circ m_b)\circ m_b+m_a\circ (m_b\circ m_b)=0.$$ Consequently $\partial\mathcal O_{r+1}=0$ and $\mathcal O_{r+1}$ gives rise to an element $\overline{\mathcal{O}_{r+1}}\in {\mathrm{End}}^{r+1}_{r-2}(H(A))$.
Again, by splitting the sum, $$\dd {\mathcal{O}_{r+1}}=\sum_{a+b=r+2, a,b>1}[m_2,m_a\circ m_b]$$ and using the relation (\[E:J2\]) one gets - If $a=2$ or $b=2$, then $[m_2,[m_2,m_r]]=[m_2\circ m_2,m_r]=-[\partial m_3,m_r]$; - If $a\not=b, a,b>2$, then $[m_2,[m_a,m_b]]=-[m_a,[m_2,m_b]]-[m_b,[m_2,m_a]]$; - If $a=b, a>2$, then $[m_2,m_a\circ m_a]=-[m_a,[m_2,m_a]].$ Thus, on the one hand, $$\dd {\mathcal{O}_{r+1}}=-[\partial m_3,m_r]-\sum_{a+b=r+2, a,b>2}[m_a,[m_2,m_b]].$$ On the other hand, by splitting the sum and using the computation of $\partial\mathcal O_{r+1}$, one gets that $$\begin{aligned} \partial(\sum\limits_{\substack{ a+b=r+3, \\ a,b>2}} m_a\circ m_b)&=\sum\limits_{\substack{a+b=r+3, \\ a,b>2}} (\partial m_a)\circ m_b-m_a\circ\partial(m_b)\\ &=[\partial m_3,m_r]+\sum\limits_{\substack{a+b=r+2, \\ a,b>2}}- [m_2,m_b]\circ m_a+m_a\circ[m_2,m_b] \\ &=[\partial m_3,m_r]+\sum\limits_{\substack{a+b=r+2, \\ a,b>2}}[m_a,[m_2,m_b]]= -\dd {\mathcal{O}_{r+1}}.\end{aligned}$$ As a consequence $\dd(\overline{\mathcal{O}_{r+1}})=0$. If the class of $\overline{\mathcal{O}_{r+1}}$ vanishes in $HH^{r+1}_{r-2}(H(A))$, then there exists $u\in {\mathrm{End}}^{r}_{r-2}(H(A))$ such that $\dd u=\overline{\mathcal{O}_{r+1}}$. The hypotheses made on the $R$-module $A$ allow us to apply Corollary \[C:lift\]. There exists $m'_r\in {\mathrm{End}}^r_{r-2}(A)$ such that $\partial m'_r=0$ and $\overline{m'_r}=u$. Moreover $$\overline{[m_2,m'_r]}=\overline{\dd m'_r}=\dd u=\overline{\mathcal O_{r+1}}=\overline{[m_2,m_r]+\sum\limits_{\substack{i+j=r+2,\\ i,j>2}}m_i\circ m_j}.$$ By Corollary \[C:lift\], there exists $m_{r+1}\in {\mathrm{End}}^{r+1}_{r-1}(A)$ such that $$\partial m_{r+1}=[m_2,m'_r-{m}_r]-\sum\limits_{\substack{i+j=r+2,\\ i,j>2}} m_i\circ m_j.$$ As a consequence the collection $\{m_1,\ldots,m_{r-1},m_r-m'_r,m_{r+1}\}$ is an $A_{r+1}$-structure on $A$ extending its $A_{r-1}$-structure.
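The computations above take place in the Hochschild complex of $H(A)$. For an honest degree-zero associative algebra, the relation $\dd^2=0$ can also be verified numerically with the classical sign conventions of Gerstenhaber; in the sketch below, the diagonal (componentwise) product on $\mathbb{R}^d$ and the helper names are our illustrative choices, and the signs follow the classical bar-complex convention rather than the weight-graded one used in the text:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2
# multiplication tensor of the associative componentwise product on R^d:
# mu[o, i, j] = 1 iff o == i == j
mu = np.zeros((d, d, d))
for a in range(d):
    mu[a, a, a] = 1.0

def comp(f, g, k):
    # partial composition: plug the output of g into the k-th input of f
    n, m = f.ndim - 1, g.ndim - 1
    out = np.tensordot(f, g, axes=([k], [0]))
    return out.transpose(list(range(k)) + list(range(n, n + m)) + list(range(k, n)))

def hochschild(f):
    # classical Hochschild differential of an n-cochain f: A^{(x)n} -> A
    n = f.ndim - 1
    df = comp(mu, f, 2)                        # a_1 . f(a_2, ..., a_{n+1})
    for i in range(1, n + 1):
        df = df + (-1) ** i * comp(f, mu, i)   # f(..., a_i a_{i+1}, ...)
    return df + (-1) ** (n + 1) * comp(mu, f, 1)  # f(a_1, ..., a_n) . a_{n+1}

f = rng.standard_normal((d, d, d))  # a random 2-cochain
assert np.allclose(hochschild(hochschild(f)), 0)
```

The vanishing of $\dd^2$ depends only on the associativity of the product, mirroring the role of $m_2\circ m_2=0$ above.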
Let $A$ be an associative algebra up to homotopy such that $H(A)$ and $Z(A)$ are graded projective $R$-modules. If $HH^{r+1}_{r-2}(H(A))=0$ for all $r\geq 3$, then there exists an $A_\infty$-structure on $A$ with $m_1$ the differential of $A$ and $m_2$ its product.

Clemens Berger and Ieke Moerdijk, *Axiomatic homotopy theory for operads*, Comment. Math. Helv. **78** (2003), no. 4, 805–831.

Murray Gerstenhaber, *The cohomology structure of an associative ring*, Ann. of Math. **78** (1963), no. 2, 267–288.

Ezra Getzler and John D. S. Jones, *$A_\infty$-algebras and the cyclic bar complex*, Illinois J. Math. **34** (1990), no. 2, 256–283.

T. V. Kadeišvili, *On the theory of homology of fiber spaces*, Uspekhi Mat. Nauk **35** (1980), no. 3(213), 183–188, International Topology Conference (Moscow State Univ., Moscow, 1979).

Michael M. Kapranov and Yuri Manin, *Modules and Morita theorem for operads*, Amer. J. Math. **123** (2001), no. 5, 811–838.

Kenji Lefèvre-Hasegawa, *Sur les $A_\infty$-catégories*, PhD thesis, Université Paris 7, 2003.

Jean-Louis Loday and Bruno Vallette, *Algebraic operads*, book in preparation, 2010.

Saunders Mac Lane, *Homology*, Classics in Mathematics, Springer-Verlag, Berlin, 1995, reprint of the 1975 edition.

Alan Robinson, *Obstruction theory and the strict associativity of Morava $K$-theories*, Advances in homotopy theory (Cortona, 1988), London Math. Soc. Lecture Note Ser., vol. 139, Cambridge Univ. Press, Cambridge, 1989, pp. 143–152.

Brooke Shipley, *$H\Bbb Z$-algebra spectra are differential graded algebras*, Amer. J. Math. **129** (2007), no. 2, 351–379.

James Dillon Stasheff, *Homotopy associativity of $H$-spaces. I, II*, Trans. Amer. Math. Soc. **108** (1963), 275–292; ibid. **108** (1963), 293–312.
--- abstract: 'The frequent reuse of test sets in popular benchmark problems raises doubts about the credibility of reported test-error rates. Verifying whether a learned model is overfitted to a test set is challenging as independent test sets drawn from the same data distribution are usually unavailable, while other test sets may introduce a distribution shift. We propose a new hypothesis test that uses only the original test data to detect overfitting. It utilizes a new unbiased error estimate that is based on adversarial examples generated from the test data and importance weighting. Overfitting is detected if this error estimate is sufficiently different from the original test error rate. We develop a specialized variant of our test for multiclass image classification, and apply it to testing overfitting of recent models to the popular ImageNet benchmark. Our method correctly indicates overfitting of the trained model to the training set, but is not able to detect any overfitting to the test set, in line with other recent work on this topic.' author: - | Roman Werpachowski András György Csaba Szepesvári\ DeepMind, London, UK\ `{romanw,agyorgy,szepi}@google.com`\ bibliography: - 'citations.bib' title: Detecting Overfitting via Adversarial Examples --- Introduction ============ Deep neural networks achieve impressive performance on many important machine learning benchmarks, such as image classification [@CIFAR10; @Krizhevsky2012; @Inception; @Simonyan15; @he2016deep], automated translation [@BahdanauCB14; @WuSCLNMKCGMKSJL16] or speech recognition [@DengSpeechRecog; @graves2013speech]. However, the benchmark datasets are used a multitude of times by researchers worldwide. 
Since state-of-the-art methods are selected and published based on their performance on the corresponding test set, it is typical to see results that continuously improve over time; see, e.g., the discussion of @recht2018cifar10.1 and the figure below for the performance improvement of classifiers published for the popular CIFAR-10 image classification benchmark [@CIFAR10]. ![Test-set accuracy over time of classifiers published for the CIFAR-10 benchmark.](cifar10_accuracy_timeline) This process may naturally lead to models overfitted to the test set, rendering test error rate (the average error measured on the test set) an unreliable indicator of the actual performance. Detecting whether a model is overfitted to the test set is challenging, since independent test sets drawn from the same data distribution are generally not available, while alternative test sets often introduce a distribution shift. To estimate the performance of a model on unseen data, one may use generalization bounds to get upper bounds on the expected error rate. The generalization bounds are also applicable when the model and the data are dependent (e.g., for cross validation or for error estimates based on the training data or the reused test data), but they usually lead to loose error bounds. Therefore, although much tighter bounds are available if the test data and the model are independent, comparing confidence intervals constructed around the training and test error rates leads to an underpowered test for detecting the dependence of a model on the test set. Recently, several methods have been proposed that allow the reuse of the test set while keeping the validity of test error rates [@Dwork636]. However, these are *intrusive*: they require the user to follow a strict protocol of interacting with the test set and are thus not applicable in the more common situation when enforcing such a protocol is impossible.
In this paper we take a new approach to the challenge of detecting overfitting of a model to the test set, and devise a *non-intrusive* statistical test that does not restrict the training procedure and is based on the original test data. To this end, we introduce a new error estimator that is less sensitive to overfitting to the data; our test rejects the independence of the model and the test data if the new error estimate and the original test error rate are too different. The core novel idea is that the new estimator is based on adversarial examples [@ExplainAdvExampl], that is, on data points[^1] that are not sampled from the data distribution, but instead are cleverly crafted based on existing data points so that the model errs on them. Several authors showed that the best models learned for the above-mentioned benchmark problems are highly sensitive to adversarial attacks [@ExplainAdvExampl; @TransMLAdvSamp; @Uesato; @Carlini017a; @Carlini017b; @papernot2017practical]: for instance, one can often create adversarial versions of images properly classified by a state-of-the-art model such that the model will misclassify them, yet the adversarial perturbations are (almost) undetectable for a human observer; see, e.g., the images below, where the adversarial image is obtained from the original one by a carefully selected translation. ![Original image, classified as "scale, weighing machine".](org_scale){width="18.00000%"} ![Adversarial image, classified as "toaster".](adv_toaster){width="18.00000%"}
The estimator is unbiased and has a smaller variance than the standard test error rate if the test set and the model are independent.[^2] More importantly, since it is based on adversarially generated data points, the adversarial estimator is expected to differ significantly from the test error rate if the model is overfitted to the test set, providing a way to detect test set overfitting. Thus, the test error rate and the adversarial error estimate (calculated based on the same test set) must be close if the test set and the model are independent, and are expected to be different in the opposite case. In particular, if the gap between the two error estimates is large, the independence hypothesis (i.e., that the model and the test set are independent) is dubious and will be rejected. Combining results from multiple training runs, we develop another method to test overfitting of a model architecture and training procedure (for simplicity, throughout the paper we refer to both together as the *model architecture*). The most challenging aspect of our method is to construct adversarial perturbations for which we can calculate importance weights, while keeping enough degrees of freedom in the way the adversarial perturbations are generated to maximize power, the ability of the test to detect dependence when it is present. To understand the behavior of our tests better, we first use them on a synthetic binary classification problem, where the tests are able to successfully identify the cases where overfitting is present. Then we apply our independence tests to state-of-the-art classification methods for the popular image classification benchmark, ImageNet [@ImageNet]. As a sanity check, in all cases examined, our test rejects (at confidence levels close to 1) the independence of the individual models from their respective training sets. 
Applying our method to VGG16 [@Simonyan15] and Resnet50 [@he2016deep] models/architectures, their *independence from the ImageNet test set cannot be rejected at any reasonable confidence*. This is in agreement with recent findings of [@BenRechtImageNet], and provides additional evidence that despite the existing danger, it is likely that no overfitting has happened during the development of ImageNet classifiers. The rest of the paper is organized as follows: In , we introduce a formal model for error estimation using adversarial examples, including the definition of adversarial example generators. The new overfitting-detection tests are derived in , and applied to a synthetic problem in , and to the ImageNet image classification benchmark in . Due to space limitations, some auxiliary results, including the in-depth analysis of our method on the synthetic problem, are relegated to the appendix. Adversarial Risk Estimation {#sec:formal-model} =========================== We consider a classification problem with deterministic (noise-free) labels, which is a reasonable assumption for many practical problems, such as image recognition (we leave the extension of our method to noisy labels for future work). Let ${\mathcal{X}}\subset {\mathbb{R}}^D$ denote the input space and ${\mathcal{Y}}=\{0,\ldots,K-1\}$ the set of labels. Data is sampled from the distribution ${\mathcal{P}}$ over ${\mathcal{X}}$, and the class label is determined by the *ground truth* function $f^*:{\mathcal{X}}\to {\mathcal{Y}}$. We denote a random vector drawn from ${\mathcal{P}}$ by $X$, and its corresponding class label by $Y=f^*(X)$. We consider deterministic classifiers $f: {\mathcal{X}}\to {\mathcal{Y}}$.
The performance of $f$ is measured by the zero-one loss: $L(f,x)={\mathbb{I}(f(x) \neq f^*(x))}$,[^3] and the *expected error* (also known as the *risk* or *expected risk* in the learning theory literature) of the classifier $f$ is defined as $R(f) = {\mathbb{E}}[{\mathbb{I}(f(X) \neq Y)}] = \int_{{\mathcal{X}}} L(f, x) {\mathrm{d}{\mathcal{P}}}(x)$. Consider a test dataset $S=\{(X_1,Y_1),\ldots,(X_m,Y_m)\}$ where the $X_i$ are drawn from ${\mathcal{P}}$ independently of each other and $Y_i=f^*(X_i)$. In the learning setting, the classifier $f$ usually also depends on some randomly drawn training data, hence is random itself. If $f$ is (statistically) independent from $S$, then $L(f, X_1),\ldots,L(f,X_m)$ are i.i.d., thus the empirical error rate $${\widehat{R}}_S(f) = \frac{1}{m} \sum_{i=1}^m L(f,X_i)= \frac{1}{m} \sum_{i=1}^m {\mathbb{I}(f(X_i) \neq Y_i)} \vspace{-0.1cm}$$ is an unbiased estimate of $R(f)$ for all $f$; that is, $R(f)={\mathbb{E}}[{\widehat{R}}_S(f)|f]$. If $f$ and $S$ are not independent, the performance guarantees on the empirical estimates available in the independent case are significantly weakened; for example, in the case of overfitting to $S$, the empirical error rate is likely to be much smaller than the expected error. Another well-known way to estimate $R(f)$ is to use *importance sampling* (IS) [@vKAH49b]: instead of sampling from the distribution ${\mathcal{P}}$, we sample from another distribution ${\mathcal{P}}'$ and correct the estimate by appropriate reweighting.
Assuming ${\mathcal{P}}$ is absolutely continuous with respect to ${\mathcal{P}}'$ on the set $E=\{x \in {\mathcal{X}}: L(f,x)\neq 0 \}$, $R(f) = \int_{\mathcal{X}} L(f, x) {\mathrm{d}{\mathcal{P}}}(x) = \int_{E} L(f, x) h(x) {\mathrm{d}{\mathcal{P}}}'(x)$, where $h = \tfrac{{\mathrm{d}{\mathcal{P}}}}{{\mathrm{d}{\mathcal{P}}}'}$ is the density (Radon-Nikodym derivative) of ${\mathcal{P}}$ with respect to ${\mathcal{P}}'$ on $E$ ($h$ can be defined to have arbitrary finite values on ${\mathcal{X}}\setminus E$). It is well known that the corresponding empirical error estimator $$\begin{aligned} \label{eq:hR} {\widehat{R}}'_{S'}(f)=\frac{1}{m}\sum_{i=1}^m L(f,X'_i) h(X'_i) =\frac{1}{m} \sum_{i=1}^m {\mathbb{I}(f(X'_i) \neq Y'_i)} h(X'_i)\end{aligned}$$ obtained from a sample $S'=\{(X'_1,Y'_1),\ldots,(X'_m,Y'_m)\}$ drawn independently from ${\mathcal{P}}'$ is unbiased (i.e., ${\mathbb{E}}[{\widehat{R}}'_{S'}(f)|f]=R(f)$) if $f$ and $S'$ are independent. The variance of ${\widehat{R}}'_{S'}$ is minimized if ${\mathcal{P}}'$ is the so-called zero-variance IS distribution, which is supported on $E$ with $h(x)=\tfrac{R(f)}{L(f,x)}$ for all $x \in E$ (see, e.g., [@Bucklew04 Section 4.2]). This suggests that an effective sampling distribution ${\mathcal{P}}'$ should concentrate on points where $f$ makes mistakes, which also facilitates that ${\widehat{R}}'_{S'}(f)$ becomes large if $f$ is overfitted to $S$ and hence ${\widehat{R}}_S(f)$ is small. We achieve this through the application of adversarial examples. Generating adversarial examples {#sec:aeg} ------------------------------- ![Generating adversarial examples. The top row depicts the original dataset $S$, with blue and orange points representing the two classes. The classifier’s prediction is represented by the color of the striped areas (checkmarks and crosses denote if a point is correctly or incorrectly classified). 
The arrows show the adversarial transformations via the AEG $g$, resulting in the new dataset $S'$; misclassified points are unchanged, while some correctly classified points are moved, but their original class label is unchanged. If the original data distribution is uniform over $S$, the transformation $g$ is density preserving, but not measure preserving: after the transformation the two rightmost correctly classified points in each class have probability $0$, while the leftmost misclassified point in each class has probability $3/16$; hence, the density $h_g$ for the latter points is $1/3$. \[fig:AEG\] ](advest.pdf){width="60.00000%"} In this section we introduce a formal framework for generating adversarial examples. Given a classification problem with data distribution ${\mathcal{P}}$ and ground truth $f^*$, an *adversarial example generator* (AEG) for a classifier $f$ is a (measurable) mapping $g:{\mathcal{X}}\to{\mathcal{X}}$ such that 1. \[g:pres\] $g$ preserves the class labels of the samples, that is, $f^*(x)=f^*(g(x))$ for ${\mathcal{P}}$-almost all $x$; 2. \[g:correct\] $g$ does not change points that are incorrectly classified by $f$, that is, $g(x)=x$ if $f(x) \neq f^*(x)$ for ${\mathcal{P}}$-almost all $x$. illustrates how an AEG works. In the literature, an adversarial example $g(x)$ is usually generated by staying in a small vicinity of the original data point $x$ (with respect to, e.g., the $2$- or the max-norm) and assuming that the resulting label of $g(x)$ is the same as that of $x$ (see, e.g., [@ExplainAdvExampl; @Carlini017a]). This foundational assumption—which is in fact a margin condition on the distribution—is captured in condition . formalizes the fact that there is no need to change samples which are already misclassified. Indeed, existing AEGs comply with this condition. The performance of an AEG is usually measured by how successfully it generates misclassified examples. 
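On a finite input space, the two defining conditions of an AEG can be checked directly by enumeration. A minimal Python sketch (the toy classifier, ground truth, and generator below are hypothetical, chosen only to illustrate the conditions):

```python
def is_valid_aeg(g, f, f_star, points):
    """Check the two defining AEG conditions on a finite set of inputs:
    (1) g preserves ground-truth labels: f*(g(x)) == f*(x);
    (2) g leaves misclassified points unchanged: g(x) == x if f(x) != f*(x)."""
    for x in points:
        if f_star(g(x)) != f_star(x):          # condition (1)
            return False
        if f(x) != f_star(x) and g(x) != x:    # condition (2)
            return False
    return True

# toy integer example: ground truth splits at 0, the classifier (wrongly) at 1
points = [-2, -1, 1, 2]
f_star = lambda x: int(x > 0)
f = lambda x: int(x > 1)                       # misclassifies x = 1
# move a correctly classified point one step left if its true label survives
g = lambda x: x - 1 if f(x) == f_star(x) and f_star(x - 1) == f_star(x) else x
```

Here `g(2) == 1` is a successful adversarial example (correctly classified before, misclassified after), while the already misclassified point `1` is left in place, as condition (2) requires.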
Accordingly, we call a point $g(x)$ a *successful adversarial example* if $x$ is correctly classified by $f$ and $f(g(x))\neq f(x)$ (i.e., $L(f,x)=0$ and $L(f,g(x))=1$). In the development of our AEGs for image recognition tasks, we will make use of another condition. For simplicity, we formulate this condition for distributions ${\mathcal{P}}$ that have a density $\rho$ with respect to the uniform measure on ${\mathcal{X}}$, which is assumed to exist (notable cases are when ${\mathcal{X}}$ is finite, or ${\mathcal{X}}=[0,1]^D$ or when ${\mathcal{X}}= {\mathbb{R}}^D$; in the latter two cases the uniform measure is the Lebesgue measure). The assumption states that the AEG needs to be *density-preserving* in the sense that 3. \[g:measure\] $\rho(x) = \rho(g(x))$ for ${\mathcal{P}}$-almost all $x$. Note that a density-preserving map may not be measure-preserving (the latter means that for all measurable $A\subset {\mathcal{X}}$, ${\mathcal{P}}(A) = {\mathcal{P}}(g(A))$). We expect to hold when $g$ perturbs its input by a small amount and if $\rho$ is sufficiently smooth. The assumption is reasonable for, e.g., image recognition problems (at least in a relaxed form, $\rho(x) \approx \rho(g(x))$) where we expect that very close images will have a similar likelihood as measured by $\rho$. An AEG employing image translations, which satisfies , will be introduced in . Both and can be relaxed (to a soft margin condition or allowing a slight change in $\rho$, resp.) at the price of an extra error term in the analysis that follows. For a fixed AEG $g:{\mathcal{X}}\to {\mathcal{X}}$, let ${\mathcal{P}}_g$ denote the distribution of $g(X)$ where $X\sim {\mathcal{P}}$ (${\mathcal{P}}_g$ is known as the pushforward measure of ${\mathcal{P}}$ under $g$). Further, let $h_g = \frac{d{\mathcal{P}}}{d{\mathcal{P}}_g}$ on $E = \{x\,:\, L(f,x)\ne 0\}$ and arbitrary otherwise. 
It is easy to see that $h_g$ is well-defined on $E$ and $h_g(x) \le 1$ for all $x\in E$: This follows from the fact that ${\mathcal{P}}(A) \le {\mathcal{P}}_g(A)$ for any measurable $A\subset E$, which holds since $$\begin{aligned} {\mathcal{P}}_g(A) = {\mathbb{P}(g(X)\in A)}\ge {\mathbb{P}(g(X)\in A, X\in E)} = {\mathbb{P}(X\in A)} = {\mathcal{P}}(A),\end{aligned}$$ where the second to last equality holds because $g(X)=X$ for any $X \in E$ under condition . One may think that implies that $h_g(x)=1$ for all $x\in E$. However, this does not hold. For example, if ${\mathcal{P}}$ is a uniform distribution, any $g:{\mathcal{X}}\to \operatorname{supp}{\mathcal{P}}$ satisfies , where $\operatorname{supp}{\mathcal{P}}\subset {\mathcal{X}}$ denotes the support of the distribution ${\mathcal{P}}$. This is also illustrated in . Risk estimation via adversarial examples {#sec:change-of-measure} ---------------------------------------- Combining the ideas of this section so far, we now introduce unbiased risk estimates based on adversarial examples. Our goal is to estimate the error-rate of $f$ through an adversarially generated sample $S'=\{(X_1',Y_1),\ldots,(X_m',Y_m)\}$ obtained through an AEG $g$, where $X_i'=g(X_i)$ with $X_1,\ldots,X_m$ drawn independently from ${\mathcal{P}}$ and $Y_i=f^*(X_i)$. Since $g$ satisfies by definition, the original example $X_i$ and the corresponding adversarial example $X'_i$ have the same label $Y_i$. Recalling that $h_g = {\mathrm{d}{\mathcal{P}}}/{\mathrm{d}{\mathcal{P}}}_g\le 1$ on $E=\{x\in {\mathcal{X}}\,:\, L(f,x)=1\}$, one can easily show that the importance weighted adversarial estimate $$\begin{aligned} \label{eq:hR2} {\widehat{R}}_g(f)=\frac{1}{m} \sum_{i=1}^m {\mathbb{I}(f(X_i') \neq Y_i)} h_g(X_i')\end{aligned}$$ obtained from for the adversarial sample $S'$ has smaller variance than that of the empirical average ${\widehat{R}}_S(f)$, while both are unbiased estimates of $R(f)$. 
Recall that both ${\widehat{R}}_g(f)$ and ${\widehat{R}}_S(f)$ are unbiased estimates of $R(f)$ with expectation ${\mathbb{E}}[{\widehat{R}}_g(f)]={\mathbb{E}}[{\widehat{R}}_S(f)]=R(f)$, and so $$\begin{aligned} {\mathbb{V}}[{\widehat{R}}_g(f)] &= \frac{1}{m}\left( {\mathbb{E}}[L(f,g(X))^2 h_g(g(X))^2] - R(f)^2 \right) \\ & \le \frac{1}{m} \left( {\mathbb{E}}[L(f,g(X)) h_g(g(X))] - R^2(f) \right) = \frac{1}{m}\left( R(f) - R^2(f) \right) = {\mathbb{V}}[{\widehat{R}}_S(f)]~.\end{aligned}$$ Intuitively, the more successful the AEG is (i.e., the more classification error it induces), the smaller the variance of the estimate ${\widehat{R}}_g(f)$ becomes. Detecting overfitting {#sec:detection} ===================== In this section we show how the risk estimates introduced in the previous section can be used to test the *independence hypothesis* that 1. \[H\] the sample $S$ and the model $f$ are independent. If holds, ${\mathbb{E}}[{\widehat{R}}_g(f)]={\mathbb{E}}[{\widehat{R}}_S(f)]=R(f)$, and so the difference $T_{S,g}(f)={\widehat{R}}_g(f) - {\widehat{R}}_S(f)$ is expected to be small. On the other hand, if $f$ is overfitted to the dataset $S$ (in which case ${\widehat{R}}_S(f)<R(f)$), we expect ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$ to behave differently (the latter being less sensitive to overfitting) since (i) ${\widehat{R}}_g(f)$ depends also on examples previously unseen by the training procedure; (ii) the adversarial transformation $g$ aims to increase the loss, countering the effect of overfitting; (iii) especially in high dimensional settings, in case of overfitting one may expect that there are misclassified points very close to the decision boundary of $f$ which can be found by a carefully designed AEG. Therefore, intuitively, can be rejected if $|T_{S,g}(f)|$ exceeds some appropriate threshold. 
Test based on confidence intervals {#sec:basic-test} ---------------------------------- The simplest way to determine the threshold is to construct confidence intervals for these estimators using concentration inequalities. Under , standard concentration inequalities, such as the Chernoff or empirical Bernstein bounds [@BoLuMa13], can be used to quantify how fast ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$ concentrate around the expected error $R(f)$. In particular, we use the following empirical Bernstein bound [@EmpBernStop]: Let $\bar{\sigma}_S^2=(1/m) \sum_{i=1}^m (L(f,X_i)-{\widehat{R}}_S(f))^2$ and $\bar{\sigma}_g^2=(1/m) \sum_{i=1}^m (L(f,g(X_i))h_g(g(X_i))-{\widehat{R}}_g(f))^2$ denote the empirical variances of $L(f,X_i)$ and $L(f,g(X_i))h_g(g(X_i))$, respectively. Then, for any $0<\delta\le1$, with probability at least $1-\delta$, $$\label{eq:bernstein1} |{\widehat{R}}_S(f) - R(f)| \le B(m,\bar{\sigma}^2_S,\delta,1), $$ where $B(m,\sigma^2,\delta,1)=\sqrt{\frac{2\sigma^2\ln(3/\delta)}{m}}+\frac{3 \ln(3/\delta)}{m}$ and we used the fact that the range of $L(f,x)$ is $1$ (the last parameter of $B$ is the range of the random variables considered). Similarly, with probability at least $1-\delta$, $$\label{eq:bernstein2} |{\widehat{R}}_g(f) - R(f)| \le B(m,\bar{\sigma}^2_g,\delta,1).$$ It follows trivially from the union bound that if the independence hypothesis holds, the above two confidence intervals $[{\widehat{R}}_S(f) - B(m,\bar{\sigma}^2_S,\delta,1), {\widehat{R}}_S(f)+B(m,\bar{\sigma}^2_S,\delta,1)]$ and $[{\widehat{R}}_g(f) - B(m,\bar{\sigma}^2_g,\delta,1), {\widehat{R}}_g(f)+B(m,\bar{\sigma}^2_g,\delta,1)]$, which both contain $R(f)$ with probability at least $1-\delta$, intersect with probability at least $1-2\delta$. On the other hand, if $f$ and $S$ are not independent, the performance guarantees and may be violated and the confidence intervals may become disjoint. 
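The bound $B(m,\sigma^2,\delta,U)$ and the interval-overlap check can be sketched as follows (a minimal Python helper; the function names are ours, not the authors' code):

```python
import math

def bernstein_bound(m, sigma2, delta, U):
    """Empirical Bernstein bound B(m, sigma^2, delta, U): deviation of the
    empirical mean of m i.i.d. variables with range U and empirical variance
    sigma^2 from its expectation, holding with probability >= 1 - delta."""
    return (math.sqrt(2.0 * sigma2 * math.log(3.0 / delta) / m)
            + 3.0 * U * math.log(3.0 / delta) / m)

def intervals_disjoint(R_S, var_S, R_g, var_g, m, delta):
    """Basic test: True iff the two (1 - delta) confidence intervals around
    R_hat_S(f) and R_hat_g(f) do not intersect, i.e. the difference of the
    estimates exceeds the sum of the two radii (range U = 1 for 0-1 losses)."""
    b_S = bernstein_bound(m, var_S, delta, 1.0)
    b_g = bernstein_bound(m, var_g, delta, 1.0)
    return abs(R_g - R_S) > b_S + b_g
```

`intervals_disjoint` returning `True` corresponds exactly to the disjoint-interval event just described.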
If this is detected, we can reject the independence hypothesis at a confidence level $1-2\delta$ or, equivalently, with $p$-value $2\delta$. In other words, we reject if the absolute value of the difference of the estimates $T_{S,g}(f)={\widehat{R}}_g(f) - {\widehat{R}}_S(f)$ exceeds the threshold $B(m,\bar{\sigma}^2_S,\delta,1)+ B(m,\bar{\sigma}^2_g,\delta,1)$ (note that ${\mathbb{E}}[T_{S,g}(f)]=0$ if $S$ and $f$ are independent). Pairwise test {#sec:pairwise-test} ------------- A smaller threshold for $|T_{S,g}(f)|$, and hence a more effective independence test, can be devised if instead of independently estimating the behavior of ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$, one utilizes the correlation between them. Indeed, $T_{S,g}(f) = (1/m)\sum_{i=1}^m T_{i,g}(f)$ where $$\begin{aligned} \label{eq:Ti} T_{i,g}(f) &= L(f,g(X_i))h_g(g(X_i))-L(f,X_i)\end{aligned}$$ and the two terms in $T_{i,g}(f)$ have the same mean and are typically highly correlated by the construction of $g$. Thus, we can apply the empirical Bernstein bound [@EmpBernStop] to the pairwise differences $T_{i,g}(f)$ to set a tighter threshold in the test: if the independence hypothesis holds (i.e., $S$ and $f$ are independent), then for any $0<\delta<1$, with probability at least $1-\delta$, $$\label{eq:pairwise} |T_{S,g}(f)| \le B(m,\bar{\sigma}_T^2,\delta,U)$$ with $B(m,\sigma^2,\delta,U)=\sqrt{\frac{2\sigma^2\ln(3/\delta)}{m}}+\frac{3U \ln(3/\delta)}{m}$, where $\bar{\sigma}_T^2 = (1/m)\sum_{i=1}^m (T_{i,g}(f)- T_{S,g}(f))^2$ is the empirical variance of the $T_{i,g}(f)$ terms and $U = \sup T_{i,g}(f) - \inf T_{i,g}(f)$; we also used the fact that the expectation of each $T_{i,g}(f)$, and hence that of $T_{S,g}(f)$, is zero. Since $h_g \le 1$ if $L(f,x)=1$ (as discussed in ), it follows that $U \le 2$, but further assumptions (such as $g$ being density preserving) can result in tighter bounds. 
This leads to our pairwise dependence detection method: *if $|T_{S,g}(f)| > B(m,\bar{\sigma}_T^2,\delta,2)$, reject at a confidence level $1-\delta$ ($p$-value $\delta$).* For a given statistic $(|T_{S,g}(f)|, \bar{\sigma}_T^2)$, the largest confidence level (smallest $p$-value) at which can be rejected can be calculated by setting the expression $|T_{S,g}(f)|-B(m,\bar{\sigma}_T^2,\delta,2)$ to zero and solving for $\delta$. This leads to the following formula for the $p$-value (if the solution is larger than $1$, which happens when the bound is loose, $\delta$ is capped at $1$): $$\label{eq:delta} \delta=\min\left\{ 1, 3e^{-\frac{m}{9 U^2}\left(\bar{\sigma}_T^2 + 3U |T_{S,g}(f)| - \bar{\sigma}_T\sqrt{\bar{\sigma}_T^2 + 6 U |T_{S,g}(f)|}\right)}\right\}.$$ Note that in order for the test to work well, we not only need the test statistic $T_{S,g}(f)$ to have a small variance in case of independence (this could be achieved if $g$ were the identity), but we also need the estimators ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$ to behave sufficiently differently if the independence assumption is violated. The latter behavior is encouraged by stronger AEGs, as we will show empirically in (see in particular). Dependence detector for randomized training {#sec:detection-randomised} ------------------------------------------- The dependence between the model and the test set can arise from (i) selecting the “best” random seed in order to improve the test set performance and/or (ii) tweaking the model architecture (e.g., neural network structure) and hyperparameters (e.g., learning-rate schedule). If one has access to a single instance of a trained model, these two sources cannot be disentangled. However, if the model architecture and training procedure are fully specified and computational resources are adequate, it is possible to isolate (i) and (ii) by retraining the model multiple times and calculating the $p$-value for every training run separately. 
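The closed-form $p$-value above, computed once per training run, can be implemented directly. A sketch (the function name is ours):

```python
import math

def pairwise_p_value(T_abs, m, sigma2_T, U=2.0):
    """Smallest delta at which the pairwise test rejects: the solution of
    |T_{S,g}(f)| = B(m, sigma_T^2, delta, U) for delta, capped at 1."""
    sigma = math.sqrt(sigma2_T)
    expo = (m / (9.0 * U ** 2)) * (
        sigma2_T + 3.0 * U * T_abs
        - sigma * math.sqrt(sigma2_T + 6.0 * U * T_abs))
    return min(1.0, 3.0 * math.exp(-expo))
```

As a round-trip check: plugging the returned $\delta$ (when below the cap) back into $B(m,\bar\sigma_T^2,\delta,U)$ recovers $|T_{S,g}(f)|$ up to floating-point rounding.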
Assuming $N$ models, let $f_j, j=1,\ldots,N$ denote the $j$-th trained model and $p_j$ the $p$-value calculated using the pairwise independence test (i.e., from Eq. \[eq:delta\] in ). We can investigate the degree to which (i) occurs by comparing the $p_j$ values with the corresponding test set error rates $R_S(f_j)$. To investigate whether (ii) occurs, we can average over the randomness of the training runs. For every example $X_i \in S$, consider the average test statistic $\bar{T}_i = \frac{1}{N}\sum_{j=1}^{N} T_{i,g_j}(f_j)$, where $T_{i,g_j}(f_j)$ is the statistic calculated for example $X_i$ and model $f_j$ with AEG $g_j$ selected for model $f_j$ (note that AEGs are model-dependent by construction). If, for each $i$ and $j$, the random variables $T_{i,g_j}(f_j)$ are independent, then so are the $\bar{T}_i$ (for all $i$). Hence, we can apply the pairwise dependence detector with $\bar{T}_i$ instead of $T_{i,g}(f)$, using the average $\bar{T}_{S} = (1/m) \sum_{i=1}^m \bar{T}_i$ with empirical variance $\bar{\sigma}^2_{T,N} = (1/m) \sum_{i=1}^m (\bar{T}_i - \bar{T}_{S})^2$, giving a single $p$-value $p_N$. If the training runs vary enough in their outcomes, different models $f_j$ err on different data points $X_i$, leading to $\bar{\sigma}^2_{T,N} < \bar{\sigma}_T^2$, and therefore strengthening the power of the dependence detector. For brevity, we call this independence test an $N$-model test. Synthetic experiments {#sec:synthetic} ===================== ![image](both_models_p_value_w_cis_multi){width="0.5\columnwidth"} First we verify the effectiveness of our method on a simple linear classification problem. Due to space limitations, we only convey high-level results here; details are given in . We assume that the data is linearly separable with a margin and the density $\rho$ is known. 
We consider linear classifiers of the form $f(x) = \operatorname{sgn}(w^\top x + b)$ trained with the cross-entropy loss $c$, and we employ a one-step gradient method (which is an $L_2$ version of the fast gradient-sign method of [@ExplainAdvExampl; @TransMLAdvSamp]) to define our AEG $g$, which tries to modify a correctly classified point $x$ with label $y$ in the direction of the gradient of the cost function, yielding $x'=x - {{\varepsilon}}y w/ \| w \|_2$, where ${{\varepsilon}}\ge 0$ is the strength of the attack. To comply with the requirements for an AEG, we define $g$ as follows: $g(x)=x'$ if $L(f,x)=0$ and $f^*(x)=f^*(x')$ (corresponding to and , respectively), while $g(x)=x$ otherwise. Therefore, if $x'$ is misclassified by $f$, $x$ and $x'$ are the only points mapped to $x'$ by $g$. This simple form of $g$ and the knowledge of $\rho$ allow us to compute the density $h_g$, making it easy to compute the adversarial error estimate . shows the average $p$-values produced by our $N$-model independence test for a dependent (solid lines) and an independent (dashed lines) test set. It can be seen that in the dependent case the test can reject independence with high confidence for a large range of attack strengths ${{\varepsilon}}$, while the independence hypothesis is not rejected in the case of true independence. More details (including why only a range of ${{\varepsilon}}$ is suitable for detecting overfitting) are given in . Testing overfitting on ImageNet {#sec:numerical-experiments} =============================== In the previous section we showed that the proposed adversarial-example-based dependence test works for a synthetic problem where the densities can be computed exactly. In this section we apply our estimates to a popular image classification benchmark, ImageNet [@ImageNet]; here the main issue is to find sufficiently strong AEGs that make computing the corresponding densities possible. 
To facilitate the computation of the density $h_g$, we only consider density-preserving AEGs as defined by (recall that is different from requiring $h_g = 1$). Since in and , $h_g(x)$ is multiplied by $L(f,x)$, we only need to determine the density $h_g$ for data points that are misclassified by $f$. AEGs based on translations {#sec:translation} -------------------------- To satisfy , we implement the AEG using translations of images, which have recently been proposed as means of generating adversarial examples [@azulay2018]. Although relatively weak, such attacks fit our needs well: unless the images are procedurally centered, it is reasonable to assume that translating them by a few pixels does not change their likelihood.[^4] We also make the natural assumption that the small translations used do not change the true class of an image. Under these assumptions, translations by a few pixels satisfy conditions and . An image-translating function $g$ is a valid AEG if it leaves all misclassified images in place (to comply with ), and either leaves a correctly classified image unchanged or applies a small translation. The main benefit of using a translational AEG $g$ (with bounded translations) is that its density $h_g(x)$ for an image $x$ can be calculated exactly by considering the set of images $x'$ that can be mapped to $x$ by $g$ (this is due to our assumption ). We considered multiple ways for constructing translational AEGs. The best version (selected based on initial evaluations on the ImageNet training set), which we called the *strongest perturbation*, seeks a non-identical neighbor of a correctly classified image $x$ (neighboring images are the ones that are accessible through small translations) that causes the classifier to make an error with the largest confidence. Formally, we model images as 3D tensors in $[0, 1]^{W \times H \times C}$ space, where $C=3$ for RGB data, and $W$ and $H$ are the width and height of the images, respectively. 
Let $\tau_v(x)$ denote the translation of an image $x$ by $v \in \mathbb{Z}^2$ pixels in the (X, Y) plane (here $\mathbb{Z}$ denotes the set of integers). To control the amount of change, we limit the magnitude of translations and allow $v \in {\mathcal{V}}_{\varepsilon}=\{u \in \mathbb{Z}^2: u\neq (0,0), \|u\|_\infty \le {\varepsilon}\}$ only, for some fixed positive ${\varepsilon}$. Thus, we consider AEGs of the form $g(x) \in \{\tau_v(x): v \in {\mathcal{V}}_{\varepsilon}\} \cup \{x\}$ if $f(x) = f^*(x)$ and $g(x)=x$ otherwise (if $x$ is correctly classified, we attempt to translate it to find an adversarial example in $\{\tau_v(x): v \in {\mathcal{V}}_{\varepsilon}\}$ which is misclassified by $f$, but $x$ is left unchanged if no such point exists). Denoting the density of the pushforward measure ${\mathcal{P}}_g$ by $\rho_g$, for any misclassified point $x$, $$\rho_g(x) = \rho(x) + \sum_{v \in {\mathcal{V}}_{\varepsilon}} \rho(\tau_{-v}(x)) {\mathbb{I}(g(\tau_{-v}(x)) = x)} = \rho(x)\left(1+ \sum_{v \in {\mathcal{V}}_{\varepsilon}} {\mathbb{I}(g(\tau_{-v}(x)) = x)}\right) \vspace{-0.1cm}$$ where the second equality follows from . Therefore, the corresponding density is $$\label{eq:hg} h_g(x) =1/( 1 + n(x) ) \, $$ where $n(x)=\sum_{v \in {\mathcal{V}}_{\varepsilon}} {\mathbb{I}(g(\tau_{-v}(x)) = x)}$ is the number of neighboring images which are mapped to $x$ by $g$. Note that given $f$ and $g$, $n(x)$ can be easily calculated by checking all possible translations of $x$ by $-v$ for $v \in {\mathcal{V}}_{\varepsilon}$. It is easy to extend the above to non-deterministic perturbations, defined as distributions over AEGs, by replacing the indicator with its expectation ${\mathbb{P}}(g(\tau_{-v}(x)) = x | x,v)$ with respect to the randomness of $g$, yielding $$\label{eq:hgnd} h_g(x) = \frac{1}{1 + \sum_{v \in {\mathcal{V}}_{\varepsilon}} {\mathbb{P}}(g(\tau_{-v}(x)) = x | x,v)}~.$$ If $g$ is deterministic, we have $h_g(x) \le 1/2$ for any successful adversarial example $x$. 
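The count $n(x)$ and the density $h_g(x)=1/(1+n(x))$ can be sketched as follows. Here a wrap-around `np.roll` shift is a toy stand-in for the lossless crop-window shift used in the experiments, and the AEG `g` is passed in as a black box (all names are ours):

```python
import itertools
import numpy as np

def shifts(eps):
    """All non-zero translations v with ||v||_inf <= eps."""
    return [v for v in itertools.product(range(-eps, eps + 1), repeat=2)
            if v != (0, 0)]

def translate(x, v):
    # toy wrap-around shift standing in for the lossless crop-window shift
    return np.roll(x, v, axis=(0, 1))

def density_h_g(x, g, eps):
    """h_g(x) = 1 / (1 + n(x)) for a misclassified image x, where n(x)
    counts neighbours tau_{-v}(x) that the AEG g maps onto x."""
    n = sum(np.array_equal(g(translate(x, (-v[0], -v[1]))), x)
            for v in shifts(eps))
    return 1.0 / (1 + n)

x = np.array([[1, 2], [3, 4]])   # tiny hypothetical "image"
```

For the identity map no neighbour is moved onto $x$, so $h_g(x)=1$; for a (degenerate) map sending every neighbour to $x$, all $(2\varepsilon+1)^2-1$ neighbours count, giving $h_g(x)=1/9$ when $\varepsilon=1$.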
Hence, for such $g$, the range $U$ of the random variables $T_i$ defined in has a tighter upper bound of $3/2$ instead of $2$ (as $T_i \in [-1,1/2]$), leading to a tighter bound in and a stronger pairwise independence test. In the experiments, we use this stronger test. We provide additional details about the translational AEGs used in . Tests of ImageNet models {#sec:vgg16-imagenet} ------------------------ We applied our test to check if state-of-the-art classifiers for the ImageNet dataset [@ImageNet] have been overfitted to the test set. In particular, we use the VGG16 classifier of [@Simonyan15] and the Resnet50 classifier of [@he2016deep]. Due to computational considerations, we only analyzed a single trained VGG16 model, while the Resnet50 model was retrained 120 times. The models were trained using the parameters recommended by their respective authors. The preprocessing procedure of both architectures involves rescaling every image so that the smaller of its width and height is 256 and then cropping it centrally to size $224\times 224$. This means that translating the image by $v$ can be trivially implemented by shifting the cropping window by $-v$ without any loss of information for $\| v \|_\infty \le 16$, because we have enough extra pixels outside the original, centrally located cropping window. This implies that we can compute the densities of the translational AEGs for any $\| v \|_\infty \le {{\varepsilon}}= \floor{16/3} = 5$ (see for detailed explanation). Because the ImageNet data collection procedure did not impose any strict requirements on centering the images [@ImageNet], it is reasonable to assume (as we do) that small (lossless) translations respect the density-preserving condition . 
In our first experiment, we applied our pairwise independence test with the AEGs described in (strongest, nearest, and the two random baselines) to all 1,271,167 training examples, as well as to a number of subsets of different sizes, selected uniformly at random without replacement. Besides this being a sanity check, we also used this experiment to select from different AEGs and to compare the performance of the pairwise independence test to the basic version of the test described in . The left graph in shows that with the “strongest perturbation”, we were able to reject independence of the trained model and the training samples at a confidence level very close to 1 when enough training samples are considered (to be precise, for the whole training set the confidence level is $99.9994\%$). Note, however, that the much weaker “smallest perturbation” AEG, as well as the random transformations, are not able to detect the presence of overfitting. At the same time, the graph on the right-hand side shows the relative strength of the pairwise independence test compared to the basic version based on independent confidence interval estimates, as described in detail in : the $97.5\%$-confidence intervals of the error estimates ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$ overlap, not allowing us to reject independence at a confidence level of $95\%$ (note that here $S$ denotes the training set). 
![$p$-values for the independence test on the ImageNet training set for different sample sizes and AEG variants (left); original and adversarial risk estimates, ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$, on the ImageNet training set with 97.5% two-sided confidence intervals for the ‘strongest attack’ AEG (right).[]{data-label="fig:imagenet-train-transl"}](p_values_1576690 "fig:"){width="40.00000%"} ![$p$-values for the independence test on the ImageNet training set for different sample sizes and AEG variants (left); original and adversarial risk estimates, ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$, on the ImageNet training set with 97.5% two-sided confidence intervals for the ‘strongest attack’ AEG (right).[]{data-label="fig:imagenet-train-transl"}](rates_1576690_strongest "fig:"){width="40.00000%"} 
On the other hand, when applied to the test set, we obtained a $p$-value of 0.96, which does not allow us to reject the independence of the trained model and the test set. This result could be explained by the test being too weak, as no overfitting is detected to the *training* set at similar sample sizes (see ), or simply by the lack of overfitting. Similar results were obtained for Resnet50, where even the $N$-model test with $N=120$ independently trained models resulted in a $p$-value of 1, not allowing us to reject independence at any confidence level. The view that no overfitting has occurred can be backed up in at least two ways: first, “manual” overfitting to the relatively *large* ImageNet test set is hard. Second, since training an ImageNet model was just too computationally expensive until quite recently, only a relatively small number of different architectures were developed for this problem, and the evolution of their design was often driven by computational efficiency on the available hardware. On the other hand, it is also possible that increasing $N$ sufficiently might show evidence of overfitting (this is left for future work). 
Conclusions {#sec:conclusions} =========== We presented a method for detecting overfitting of models to datasets. It relies on an importance-weighted risk estimate computed from a new dataset obtained by generating adversarial examples from the original data points. We applied our method to the popular ImageNet image classification task. For this purpose, we developed a specialized variant of our method for image classification that uses adversarial translations, providing arguments for its correctness. Luckily, and in agreement with other recent work on this topic [@recht2018cifar10.1; @BenRechtImageNet; @feldman2019multiclass; @mania2019modelSimilarity; @ChhaviBottou2019ColdCase], we found no evidence of overfitting of state-of-the-art classifiers to the ImageNet test set. The most challenging aspect of our method is constructing adversarial perturbations for which we can calculate the importance weights; finding stronger perturbations than the ones based on translations for image classification is an important question for the future. Another interesting research direction is to consider extensions beyond image classification, for example, by building on recent adversarial attacks for speech-to-text methods [@CaWa18], machine translation [@EbrahimiLD18] or text classification [@EbrahimiRLD18b]. Acknowledgements {#acknowledgements .unnumbered} ================ We thank J. Uesato for useful discussions and advice about adversarial attack methods and for sharing their implementations [@Uesato] with us, as well as M. Rosca and S. Gowal for help with retraining image classification models. We also thank B. O’Donoghue for useful remarks about the manuscript, and L. Schmidt for an in-depth discussion of their results on this topic. Finally, we thank D. Balduzzi, S. Legg, K. Kavukcuoglu and J. Martens for encouragement, support, lively discussions and feedback. 
Synthetic experiments {#app:synthetic} ===================== In this section, we present full details of the experiments on a simple synthetic classification problem, which we presented briefly in . These experiments illustrate the power of the method of . The advantage of the simple setup considered here is that we are able to compute the density $h_g$ in an analytic form (see for an illustration). ![The synthetic problem in the $(x_1,x_2)$-plane: the two classes $y=-1$ and $y=+1$, sample points $x_A,\ldots,x_E$ together with their images $x'_A,\ldots,x'_E$ under the AEG $g$, and the weight vector $w$.](synthetic.png) Data distribution and model --------------------------- Let ${\mathcal{X}}={\mathbb{R}}^{500}$ and consider an input distribution with a density $\rho$ that is an equally weighted mixture of two 500-dimensional isotropic truncated Gaussian distributions $N^\text{trunc}_\pm(\mu_\pm, \sigma^2 I)$ with coordinate-wise standard deviation $\sigma=\sqrt{500}$ ($I$ denotes the identity matrix of size $500\times 500$), means $\mu_\pm = [\pm 1, 0, 0, \ldots, 0]$ and densities $\rho_\pm$ truncated in the first dimension such that $\rho_+(x) = 0$ if $x_1 \le 0.025$ and $\rho_-(x) = 0$ if $x_1 \ge -0.025$. The label of an input point $x$ is $f^*(x)=\operatorname{sgn}(x_1)$, which is the sign of its first coordinate. We consider linear classifiers of the form $f(x) = \operatorname{sgn}(w^\top x + b)$ trained with the cross-entropy loss $c((w,b),x,y)= \ln (1+ e^{-y(w^\top x +b)})$ where $y=f^*(x)$. 
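A sketch of the data model above (rejection sampling enforces the truncation; the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
D, SIGMA, GAP = 500, np.sqrt(500.0), 0.025

def sample(m):
    """Draw m labelled points from the truncated-Gaussian mixture above;
    the label is f*(x) = sgn(x_1)."""
    X, y = np.empty((m, D)), np.empty(m)
    for i in range(m):
        s = rng.choice([-1.0, 1.0])      # pick the mixture component
        while True:                      # rejection step for the truncation
            x = rng.normal(scale=SIGMA, size=D)
            x[0] += s                    # mean mu_+/- = [+-1, 0, ..., 0]
            if s * x[0] > GAP:
                break
        X[i], y[i] = x, s
    return X, y

def xent_loss(w, b, x, y):
    """Cross-entropy loss c((w, b), x, y) = ln(1 + exp(-y (w^T x + b)))."""
    return float(np.log1p(np.exp(-y * (x @ w + b))))
```

Every sampled point lies on the correct side of the margin, so $\operatorname{sgn}(x_1)$ always matches the label, and at the zero classifier the loss is $\ln 2$.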
We employ a one-step gradient method (which is an $L_2$ version of the fast gradient-sign method of [@ExplainAdvExampl; @TransMLAdvSamp]) to define our AEG $g$, which tries to modify a correctly classified point $x$ with label $y$ in the direction of the gradient of the cost function $c$: $x'=x + {{\varepsilon}}\nabla_{x} c((w,b),x,y) / \| \nabla_{x} c((w,b),x,y) \|_2$ for some ${{\varepsilon}}>0$. For our specific choice of $c$, the above simplifies to $x'=x - {{\varepsilon}}y w/ \| w \|_2$. To comply with the requirements for an AEG, we define $g$ as follows: $g(x)=x'$ if $L(f,x)=0$ and $f^*(x)=f^*(x')$ (corresponding to and , respectively), while $g(x)=x$ otherwise. Therefore, if $x'$ is misclassified by $f$, $x$ and $x'$ are the only points mapped to $x'$ by $g$. Thus, the density at $x'$ after the transformation $g$ is $\rho'(x')=\rho(x) + \rho(x')(1-L(f,x)) {\mathbb{I}(f^*(x)=f^*(x'))}$ and $$h_g(x') = \frac{\rho(x')}{\rho'(x')}=\frac{\rho(x')}{ \rho(x') + \rho(x) (1-L(f,x)) {\mathbb{I}(f^*(x)=f^*(x'))}}$$ (note that ${\mathbb{I}(L(f,x)=0)}=1-L(f,x)$).

Experiment setup
----------------

We present two experiments showing the behavior of our independence test: one where the training and test sets are independent, and another where they are not. In the first experiment a linear classifier was trained on a training set ${S_\text{Tr}}$ of size 500 for 50,000 steps using the RMSProp optimizer [@Tieleman2012] with batch size 100 and learning rate 0.01, obtaining zero (up to numerical precision) final training loss $c$ and, consequently, 100% prediction accuracy on the training data. Then the trained classifier was tested on a large test set ${S_\text{Te}}$ of size 10,000.[^5] Both sets were drawn independently from $\rho$ defined above.
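The AEG $g$ and the weight $h_g$ defined in the previous subsection can be written out directly. Because the two truncated components are symmetric, their normalization constants agree and cancel in the ratio $h_g$, so unnormalized densities suffice. A minimal NumPy sketch (hypothetical helper names, not the paper's code; `h_g` covers the main cases used by the estimator, i.e., successfully perturbed points and untouched misclassified points):

```python
import numpy as np

MARGIN = 0.025  # truncation threshold in the first coordinate

def rho_unnorm(x):
    """Mixture density rho up to a common constant (enough for h_g)."""
    d = x.shape[0]
    two_sigma_sq = 2.0 * d                 # 2 * sigma^2 with sigma^2 = d
    val = 0.0
    if x[0] > MARGIN:                      # support of the "+" component
        mu = np.zeros(d); mu[0] = 1.0
        val += np.exp(-np.sum((x - mu) ** 2) / two_sigma_sq)
    if x[0] < -MARGIN:                     # support of the "-" component
        mu = np.zeros(d); mu[0] = -1.0
        val += np.exp(-np.sum((x - mu) ** 2) / two_sigma_sq)
    return val

def aeg(x, y, w, b, eps):
    """g(x): one-step L2 gradient perturbation x' = x - eps*y*w/||w||,
    applied only if x is correctly classified and the true label of x'
    agrees with that of x; otherwise g(x) = x."""
    x_prime = x - eps * y * w / np.linalg.norm(w)
    correctly_classified = np.sign(w @ x + b) == y          # L(f, x) = 0
    label_preserved = np.sign(x_prime[0]) == np.sign(x[0])  # f*(x) = f*(x')
    return x_prime if (correctly_classified and label_preserved) else x

def h_g(x, gx, L_fx):
    """h_g(g(x)) = rho(g(x)) / (rho(g(x)) + rho(x)*(1 - L(f,x))*I(f*(x)=f*(g(x))))."""
    same_true_label = float(np.sign(gx[0]) == np.sign(x[0]))
    num = rho_unnorm(gx)
    return num / (num + rho_unnorm(x) * (1.0 - L_fx) * same_true_label)
```

For example, with $w$ the first standard basis vector, a correctly classified point with $x_1 = 2$ and $y = +1$ is moved to $x_1 = 1.5$, and its weight is slightly above $1/2$ because the perturbed point is closer to the mean $\mu_+$ than the original.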
We used a range of ${{\varepsilon}}$ values matched to the scale of the data distribution: from $10^{-2}$, which is the order of magnitude of the margin between the two classes (0.05), to $10^2$, which is the order of magnitude of the width of the Gaussian distribution used for each class ($\sigma=\sqrt{500}$).

In the second experiment we consider the situation where the training and test sets are not independent. To enhance the effects of this dependence, the setup was modified to make the training process more amenable to overfitting by simulating a situation in which the model has a wrong bias (this may happen in practice if a wrong architecture or data preprocessing method is chosen, which, despite the modeler’s best intentions, worsens the performance). Specifically, during training we added a penalty term $10^4 w_1^2$ to the training loss $c$, decreased the size of the test set to 1000, and used 50% of the test data for training (the final penalized training loss was 0.25, with 100% prediction accuracy on the training set). Note that the small training set and the large penalty on $w_1$ yield classifiers that are essentially independent of the only informative feature $x_1$ (recall that the true label of a point $x$ is $\operatorname{sgn}(x_1)$) and overfit to the noise in the data, resulting in a true model risk $R(f) \approx 1/2$.
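A simplified version of the training procedure can be sketched as follows (plain mini-batch gradient descent standing in for RMSProp, and a generic `penalty` argument for the $10^4 w_1^2$ term of the second experiment; an illustrative sketch, not the paper's code):

```python
import numpy as np

def train_linear(X, y, steps=2000, lr=0.1, penalty=0.0, batch=100, seed=0):
    """Minimize the cross-entropy loss c((w,b),x,y) = ln(1 + exp(-y(w.x + b)))
    by mini-batch gradient descent, optionally with the bias-inducing
    penalty `penalty * w_1^2` used in the second experiment."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch)
        Xb, yb = X[idx], y[idx]
        m = yb * (Xb @ w + b)
        # d/dm ln(1 + e^{-m}) = -1 / (1 + e^{m})
        coef = -yb / (1.0 + np.exp(m))
        gw = Xb.T @ coef / batch
        gb = coef.mean()
        gw[0] += 2.0 * penalty * w[0]      # gradient of penalty * w_1^2
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Note that with a large penalty, plain gradient descent needs a small step size to remain stable (the update on $w_1$ diverges once $2\,\mathrm{lr}\cdot\mathrm{penalty} > 2$), whereas RMSProp's adaptive scaling handles this automatically.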
Results
-------

![Risk and overfitting metrics for a synthetic problem with linear classifiers as a function of the perturbation strengths ${{\varepsilon}}$ (log scale). Left: unbiased model tested on a large, independent test set (in this case ${\widehat{R}}_S(f) \approx {\widehat{R}}_g(f) \approx R(f)$); right: trained model overfitted to the test set (${\widehat{R}}_S(f) \le {\widehat{R}}_g(f)$ while both are smaller than $R(f)$). *First row*: Average $p$-value $\delta$ for the pairwise independence test with over $100$ runs ($N=1$) or the $N$-model independence test ($N > 1$). The bounds plotted are either empirical $95\%$ two-sided confidence intervals ($N \le 2$) or ranges between minimum and maximum value ($N=10, 25$). *Second row*: Empirical two-sided $97.5\%$ confidence intervals for the empirical test error rate ${\widehat{R}}_S(f)$ and the adversarial risk estimate ${\widehat{R}}_g(f)$. On the left, $R(f) \approx {\widehat{R}}_S(f)$, while $R(f)$ is shown separately on the right. *Third row*: Average densities (Radon-Nikodym derivatives) for originally misclassified points and for the new data points obtained by successful adversarial transformations (with empirical 97.5% two-sided confidence intervals). *Fourth row*: The empirical test error rate ${\widehat{R}}_S(f)$ and the adversarial risk estimate ${\widehat{R}}_g(f)$ for a single realization with $97.5\%$ two-sided confidence intervals computed from Bernstein’s inequality, the adversarial error rate ${\widehat{R}}_{S'}(f)$, and the expected error $R(f)$ (on the right, on the left $R(f) \approx {\widehat{R}}_S(f)$). *Fifth row*: Histograms of $p$-values for selected ${{\varepsilon}}$ values over $100$ runs. \[fig:linear-model\]](good_model_p_value_w_cis_multi "fig:"){width="42.00000%"}
![](bad_model_p_value_w_cis_multi "fig:"){width="42.00000%"}
![](good_model_seeded "fig:"){width="42.00000%"}
![](bad_model_seeded "fig:"){width="42.00000%"}
![](good_model_rn_der_w_cis "fig:"){width="42.00000%"}
![](bad_model_rn_der_w_cis "fig:"){width="42.00000%"}
![](good_model "fig:"){width="42.00000%"}
![](bad_model "fig:"){width="42.00000%"}
![](good_model_p_value_hist "fig:"){width="42.00000%"}
![](bad_model_p_value_hist "fig:"){width="42.00000%"}
The results of the two experiments are shown in , plotted against different perturbation strengths: the left column corresponds to the first experiment and the right column to the second.
The first row presents the $p$-values for rejecting the independence hypothesis, calculated by repeating the experiment (sampling data and training the classifier) $100$ times and applying the single-model (, labelled as $N=1$ in the plots) and $N$-model (, labelled as $N=2, 10, 25, 100$ in the plots) independence tests, and taking the average over models (or model sets of size $N$) for each ${{\varepsilon}}$. We also plot empirical 95% two-sided confidence intervals ($N \le 2$) or, due to the limited number of $p$-values available after dividing 100 runs into disjoint bins of size $N \ge 10$, ranges between the minimum and maximum value ($N=10, 25$). For all methods of detecting dependence, it can be seen that in the independent case the test correctly fails to reject the independence hypothesis (the average $p$-value is very close to $1$, although in some runs it can drop to as low as $0.5$). On the other hand, for $10 \le {{\varepsilon}}\le 50$, the non-independent model failed the independence test at confidence level $1 - \delta \approx 100\%$; hence, in this range of ${{\varepsilon}}$ our independence test reliably detects overfitting.

In fact, it is easy to argue that our test should only work for a limited range of ${{\varepsilon}}$, that is, it should not reject independence for too small or too large values of ${{\varepsilon}}$. First we consider the case of small ${{\varepsilon}}$ values. Notice that except for points $g(x)$ that are ${{\varepsilon}}$-close (in $L_2$-norm) to the true decision boundary or to the decision boundary of $f$, $g$ is invertible at $g(x)$: if $g(x)$ is correctly classified and is ${{\varepsilon}}$-away from the true decision boundary, there is exactly one point, $x$, which is translated to $g(x)$, while if $g(x)$ is incorrectly classified and ${{\varepsilon}}$-away from the decision boundary of $f$, no translation leads to $g(x)$ and $x=g(x)$; all other points are ${{\varepsilon}}$-close to the decision boundary of either $f$ or $f^*$.
Thus, since $\rho$ is bounded, $g$ is invertible on a set of at least $1-O({{\varepsilon}})$ probability (according to $\rho$). When ${{\varepsilon}}\to 0$, $g(x) \to x$, and so $\rho(g(x)) \to \rho(x)$ for all points $x$ with $|x_1| \neq 0.025$ (since $\rho$ is continuous at all such $x$), implying $h_g(g(x)) \approx 1$ at these points. It also follows that $L(f,x) \neq L(f,g(x))$ can only happen on a set of points with $O({{\varepsilon}})$ $\rho$-probability. This means that $L(f,g(x))h_g(g(x)) \approx L(f,x)$ on a set of $1-O({{\varepsilon}})$ $\rho$-probability, and for these points $T_g(x)=L(f,g(x)) h_g(g(x)) - L(f,x) \approx 0$. Thus, $T_g(X) \approx 0$ with $\rho$-probability $1-O({{\varepsilon}})$. Unless the test set $S$ is concentrated in large part on the remaining set of points with $O({{\varepsilon}})$ $\rho$-probability, the test statistic satisfies $|T_{S,g}(f)|= O({{\varepsilon}})$ with high probability, and our method will not reject the independence hypothesis for ${{\varepsilon}}\to 0$.

When ${{\varepsilon}}$ is large (${{\varepsilon}}\to \infty$), notice that for any point $x$ with non-vanishing density (i.e., with $\rho(x)>c$ for some $c>0$), if $g(x) \neq x$ then $\rho(g(x)) \approx 0$. Therefore, for such an $x$, if $L(f,x)=0$ and $L(f,g(x))=1$, then $h_g(g(x))=\rho(g(x))/(\rho(x)+\rho(g(x))) \approx 0$, and so $T_g(x) \approx 0$ (if $L(f,g(x))=0$, we trivially have $T_g(x)=0$). If $L(f,x)=1$, we have $g(x)=x$. If $g$ is invertible at $x$, then $h_g(x)=1$ and $T_g(x)=0$. If $g$ is not invertible, then there is another point $x'$ such that $g(x')=x$; however, if $\rho(x)>c$ then $\rho(x') \approx 0$ (since ${{\varepsilon}}$ is large), and so $h_g(g(x)) = \rho(x)/(\rho(x)+\rho(x')) \approx 1$, giving $T_g(x) \approx 0$. Therefore, for large ${{\varepsilon}}$, $T_g(X) \approx 0$ with high probability (i.e., for points with $\rho(x)>c$), so the independence hypothesis will not be rejected with high probability.
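The per-point term and the test statistic discussed here are straightforward to compute once $g$ and $h_g$ are available. A generic sketch, where the callables `L`, `g`, and `h` stand for the 0/1 loss of the fixed classifier, the AEG, and the density ratio (hypothetical helper names):

```python
def T_g(x, L, g, h):
    """Per-point term T_g(x) = L(f, g(x)) * h_g(g(x)) - L(f, x)."""
    gx = g(x)
    return L(gx) * h(gx) - L(x)

def T_stat(S, L, g, h):
    """Test statistic T_{S,g}(f): the average of T_g over the sample S.
    Values far from 0 are evidence against the independence of f and S."""
    return sum(T_g(x, L, g, h) for x in S) / len(S)
```

For an identity AEG with unit weights, every per-point term vanishes and the statistic is exactly zero, matching the small-${{\varepsilon}}$ limit discussed above.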
To better understand the behavior of the test, the second row of shows the empirical test error rate ${\widehat{R}}_S(f)$, the (unadjusted) adversarial error rate ${\widehat{R}}_{S'}(f)$, and the adversarial risk estimate ${\widehat{R}}_g(f)$, together with their confidence intervals. For the non-independent model, we also show the expected error $R(f)$ (estimated over a large independent test set), while it is omitted for the independent model, where it approximately coincides with both ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$.

While the reweighted adversarial error estimate ${\widehat{R}}_g(f)$ remains the same for all perturbations in the case of an independent test set (left column), the adversarial error rate ${\widehat{R}}_{S'}(f)$ varies considerably for both the dependent and independent test sets. For example, in the case when the test samples and the model $f$ are not independent, it undershoots the true error for ${{\varepsilon}}< 10$ and overshoots it for larger perturbations. For very large perturbations (${{\varepsilon}}$ close to $100$), the behavior of ${\widehat{R}}_{S'}(f)$ depends on the model $f$: in the independent case ${\widehat{R}}_{S'}(f)$ decreases back to ${\widehat{R}}_S(f)$ because such large perturbations increasingly often change the true label of the original example, so fewer and fewer adversarial points are generated. In the case when the data and the model are not independent (right column), the adversarial perturbations are almost always successful (i.e., lead to a valid adversarial example for most originally correctly classified points), yielding an adversarial error rate close to one for large enough perturbations. This is because the decision boundary of $f$ is almost orthogonal to the true decision boundary, and so the adversarial perturbations are parallel to the true boundary, almost never changing the true label of a point.
The plots of the densities (Radon-Nikodym derivatives), given in the third row of the figure, show how the change in their values compensates for the increase of the adversarial error rate ${\widehat{R}}_{S'}(f)$: in the independent case, the effect is completely eliminated, yielding an unbiased adversarial error estimate ${\widehat{R}}_g(f)$, which is essentially constant over the whole range of ${{\varepsilon}}$ (as shown in the first row), while in the non-independent case the similar densities do not bring back the adversarial error rate ${\widehat{R}}_{S'}(f)$ to the test error rate ${\widehat{R}}_S(f)$, allowing the test to detect overfitting. Note that the densities exhibit similar trends (and values) in both cases, driven by the dependence of typical values of the $\rho(x) / \rho(g(x))$ ratio on the perturbation strength ${{\varepsilon}}$ for originally misclassified points ($L(f,x)=1$) and for successful adversarial examples (i.e., $L(f,x)=0$ and $L(f,g(x))=1$). To compare the behavior of our improved, pairwise test and the basic version, the fourth row of the figure depicts a single realization of the experiments where the $97.5\%$ confidence intervals (as computed from Bernstein’s inequality) are shown for the estimates. For the independent case, the confidence intervals of ${\widehat{R}}_S(f)$ and ${\widehat{R}}_g(f)$ overlap for all ${{\varepsilon}}$, and thus the basic test is not able to detect overfitting. In the non-independent case, the confidence intervals overlap for ${{\varepsilon}}=10$ and ${{\varepsilon}}=75$, thus the basic test is not able to detect overfitting at a $95\%$ confidence level, while the improved test (second row) is able to reject the independence hypothesis for these ${{\varepsilon}}$ values at the same confidence level. Finally, in the fifth row of the figure we plotted the histograms of the empirical distribution of $p$-values for both models, over 100 independent runs (between the runs, all the data was regenerated and the models were retrained). 
For ${{\varepsilon}}=0.1, 5, 20$, they concentrate heavily on either $\delta=0$ or $\delta=1$, and have very thin tails extending far towards the opposite end of the $[0,1]$ interval. This explains the surprisingly wide $95\%$ confidence intervals for $p$-values plotted in the first row. In particular, the fact that some $p$-values for the independent model are as low as $0.5$ does not mean the independence test is unreliable, because almost all calculated $\delta$ values are close to or equal to $1$, and the few outliers are a combined consequence of the finite sample size and the effectiveness of the AEG. The additional ${{\varepsilon}}=6$ histogram for the non-independent model illustrates a regime which is in between the single-model pairwise test completely failing to reject the independence hypothesis and clearly rejecting it.

![Histograms of $p$-values from $N$-model ($N=1,2,10$) independence tests for both synthetic models and selected ${{\varepsilon}}$ values, over $100$ runs. Left: independent model, ${{\varepsilon}}=10$; right: non-independent model, ${{\varepsilon}}=6$. \[fig:linear-model-multi\]](good_model_p_val_hist_multimodel_eps=10 "fig:"){width="42.00000%"} ![Histograms of $p$-values from $N$-model ($N=1,2,10$) independence tests for both synthetic models and selected ${{\varepsilon}}$ values, over $100$ runs. \[fig:linear-model-multi\]](bad_model_p_val_hist_multimodel_eps=6 "fig:"){width="42.00000%"}

To verify experimentally whether the $N$-model independence test can be a more powerful detector of overfitting than the single-model version, in the right panel of Figure \[fig:linear-model-multi\] we plotted $p$-value histograms for $N=1,2,10$ for the intermediate AEG strength ${{\varepsilon}}=6$ applied to the non-independent model over 100 training runs. Indeed, as $N$ increases, the concentration of $p$-values in the low ($\delta \le 0.2$) range increases. For $N > 10$ we did not have enough values to plot a histogram: for $N = 25$ we obtained $\delta = 0.1851, 0.1599, 0.0661$ and $0.1941$, while for $N=100$ the $p$-value is $0.1153$. The increase of the test power becomes apparent when we compare the last value with the mean of $p$-values obtained by testing every training run separately, equal to $0.5984$, and the median, $0.6385$. For comparison, we also plotted in the left panel of Figure \[fig:linear-model-multi\] the corresponding histograms for the independent model and a slightly higher attack strength, ${{\varepsilon}}=10$, at which the independence test fails for the overfitted model even without averaging (see the first row, right panel, of the earlier figure). The histograms are all clustered in the $\delta$ region close to 1, indicating that the $N$-model test is not overly pessimistic. 
Translational AEGs for image classification models {#app:aeg-image-transl} ================================================== For image classification we consider two translation variants that are used in constructing a translational AEG. For every correctly classified image $x$, we consider translations from ${\mathcal{V}}_{\varepsilon}$ (for some ${\varepsilon}$), choosing $g(x)$ from the set $G(x)=\{ \tau_v(x) : v \in {\mathcal{V}}_{\varepsilon}\} \cup \{x\}$. If all translations result in correctly classified examples, we set $g(x) = x$. Otherwise, we use one of two possible ways to select $g(x)$ (and we call the resulting points successful adversarial examples): - *Strongest perturbation:* Assuming the number of classes is $K$, let $l(f,x) \in \mathbb{R}^K$ denote the vector of the $K$ class logits calculated by the model $f$ for image $x$, and let $l_\text{exc}(f,x) = \max_{0 \le i < K} l_i(f,x) - l_y(f, x)$. We define $$g_\text{strongest}(x) = \operatorname{argmax}_{x' \in G(x)} l_\text{exc}(f,x'),$$ with ties broken deterministically by choosing the first translation from the candidate set, going top to bottom and left to right in row-major order. Thus, here we seek a non-identical “neighbor” that causes the classifier to err the most, reachable from $x$ by translations within a maximum range ${\varepsilon}$. - *Nearest misclassified neighbor:* Here we aim to find the nearest image in $G(x)$ that is misclassified. That is, letting $d(x,x') = \|v\|_2$ if $x'=\tau_v(x)$ and $\infty$ otherwise, we define $$g_\text{nearest}(x) := \operatorname{argmin}_{x' \in G(x), L(f,x') = 1} d(x,x')$$ with ties broken deterministically as above. The two perturbation variants are successful on exactly the same set of images, hence they lead to the same adversarial error rates ${\widehat{R}}_{S'}(f)$. 
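The two selection rules above can be sketched as follows. This is a toy illustration (the model, the candidate generation, and names such as `select_adversarial` are ours, not from the paper's code), and it uses positivity of the logit excess $l_\text{exc}$ as the misclassification criterion.

```python
import math

def logit_excess(logits, y):
    """l_exc = max_i l_i - l_y; positive iff the top class differs from
    the true class y (up to ties)."""
    return max(logits) - logits[y]

def select_adversarial(x, y, logits_of, translations, variant="strongest"):
    """Pick g(x) from G(x) = {tau_v(x) : v in V_eps} union {x}.

    `translations` is a list of (v, x_translated) pairs in the
    deterministic row-major tie-break order; returns x itself when no
    translation is misclassified.
    """
    mis = [(v, xt) for v, xt in translations
           if logit_excess(logits_of(xt), y) > 0]
    if not mis:
        return x
    if variant == "strongest":
        # argmax of the logit excess; Python's max keeps the first maximal
        # element, matching the deterministic tie-breaking rule
        return max(mis, key=lambda p: logit_excess(logits_of(p[1]), y))[1]
    # nearest misclassified neighbor: argmin of ||v||_2
    return min(mis, key=lambda p: math.hypot(*p[0]))[1]
```

Note that restricting the argmax to misclassified candidates is equivalent to taking it over all of $G(x)$ here, since correctly classified candidates have non-positive logit excess.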
However, they are characterized by different values of the density $h_g$ and, consequently, yield different adversarial risk estimates ${\widehat{R}}_g(f)$ and associated $p$-values for the independence test. The main difference between them is that the “strongest” version is more likely to map multiple images to the same adversarial example, thus decreasing the densities for successful adversarial examples and, counterintuitively, increasing them for originally misclassified points (as their neighbors are less likely to be mapped to these points). To better see the effect of adversarial perturbations, we also consider two random baselines that do not take into account the success of a translation in generating misclassified points: $g_\text{random}(x)$ is chosen uniformly at random from $G(x) \setminus \{x\}$, and $g_\text{random2}(x)$ is chosen uniformly at random from $G(x)$. Maximum translations {#app:maxtranslation} -------------------- In practice, translating an image is not always simple, as the new image has to be padded with new pixels. When (central) crops of a larger image are used (as is typical for ImageNet classifiers), translations can easily be implemented as long as the resulting new cropping window stays within the original image boundaries. Even if an image can be translated by a vector $v$, this constraint limits our ability to compute $h_g(x')$ for the adversarial image $x'$ obtained with $g_\text{strongest}$ or $g_\text{nearest}$. Indeed, if an image $x$ is shifted by $v \in {\mathcal{V}}_{\varepsilon}$ to generate adversarial example $x'$, we need to examine translations of $x'$ with vectors in ${\mathcal{V}}_{\varepsilon}$ to find the neighbors $x''$ of $x'$ potentially contributing to $n(x')$ when computing $h_g(x')$. Finally, we need to consider translations of $x''$ with vectors in ${\mathcal{V}}_{\varepsilon}$ to determine the exact value they contribute, that is, to compute the exact probabilities involved (see figure \[fig:grid\] for an illustration). 
Thus, to be able to compute the density $h_g$ for the adversarial points obtained by translations from ${\mathcal{V}}_{\varepsilon}$, we might need to be able to perform translations within ${\mathcal{V}}_{3{\varepsilon}}$. ![Image translations which need to be considered for a translational AEG with ${{\varepsilon}}=3$. The red, blue and green balls represent the center of the original image $x$, adversarial example $x' = g(x)$ and another image $x''$ contributing to $\rho_g(x')$, respectively, while the semi-translucent squares of corresponding colors represent the possible translations which need to be considered for each of $x$, $x'$ and $x''$. Solid light grey arrows represent the relationships $x'=g(x)$ and $x'=g(x'')$. Finally, the dashed arrow and the semi-translucent grey ball represent an alternative mapping, which has to be ruled out while calculating the value of $g(x'')$ and, consequently, of $h_g(x')$. It is easy to see that the colored squares (which contain the translations needing to be evaluated) extend as far as $3{{\varepsilon}}$ from the original image $x$.[]{data-label="fig:grid"}](transl_grid){width="0.4\columnwidth"} [^1]: Throughout the paper, we use the words “example” and “point” interchangeably. [^2]: Note that the adversarial error estimator’s goal is to estimate the error rate, not the adversarial error rate (i.e., the error rate on the adversarial examples). [^3]: For an event $B$, ${\mathbb{I}(B)}$ denotes its indicator function: ${\mathbb{I}(B)}=1$ if $B$ happens and ${\mathbb{I}(B)}=0$ otherwise. [^4]: Note that this assumption limits the applicability of our method, excluding such centered or essentially centered image classification benchmarks as MNIST [@MNIST] or CIFAR-10 [@CIFAR10]. [^5]: The large number of test examples ensures that the random error in the empirical error estimate is negligible.
--- abstract: 'Manufacturing superconducting circuits out of ultrathin films is a challenging task when it comes to patterning complex compounds, which are likely to be deteriorated by the patterning process. With the purpose of developing high-T$_\text{c}$ superconducting photon detectors, we designed a novel route to pattern ultrathin YBCO films down to the nanometric scale. This process could be of interest to engineer high-T$_\text{c}$ superconducting devices (SQUIDs, SIS/SIN junctions, Josephson junctions), as well as to treat other sensitive compounds.' address: - '$^1$ Group of Applied Physics, University of Geneva, 1211, Geneva 4, Switzerland' - '$^2$ Department of Condensed Matter Physics, University of Geneva, 1211, Geneva 4, Switzerland' author: - 'N Curtz$^{1,2}$, E Koller$^2$, H Zbinden$^1$, M Decroux$^2$, L Antognazza$^2$, [Ø]{} Fischer$^2$ and N Gisin$^1$' bibliography: - 'article.bib' title: 'Patterning of ultrathin YBCO nanowires using a new focused-ion-beam process' --- Introduction ============ Superconducting Single-Photon Detectors (SSPDs) are superconducting devices designed with the purpose of detecting light down to the single-photon level. They present a good quantum efficiency (QE > 10%), a low dark-count rate (DK < 10 Hz), and a high operating frequency (> GHz), outperforming InGaAs Avalanche PhotoDiodes (APDs) in a number of cases [@Thew-NIMA-2009]. These characteristics make them a prime candidate for single-photon telecommunication and applications like Quantum Key Distribution. Their underlying mechanism is based upon the formation of a hotspot in a current-biased superconducting stripe [@Goltsman-APL-2001]. The creation of the hotspot is triggered by the incoming photon, whose energy locally thermalizes the stripe, confining the bias current and hence raising its density up to the point of overcoming its critical value, resulting in a local transition of the stripe. To achieve that, the circuit geometry has to fulfill stringent geometrical constraints. 
First, the stripe has to be narrow enough, otherwise the variation of the current density isn’t sufficiently significant, preventing the transition from taking place and the voltage pulse from being detectable. The section of the nanowire should also be extremely homogeneous, since any constriction locally lowers the critical current and hence affects the whole device, resulting in a drop of the QE; the closer the ratio $I_\text{bias}/I_\text{c}$ is to 1, the closer the device is to the dissipative state, and therefore the more important the section homogeneity becomes. Finally, the detector’s recovery time is governed by the device thickness, necessitating devices less than 15 nm thick. Whereas such devices have been successfully produced with low-T$_\text{c}$ superconductors such as NbN operated at 2.6 K [@Marsili-OE-2008], their realization with high-T$_\text{c}$ compounds remains a challenge. High-T$_\text{c}$ SSPDs would nevertheless allow a higher working temperature, hence a significant reduction of the associated cryogenic costs. Among high-T$_\text{c}$ materials, cuprates present the advantage of a low kinetic inductance, leading to fast response times. From a purely structural point of view, Nd$_{\text{1+}x}$Ba$_{\text{2-}x}$Cu$_\text{3}$O$_{\text{7+}\delta}$ presents excellent crystallographic and planarity properties [@Badaye-SST-1997], which are interesting features given the aforementioned geometrical constraints of SSPDs. It turns out, however, that the intrinsically loose stoichiometry of Nd atoms, which interdiffuse with Ba atoms, leads to a high instability of the oxygen content and therefore to a significant loss of T$_\text{c}$ during the patterning process. YBa$_\text{2}$Cu$_\text{3}$O$_{\text{7-}\delta}$ (YBCO) is much more stable and appears to be a good candidate for high-T$_\text{c}$ detectors. 
Several routes have been explored to create junctions or patterned structures out of thin films, such as Selective Epitaxial Growth [@Damen-SST-1998], Electron Beam Lithography / Ion Beam Etching (EBL/IBE) [@Schneidewind-PHYSC-1995], Ion Irradiation [@Bergeal-APL-2006], using an Atomic Force Microscope [@Delacour-APL-2007] or a Focused Electron Beam Irradiation [@Booij-PRB-1997]. Nanobridges have been fabricated with a Focused Ion Beam [@Lee-PHYSC-2007]. However, those experiments were performed on films with thickness $d > 20$ nm. Here we report a new method using such an apparatus to write an arbitrary pattern upon an ultrathin (< 20 nm) film, allowing the manufacture of YBCO superconducting circuits. The key point of this method is that, to produce the structure, the superconducting phase is locally altered rather than etched. Experimental ============ Overview of the FIB-based protocol ---------------------------------- We designed a 2-step protocol, involving a preliminary chemical etching followed by a focused-ion-beam (FIB) managed nanostructuration. An overview of the scheme used to create YBCO circuits embedding paths to characterize their transport properties with 4-point measurements is given in figure \[fig:all\_sem\]. ![ \[fig:all\_sem\] Overview of the patterning protocol. ([*[a]{}*]{}) Micrograph of the preliminary structure created from a thin film by photolithography and chemical etching. ([*[b]{}*]{}) Micrograph of a 20 $\mu$m stripe engineered with a high-current Focused Ion Beam. ([*[c]{}*]{}) Micrograph of a 12 nm thick, 15 $\mu$m long, 500 nm wide stripe patterned with a lower current. ([*[d]{}*]{}) Micrograph of a 1 $\mu$m wide meandering circuit written with a lower current. ](fig1_all_sem){width="\fgwidth"} Growth of YBCO films -------------------- $c$-axis YBCO films were deposited by RF magnetron sputtering on (100) SrTiO$_\text{3}$ substrates. Samples are heated to 700°C and exposed to the sputtering of a YBCO target in an argon plasma. 
Along with the Ar flow, a complementary O$_\text{2}$ flow (1:5 ratio) accounts for the growth of a tetragonal, non-superconducting phase, under a total pressure P = 8$\times$10$^\text{-2}$ mbar. An in-situ 2-hour-long annealing at 580°C in an O$_\text{2}$ atmosphere causes this YBa$_\text{2}$Cu$_\text{3}$O$_{6}$ phase to undergo a tetragonal-orthorhombic transition to optimally-doped superconducting YBa$_\text{2}$Cu$_\text{3}$O$_{\text{7-}\delta}$. The critical temperature of bulk YBCO is 92 K; this critical temperature decreases with the film’s thickness, down to T$_\text{c0}$(d=12 nm) $\approx$ 80 K. The high crystallinity of the films was demonstrated with X-ray diffraction measurements such as the one depicted in figure \[fig:ybco\_thickness\]. Up to 4 degrees, the grazing incidence scan shows Kiessig fringe contributions from both films [@Stoev-SAB-1999]. Around the (001) YBCO Bragg peak, secondary fringes clearly show a finite size effect (Laue oscillations), demonstrating the high quality of the crystallographic layers. The fringes of the X-ray spectra allow the determination of a film’s thickness with unit-cell accuracy. In the following, all the processed films have d = 12 nm, with T$_\text{c0} \approx$ 80 K. ![ \[fig:ybco\_thickness\] Typical X-ray $\theta$-2$\theta$-spectrum of a 12 nm thick YBCO film passivated with a 8 nm thick amorphous PBCO layer. ](fig2_nc0001){width="\fgwidth"} The sputtering process also deposits CuO$_\text{2}$ surface particles located on top of the YBCO phase, with a typical diameter estimated at 250 nm; these might indeed be the origin of short circuits after patterning. No reproducible way of obtaining particle-free samples or of eliminating the particles has been found; however, most of the time these particles are not a real obstacle to the patterning process, since the core structure of the devices can usually be chosen to be located in a clear area (figure \[fig:all\_sem\]c). 
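As an aside, the fringe-based thickness determination mentioned above can be illustrated with the standard small-angle estimate $d \approx \lambda/(2\,\Delta\theta)$, where $\Delta\theta$ is the fringe period in $\theta$ (in radians). The sketch below assumes Cu K$\alpha$ radiation ($\lambda \approx 1.5406$ Å), which is not stated in the text.

```python
import math

WAVELENGTH_CU_KA = 1.5406e-10  # m; assumes Cu K-alpha radiation

def thickness_from_fringe_period(delta_theta_deg):
    """Small-angle estimate d ~ lambda / (2 * delta_theta) from the period
    of Kiessig (or Laue) fringes in a theta-2theta scan; delta_theta_deg is
    the spacing between adjacent fringe maxima in theta, in degrees."""
    return WAVELENGTH_CU_KA / (2.0 * math.radians(delta_theta_deg))
```

For a 12 nm film, this relation predicts fringes roughly 0.37° apart in $\theta$, consistent with resolving the thickness to unit-cell accuracy from the measured fringe spacing.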
Even when this is not the case, such as in figure \[fig:all\_sem\]d, table \[tab:repro\] shows that ultimately these particles are not an issue. In order to improve electrical contacts, we adapted a gold evaporation system involving a mechanical mask to deposit gold slots in-situ immediately after the YBCO deposition. In addition, it was observed that the in-situ deposition of a 8 nm thick amorphous PrBaCuO passivation cap layer over the whole sample attenuates the loss of critical temperature occurring during the chemical etching process [@Jaeger-ITAS-1993]. Therefore, we end up with a YBCO/Au/PBCO topology. Photolithography and chemical etching ------------------------------------- The patterning of a 50 $\mu$m wide stripe is done by optical lithography and chemical etching with orthophosphoric acid H$_\text{3}$PO$_\text{4}$ (1%). Four independent structures are patterned in a single run (figure \[fig:all\_sem\]a). One of them, top-left on the figure, is specifically designed to electrically characterize the sample at this point of the protocol. FIB writing ----------- As the second step of the protocol, a dual beam system (FEI Nova 600 Nanolab scanning electron microscope / 30 kV focused Ga$^\text{3+}$ ion beam) was used to carry out a finer patterning of the YBCO films into meandering stripes suitable for optical characterization of SSPDs. In the context of YBCO thin layers, some precautions are necessary to make relevant use of the FIB, since the standard manner of working is too destructive for the samples. Accordingly, we devised a specific modus operandi to satisfy the needed requirements. First of all, due to the extreme thinness of the involved films, the alignment routines cannot be handled with the standard procedure, as the whole window, containing areas destined to remain superconducting, would be exposed to the Ga$^\text{3+}$ beam during the operation and irreversibly damaged. 
To circumvent this problem we have to synchronize the SEM and the FIB on a non-critical area, then align the pattern to etch with the sample using the SEM. Results and discussion ====================== The $\rho$ vs T curves of the samples can be followed during the different steps of the protocol: after the film deposition, the resistivity is measured with the Van der Pauw method [@VanderPauw-PTR-1958]; after chemical etching, a four-point measurement is carried out along a 50 $\mu$m wide stripe using the top-left structure of figure \[fig:all\_sem\]a. Finally, the final circuit is characterized as shown in figure \[fig:all\_sem\]b. Results are plotted in figure \[fig:rvst\], and demonstrate that the obtained circuits are superconducting, although a small loss of T$_\text{c}$ as well as a broadening of the resistive transition are observed. This could be explained by the fact that the samples are intrinsically inhomogeneous: by restricting the superconducting geometry to narrow circuits, new areas with lower T$_\text{c}$ enter the current path; but it could also be due to material damage occurring during the processing. On the other hand, $\rho_\text{100K}$ and $\rho_\text{300K}$ are slightly higher at the end of the process. As mentioned previously, the writing routine with the focused ion beam doesn’t etch the film; instead, the beam locally turns the irradiated area into an insulating phase by implanting columnar defects inside it. The penetration depth of Ga$^\text{3+}$ ions is about 70 nm, far greater than the combined thickness of the YBCO and amorphous PBCO layers. From this point of view this method differs from EBL/IBE processes, where parts of the film are physically removed to create the superconducting pattern, but it also implies caution if one desires to implement it with thicker films. This approach prevents the YBCO phase from being in contact with air, which could be an escape path for oxygen, thanks both to the PBCO top passivation layer and to the lateral insulating phase. 
Table \[tab:repro\] presents resistance measurements obtained for different structures, the last column showing the consistency of the results and ensuring that the method presented in the paper leads to reproducible structures. ![ \[fig:rvst\] Resistivity vs temperature for different samples, with additional curves at intermediary steps of the process. ](fig4_rvst){width="\fgwidth"}

  shape     length       width        T$_\text{c0}$   R(100K)         R$_\text{norm}$ (k$\Omega$)
  --------- ------------ ------------ --------------- --------------- -----------------------------
  meander   180 $\mu$m   2 $\mu$m     72 K            32 k$\Omega$    5.3
  meander   430 $\mu$m   1 $\mu$m     70 K            190 k$\Omega$   6.6
  meander   430 $\mu$m   1 $\mu$m     75 K            140 k$\Omega$   4.9
  stripe    15 $\mu$m    1 $\mu$m     70 K            5 k$\Omega$     5
  stripe    15 $\mu$m    0.8 $\mu$m   75 K            6.6 k$\Omega$   5.3
  stripe    15 $\mu$m    0.5 $\mu$m   65 K            12 k$\Omega$    6

  : Resistance measurements for the different structures. \[tab:repro\]

Figure \[fig:rhovsj\] presents the current-voltage and the $\rho$-$j$ characteristics of one sample. Both show a negative curvature over the whole temperature range covered, indicating the existence, in every case, of a true critical current density $j_\text{c}$. This ensures that the superconducting phase isn’t in a flux-creep state in spite of the thinness of the sample, which is consistent with the observation that, above a certain thickness, flux line lattices behave like a 3D system [@Triscone-REPRO-1997]. The critical current is generally defined using the standard criterion of an electric field of 1 $\mu$V/cm. In our case, the measurement noise due to the high impedance of the line sets a resolution limit of 2 $\mu$V across the whole meander, corresponding to an equivalent electric field as shown on figures \[fig:rhovsj\]a and \[fig:rhovsj\]b. We clearly see on figure \[fig:rhovsj\]b, especially at T = 65 K, that at 110 $\mu$V/cm (or 2 $\mu$V) the critical current densities are overestimated and that such a determination isn’t relevant. 
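One way around this limitation, detailed in the next paragraph, is to extract $j_\text{c}$ from a power-law fit $V=B(j-j_\text{c})^n$ of the high-current-density data. The sketch below is one plausible implementation of such a fit; the grid search over $j_\text{c}$ and the log-space linear regression are our choices, as the paper does not specify its fitting procedure.

```python
import math

def fit_power_law(j, V, jc_grid):
    """Fit V = B (j - jc)^n by least squares in log space.

    For each candidate jc (restricted to jc < min(j)), fit
    log V = log B + n log(j - jc) by linear regression and keep the
    candidate with the smallest residual. Returns (jc, B, n).
    """
    best = None
    for jc in jc_grid:
        if jc >= min(j):
            continue
        X = [math.log(ji - jc) for ji in j]
        Y = [math.log(vi) for vi in V]
        m = len(X)
        xbar, ybar = sum(X) / m, sum(Y) / m
        sxx = sum((x - xbar) ** 2 for x in X)
        n = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) / sxx
        logB = ybar - n * xbar
        res = sum((y - (logB + n * x)) ** 2 for x, y in zip(X, Y))
        if best is None or res < best[0]:
            best = (res, jc, math.exp(logB), n)
    _, jc, B, n = best
    return jc, B, n
```

On synthetic data generated with known parameters, the routine recovers $j_\text{c}$, $B$, and $n$ exactly when the true $j_\text{c}$ lies on the grid, since the log-transformed data is then exactly linear.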
However, it has been reported [@Antognazza-PHYSC-2002] that a precise measurement of the flux flow resistivity at high current density allows one to extract $j_\text{c}$ from the best power law fit $V=B(j-j_\text{c})^n$, $j_\text{c}$ and $n$ being the fitting parameters. Figure \[fig:jc\] shows the very good fits obtained with power laws, giving good confidence in the determination of the $j_\text{c}$’s by this procedure. Moreover, $n$ is found to be around 5 with little variation over the whole temperature range. A significant advantage of this fitting method is that the knowledge of the current-voltage characteristic at high current density is sufficient to infer $j_\text{c}$. The temperature-dependence of $j_\text{c}$ is described within the Ginzburg-Landau theory of superconductivity [@Poole-Super-1995]. Figure \[fig:gl\] shows that this model fits our experimental data extremely well and allows us to extrapolate the zero-temperature critical current density, which is two orders of magnitude smaller than the depairing limit, demonstrating the good quality of the samples after the processing. It’s worth noting that the hypothesis of inhomogeneities in the sample previously mentioned is reinforced by the fact that the best Ginzburg-Landau fit is obtained for a critical temperature below the 72 K found with resistivity measurements (see table \[tab:repro\]). ![ \[fig:rhovsj\] ([*[a]{}*]{}) Voltage vs current density at various temperatures for a 2 $\mu$m wide meandering circuit. ([*[b]{}*]{}) $\rho$ vs current density for the same sample. ](fig5a_ic "fig:"){width="\fgwidth"} ![ \[fig:rhovsj\] ([*[a]{}*]{}) Voltage vs current density at various temperatures for a 2 $\mu$m wide meandering circuit. ([*[b]{}*]{}) $\rho$ vs current density for the same sample. ](fig5b_rhovsj "fig:"){width="\fgwidth"} \[0pt\]\[0pt\][![ \[fig:jc\] Same data as in figure \[fig:rhovsj\]a represented as voltage vs $j/j_\text{c}-1$. Power law fits $V=B(j-j_\text{c})^n$ (continuous lines) correctly describe the curves and allow the determination of $j_\text{c}$. 
The inset reports $n$ as the temperature drops from 65 K to 30 K. ](fig6_lnjsjcm1_inset "fig:"){width="\iswidth"}]{} ![ \[fig:gl\] Critical current density vs reduced temperature $t=T/T_\text{c}$ for the same meandering circuit. $j_\text{c}$ is obtained through the power law fits presented in figure \[fig:jc\]. The solid line is the best fit with the theoretical Ginzburg-Landau model. ](fig7_gl){width="\fgwidth"} Conclusion ========== To summarize, a modus operandi to create ultrathin superconducting YBCO circuits by implanting Ga$^\text{3+}$ ions with a Focused Ion Beam was devised. We confined the current in straight stripes down to 500 nm in width and produced 1 $\mu$m wide meandering wires using a 50 pA beam current. The consistency of the resistivity vs temperature profiles measured on the samples at the different steps of the processing ensures the reproducibility of this patterning method for superconducting films. For one sample, the critical current density extrapolated to 0 K has been found to be only two orders of magnitude smaller than the depairing limit, demonstrating its quality. The most natural application of this protocol would be the manufacturing of high-T$_\text{c}$ superconducting devices. Photoresponse experiments to characterize the devices as single-photon detectors are left for future work. The authors would like to thank Michaël Pavius, Kevin Lister, Samuel Clabecq, and Philippe Flückiger for providing access and training to EPFL’s focused ion beam, as well as Jean-Claude Villégier for helpful discussions. This work is supported by the European project Sinphonia (contract No. NMP4-CT-2005-16433), and the Swiss poles NCCR MaNEP and NCCR Quantum Photonics.
--- abstract: 'This paper presents a high-order method for solving an interface problem for the Poisson equation on embedded meshes through a coupled finite element and integral equation approach. The method is capable of handling homogeneous or inhomogeneous jump conditions without modification and retains high-order convergence close to the embedded interface. We present finite element-integral equation (FE-IE) formulations for interior, exterior, and interface problems. The treatments of the exterior and interface problems are new. The resulting linear systems are solved through an iterative approach exploiting the second-kind nature of the IE operator combined with algebraic multigrid preconditioning for the FE part. Assuming smooth continuations of coefficients and right-hand-side data, we show error analysis supporting high-order accuracy. Numerical evidence further supports our claims of efficiency and high-order accuracy for smooth data.' address: - 'Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign' - 'Department of Computer Science, University of Illinois at Urbana-Champaign' author: - 'Natalie N. Beams' - Andreas Klöckner - 'Luke N. 
Olson' bibliography: - 'FErefs.bib' title: 'High-order Finite Element–Integral Equation Coupling on Embedded Meshes ' --- Interface problem [,]{} Fictitious domain [,]{} Layer potential [,]{} FEM-IE coupling [,]{} Iterative methods [,]{} Algebraic Multigrid Introduction {#sec:intro} ============ The focus of this work is on the following model interface problem for the Poisson equation: \[eq:interface\] $$\begin{aligned} -\triangle u(x) &=\, f(x) &\quad& \textnormal{in $\Omega^i \cup \Omega^e$}\\ u^i(x) &=\, c u^e(x) + a(x) &\quad& \textnormal{on $\Gamma$}\\ \frac{\partial u^i(x)}{\partial n} &=\, \kappa \frac{\partial u^e(x)}{\partial n} + b(x) &\quad& \textnormal{on $\Gamma$},\end{aligned}$$ where two bounded domains $\Omega^i,\Omega^e\subset \mathbb R^d$ are separated by an interface $\Gamma=\bar \Omega^i \cap \bar \Omega^e$ so that $\partial \Omega^i=\Gamma$. The restriction of $u$ to domain $\Omega^\alpha$ is written as $u^\alpha$ ($\alpha\in\{i,e\}$). We assume $\Omega^i$ has a smooth boundary. Example domains are illustrated in Figure \[fig:interface\]. The forcing function $f$ may be discontinuous across $\Gamma$, provided smooth extensions are available in a large enough region across $\Gamma$, as discussed in more detail in Section \[sec:int-poisson\]. General interface problems of this kind describe, e.g., steady-state diffusion in multiple-material domains, and are closely related to problems from multi-phase low Reynolds number flow, such as viscous drop deformation and breakup [@StoneReview1994]. The presented method is usable, and much of the related analysis is valid, in two or three dimensions. Numerical experiments in two dimensions support the validity of our claims; experiments in three dimensions are the subject of future investigation. 
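The meaning of the jump conditions can be checked numerically with a manufactured solution pair of our own choosing: on the unit circle $\Gamma$, take $u^i=r^2$ and $u^e=r^4$, so that $u^i = c u^e + a$ holds with $a = 1-c$ and $\partial_n u^i = \kappa\,\partial_n u^e + b$ holds with $b = 2-4\kappa$ on $\Gamma$. The names and coefficient values below are illustrative only, not part of the method.

```python
import math

C, KAPPA = 2.0, 3.0  # example jump coefficients c and kappa (our choice)

def u_i(x, y):
    """Interior manufactured solution u^i = r^2."""
    return x * x + y * y

def u_e(x, y):
    """Exterior manufactured solution u^e = r^4."""
    return (x * x + y * y) ** 2

def normal_derivative(u, x, y, h=1e-6):
    """Central-difference derivative along the outward normal of the unit
    circle at (x, y) = (cos t, sin t): radial direction, evaluated at r = 1."""
    return (u(x * (1 + h), y * (1 + h)) - u(x * (1 - h), y * (1 - h))) / (2 * h)
```

Evaluating at any point of $\Gamma$ confirms the Dirichlet jump $u^i - c\,u^e = 1 - c$ and the Neumann jump $\partial_n u^i - \kappa\,\partial_n u^e = 2 - 4\kappa$, since $\partial_n r^2 = 2$ and $\partial_n r^4 = 4$ at $r=1$.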
*(Figure \[fig:interface\]: the interior domain $\Omega^i$, bounded by the smooth interface $\Gamma$ and surrounded by the exterior domain $\Omega^e$.)* Though finite element methods offer great flexibility with respect to domain geometry, generating domain-conforming meshes is often difficult. Furthermore, fully unstructured meshes may require more computation for a given level of uniform accuracy than structured meshes, and evaluation of the solution at non-mesh points is considerably more complicated than for regular Cartesian grids. Consequently, there is much interest in embedded domain finite element methods, where the problem domain $\Omega$ is placed inside of a larger domain $\hat{\Omega}$, which may be of any shape but is here chosen to be rectangular and discretized with a structured grid for numerical convenience. The problem is recast on the new domain while still satisfying the boundary conditions on the original boundary $\partial\Omega$. Examples of this type of approach include *finite cell methods* [@ParvizianFCM:2007; @Kollmannsberger:2015], which cast the problem in the form of a functional to be minimized, where the functional is an integral only on the original domain $\Omega$. These methods require careful treatment of elements that are partially inside $\Omega$, especially those containing only small pieces of $\Omega$. The boundary conditions are enforced weakly using Lagrange multipliers or penalty terms in the functional. *Fictitious domain methods* [@Glowinski:1994] avoid special treatment of cut-cell elements by extending integration outside $\Omega$ and extending the right-hand side as necessary. This has also been developed in the context of least-squares finite elements [@ParussiniPediroda:2009], where Dirichlet boundary conditions are enforced with Lagrange multipliers. A conceptual overview of these methods is given in Figures \[fig:method-fcm\] and \[fig:method-fdm\]. 
[Figures \[fig:method-fcm\], \[fig:method-fdm\], and \[fig:method-ife\]: conceptual sketches of the embedding approaches, showing the domain $\Omega$ inside the gridded fictitious domain $\hat\Omega$.]

The *immersed interface method (IIM)* [@LeVequeLiIIM:1994] was introduced to solve elliptic interface problems with discontinuous coefficients and singular sources using finite differences on a regular Cartesian grid. In the IIM, a finite difference stencil is modified to satisfy the interface conditions at the boundary. The immersed interface method is closely related to the *immersed boundary method* [@MittalIaccarino-Rev:2005]. The IIM has also been extended to finite element methods, often called *immersed finite element* methods (IFEM) [@LiIIM-FEM:1998; @LiIFEM:2003]. Similar to the IIM, the IFEM changes the finite element representation by creating special basis functions that satisfy the interface conditions at the interface. The method has also been modified to handle non-homogeneous jump conditions [@GongIFEM:2008; @HeIFEM:2011]. The family of immersed methods is represented by Figure \[fig:method-ife\]. In this paper, we propose a combined finite element-integral equation (FE-IE) method for solving interface problems such as (\[eq:interface\]).
Integral equation (IE) methods excel at solving homogeneous equations: a solution is constructed in the entire domain through an equation defined and solved only on the boundary, resulting in substantial cost savings over volume-discretizing methods (including FEM) and reducing the difficulty of mesh generation. Meanwhile, FE methods deal easily with inhomogeneous equations and other complications in the PDE. Based on these complementary strengths, the method presented in this paper combines boundary IE and volume FE methods in a way that retains the high-order accuracy achievable in both schemes. As such, the novel contributions of this paper are as follows:

- Introduce high-order accurate, coupled FE-IE methods for three foundational problems: interior, exterior, and domains with interfaces;

- develop appropriate layer potential representations, leading to integral equations of the second kind;

- establish theoretical properties supporting existence and uniqueness of solutions to the various coupled FE-IE problems presented; and

- support the accuracy and efficiency of our methods with rigorous numerical tests involving the three foundational problem types.

Limitations of the current contribution include: the need for smooth continuation of the right-hand side $f$ across the interface in order to obtain high-order accuracy; the fact that some of the theory and our numerical experiments are at present limited to two dimensions, although the scheme should generalize straightforwardly to three dimensions in principle; and the need for smooth geometry and, for some results, convexity to obtain theoretical guarantees of high-order accuracy.
The literature includes examples of related coupling approaches combining finite element methods with integral equations for irregular domains, such as work on the Laplace equation on domains with exclusions [@celorrio_overlapped_2004] (similar in spirit to the problem of Section \[sec:int-ext\]) with a focus on Schwarz iteration, work on transmission-type problems for the Helmholtz equation [@dominguez_overlapped_2007], work on the Stokes equations on a structured mesh [@BirosStokesFE-IE:2003], and work on the Poisson and biharmonic equations [@Mayo:1984]. These approaches use a layer potential representation that exists in both the actual domain $\Omega$ and in $\hat{\Omega}\backslash\Omega$ and is discontinuous across the domain boundary $\partial\Omega$. Using known information about the discontinuity in this representation and the derivatives at the boundary $\partial\Omega$, modifications to the resulting finite-element stencil are calculated for the differential operator of the PDE so that the integral representation is valid on the volume mesh. The coupling of FE and boundary integral or boundary element methods through a combined variational problem has been applied to solving unbounded exterior problems, e.g. Poisson [@JohnsonNedelec1980; @MeddahiEtAl1996], Stokes [@SequeiraStokes1983; @MeddahiSayasStokes2000], and wave scattering [@GaneshMorgenstern2016; @HassellSayas2016]. In contrast, the method of [@RubergCirak:2010] separates the solution into two additive parts: a finite element solution found on the regular mesh, and an integral equation solution defined by the boundary of the actual domain. Unlike in Lagrange multiplier methods or variational coupling, the boundary conditions in this method are enforced exactly at every discretization point on the embedded boundary, and no extra variables are introduced or additional terms added to the finite element functional.
We follow the basic approach of [@RubergCirak:2010] in this contribution while improving on accuracy and layer potential representations and introducing efficient iterative solution approaches. Unlike the XFEM family of methods [@moes1999finite; @belytschko2001arbitrary], which does not support curvilinear interfaces without further work (e.g. [@cheng2010higher]), the FE-IE method presented here does not constrain the shape of the interface and does not require the creation of special basis functions to satisfy the boundary conditions; in fact, the underlying computational implementations of the FE and IE solvers remain largely unchanged. This is especially advantageous when considering many different domains $\Omega$, as FE discretizations, matrices, and preconditioners can be re-used. The method also avoids the need to impose additional jump conditions to use higher order basis functions, as in [@AdjeridLin:2009], at the cost of decreased accuracy for data that cannot be smoothly extended across the boundary. The method handles general jump conditions (given by $c$, $\kappa$, $a(x)$ and $b(x)$ in (\[eq:interface\])) in a largely unmodified manner. Indeed, the functions $a(x)$ and $b(x)$ appear only in the right-hand side of the problem. This is achieved by considering a notional splitting of (\[eq:interface\]): first, an interior problem embedded in a rectangular fictitious domain $\hat\Omega$, and second, a domain with an exclusion, i.e., identifying the interior domain $\Omega_i$ as the excluded area, as illustrated in Figure \[fig:interface-split\]. Each subproblem is then decomposed into an integral equation part and a finite element part, with coupling necessary in the case of the domain with an exclusion. Finally, the two subproblems are coupled through the interface conditions.
[Figure \[fig:interface-split\]: the interface problem split into an interior problem for $\Omega_i$ embedded in the fictitious domain $\hat\Omega$ (left), and a problem on the domain exterior to the interface $\Gamma$, i.e. $\Omega_e$ (right).]

The paper is organized as follows. First, the FE-IE decomposition is developed for each type of subproblem, starting with the interior embedded domain problem in Section \[sec:int-poisson\]. The form of the error is derived and the method is shown to achieve high-order accuracy. Then, we present a new splitting for a domain with an exclusion in which the IE part is considered as a pure exterior problem. This leads to a coupled system in Section \[sec:int-ext\]. Finally, in Section \[sec:interface\], the interior and exterior subproblems are coupled to solve the interface problem (\[eq:interface\]), showing how to retain well-conditioned integral operators and second-kind integral equations in the resulting system of equations.

Briefly on Integral Equation Methods {#sec:ie-intro}
------------------------------------

To fix notation and for the benefit of the reader unfamiliar with boundary integral equation methods, we briefly summarize the approach taken by this family of methods when solving boundary value problems for linear, homogeneous, constant-coefficient partial differential equations. Let $\Gamma \subset \mathbb{R}^d$ ($d=2,3$) be a smooth, rectifiable curve.
For a scalar linear partial differential operator ${\mathcal{L}}$ with associated free-space Green’s function $G(x, x_0)$, the single-layer and double-layer potential operators on a density function ${\gamma}$ are defined as \[eq:layerpots\] $$\begin{aligned} \label{eq:slp-definition} {\mathcal{S}}{\gamma}(x) & = \int_{\Gamma} G(x,x_0) {\gamma}(x_0) \, dx_0,\\ {\mathcal{D}}{\gamma}(x) & = \int_{\Gamma} \left(\nabla_{x_0} G(x,x_0) \cdot {\hat n}(x_0)\right) {\gamma}(x_0) \, dx_0.\end{aligned}$$ Here $\nabla_{x_0}$ denotes the gradient with respect to the variable of integration and ${\hat n}(x_0)$ is the outward-facing normal vector. In addition, the normal derivatives of the layer potentials are denoted $${\mathcal{S}'}{\gamma}(x)={\hat n}(x)\cdot \nabla_{x}{\mathcal{S}}{\gamma}(x),\quad\text{and}\quad {\mathcal{D}'}{\gamma}(x)={\hat n}(x)\cdot \nabla_{x}{\mathcal{D}}{\gamma}(x),$$ respectively. For the Laplacian $\triangle$ in two dimensions, $G(x,x_0) = -{(2 \pi)}^{-1} \log|x - x_0|$. As ${\mathcal{D}}{\gamma}(x)$ and ${\mathcal{S}'}{\gamma}(x)$ are discontinuous across the boundary $\partial\Omega$, we use the notation ${\bar{\mathcal{D}}}{\gamma}(x)$ and ${\bar{\mathcal{S}'}}{\gamma}(x)$ to denote the principal value of these operators for target points $x\in\partial\Omega$. Consider, as a specific example, the exterior Neumann problem in two dimensions for the Laplace equation: $$\triangle u(x) = 0\quad (x\in \mathbb R^d\setminus \Omega),\quad ( {\hat n}(x) \cdot \nabla u(y)) \to g(x)\quad (x\in \partial\Omega, y\to x_+),\quad u(x)\to 0 \quad(x\to\infty),$$ where $\lim_{y\to x_+}$ denotes a limit approaching the boundary from the exterior of $\Omega$.
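For targets away from $\Gamma$, both potentials just defined can be evaluated directly with the periodic trapezoidal rule. The following minimal NumPy sketch (our own illustration with an assumed circular boundary, not tied to any particular library) discretizes the unit circle and checks the classical identity, used later in the paper, that the double-layer potential of the unit density reproduces the negative of the indicator function of $\Omega$:

```python
import numpy as np

def circle_nodes(n, radius=1.0):
    """Equispaced nodes, outward normals, and arclength weights on a
    circle (periodic trapezoidal rule, spectrally accurate here)."""
    t = 2.0 * np.pi * np.arange(n) / n
    x0 = radius * np.stack([np.cos(t), np.sin(t)], axis=1)
    return x0, x0 / radius, np.full(n, 2.0 * np.pi * radius / n)

def single_layer(x, x0, weights, density):
    """S gamma(x) with G(x, x0) = -log|x - x0| / (2 pi)."""
    r = np.linalg.norm(x - x0, axis=1)
    return np.sum(weights * (-np.log(r) / (2.0 * np.pi)) * density)

def double_layer(x, x0, normals, weights, density):
    """D gamma(x); kernel grad_{x0} G . n(x0) = (x-x0).n(x0)/(2 pi |x-x0|^2)."""
    d = x - x0
    r2 = np.sum(d * d, axis=1)
    kernel = np.sum(d * normals, axis=1) / (2.0 * np.pi * r2)
    return np.sum(weights * kernel * density)

x0, normals, weights = circle_nodes(200)
ones = np.ones(len(x0))
# Indicator identity 1_Omega(x) = -D1(x) for targets off the boundary:
inside = -double_layer(np.array([0.3, 0.2]), x0, normals, weights, ones)
outside = -double_layer(np.array([1.5, 0.4]), x0, normals, weights, ones)
print(inside, outside)   # approx. 1 and 0
```

For targets well separated from the boundary the trapezoidal rule converges geometrically for these smooth integrands, so even 200 nodes reproduce the indicator to near machine precision.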
By choosing the integration surface $\Gamma$ in the layer potential as $\partial \Omega$ and representing the solution $u$ in terms of a single layer potential $u(x)=\mathcal S{\gamma}(x)$ with an unknown density function ${\gamma}$, we ensure that the Laplace PDE and the far-field boundary condition are satisfied by $u$. The remaining Neumann boundary condition becomes, by way of the well-known jump relations for layer potentials (see [@Kress:2014]), the integral equation of the second kind $$-\frac {\gamma}2 + \mathcal{\bar S}'{\gamma}= g.$$ The boundary $\Gamma$ and density ${\gamma}$ may then be discretized, and the resulting system solved for the unknown density ${\gamma}$ using the action of $\mathcal{\bar S}'$ and, e.g., an iterative solver. Once ${\gamma}$ is known, the single-layer potential representation $u(x)=\mathcal S{\gamma}(x)$ may be evaluated anywhere in $\mathbb R^d\setminus\Omega$ to obtain the sought solution $u$ of the boundary value problem.

Interior problems {#sec:int-poisson}
=================

We base our discussion in this section on a coupled finite element-integral equation (FE-IE) method presented in [@RubergCirak:2010], which, as described there, achieves low-order accuracy. In this paper we modify the discretization to achieve high-order accuracy, with analysis and numerical data in support. In addition, we quantify the extent to which reduced smoothness in the data results in degraded accuracy. We derive this combined method and present an error analysis for our implementation, demonstrating convergence for problems of varying smoothness. Assume a smooth domain $\Omega \subset \mathbb{R}^d$ and consider the boundary value problem $$\begin{aligned} -\triangle u(x) = &\, f \qquad x \in \Omega \nonumber \\ u(x) = & \, g \qquad x \in \partial\Omega. \label{eq:int-poisson}\end{aligned}$$ We introduce a domain $\hat\Omega$ such that $\Omega \subset \hat\Omega$ with $\partial \hat \Omega \cap \partial \Omega = \emptyset$.
We represent the solution of (\[eq:int-poisson\]) as $$u(x)= u_1(x)+u_2(x) \mathbf{1}_\Omega(x) \qquad x\in\hat \Omega,$$ where $u_1$ is constructed as a finite element solution obtained on the artificial larger domain $\hat\Omega$ and $u_2$ represents the integral equation solution defined in $\Omega$. $\mathbf 1_\Omega(x)$ denotes the indicator function that evaluates to 1 if $x\in \Omega$ and $0$ otherwise. If necessary, the indicator function may be evaluated to the same accuracy as $u_2$ as the negative double layer potential with the unit density, i.e., $\mathbf{1}_\Omega(x)=-{\mathcal{D}}_{\partial\Omega} 1(x)$. This yields two problems: $$\begin{aligned} \text{[FE]} \quad -\triangle u_1(x) & = f \quad x \in \hat\Omega & \quad \text{[IE]} \quad -\triangle u_2(x) & = 0 \qquad\quad x \in \Omega\nonumber \\ u_1 & = 0 \quad x \in \partial\hat\Omega & u_2 & = g - u_1 \;\;\, x \in \partial\Omega. \label{eq:int-split}\end{aligned}$$ Because $u_1$ does not depend on $u_2$, the two problems may be solved with forward substitution. Furthermore, the integral equation solution $u_2$ solves the Laplace equation; consequently, the finite element solver alone handles the right-hand side of the original problem (\[eq:int-poisson\]). In (\[eq:int-poisson\]), data for $f$ is only available on $\Omega$. In (\[eq:int-split\]), however, $f$ is assumed to be defined on the entirety of the larger domain $\hat \Omega$. In many situations (e.g., when a global expression for the right-hand side is available), this poses no particular problem. If a natural extension of $f$ from $\Omega$ to $\hat\Omega$ is unavailable, it may be necessary to compute one. In this case, the degree of smoothness of the resulting right-hand side $f$ may become the limiting factor in the convergence of the overall method. A simple, linear-time (though non-local) method to obtain such an extension involves the solution of an (in this case) exterior Laplace Dirichlet problem, yielding an $f$ of class $C^0$.
This may be efficiently accomplished using layer potentials [@askham2017adaptive]. The use of a biharmonic problem yields a smoother $f\in C^1$, albeit at greater cost. Below, we show convergence data for various degrees of smoothness of $f$ but otherwise leave this issue to future work.

Finite element formulation
--------------------------

First, we solve the FE problem, which, in the form of (\[eq:int-split\]), is not coupled to the IE part. The weak form of the FE problem in (\[eq:int-split\]) requires finding a $$u_1 \in H^1_0(\hat\Omega) \text{ such that}\quad {\mathcal{F}}(v)\, u_1 = \mathcal{M}(v)\,f \quad \forall v \in H^1_0(\hat\Omega), \label{eq:int-weak-poisson}$$ where ${\mathcal{F}}$ and $\mathcal{M}$ are defined for any $v \in H^1_0(\hat\Omega)$ as $${\mathcal{F}}(v)\, u_1 = \int_{\hat\Omega} \nabla u_1 \cdot \nabla v \, dV \qquad \text{and} \qquad \mathcal{M}(v)\,f = \int_{\hat\Omega} fv \, dV.$$ For the discrete FE solution of the continuous weak problem (\[eq:int-weak-poisson\]), we consider a Galerkin formulation with $u_1$, $v \in V^h \subset H^1_0(\hat\Omega)$.

Layer potential representation
------------------------------

We represent $u_2 = {\mathcal{D}}{\gamma}$ in terms of the double layer potential of an unknown density ${\gamma}$. The jump relations then turn the boundary condition for $u_2$ in (\[eq:int-split\]) into the second-kind integral equation $$\left\{-\frac{1}{2}I +{\bar{\mathcal{D}}}\right\} {\gamma}= g - u_1\rvert_{\partial\Omega}. \label{eq:int-ie}$$ This operator is known to have a trivial nullspace and thus we are guaranteed existence and uniqueness of a density solution ${\gamma}\in C^0(\partial\Omega)$ for $g-u_1\rvert_{\partial\Omega} \in C^0(\partial\Omega)$ by the Fredholm alternative for a sufficiently smooth curve $\partial\Omega$ [@Kress:2014]. For concreteness, we next discuss the Nyström discretization for this problem type, omitting analogous detail for subsequent problem types.
We fix a family of composite quadrature rules with weights $w_{h,i}$ and nodes $\{\xi_{h,i}\} \subset \partial\Omega$ parametrized by the element size $h$ so that $$\left| \sum_{i=1}^{N_h} w_{h,i,x} K^{{\mathcal{A}}}(x,\xi_{h,i}) {\gamma}(\xi_{h,i}) - \int_{\partial\Omega} K^{{\mathcal{A}}}(x,\xi) {\gamma}(\xi) d\xi \right| \le C h^q$$ for a kernel $K^{{\mathcal{A}}}$ associated with an integral operator ${\mathcal{A}}$ and densities ${\gamma}\in C^q(\partial\Omega)$. (We use this notation throughout, e.g. we use $K^{{\mathcal{D}}}$ to denote the kernel of the double layer potential ${\mathcal{D}}$.) We let $${\mathcal{A}}^h {\gamma}^h= \sum_{i=1}^{N_h} w_{h,i,x} K^{{\mathcal{A}}}(x,\xi_{h,i}) {\gamma}^h(\xi_{h,i})$$ for general layer potential operators ${\mathcal{A}}$. Using a conventional Nyström discretization, the unknown discretized density $${\gamma}^h={[{\gamma}^h(\xi_{h,1}), \dots, {\gamma}^h(\xi_{h,N_h})]}^T \label{eq:discrete-density}$$ satisfies the linear system given by $$-\frac 12 {\gamma}^h(\xi_{h,i}) + \sum_{j=1}^{N_h} w_{h,j,\xi_{h,i}} K^{{\mathcal{D}}}(\xi_{h,i},\xi_{h,j}) {\gamma}^h(\xi_{h,j}) =g(\xi_{h,i})-u_1(\xi_{h,i}). \label{eq:nystrom-linsys}$$ Once ${\gamma}^h$ is known, the solution $u_2$ can be computed as $$u_2(x)=\sum_{j=1}^{N_h} w_{h,j,x} K^{{\mathcal{D}}}(x,\xi_{h,j}) {\gamma}^h(\xi_{h,j}). \label{eq:int-ie-volume-eval}$$ We note that, although the density is numerically represented only in terms of pointwise degrees of freedom, ${\gamma}^h$ can be extended to a function defined everywhere on $\partial\Omega$ by making use of the fact that it solves an integral equation of the second kind, yielding $${\gamma}^h(x) = 2\sum_{j=1}^{N_h} w_{h,j,x} K^{{\mathcal{D}}}(x,\xi_{h,j}) {\gamma}^h(\xi_{h,j}) -2(g(x)-u_1(x)),$$ which, on account of (\[eq:nystrom-linsys\]), agrees with the prior definition of ${\gamma}^h$ at the quadrature nodes.
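For a concrete instance of the linear system above, consider the unit circle, where the on-surface double-layer kernel $K^{{\mathcal{D}}}$ reduces to the constant $-1/(4\pi)$ (including its diagonal limit), so no singular quadrature is needed. The sketch below is an illustration with $u_1 \equiv 0$ and boundary data taken from the harmonic function $u(x,y) = x^2 - y^2$; it is not the QBX-based discretization used in the experiments later in the paper.

```python
import numpy as np

n = 200
t = 2.0 * np.pi * np.arange(n) / n
x0 = np.stack([np.cos(t), np.sin(t)], axis=1)   # nodes xi_j on the circle
w = 2.0 * np.pi / n                             # trapezoidal arclength weight

# Discrete system  -gamma_i/2 + sum_j w K^D(xi_i, xi_j) gamma_j = g_i
# (with u_1 = 0).  On the unit circle K^D is the constant -1/(4 pi),
# which also supplies the diagonal (limiting) entries.
A = -0.5 * np.eye(n) - w / (4.0 * np.pi) * np.ones((n, n))
g = np.cos(2.0 * t)            # trace of the harmonic u(x, y) = x^2 - y^2
gamma = np.linalg.solve(A, g)

def dlp(x):
    """u_2(x) via the discretized double-layer potential."""
    d = x - x0
    r2 = np.sum(d * d, axis=1)
    kern = np.sum(d * x0, axis=1) / (2.0 * np.pi * r2)   # n(x0) = x0
    return np.sum(w * kern * gamma)

val = dlp(np.array([0.3, 0.4]))
print(val)   # exact: 0.3^2 - 0.4^2 = -0.07
```

Because the representation solves a second-kind equation, the matrix is a bounded perturbation of $-\tfrac12 I$ and stays well conditioned as $n$ grows, which is the conditioning advantage emphasized throughout.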
Error analysis
--------------

In this section we establish a decoupling estimate that allows us to express the error in the overall solution in terms of the errors achieved by the IE and FE solutions to their associated sub-problems given in (\[eq:int-split\]). For notational convenience, we introduce $r_1 = u_1\rvert_{\partial\Omega}$ for the restriction of $u_1$ to the boundary of $\Omega$ and $r_1^h = u_1^h\rvert_{\partial\Omega}$ for the restriction of the approximate solution $u_1^h$ in the finite element subspace $V^h \subset H^1_0(\hat\Omega)$.

\[lem:int-decoupling\] Suppose $\partial\Omega$ is a sufficiently smooth bounding curve, and let $$u^h(x)= u_1^h(x)+u_2^h(x) \mathbf{1}_\Omega(x) \qquad x\in\hat \Omega,$$ where $u_1^h$ solves the variational problem (\[eq:int-weak-poisson\]) and where $u_2^h$ is the potential obtained by solving (\[eq:nystrom-linsys\]) and computed according to (\[eq:int-ie-volume-eval\]). Further suppose that the family of discretizations ${\bar{\mathcal{D}}}^h$ is collectively compact and pointwise convergent. Then the overall solution error satisfies $$\label{eq:decoupling} {{\|u-u^h\|_{\infty;\Omega}}} \le C \left( {{\|u_1-u_1^h \|_{\infty;\Omega}}} +{{\|{\mathcal{D}}- {\mathcal{D}}^h\|_{\infty;\Omega}}}{{\|{\gamma}\|_{\infty;\partial\Omega}}} +{{\|g-g^h\|_{\infty;\partial\Omega}}} +{{\|r_1-r_1^h\|_{\infty;\partial\Omega}}} \right),$$ for a constant $C$ independent of the mesh size $h$ or other discretization parameters, as soon as $h$ is sufficiently small. In (\[eq:decoupling\]), $\gamma$ refers to the solution of the integral equation (\[eq:int-ie\]).

The purpose of Lemma \[lem:int-decoupling\] is to reduce the error encountered in the coupled problem to a sum of errors of boundary value problems each solved by a single, uncoupled method, so that standard FEM and IE error analysis techniques apply to each part. By the triangle inequality, $${{\|u-u^h \|_{\infty;\Omega}}} \leq {{\|u_1-u_1^h \|_{\infty;\Omega}}} + {{\|u_2-u_2^h\|_{\infty;\Omega}}}. 
\label{eq:int-err-total-general}$$ First consider ${{\| u_2-u_2^h\|_{\infty;\Omega}}}$ in the IE solution, which we bound with $${{\|u_2-u_2^h\|_{\infty;\Omega}}} \le {{\|({\mathcal{D}}- {\mathcal{D}}^h) {\gamma}\|_{\infty;\Omega}}} + {{\|{\mathcal{D}}^h ({\gamma}- {\gamma}^h)\|_{\infty;\Omega}}}. \label{eq:int-eval-plus-dens}$$ To estimate the second term, we make use of the fact that $-1/2I+{\bar{\mathcal{D}}}$ has no nullspace [@Kress:2014] and is thereby invertible by the Fredholm alternative. For sufficiently small $h$, and because the family ${\bar{\mathcal{D}}}^h$ is collectively compact and pointwise convergent, Anselone’s Theorem [@anselone1964approximate] yields invertibility of the discrete operator $(-I/2+{\bar{\mathcal{D}}}^h)$ as well as the estimate $${{\|{\gamma}- {\gamma}^h\|_{\infty;\partial\Omega}}} \le 2C' {{\|({\bar{\mathcal{D}}}^h-{\bar{\mathcal{D}}}){\gamma}\|_{\infty;\partial\Omega}}} +{{\|(g-r_1)-(g^h-r_1^h)\|_{\infty;\partial\Omega}}}$$ where $$C'= \frac{1+2{{\|{(I-2{\bar{\mathcal{D}}})}^{-1}{\bar{\mathcal{D}}}^h\|_{\infty;\partial\Omega}}}} {1-4{{\|{(I-2{\bar{\mathcal{D}}})}^{-1}({\bar{\mathcal{D}}}^h-{\bar{\mathcal{D}}}){\bar{\mathcal{D}}}^h\|_{\infty;\partial\Omega}}}},$$ which is bounded independent of discretization parameters once $h$ (and thus $({\bar{\mathcal{D}}}^h-{\bar{\mathcal{D}}})$) is small enough.
Using submultiplicativity and gathering terms in (\[eq:int-eval-plus-dens\]) yields $${{\|u_2-u_2^h\|_{\infty;\Omega}}} \le (1+2C'{{\|{\mathcal{D}}^h\|_{\infty;\Omega}}}){{\|{\mathcal{D}}- {\mathcal{D}}^h\|_{\infty;\Omega}}}{{\|{\gamma}\|_{\infty;\partial\Omega}}} + {{\|{\mathcal{D}}^h\|_{\infty;\Omega}}} {{\|(g-g^h)-(r_1-r_1^h)\|_{\infty;\partial\Omega}}}.$$ Because ${{\|{\mathcal{D}}^h\|_{\infty;\Omega}}}$ is bounded independently of $h$ by assumption, we find, for some constant $C$, $${{\|u_2-u_2^h\|_{\infty;\Omega}}} \le \\ C\big( {{\|{\mathcal{D}}- {\mathcal{D}}^h\|_{\infty;\Omega}}}{{\|{\gamma}\|_{\infty;\partial\Omega}}} +{{\|g-g^h\|_{\infty;\partial\Omega}}} +{{\|r_1-r_1^h\|_{\infty;\partial\Omega}}} \big),$$ allowing us to bound ${{\|u_2-u_2^h\|_{\infty;\Omega}}}$ in terms of the quadrature error ${{\|{\mathcal{D}}- {\mathcal{D}}^h\|_{\infty;\Omega}}}$ of the numerical layer potential operator as well as the interpolation error ${{\|g-g^h\|_{\infty;\partial\Omega}}}$ and the FEM evaluation error ${{\|r_1-r_1^h\|_{\infty;\partial\Omega}}}$. The latter, along with ${{\|u_1-u_1^h \|_{\infty;\Omega}}}$, is controlled by conventional $L^\infty$ FEM error bounds, for example the contribution [@haverkamp_aussage_1984] (2D) or the recent contribution [@leykekhman_finite_2016 (5) and Thm. 12] (3D). These references provide bounds that are applicable with minimal smoothness assumptions on $f$ and homogeneous Dirichlet BCs as in (\[eq:int-split\]). They apply generally on convex polyhedral domains, a setting that is well-adapted to our intended application (cf. Figure \[fig:interface\]). Analogous bounds can be derived in Sobolev spaces, specifically the $H^{1/2}(\partial \Omega)$ norm on the boundary and the $H^1(\Omega)$ norm in the volume, which in turn can be related to the $L^2$ norms included in the results of the numerical experiments described below.
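To make the forward-substitution structure of the interior split concrete, the following end-to-end sketch replaces the finite element solver with a five-point finite-difference stand-in (an assumption made here for brevity; the paper's experiments use a Galerkin FE discretization) and solves our own test problem $-\triangle u = 1$, $u = 0$ on the circle of radius $0.5$ embedded in $[-0.6, 0.6]^2$, whose exact solution is $(0.25 - |x|^2)/4$.

```python
import numpy as np
from scipy.sparse import kron, identity, diags
from scipy.sparse.linalg import spsolve
from scipy.interpolate import RegularGridInterpolator

# --- [FE] stand-in: 5-point differences for -lap u1 = 1 on
#     hat-Omega = [-0.6, 0.6]^2 with u1 = 0 on its boundary.
m = 119                                  # interior points per direction
grid = np.linspace(-0.6, 0.6, m + 2)
h = grid[1] - grid[0]
T = diags([-1, 2, -1], [-1, 0, 1], shape=(m, m)) / h**2
L = kron(identity(m), T) + kron(T, identity(m))   # discrete -Laplacian
u1 = np.zeros((m + 2, m + 2))
u1[1:-1, 1:-1] = spsolve(L.tocsr(), np.ones(m * m)).reshape(m, m)
u1_interp = RegularGridInterpolator((grid, grid), u1)

# --- [IE]: double-layer density on the circle |x| = 0.5 with boundary
#     data g - u1 = -u1 (g = 0 on the circle).  On a circle of radius a
#     the on-surface kernel is -1/(4 pi a), so w * K^D = -1/(2 n).
n, a = 200, 0.5
t = 2.0 * np.pi * np.arange(n) / n
x0 = a * np.stack([np.cos(t), np.sin(t)], axis=1)
w = 2.0 * np.pi * a / n
A = -0.5 * np.eye(n) - np.ones((n, n)) / (2.0 * n)
gamma = np.linalg.solve(A, -u1_interp(x0))       # forward substitution

def u(x):
    """Total solution u = u1 + D gamma inside Omega."""
    d = x - x0
    r2 = np.sum(d * d, axis=1)
    kern = np.sum(d * x0, axis=1) / a / (2.0 * np.pi * r2)
    return u1_interp(x).item() + np.sum(w * kern * gamma)

# Exact solution of -lap u = 1, u = 0 on |x| = 0.5:  u = (0.25 - |x|^2)/4.
print(u(np.array([0.1, 0.1])))   # approx. (0.25 - 0.02)/4 = 0.0575
```

Note the one-way data flow: the FD solve is independent of the IE solve, which only consumes the trace of $u_1$ on the circle, exactly as in the split problem pair.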
Numerical experiments {#sec:int-num-results} --------------------- The method we develop in this paper is a combined FE-IE solver that improves on the approach in [@RubergCirak:2010] in a number of ways. First, we make use of a representation of the solution that gives rise to an integral equation of the second kind, leading to improved conditioning and the applicability of the Nyström method. Second, we use high-order accurate quadrature for the evaluation of the layer potentials, leading to improved accuracy. The earlier method [@RubergCirak:2010] uses a coordinate transformation to remove the singularity and employs adaptive quadrature for points near the singularity. We make use of quadrature by expansion (QBX) [@KloecknerQBX:2013] using the `pytential` [@pytential-github] library. QBX evaluates layer potentials on and off the source surface by exploiting the smoothness of the potential to be evaluated. It forms local/Taylor expansions of the potential off the source surface using the fact that the coefficient integrals are non-singular. Compared with the classical singularity removal method based on polar coordinates, QBX is more general in terms of the kernels it can handle, and it unifies on- and off-surface evaluation of layer potentials. It is also naturally amenable to acceleration via fast algorithms [@gigaqbx2d]. The finite element terms are evaluated in standard $Q^n$ spaces using the `deal.II` library [@dealII84; @BangerthHartmannKanschat2007]. We consider three different right-hand sides with varying smoothness: a constant function, a $C^0$ piecewise bilinear function, and a piecewise constant function. 
$$\begin{aligned} f_\text{c}(x,y) &= 1, \label{eq:rhs0}\\ f_\text{bl}(x,y) &= \xi(x)\xi(y), \, \text{where} \,\, \xi(z) = \begin{cases} \phantom{-}\frac{5}{3}z + 1 & \text{if} \quad z \leq 0, \\ -\frac{5}{3}z + 1 & \text{if} \quad z > 0,\\ \end{cases} \label{eq:rhs1}\\ f_\text{pw}(x,y) &= \eta(x)\eta(y), \, \text{where} \,\, \eta(z) = \begin{cases} -1 & \text{if} \quad z \leq 0, \\ \phantom{-}1 & \text{if} \quad z > 0. \\ \end{cases} \label{eq:rhs2} \end{aligned}$$ These cases are selected to expose different levels of regularity in the problem. A classical $C^2$ solution is expected from $f_\text{c}$, while $f_\text{bl}$ and $f_\text{pw}$ admit solutions only in $H^3$ and $H^2$, respectively. All problems are defined on the domain $\Omega = \{x: |x|_2 \leq 0.5\}$, a disk of radius $0.5$. In addition, the domain is embedded in a square domain $\hat\Omega = [-0.6, 0.6] \times [-0.6, 0.6]$, as illustrated in Figure \[fig:int-solution\] along with the solution obtained when using the right-hand side $f_\text{c}$.

[Figure \[fig:int-solution\], left panel: sketch of the disk $\Omega$ embedded in the square $\hat\Omega$.]

![ Solution domains and sample solution for an interior embedded mesh calculation. []{data-label="fig:int-solution"}](plot-rhs0-group2-export-fe-ie-total.png){width="95.00000%"}

Table \[tab:rhsconv\] reports the self-convergence error in the finite element and integral equation portions of the solution for each test case compared to a fine-grid solution whose parameters are given in the last row of the table. We see that the method exhibits the expected order of accuracy given the smoothness of the data. In particular, the method is high-order *even near the embedded boundary*, in contrast with standard immersed boundary methods.
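The QBX evaluation used for these results can be illustrated in miniature. For the 2D Laplace double-layer potential, written as the real part of a Cauchy-type integral, one forms a local expansion about a center pushed off the surface, whose coefficient integrals are non-singular, and evaluates the truncated expansion back at the on-surface target. The sketch below is our own simplification, not pytential's algorithm; it recovers the one-sided interior limit for the density $\gamma = \cos 2\theta$ on the unit circle, which equals $-\tfrac12\cos 2\theta$ under the sign convention ${\mathcal{D}}1 = -1$ inside.

```python
import numpy as np

# Boundary: unit circle; density gamma = cos(2 theta); trapezoidal rule.
n = 400
t = 2.0 * np.pi * np.arange(n) / n
z0 = np.exp(1j * t)                  # nodes as complex numbers
dz = 2.0j * np.pi * z0 / n           # dz0 = i z0 dtheta
gamma = np.cos(2.0 * t)

# Complex form (sign convention D1 = -1 inside):
#   D gamma(z) = -Re[ (1/(2 pi i)) oint gamma(z0) / (z0 - z) dz0 ].
# QBX: pick a center c off the surface, compute the local (Taylor)
# coefficients via non-singular integrals, then evaluate the truncated
# expansion back at the on-surface target z_t.
z_t = 1.0 + 0.0j                     # on-surface target, theta = 0
c = 0.8 * z_t                        # expansion center, distance 0.2 inside
p = 8                                # truncation order p_qbx
coeffs = [-np.sum(gamma / (z0 - c) ** (l + 1) * dz) / (2.0j * np.pi)
          for l in range(p + 1)]
u = np.real(sum(a * (z_t - c) ** l for l, a in enumerate(coeffs)))
print(u)   # interior one-sided limit: -cos(0)/2 = -0.5
```

Because the interior field for this density is a polynomial in $z$, the truncated expansion is exact here; in general the truncation contributes an $O(r^{p_{\text{QBX}}+1})$ error in the center offset $r$, which is why $p$ and ${p_{\text{QBX}}}$ are balanced in the experiments.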
The degree $p$ of the FE polynomial space and the truncation order ${p_{\text{QBX}}}$ in Quadrature by Expansion are chosen so as to yield equivalent orders of accuracy in the solution; for instance, $p=1$ and ${p_{\text{QBX}}}=2$ yield second-order accurate approximations [@brenner_mathematical_2013; @Epstein:2013]. In the lower-smoothness test cases, we note a marked difference between the $\infty$- and 2-norms of the error, both shown. Also, for linear basis functions, the finite element convergence rate in ${{\|\cdot\|_{\infty;\Omega}}}$ is suboptimal. This matches analytical expectations, as known error estimates in this norm for this basis include a factor of $\log(1/h)$ [@Scott:1976; @Schatz:1980]. ----------- ----- -------------------- -------------------------------------- --------------------------------------------- ----- ---------------------------------------- ----- RHS func. $p$ ${p_{\text{QBX}}}$ ${h_{\text{fe}}}$, ${h_{\text{ie}}}$ ${{\|u-u_{\text{ref}} \|_{\infty;\Omega}}}$ EOC ${{\|u-u_{\text{ref}} \|_{2;\Omega}}}$ EOC 0.04, 0.105 4.564[$\times 10^{-4}$]{} – 9.576[$\times 10^{-5}$]{} – 0.02, 0.052 9.333[$\times 10^{-5}$]{} 2.3 1.975[$\times 10^{-5}$]{} 2.3 0.01, 0.026 1.812[$\times 10^{-5}$]{} 2.4 3.982[$\times 10^{-6}$]{} 2.3 0.04, 0.105 9.383[$\times 10^{-5}$]{} – 2.189[$\times 10^{-5}$]{} – 0.02, 0.052 1.032[$\times 10^{-5}$]{} 3.2 2.414[$\times 10^{-6}$]{} 3.2 0.01, 0.026 8.840[$\times 10^{-7}$]{} 3.5 2.052[$\times 10^{-7}$]{} 3.6 0.04, 0.105 2.405[$\times 10^{-5}$]{} – 6.072[$\times 10^{-6}$]{} – 0.02, 0.052 1.510[$\times 10^{-6}$]{} 4.9 3.727[$\times 10^{-7}$]{} 4.0 0.01, 0.026 6.887[$\times 10^{-8}$]{} 4.5 1.672[$\times 10^{-8}$]{} 4.5 0.04, 0.105 1.091[$\times 10^{-4}$]{} – 3.040[$\times 10^{-5}$]{} – 0.02, 0.052 2.110[$\times 10^{-5}$]{} 2.4 6.377[$\times 10^{-6}$]{} 2.3 0.01, 0.026 4.201[$\times 10^{-6}$]{} 2.3 1.555[$\times 10^{-6}$]{} 2.0 0.04, 0.105 2.294[$\times 10^{-5}$]{} – 5.736[$\times 10^{-6}$]{} – 
0.02, 0.052 2.453[$\times 10^{-6}$]{} 3.2 6.375[$\times 10^{-7}$]{} 3.2 0.01, 0.026 2.219[$\times 10^{-7}$]{} 3.5 5.424[$\times 10^{-8}$]{} 3.6 0.04, 0.105 5.594[$\times 10^{-6}$]{} – 1.597[$\times 10^{-6}$]{} – 0.02, 0.052 4.103[$\times 10^{-7}$]{} 3.8 9.799[$\times 10^{-8}$]{} 4.0 0.01, 0.026 2.396[$\times 10^{-8}$]{} 4.1 4.417[$\times 10^{-9}$]{} 4.5 0.04, 0.105 4.918[$\times 10^{-4}$]{} – 1.008[$\times 10^{-4}$]{} – 0.02, 0.052 1.882[$\times 10^{-4}$]{} 1.4 2.475[$\times 10^{-5}$]{} 2.0 0.01, 0.026 5.620[$\times 10^{-5}$]{} 1.7 4.222[$\times 10^{-6}$]{} 2.6 0.04, 0.105 2.811[$\times 10^{-4}$]{} – 2.417[$\times 10^{-5}$]{} – 0.02, 0.052 9.098[$\times 10^{-5}$]{} 1.6 3.583[$\times 10^{-6}$]{} 2.8 0.01, 0.026 2.489[$\times 10^{-5}$]{} 1.9 4.847[$\times 10^{-7}$]{} 2.9 0.04, 0.105 1.775[$\times 10^{-4}$]{} – 9.453[$\times 10^{-6}$]{} – 0.02, 0.052 4.730[$\times 10^{-5}$]{} 1.9 1.294[$\times 10^{-6}$]{} 2.9 0.01, 0.026 1.219[$\times 10^{-5}$]{} 2.0 1.594[$\times 10^{-7}$]{} 3.0 *— all —* 4 5 0.005, 0.013 ----------- ----- -------------------- -------------------------------------- --------------------------------------------- ----- ---------------------------------------- ----- : Self-convergence to a fine mesh solution $u_{\text{ref}}$ vs. smoothness of the right-hand side for the interior problems of Section \[sec:int-num-results\]. EOC refers to the empirical order of convergence.[]{data-label="tab:rhsconv"} FE-IE for domains with exclusions {#sec:int-ext} ================================= As the next step toward solving the interface problem (\[eq:interface\]), we extend our FE-IE method to a domain with an exclusion as shown in Figure \[fig:interface-split\]. In contrast to the *interior* Poisson problem, the solution is sought on the intersection of the unbounded domain $\mathbb{R}^2\setminus \Omega$ and the bounded domain $\hat\Omega$. 
That is, $$\begin{aligned} \label{eq:int-ext-bvp} -\triangle u(x) = &\, f(x) \qquad x \in \hat\Omega\backslash\Omega \nonumber, \\ u(x) = &\, g(x) \qquad x \in \partial\Omega \nonumber, \\ u(x) = &\, \hat g(x) \qquad x \in \partial\hat\Omega.\end{aligned}$$ The generalization to other boundary conditions is left to future work. Our new approach to FE-IE decomposition for this problem is to solve an interior finite element problem on $\hat\Omega$ and an exterior integral equation problem posed outside $\Omega$, with the two solutions coupled only through their boundary values. The setup for this problem is illustrated symbolically in Figure \[fig:int-ext-domain\].

[Figure \[fig:int-ext-domain\]: symbolic illustration of the FE subproblem, the IE subproblem, and their combination for the domain with an exclusion.]

In this way, we allow both methods to play to their individual strengths: the finite element solution exists on a regular, bounded mesh with no exclusions, while the layer potential solution handles boundary conditions present on the boundary of an exclusion, $\partial \Omega$, that is potentially geometrically complex.

FE-IE decomposition {#sec:FE-IE-decomp}
-------------------

We solve (\[eq:int-ext-bvp\]) as before by representing $$u(x)=u_1(x)+u_2(x)\quad(x\in\hat\Omega \setminus\Omega)$$ and posing a system of BVPs for $u_1$ and $u_2$: $$\begin{aligned} \label{eq:int-ext} \text{[FE]} \quad -\triangle u_1(x) & = f(x), & \quad x \in \hat\Omega, \\ u_1(x) & = \hat g(x) - u_2(x), & x \in \partial\hat\Omega, \nonumber\\ \text{[IE]} \quad -\triangle u_2(x) & = 0, &x \in \mathbb{R}^d\backslash\Omega,\nonumber\\ u_2(x) & = g(x) - u_1(x), & x \in \partial\Omega,\nonumber\\ u_2(x) - A\log|x| & = o(1) & x\to\infty\nonumber \quad(d=2),\\ u_2(x) & = o(1) & x\to\infty\nonumber \quad(d=3),\end{aligned}$$ with a given constant $A$. In two dimensions, (\[eq:int-ext\]) includes a far-field boundary condition for $u_2$ that differs from the standard far-field BC for the exterior Dirichlet problem, $u(x)=O(1)$ as $x\to\infty$ [@Kress:2014]. There are two reasons for this modification.
First, the BVP  allows solutions containing a logarithmic singularity within $\Omega$. Without permitting logarithmic blow-up at infinity, such solutions would be ruled out by the splitting : neither $u_1$ (nonsingular throughout $\Omega$) nor $u_2$ would be able to represent them. Second, allowing nonzero additive constants in $u_2$ would lead to a non-uniqueness, since for any given constant $C$, $(u_1^\text{new}, u_2^\text{new})=(u_1+C,u_2-C)$ would likewise be an admissible solution. Next, we argue that the coupled BVPs  admit a solution even under the stricter-than-conventional decay condition $o(1)$; without loss of generality, we let $A=0$. We remind the reader that any solution of the exterior Dirichlet problem in two dimensions may be represented as ${\mathcal{D}}{\gamma}+ C$, for some constant $C$ [@Kress:2014 Thm. 6.25]. Since $({\mathcal{D}}{\gamma})(x)=O(|x|^{-1})$ ($x\to\infty$) [@Kress:2014 (6.19)], the only loss from our more restrictive decay condition is a constant, which, as discussed above, may be contributed by $u_1$. From , we see that the two subproblems are now fully coupled. We cast the subproblem for $u_1$ in variational form in anticipation of FEM discretization, and the subproblem for $u_2$ in terms of layer potentials. To arrive at the coupled system, we first define the operator ${\mathcal{R}}$ as the restriction to the boundary $\partial\Omega$ and ${\hat{\mathcal{R}}}$ as the restriction to the boundary $\partial\hat\Omega$. We write $u_2$ in terms of an unknown density ${\gamma}$ using a layer potential operator ${\mathcal{A}}$ such that $u_2 = {\mathcal{A}}{\gamma}$, while ensuring that the resulting integral equation is of the second kind: $${\left\{CI + \bar{\mathcal{A}}\right\}}{\gamma}= g - {\mathcal{R}}u_1,$$ for some constant $C$.
Next, we decompose $u_1$ as $$u_1 = \tilde{u}_1 + \hat u_1 =\tilde u_1 +{\mathcal{E}}\hat r_1,$$ where $\tilde u_1 \in H^1_0(\hat\Omega)$ is zero on the boundary $\partial\hat\Omega$ and $\hat u_1 \in H^1(\hat\Omega)$ is used to enforce the boundary conditions. $\hat u_1$ is defined by a lifting operator ${\mathcal{E}}: H^{1/2}(\partial\hat\Omega) \rightarrow H^1(\hat\Omega)$ that selects a specific $\hat u_1$ in the volume from its boundary restriction $\hat r_1 \equiv {\hat{\mathcal{R}}}\hat{u}_1$. (The precise choice of the lifting operator within these guidelines has no influence on the obtained solution $u_1$.) The coupled problem is then to find ${\gamma}\in C(\partial\Omega)$, $\tilde u_1 \in H^1_0(\hat\Omega)$, and $\hat r_1 \in H^{1/2}(\partial\hat\Omega)$ such that $$\left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{ccc} \ CI + \bar{\mathcal{A}}& {\mathcal{R}}& {\mathcal{R}}{\mathcal{E}}\\ 0 & {\mathcal{F}}(v) & {\mathcal{F}}(v) {\mathcal{E}}\\ {\hat{\mathcal{R}}}{\mathcal{A}}& 0 &I \end{array}\right] \left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c} {\gamma}\\ \tilde u_1 \\ \hat r_1 \end{array}\right] = \left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c} g \\ \mathcal{M}(v)\, f \\ \hat g \end{array}\right] \qquad \forall v \in H^1_0(\hat\Omega), \label{eq:int-ext-coupled}$$ where the representation $u_2 = {\mathcal{A}}{\gamma}$ is used in .
Next, we isolate the density equation in  using a Schur complement, which results in $$\label{eq:densitywithFE} \left\{C I + \bar{\mathcal{A}}- {\mathcal{R}}{\mathcal{U}}\left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c} 0\\ {\hat{\mathcal{R}}}{\mathcal{A}}\end{array}\right]\right\} {\gamma}= g - {\mathcal{R}}{\mathcal{U}}\left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c} f \\ \hat g \end{array}\right],$$ with the *solution operator* ${\mathcal{U}}: L^2(\hat\Omega) \times H^{1/2}(\partial\hat\Omega) \rightarrow H^1(\hat\Omega)$, where ${{\mathcal{U}}\,[\zeta; \; \hat \rho]}$ is defined as the function $\mu = \tilde{\mu}+{\mathcal{E}}\hat\rho$, and where $\tilde{\mu}\in H^1_0(\hat \Omega)$ satisfies $${\mathcal{F}}(v) (\tilde\mu + {\mathcal{E}}\hat \rho) = \mathcal M (v) \zeta, \quad v \in H^1_0(\hat \Omega). \label{eq:fe-solve-op-variational}$$ This allows us to express the IE solution to the coupled problem  in terms of the input data $f$, $g$, and $\hat g$, along with the action of ${\mathcal{U}}$. Once ${\gamma}$ is known, the FE solution is found as $u_1 = {{\mathcal{U}}\,[ f ; \; \hat g - {\hat{\mathcal{R}}}{\mathcal{A}}\gamma]}$. The form in  identifies two remaining issues. The first is the choice of $C$, which is determined by selecting a layer potential representation ${\mathcal{A}}$ of $u_2$. The conventional choice, a double layer potential, is not suitable because the exterior limit of the double layer operator ${\mathcal{D}}$ has a nullspace spanned by the constants. A common way of addressing this issue involves adding a layer potential with a lower-order singularity [@Kress:2014]; however, this is inadequate for our coupled FE-IE system (for $d=2$), as we explain below. Instead, we choose ${\mathcal{A}}= {\mathcal{D}}+ {\mathcal{S}}$, the sum of the double and single layer potentials, each with the same density. This choice, by the exterior jump relations for the single and double layer potentials [@Kress:2014], establishes $C=1/2$.
Second, uniqueness and existence for  hinge on the compactness of the composite operator ${\mathcal{R}}{{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}]}$, which we establish in the next lemma. \[lem:compactcoupling\] Let $\hat\Omega\subseteq \mathbb R^n$ ($n=2,3$) be bounded, satisfy an exterior sphere condition at every boundary point, and contain a domain $\Omega$ with a $C^\infty$ boundary. Further assume that $d(\partial\hat\Omega, \partial \Omega)> 0$. Then the operator ${\mathcal{R}}{{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}]}: C(\partial\Omega) \rightarrow C(\partial\Omega)$ is compact. First consider the operator ${\hat{\mathcal{R}}}{\mathcal{A}}$. Let ${\gamma}\in C(\partial\Omega)$. ${\hat{\mathcal{R}}}{\mathcal{A}}{\gamma}$ evaluates the layer potential on the outer boundary $\partial\hat\Omega$. Since $x\mapsto {\mathcal{A}}{\gamma}(x)$ is harmonic, ${\mathcal{A}}{\gamma}(x)$ is analytic for $x\not\in \partial \Omega$ [@Kress:2014 Thm. 6.6], and the restriction to the boundary $\partial\hat \Omega$, ${\hat{\mathcal{R}}}{\mathcal{A}}{\gamma}$, is at least continuous. Next, consider the composite operator ${\gamma}\mapsto {{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}{\gamma}]}$. The boundary value problem $$\begin{aligned} \label{eq:coupling-classical} \quad -\triangle w(x) & = 0, & \quad x \in \hat\Omega, \\ w(x) & = {\hat{\mathcal{R}}}{\mathcal{A}}{\gamma}(x), & x \in \partial\hat\Omega \nonumber \end{aligned}$$ has a classical solution $w\in C^0(\overline{ \hat \Omega })\cap C^2(\hat\Omega)$ [@gilbarg_elliptic_2015 Thm. 6.13] because of the regularity of the domain and data. More precisely, even $w\in C^\infty(\hat \Omega)$ by [@gilbarg_elliptic_2015 Thm. 6.17]. The classical solution $w$ found above is identical to the unique ([@gilbarg_elliptic_2015 Cor. 8.2]) variational solution ${{\mathcal{U}}\,[ 0 ; \; {\hat{\mathcal{R}}}{\mathcal{A}}\gamma]}\in H^1(\hat\Omega)$. The classical maximum principle (e.g.
[@gilbarg_elliptic_2015 Thm. 3.1]) then yields that $$\|{\mathcal{R}}{{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}{\gamma}]}\|_\infty \le \|{\hat{\mathcal{R}}}{\mathcal{A}}{\gamma}\|_\infty.$$ Consequently, we have that ${\mathcal{R}}{\mathcal{U}}$ is bounded and ${\hat{\mathcal{R}}}{\mathcal{A}}$ is compact. The composition of a compact operator with a bounded operator is compact, which completes the proof. Using slightly different machinery, a discrete version of Lemma \[lem:compactcoupling\] is available at least in $\mathbb R^2$. To this end, let $\hat \Omega\subset \mathbb R^2$ be convex and polygonal and define a finite element subspace $V^h\subset H^1(\hat \Omega)$ of continuous polynomials of degree $\ge 1$ on a quasi-uniform triangulation of $\hat \Omega$ (in the sense of [@Schatz:1980]). Also define $V^h_0:=H^1_0(\hat \Omega)\cap V^h$. Further define the *discrete lifting operator* ${\mathcal{E}^h}:H^{1/2}(\partial \hat \Omega)\to V^h$ and the *discrete solution operator* $ {\mathcal{U}^h}: V^h \times H^{1/2}(\partial\hat\Omega) \to V^h $ where ${{\mathcal{U}^h}\,[\zeta; \; \hat\rho]}$ is defined as the function $\tilde\mu^h+{\mathcal{E}^h}\hat\rho$, where $\tilde\mu^h\in V^h_0$ satisfies $${\mathcal{F}}(v^h) (\tilde\mu^h + {\mathcal{E}^h}\hat\rho) = \mathcal M (v^h) \zeta, \quad v^h \in V^h_0.$$ (Again, the precise choice of the discrete lifting operator within these guidelines has no influence on the obtained solution.) \[thm:compactcoupling-discrete\] Assume that $\hat\Omega\subset \mathbb R^2$ is bounded, convex, and polygonal and contains a domain $\Omega$ with a $C^\infty$ boundary. Further assume that $d(\partial\hat\Omega, \partial \Omega)> \epsilon$ for some finite $\epsilon>0$. Let the family of operators $${\{{\hat{\mathcal{R}}}{\mathcal{A}^h}:C(\partial\Omega) \rightarrow C(\partial\hat\Omega)\}}_h$$ be collectively compact and the functions in their ranges be harmonic. 
Then the family of operators $${\{{\gamma}\mapsto {\mathcal{R}}{{\mathcal{U}^h}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}^h}{\gamma}]}: C(\partial\Omega) \rightarrow C(\partial\Omega)\}}_h$$ is collectively compact for sufficiently small $h$. First consider the operator ${\hat{\mathcal{R}}}{\mathcal{A}^h}$. Let ${\gamma}\in C(\partial\Omega)$. ${\hat{\mathcal{R}}}{\mathcal{A}^h}{\gamma}$ evaluates the layer potential on the outer boundary $\partial\hat\Omega$. Since $x\mapsto {\mathcal{A}^h}{\gamma}(x)$ is harmonic, it is also analytic for $x\not\in \partial \Omega$ [@Kress:2014 Thm. 6.6], and so its restriction ${\hat{\mathcal{R}}}{\mathcal{A}^h}{\gamma}$ to the boundary $\partial\hat \Omega$ is at least continuous. The discrete maximum principle [@Schatz:1980] yields that $$\|{\mathcal{R}}{{\mathcal{U}^h}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}^h}{\gamma}]}\|_\infty \le C\|{\hat{\mathcal{R}}}{\mathcal{A}^h}{\gamma}\|_\infty, \label{eq:discrete-max-principle}$$ where $C$ is independent of $h$. Noting ${\mathcal{R}}{{\mathcal{U}^h}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}^h}{\gamma}]}\in V^h\subset C(\hat\Omega)$ by construction, we have that ${\mathcal{R}}{{\mathcal{U}^h}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}^h}]} : C(\partial\Omega) \rightarrow C(\partial\Omega)$, with bounded ${\mathcal{R}}{\mathcal{U}^h}$ and compact ${\hat{\mathcal{R}}}{\mathcal{A}^h}$. We obtain our claim since the composition of a compact operator with a bounded operator is compact, noting that collective compactness follows from the $h$-independence of the constant in . The form of the operator in  is $$\label{eq:Z} \mathcal Z = CI + {\bar{\mathcal{A}}}- {\mathcal{R}}{{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}]}.$$ Thus, Lemma \[lem:compactcoupling\] and Theorem \[thm:compactcoupling-discrete\] establish that the integral equation  is of the second kind and that its discretization takes a form to which Anselone's theorem applies.
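The identity-plus-compact structure of $\mathcal Z$ is what makes a matrix-free iterative solve attractive: $\mathcal Z$ need never be assembled, since each iteration only requires applying its constituent operators. The sketch below illustrates this structure only; `A_bar` and `coupling` are random small-norm stand-ins for the discretized $\bar{\mathcal{A}}$ and ${\mathcal{R}}{{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}]}$ blocks (not actual discretizations), and the fixed-point loop stands in for the GMRES solver used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50      # number of density unknowns on the inner boundary (illustrative)
C = 0.5     # identity coefficient fixed by the jump relations above

# Random small-norm stand-ins for the two compact parts of Z; a real
# implementation would supply the discretized boundary operator A_bar and
# the FE-coupled term R U [0; R_hat A] here instead.
A_bar = 0.05 * rng.standard_normal((n, n)) / np.sqrt(n)
coupling = 0.05 * rng.standard_normal((n, n)) / np.sqrt(n)

def apply_Z(gamma):
    """Matrix-free application of Z = C*I + A_bar - coupling."""
    return C * gamma + (A_bar - coupling) @ gamma

rhs = rng.standard_normal(n)

# Second-kind structure: since ||A_bar - coupling|| << C here, even a plain
# fixed-point iteration gamma <- (rhs - (A_bar - coupling) gamma) / C
# converges; GMRES would likewise converge in few iterations.
gamma = np.zeros(n)
for _ in range(100):
    gamma = (rhs - (A_bar - coupling) @ gamma) / C

residual = np.linalg.norm(apply_Z(gamma) - rhs) / np.linalg.norm(rhs)
assert residual < 1e-12
```

The same matrix-free pattern carries over to the discrete setting, where the compact coupling block is applied by one FE solve per iteration.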
The operator $\mathcal Z$ in  and its discrete version are the sum of an identity and a compact operator. Consequently, by the Fredholm alternative, if the operator has no nullspace, then existence of the solution is guaranteed. Again, convergence of the solution as $h\to 0$ is assured by Anselone's theorem. We highlight some factors that influenced our choice of the IE representation. The purely IE part of the operator, $CI + {\bar{\mathcal{A}}}$, represents the behavior on $\partial\Omega$ of a harmonic function exterior to $\partial \Omega$, while the coupled FE part, ${\mathcal{R}}{{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}]}$, approximates a harmonic function interior to $\partial \hat\Omega$; both functions have the same value on the boundary $\partial\hat\Omega$ (but not on $\partial\Omega$). A nontrivial nullspace exists in  if the intersection of the ranges of these operators is nontrivial. The distinct decay behavior of interior and exterior Dirichlet solutions generally keeps these ranges from having a nontrivial intersection.

### Remarks on the Behavior of the Error

The observed convergence behavior is similar to that of the interior case, but with additional components stemming from the FE error and the IE representation error in the operator ${\mathcal{R}}{{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}]}$. Since this is a composition of operators, each with known high-order accuracy, the composite scheme has the asymptotic error behavior of the less accurate of its constituent parts, analogous to Lemma \[lem:int-decoupling\]. In particular, the error in the ${{\mathcal{U}}\,[0; \; {\hat{\mathcal{R}}}{\mathcal{A}}]}$ part of the overall operator on ${\gamma}$ is bounded by the error in its boundary conditions (the operator error on ${\hat{\mathcal{R}}}{\mathcal{A}}$), by the weak discrete maximum principle.
Thus, to leading order, the error of the composition matches whichever is larger of the FE and IE operator error terms. The error behavior of the finite element solution $u_1$ once again follows standard finite element convergence theory, with additional error incurred through the error in ${\hat{\mathcal{R}}}{\mathcal{A}}{\gamma}$ in the boundary condition. However, this additional FE error is again bounded by the error in ${\hat{\mathcal{R}}}{\mathcal{A}}{\gamma}$ through the discrete weak maximum principle. The net result is that we expect to retain the same overall order of convergence as in the interior case, with a similar dependence on the FE and IE solvers.

Numerical experiments {#numerical-experiments}
---------------------

We consider the coupled system  with the exact solution $$u = \log(r_0) + 2\sin(\pi x)\sin(\pi y),$$ where $$r_0 = \sqrt{(x - 0.1)^2 + (y + 0.02)^2}.$$ In each numerical example, GMRES is used to solve the linear system, with algebraic multigrid preconditioning in the case of the FE operators. The IE, FE, and coupled solutions are shown for a starfish exclusion in Figure \[fig:starfish-ext\], where the parametrization is given by $$\gamma(t) = \begin{bmatrix} 1/2 + (1/8)\sin{(10\pi t)}\cos{(2\pi t)}\\ 1/2 + (1/8)\sin{(10\pi t)}\sin{(2\pi t)} \end{bmatrix}, \quad t \in [0, 1]. \label{eq:starfish}$$ Figure \[fig:int-ext-solution\] gives a visual impression of the obtained solution. As expected, we observe high-order convergence.
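The boundary nodes for the IE discretization are obtained by sampling this parametrization. The sketch below is a direct transcription of  as printed; the helper name `starfish` and the node count are illustrative only, not taken from the implementation described here.

```python
import numpy as np

def starfish(t):
    """Sample the starfish parametrization as printed; t is a scalar or array."""
    amp = 0.125 * np.sin(10 * np.pi * t)     # (1/8) sin(10 pi t), five-fold modulation
    return np.stack([0.5 + amp * np.cos(2 * np.pi * t),
                     0.5 + amp * np.sin(2 * np.pi * t)])

# Equispaced parametrization nodes; endpoint=False avoids duplicating the seam
# point where t = 0 and t = 1 coincide.
t = np.linspace(0.0, 1.0, 200, endpoint=False)
pts = starfish(t)
assert pts.shape == (2, 200)

# The parametrization is 1-periodic, so the curve closes up.
assert np.allclose(starfish(0.0), starfish(1.0))
```

Equispaced nodes in $t$ are a natural choice here because the boundary is smooth and periodic, which is the setting where trapezoidal-rule-based quadratures attain high order.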
![Individual and combined solutions on the true domain for the exterior embedded mesh problem.](ie-exterior-starfish.png){width="32.00000%"} ![](fe-exterior-starfish.png){width="32.00000%"} ![](total-exterior-starfish.png){width="32.00000%"} []{data-label="fig:int-ext-solution"}

| $p$ | ${p_{\text{QBX}}}$ | ${h_{\text{fe}}}$, ${h_{\text{ie}}}$ | $\Vert\text{error}\Vert_{\infty;\hat\Omega\backslash\Omega}$ | order | $\Vert\text{error}\Vert_{L^2(\hat\Omega\backslash\Omega)}$ | order |
|---|---|---|---|---|---|---|
| | | 0.133, 0.327 | $5.80\times 10^{-2}$ | – | $2.38\times 10^{-2}$ | – |
| | | 0.067, 0.170 | $1.28\times 10^{-2}$ | 2.2 | $6.08\times 10^{-3}$ | 2.0 |
| | | 0.033, 0.085 | $3.47\times 10^{-3}$ | 1.9 | $1.47\times 10^{-3}$ | 2.0 |
| | | 0.017, 0.043 | $8.46\times 10^{-4}$ | 2.0 | $3.74\times 10^{-4}$ | 2.0 |
| | | 0.133, 0.327 | $1.21\times 10^{-2}$ | – | $2.05\times 10^{-3}$ | – |
| | | 0.067, 0.170 | $2.82\times 10^{-3}$ | 2.1 | $3.29\times 10^{-4}$ | 2.6 |
| | | 0.033, 0.085 | $5.53\times 10^{-4}$ | 2.3 | $3.70\times 10^{-5}$ | 3.2 |
| | | 0.017, 0.043 | $6.48\times 10^{-5}$ | 3.1 | $3.65\times 10^{-6}$ | 3.3 |
| | | 0.133, 0.327 | $1.00\times 10^{-2}$ | – | $1.12\times 10^{-3}$ | – |
| | | 0.067, 0.170 | $1.73\times 10^{-3}$ | 2.5 | $9.97\times 10^{-5}$ | 3.5 |
| | | 0.033, 0.085 | $1.91\times 10^{-4}$ | 3.2 | $7.19\times 10^{-6}$ | 3.8 |
| | | 0.017, 0.043 | $1.47\times 10^{-5}$ | 3.7 | $4.08\times 10^{-7}$ | 4.1 |

: Convergence of coupled interior-exterior FE-IE system for the excluded starfish domain.[]{data-label="tab:int-ext-conv-starfish"}

FE-IE for Interface Problems {#sec:interface}
============================

In this section, we combine the elements described in Sections
\[sec:int-poisson\] and \[sec:int-ext\] to form a new embedded boundary method for the interface problem (\[eq:interface\]). In our approach, a fictitious domain $\hat\Omega^i$ is introduced so that $\Omega^i \subset \hat\Omega^i$, defining $\hat\Omega^e$ as $\Omega^e \cup \Omega^i$. Then the problem is separated into two subproblems with an appropriate FE-IE splitting on each. This is illustrated in Figure \[fig:interface-schematic\]. There are four components to the combined solution: $u_1^i : \hat\Omega^i\to \mathbb R$ and $u_2^i : \Omega^i \to \mathbb R$ for the interior solution, plus $u_1^e : \hat\Omega^e \to \mathbb R $ and $u_2^e: \mathbb{R}^d\backslash\Omega^i\to \mathbb R$ for the exterior solution. As before, $u_1^i$ and $u_1^e$ represent the finite element components of the solution. For the interior and exterior integral equation solutions, we choose the combined representation $$u_2^i = \alpha_1 {\mathcal{D}}{\gamma}^i + \alpha_2 {\mathcal{S}}{\gamma}^e, \quad \text{and} \quad u_2^e = \alpha_3 {\mathcal{D}}{\gamma}^i + \alpha_4 {\mathcal{S}}{\gamma}^e,$$ for some constant coefficients $\alpha_j$. We next determine $\alpha_j$ to ensure that all integral operators are of the second kind. 
Taking the limits of these expressions at the interface and adding the interface restrictions of the finite element solutions gives the following form for the interface conditions: $${\mathcal{R}}u_1^i + \alpha_1 \left({\mathcal{D}}_{-}\right) {\gamma}^i + \alpha_2 \left({\mathcal{S}}_{-}\right) {\gamma}^e = c {\mathcal{R}}u_1^e + c \alpha_3 \left({\mathcal{D}}_{+}\right) {\gamma}^i + c \alpha_4 \left({\mathcal{S}}_{+}\right){\gamma}^e + a(x),$$ and $${(\mathcal{R}\partial_n)}u_1^i + \alpha_1\partial_n \left({\mathcal{D}}_{-}\right) {\gamma}^i + \alpha_2\partial_n \left({\mathcal{S}}_{-}\right){\gamma}^e = \kappa{(\mathcal{R}\partial_n)}u_1^e + \kappa\alpha_3\partial_n \left({\mathcal{D}}_{+}\right){\gamma}^i + \kappa\alpha_4 \partial_n \left({\mathcal{S}}_{+}\right){\gamma}^e + b(x),$$ where ${\mathcal{D}}_{\pm}$ indicates the interior ($-$) or exterior ($+$) limit of the double-layer operator, and similarly for ${\mathcal{S}}_{\pm}$. The ${(\mathcal{R}\partial_n)}$ operator is defined analogously to ${\mathcal{R}}$, but restricts the normal derivative of the FE solution on the interface $\Gamma$.
Applying the jump relations for the layer potentials and collecting terms yields $$\label{eq:collected-value-cond} {\mathcal{R}}u_1^i + \left[-\frac{\alpha_1 + c\alpha_3}{2}I + (\alpha_1 - c\alpha_3){\bar{\mathcal{D}}}\right]{\gamma}^i = c{\mathcal{R}}u_1^e + \left[-\alpha_2 + c\alpha_4\right]{\bar{\mathcal{S}}}{\gamma}^e + a(x)$$ and $$\label{eq:collected-deriv-cond} {(\mathcal{R}\partial_n)}u_1^i + \left[\frac{\alpha_2 + \kappa\alpha_4}{2}I + (\alpha_2 - \kappa\alpha_4){\bar{\mathcal{S}'}}\right]{\gamma}^e = \kappa{(\mathcal{R}\partial_n)}u_1^e + \left[ - \alpha_1 + \kappa\alpha_3\right] {\bar{\mathcal{D}'}}{\gamma}^i + b(x).$$ To determine suitable values for $\alpha_j$, we first eliminate the hypersingular operator ${\bar{\mathcal{D}'}}$ from , necessitating $$\label{eq:req-one} \alpha_1 = \kappa\alpha_3.$$ With ${\bar{\mathcal{D}'}}$ eliminated,  is an equation for ${\gamma}^e$ with operator $$\frac{\alpha_2 + \kappa\alpha_4}{2}I + (\alpha_2 - \kappa\alpha_4){\bar{\mathcal{S}'}}.$$ In order to obtain an operator with only the trivial nullspace, guaranteeing a unique solution for ${\gamma}^e$, we select the coefficients of $I$ and ${\bar{\mathcal{S}'}}$ to have opposite sign. Next, consider the jump condition . Enforcing the requirement , the operator on ${\gamma}^i$ is $$\label{eq:sigmaiop} -\frac{(\kappa + c)\alpha_3}{2}I + (\kappa - c)\alpha_3 {\bar{\mathcal{D}}}.$$ This results in three possibilities based on $\kappa$ and $c$:

1.  $\kappa \neq c$ and $\kappa \neq -c$, where both terms remain,
2.  $\kappa = c$, where the double-layer term drops out, and
3.  $\kappa = -c$, where the identity term is eliminated.

We consider each case in the following sections.
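These coefficient conditions are easy to verify symbolically. The snippet below checks, for an illustrative pair $(\kappa, c)$, that the $\kappa \neq c$, $\kappa \neq -c$ choices listed later in Table \[tab:alphas\] satisfy : the hypersingular term vanishes while both density operators retain a nonzero identity part. The function name is ours, for illustration only.

```python
from fractions import Fraction

def interface_coeffs(kappa, c):
    """Coefficient choices for the kappa != c, kappa != -c case (cf. Table [tab:alphas])."""
    a3 = Fraction(1) / (kappa - c)
    a1 = kappa * a3          # enforces the requirement alpha_1 = kappa * alpha_3
    a2 = Fraction(0)
    a4 = Fraction(1) / kappa
    return a1, a2, a3, a4

kappa, c = Fraction(3), Fraction(1)
a1, a2, a3, a4 = interface_coeffs(kappa, c)

# The hypersingular D-bar' coefficient (-alpha_1 + kappa*alpha_3) vanishes:
assert -a1 + kappa * a3 == 0
# The gamma^i operator keeps a nonzero identity part (second kind):
assert -(kappa + c) * a3 / 2 != 0
# The gamma^e identity and S-bar' coefficients have opposite sign:
assert (a2 + kappa * a4) / 2 * (a2 - kappa * a4) < 0
```

Exact rational arithmetic avoids any floating-point ambiguity in the sign checks.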
The case $\kappa \neq c$ and $\kappa \neq -c$
---------------------------------------------

Combining the interface conditions  and  with the condition  and the interior and exterior FE problems for $u_1^i$ and $u_1^e$ yields a coupled system for the interface problem: with ${\mathcal{F}}^i(v)$ and $v \in H^1_0(\hat\Omega^i)$ for the interior problem and ${\mathcal{F}}^e(w)$ and $w \in H^1_0(\hat\Omega^e)$ for the coupled exterior problem, we seek $({\gamma}^i, {\gamma}^e) \in C(\Gamma) \times C(\Gamma)$, $u_1^i \in H_0^1(\hat\Omega^i)$, $\tilde{u}_1^e \in H_0^1(\hat\Omega^e)$, and $\hat r_1 \in H^{1/2}(\partial\hat\Omega^e)$ such that $$\label{eq:generalmat-split} \mathcal{C} \left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c} {\gamma}^i \\ {\gamma}^e \\ \hline u_1^i \\ \tilde{u}_1^e\\ \hat r_1^e \end{array}\right] = \left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c} a(x) \\ b(x) \\ \hline \mathcal{M}^i(v) \, f^i\\ \mathcal{M}^e(w)\, f^e \\ \hat g \end{array}\right] \qquad \textnormal{for all $v \in H^1_0(\hat{\Omega}^i), w \in H_0^1(\hat{\Omega}^e)$,}$$ where $$\mathcal{C} = \left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c c | c c c} -\frac{1}{2}(\kappa+c)\alpha_3I + (\kappa - c)\alpha_3{\bar{\mathcal{D}}}& (\alpha_2 - c\alpha_4){\bar{\mathcal{S}}}& {\mathcal{R}}& -c{\mathcal{R}}& -c{\mathcal{R}}{\mathcal{E}}^e \\ 0 & \frac{1}{2}(\alpha_2 + \kappa\alpha_4)I + (\alpha_2 - \kappa\alpha_4){\bar{\mathcal{S}'}}& {(\mathcal{R}\partial_n)}& -\kappa{(\mathcal{R}\partial_n)}& \kappa{(\mathcal{R}\partial_n)}{\mathcal{E}}^e \\ \hline 0 & 0 & {\mathcal{F}}^i(v) & 0 & 0 \\ 0 & 0 & 0 & {\mathcal{F}}^e(w) & {\mathcal{F}}^e(w){\mathcal{E}}^e \\ \alpha_3{\hat{\mathcal{R}}}^e{\mathcal{D}}& \alpha_4{\hat{\mathcal{R}}}^e{\mathcal{S}}& 0 & 0 & I \\ \end{array}\right].$$ The lifting operator ${\mathcal{E}}^e$ is as described in Section \[sec:FE-IE-decomp\] and acts on the boundary $\partial\hat\Omega^e$.
The source functions $f^i:\hat\Omega^i\to \mathbb R$ and $f^e:\hat\Omega^e \to \mathbb R$ are once again suitably restricted and/or extended versions of the right-hand side $f$. From this we see that $a(x)$ and $b(x)$ in the jump conditions are handled as additional terms on the right-hand side of the system. ### A Solution Procedure Involving the Schur Complement For the exterior problem, we apply a Schur complement to . To simplify notation, we apply  and define the block IE operator as $${\underline{\underline{\mathcal{I}}}} + {\underline{\underline{{\bar{\mathcal{A}}}}}} = \begin{bmatrix}-\frac{1}{2}(\kappa+c)\alpha_3I + (\kappa - c)\alpha_3{\bar{\mathcal{D}}}& (\alpha_2 - c\alpha_4){\bar{\mathcal{S}}}\\ 0 & \frac{1}{2}(\alpha_2 + \kappa\alpha_4)I + (\alpha_2 - \kappa\alpha_4){\bar{\mathcal{S}'}}\end{bmatrix}.$$ We also define the block coupling operator $${\underline{\underline{{\mathcal{R}}}}} = \begin{bmatrix} {\mathcal{R}}& -c{\mathcal{R}}\\ {(\mathcal{R}\partial_n)}& -\kappa{(\mathcal{R}\partial_n)}\end{bmatrix}.$$ As restriction to the outer boundary $\partial\hat\Omega^e$ is only necessary for $u_2^e$, we do not need a block form of ${\hat{\mathcal{R}}}$. Rather, $${\hat{\mathcal{R}}}^e{\mathcal{A}}^e =\begin{bmatrix}\alpha_3{\hat{\mathcal{R}}}^e{\mathcal{D}}& \alpha_4{\hat{\mathcal{R}}}^e{\mathcal{S}}\end{bmatrix}$$ will suffice. 
Finally, we define the block FE solution operator ${\underline{\underline{{\mathcal{U}}}}} : L^2(\hat\Omega^i) \times L^2(\hat\Omega^e) \times H^{1/2}(\partial\hat\Omega^e) \rightarrow H_0^1(\hat\Omega^i) \times H^1(\hat\Omega^e)$ such that the $\mu^i \in H_0^1(\hat\Omega^i)$ and $\tilde{\mu}^e \in H^1_0(\hat\Omega^e)$ defined by ${[ \mu^i; \; \tilde{\mu}^e + {\mathcal{E}}^e\hat \rho ]} = {{\underline{\underline{{\mathcal{U}}}}} \,[\zeta^i; \, \zeta^e; \, \hat\rho]}$ satisfy $$\begin{aligned} {\mathcal{F}}^i(v) \mu^i & = \mathcal{M}^i(v) \zeta^i &\forall v \in H_0^1(\hat\Omega^i), \label{eq:blockFEsolve} \\ \nonumber {\mathcal{F}}^e(w)(\tilde{\mu}^e + {\mathcal{E}}^e\hat\rho) & = \mathcal{M}^e(w) \zeta^e & \forall w \in H_0^1(\hat\Omega^e). \end{aligned}$$ We then write the equations for the density functions as $$\left\{ {\underline{\underline{\mathcal{I}}}} + {\underline{\underline{{\bar{\mathcal{A}}}}}} - {\underline{\underline{{\mathcal{R}}}}}\,{\underline{\underline{{\mathcal{U}}}}}\begin{bmatrix}0 \\ 0 \\ {\hat{\mathcal{R}}}^e{\mathcal{A}}^e \end{bmatrix} \right \} \left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c} {\gamma}^i \\ {\gamma}^e \end{array}\right] = \left[{\renewcommand*{\arraystretch}{1.4}}\begin{array}{c} a(x) \\ b(x) \end{array}\right] - {\underline{\underline{{\mathcal{R}}}}}\,{\underline{\underline{{\mathcal{U}}}}}\begin{bmatrix} f^i \\ f^e \\ \hat{g}\end{bmatrix}. \label{eq:interface-split-solve-IE}$$ Once the densities are known, the FE solutions are defined as ${{\underline{\underline{{\mathcal{U}}}}} \,[f^i; \, f^e; \, \hat g - {\hat{\mathcal{R}}}^e{\mathcal{A}}^e]}{{[ {\gamma}^i; \; {\gamma}^e ]}}$. The equation  very closely resembles  for the exterior case in Section \[sec:int-ext\]. 
The main difference here, apart from the doubling of the number of variables, is the appearance of ${(\mathcal{R}\partial_n)}$ terms in ${\underline{\underline{{\mathcal{R}}}}}$ and thus in the operator on ${{[ {\gamma}^i; \; {\gamma}^e ]}}$ in (\[eq:interface-split-solve-IE\]). As the output from ${\hat{\mathcal{R}}}^e{\mathcal{A}}^e{{[ {\gamma}^i; \; {\gamma}^e ]}}$ is smooth, however, the result of ${{\underline{\underline{{\mathcal{U}}}}} \,[0 ; \, 0; \, {\hat{\mathcal{R}}}^e{\mathcal{A}}^e]}{{[ {\gamma}^i; \; {\gamma}^e ]}}$ is smooth for most domains, as it approximates a harmonic function for $u_1^e$ and returns $u_1^i = 0$ due to the homogeneous boundary conditions enforced on $u_1^i$. We expect this to mitigate any negative effects on the conditioning of the numerical system.

The case $\kappa = c$
---------------------

In this case, the ${\bar{\mathcal{D}}}$ term drops out of the operator on ${\gamma}^i$. We may choose $\alpha_2 = c\alpha_4 = \kappa\alpha_4$, which results in an IE block in the form of a scaled identity: $$[\kappa = c \text{ case}] \qquad {\underline{\underline{\mathcal{I}}}} + {\underline{\underline{{\bar{\mathcal{A}}}}}} = \begin{bmatrix} -\frac{1}{2}(\kappa+c)\alpha_3 I & 0 \\ 0 & \kappa\alpha_2 I \end{bmatrix}.$$

The case $\kappa = -c$
----------------------

In this case, the operator on ${\gamma}^i$ as defined in  loses the identity term, resulting in an equation for ${\gamma}^i$ which is not of the second kind. We leave this case to future work.

Numerical experiments {#numerical-experiments-1}
---------------------

We demonstrate the behavior of the method on two test problems, one with $\kappa \neq c$ and one with $\kappa = c$. The choices for the coefficients of the layer potential representation are summarized in Table \[tab:alphas\].
| Case | $\alpha_1$ | $\alpha_2$ | $\alpha_3$ | $\alpha_4$ |
|---|---|---|---|---|
| $\kappa \neq c$: | $\kappa\alpha_3$ | 0 | $1/(\kappa - c)$ | $1/\kappa$ |
| $\kappa = c$: | $\kappa\alpha_3$ | 1 | $-1/\kappa$ | $1/\kappa$ |

: Choices for the coefficients of the layer potential representations.[]{data-label="tab:alphas"}

### The quadratic-log test case

In this manufactured-solution experiment, the interface $\Gamma$ is a circle of radius $0.5$ and $\Omega^e \cup \Omega^i$ is the square $[-1, 1] \times [-1, 1]$. Relevant data for this test case is given in Table \[tab:testdata\], with the solution shown in Figure \[fig:quadsine\]. We note that the right-hand side $f$ in this example exhibits a discontinuity; however, the reduction in convergence order discussed in Section \[sec:intro\] does not occur because, by way of their given expressions, both pieces may be smoothly extended into the respective other domain.

![Numerical solutions for the test cases.[]{data-label="fig:quadsine"}](circle-quad-log-total-no-colorbar.png "fig:"){width="45.00000%"} ![](circle-sine-sine-total-no-colorbar.png "fig:"){width="45.00000%"}

| Case | interior $u$ | interior $f$ | interior $\hat{g}$ | exterior $u$ | exterior $f$ | exterior $\hat{g}$ | $\kappa$ | $c$ | $a(x)$ | $b(x)$ |
|---|---|---|---|---|---|---|---|---|---|---|
| quadratic-log | $-\frac{5}{6} r^2$ | $\frac{10}{3}$ | — | $-\frac{5}{4}\log{\left(\frac{1}{2r}\right)} - \frac{11}{24}$ | 0 | $u^*$ | 1/3 | 1 | 1/4 | 0 |
| sine-linear | $s_x s_y + x + y$ | $4\pi^2s_x s_y$ | — | $s_x s_y$ | $4\pi^2s_x s_y$ | $u^*$ | 1 | 1 | $x + y$ | ${\bf 1} \cdot {\hat n}$ |

: Data for the quadratic-log and sine-linear test cases.
$s_x$ and $s_y$ denote $\sin{(2\pi x)}$ and $\sin{(2\pi y)}$.[]{data-label="tab:testdata"}

Convergence is shown in Table \[tab:quad-log-error\]. We achieve high-order convergence for FE basis functions with $p \geq 2$, and we attribute the reduction in convergence order to the fact that the solution now depends on the FE derivative; as a result, we expect to lose an order of convergence in the gradient representation compared to the solution representation. An interesting artifact arises in the convergence for $p = 1$: there is negative convergence between the coarsest and second-coarsest meshes. This is due to the appearance of the derivative of the finite element solution in the system. For $p = 1$, the derivative of the FE solution is piecewise constant. On a coarse mesh, piecewise constants poorly represent even smooth solutions with variation. As a result, the $p = 1$ case is sensitive to element placement, especially on coarse meshes. Because of this effect, we disregard the data for $p=1$ and recommend using at least $p=2$.

| $p$ | ${p_{\text{QBX}}}$ | solution | ${h_{\text{fe}}}$, ${h_{\text{ie}}}$ | $\Vert\text{error}\Vert_{\infty;\Omega}$ | order | $\Vert\text{error}\Vert_{L^2(\Omega)}$ | order |
|---|---|---|---|---|---|---|---|
| | | | 0.080, 0.224 | $6.93\times 10^{-3}$ | – | $3.81\times 10^{-3}$ | – |
| | | | 0.040, 0.108 | $1.43\times 10^{-2}$ | -1.0 | $1.14\times 10^{-2}$ | -1.5 |
| | | | 0.020, 0.053 | $5.48\times 10^{-3}$ | 1.4 | $3.88\times 10^{-3}$ | 1.5 |
| | | | 0.010, 0.026 | $1.51\times 10^{-3}$ | 1.8 | $8.11\times 10^{-4}$ | 2.2 |
| | | | 0.080, 0.224 | $8.53\times 10^{-3}$ | – | $5.61\times 10^{-3}$ | – |
| | | | 0.040, 0.108 | $1.51\times 10^{-2}$ | -0.8 | $1.05\times 10^{-2}$ | -0.9 |
| | | | 0.020, 0.053 | $5.34\times 10^{-3}$ | 1.5 | $3.43\times 10^{-3}$ | 1.6 |
| | | | 0.010, 0.026 | $1.48\times 10^{-3}$ | 1.8 | $7.23\times 10^{-4}$ | 2.2 |
| | | | 0.080, 0.224 | $3.64\times 10^{-3}$ | – | $2.57\times 10^{-3}$ | – |
| | | | 0.040, 0.108 | $6.41\times 10^{-4}$ | 2.4 | $3.88\times 10^{-4}$ | 2.6 |
| | | | 0.020, 0.053 | $7.57\times 10^{-5}$ | 3.0 | $4.34\times 10^{-5}$ | 3.1 |
| | | | 0.010, 0.026 | $1.05\times 10^{-5}$ | 2.8 | $5.64\times 10^{-6}$ | 2.9 |
| | | | 0.080, 0.224 | $5.14\times 10^{-3}$ | – | $2.63\times 10^{-3}$ | – |
| | | | 0.040, 0.108 | $7.79\times 10^{-4}$ | 2.6 | $3.78\times 10^{-4}$ | 2.7 |
| | | | 0.020, 0.053 | $8.63\times 10^{-5}$ | 3.1 | $4.15\times 10^{-5}$ | 3.1 |
| | | | 0.010, 0.026 | $1.12\times 10^{-5}$ | 2.9 | $5.27\times 10^{-6}$ | 2.9 |
| | | | 0.080, 0.224 | $7.95\times 10^{-4}$ | – | $4.22\times 10^{-4}$ | – |
| | | | 0.040, 0.108 | $8.04\times 10^{-5}$ | 3.1 | $3.83\times 10^{-5}$ | 3.3 |
| | | | 0.020, 0.053 | $6.34\times 10^{-6}$ | 3.6 | $2.75\times 10^{-6}$ | 3.7 |
| | | | 0.010, 0.026 | $4.52\times 10^{-7}$ | 3.8 | $1.91\times 10^{-7}$ | 3.8 |
| | | | 0.080, 0.224 | $1.16\times 10^{-3}$ | – | $4.77\times 10^{-4}$ | – |
| | | | 0.040, 0.108 | $9.99\times 10^{-5}$ | 3.4 | $3.99\times 10^{-5}$ | 3.4 |
| | | | 0.020, 0.053 | $7.13\times 10^{-6}$ | 3.7 | $2.77\times 10^{-6}$ | 3.8 |
| | | | 0.010, 0.026 | $4.80\times 10^{-7}$ | 3.8 | $1.87\times 10^{-7}$ | 3.8 |

: Convergence data for quadratic-log test problem with $\hat\Omega^i = [-0.6, 0.6] \times [-0.6, 0.6]$[]{data-label="tab:quad-log-error"}

### The sine-linear test case

Next, we consider a test case for which $\kappa = c$; its data is summarized in Table \[tab:testdata\] and shown for a circular interface in Figure \[fig:sine-sine\]. This is once again a manufactured solution, with the same forcing function $f$ throughout $\Omega^i \cup \Omega^e$. The extra linear function added to the interior solution influences the numerical system only through the non-homogeneous jump conditions $a(x)$ and $b(x)$. Convergence data is shown in Table \[tab:sine-sine-starfish-error\]. Again, we see high-order convergence.
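The order columns reported throughout are empirical orders of convergence computed from consecutive rows, $\log(e_1/e_2)/\log(h_1/h_2)$. A minimal helper; the sample values below are taken from the quadratic-log convergence data ($L^2$ errors, ${h_{\text{ie}}}$ column).

```python
import math

def eoc(e_coarse, e_fine, h_coarse, h_fine):
    """Empirical order of convergence between two successive refinements."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Halving h while the error drops by a factor of 4 indicates second order:
assert abs(eoc(1e-2, 2.5e-3, 0.1, 0.05) - 2.0) < 1e-12

# Reproducing an order entry from the quadratic-log table:
order = eoc(4.22e-4, 3.83e-5, 0.224, 0.108)
print(round(order, 1))  # → 3.3
```

Note that the order entries are sensitive to which mesh parameter is used in the denominator when ${h_{\text{fe}}}$ and ${h_{\text{ie}}}$ are not refined by exactly the same factor.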
  $p$   ${p_{\text{QBX}}}$   solution   ${h_{\text{fe}}}$, ${h_{\text{ie}}}$   ${{\|\text{error}\|_{\infty;\Omega}}}$   order   $\| \text{error} \|_{L^2(\Omega)}$   order
  ----- -------------------- ---------- -------------------------------------- ---------------------------------------- ------- ------------------------------------ -------
                                        0.080, 0.327                           7.43[$\times 10^{-2}$]{}                 –       1.66[$\times 10^{-2}$]{}             –
                                        0.040, 0.170                           5.54[$\times 10^{-3}$]{}                 3.7     1.27[$\times 10^{-3}$]{}             3.7
                                        0.020, 0.085                           1.37[$\times 10^{-3}$]{}                 2.0     3.17[$\times 10^{-4}$]{}             2.0
                                        0.010, 0.043                           3.63[$\times 10^{-4}$]{}                 1.9     7.87[$\times 10^{-5}$]{}             2.0
                                        0.080, 0.327                           7.32[$\times 10^{-2}$]{}                 –       2.63[$\times 10^{-2}$]{}             –
                                        0.040, 0.170                           5.63[$\times 10^{-3}$]{}                 3.7     3.08[$\times 10^{-3}$]{}             3.1
                                        0.020, 0.085                           1.45[$\times 10^{-3}$]{}                 2.0     7.28[$\times 10^{-4}$]{}             2.1
                                        0.010, 0.043                           3.48[$\times 10^{-4}$]{}                 2.0     1.93[$\times 10^{-4}$]{}             1.9
                                        0.080, 0.327                           2.11[$\times 10^{-3}$]{}                 –       3.91[$\times 10^{-4}$]{}             –
                                        0.040, 0.170                           3.41[$\times 10^{-4}$]{}                 2.6     2.64[$\times 10^{-5}$]{}             3.9
                                        0.020, 0.085                           1.76[$\times 10^{-5}$]{}                 4.3     2.69[$\times 10^{-6}$]{}             3.3
                                        0.010, 0.043                           2.90[$\times 10^{-6}$]{}                 2.6     6.97[$\times 10^{-7}$]{}             1.9
                                        0.080, 0.327                           3.01[$\times 10^{-3}$]{}                 –       4.55[$\times 10^{-4}$]{}             –
                                        0.040, 0.170                           1.29[$\times 10^{-4}$]{}                 4.5     3.26[$\times 10^{-5}$]{}             3.8
                                        0.020, 0.085                           1.12[$\times 10^{-5}$]{}                 3.5     3.98[$\times 10^{-6}$]{}             3.0
                                        0.010, 0.043                           2.87[$\times 10^{-6}$]{}                 2.0     8.99[$\times 10^{-7}$]{}             2.1
                                        0.080, 0.327                           2.92[$\times 10^{-3}$]{}                 –       2.74[$\times 10^{-4}$]{}             –
                                        0.040, 0.170                           4.78[$\times 10^{-4}$]{}                 2.6     1.21[$\times 10^{-5}$]{}             4.5
                                        0.020, 0.085                           1.78[$\times 10^{-5}$]{}                 4.7     3.33[$\times 10^{-7}$]{}             5.2
                                        0.010, 0.043                           1.25[$\times 10^{-6}$]{}                 3.8     1.04[$\times 10^{-8}$]{}             5.0
                                        0.080, 0.327                           3.79[$\times 10^{-3}$]{}                 –       3.44[$\times 10^{-4}$]{}             –
                                        0.040, 0.170                           1.88[$\times 10^{-4}$]{}                 4.3     9.34[$\times 10^{-6}$]{}             5.2
                                        0.020, 0.085                           1.30[$\times 10^{-5}$]{}                 3.9     2.43[$\times 10^{-7}$]{}             5.3
                                        0.010, 0.043                           5.24[$\times 10^{-7}$]{}                 4.6     7.44[$\times 10^{-9}$]{}             5.0
  ----- -------------------- ---------- -------------------------------------- ---------------------------------------- ------- ------------------------------------ -------

  : Convergence data for sine-linear test problem with $\Omega^i = $ starfish curve []{data-label="tab:sine-sine-starfish-error"}

Numerical considerations
------------------------

We consider two approaches for solving the coupled four-variable linear system: solving the full system together, or implementing the Schur-complement-based procedure described in –. In the following, the total $4\times 4$ system was preconditioned with a block preconditioner on the FE blocks; the preconditioner used was smoothed aggregation AMG from `pyamg` [@pyamg-github]. For the Schur complement solve, the action of inverting the FE operators was implemented by separating the Dirichlet nodes to regain a symmetric positive definite system for the interior points. This inner solve then used preconditioned CG. The total computational work in the Schur complement solve depends on both the number of outer GMRES iterations for the densities and inner iterations on the FE solutions, carried out every time the action of ${\underline{\underline{{\mathcal{U}}}}}$ is required as part of the outer operator. Considering the structure of the FE block of the coupled operator $\mathcal{C}$ from and the definition of the solution operator ${\underline{\underline{{\mathcal{U}}}}}$ in , the interior and exterior FE problems are only coupled through the total system ; we choose to solve the interior and exterior FE problems separately in each application of ${\underline{\underline{{\mathcal{U}}}}}$.

Conclusions
===========

We have demonstrated a method of coupling finite-element and integral equation solvers for the strong enforcement of boundary conditions on interior and exterior embedded boundaries.
Furthermore, we have introduced a new method of coupling these FE-IE subproblems to solve a wide class of interface problems with homogeneous and non-homogeneous jump conditions. Our method does not require any special modifications to the finite element basis functions. It also does not need volume mesh refinement around the embedded domain, which means that time-dependent domains will not necessitate modifications to the volume-based finite element matrices—only the surface-based integral equation discretizations and the coupling matrices would be updated. One benefit is that our method can be implemented with off-the-shelf FE and IE packages. We have shown theoretical error bounds in the case of the interior embedded mesh problem and achieved empirical high-order convergence for interior, exterior, and interface examples, even very close to the embedded boundary. Acknowledgments {#acknowledgments .unnumbered} =============== Portions of this work were sponsored by the Air Force Office of Scientific Research under grant number FA9550–12–1–0478, and by the National Science Foundation under grant number CCF-1524433. Part of the work was performed while NB and AK were participating in the HKUST-ICERM workshop ‘Integral Equation Methods, Fast Algorithms and Their Applications to Fluid Dynamics and Materials Science’ held in 2017.
--- author: - 'Satoshi Ejima[^1] and Holger Fehske[^2]' title: 'Charge-density-wave formation in the Edwards fermion-boson model at one-third band filling' --- Introduction ============ Strong correlations can affect the transport properties of low-dimensional systems to the point of insulating behavior. Prominent examples are broken symmetry states of quasi one-dimensional (1D) metals, where charge- or spin-density waves are brought about by electron-phonon or by electron-electron interactions [@Gr94]. These interactions can be parametrized by bosonic degrees of freedom, with the result that the fermionic charge carrier becomes “dressed” by a boson cloud that lives in the particle’s immediate vicinity and takes an active part in its transport [@Be09]. A paradigmatic model describing quantum transport in such a “background medium” is the Edwards fermion-boson model [@Ed06; @AEF07]. The model exhibits a surprisingly rich phase diagram including metallic repulsive and attractive Tomonaga-Luttinger-liquid (TLL) phases, insulating charge-density-wave (CDW) states [@WFAE08; @EHF09; @EF09b; @FEWB12], and even regions where phase separation appears [@ESBF12]. The part of the Edwards Hamiltonian that accommodates boson-affected transport is $$\begin{aligned} H_{fb}= -t_b\sum_{\langle i, j \rangle} f_j^{\dagger}f_{i}^{\phantom{\dagger}} (b_i^{\dagger}+b_j^{\phantom{\dagger}})\,.\end{aligned}$$ Every time a spinless fermion hops between nearest-neighbor lattice sites $i$ and $j$ it creates (or absorbs) a local boson $b_j^{\dagger}$ ($b_i^{}$), which enhances (lowers) the energy of the background, $H_{b}=\omega_0\sum_i b_i^\dagger b_i^{\phantom{\dagger}}$, by $\omega_0$. Moving in one direction only, the fermion creates a string of local bosonic excitations that will finally immobilize the particle (just as for a hole in a classical Néel background). Because of quantum fluctuations, however, any distortion in the background should be able to relax.
Incorporating this effect, the entire Edwards model takes the form $$\begin{aligned} H=H_{fb}-\lambda\sum_i(b_i^{\dagger}+b_i^{\phantom{\dagger}})+H_b\,, \label{model}\end{aligned}$$ where $\lambda$ is the relaxation rate. The unitary transformation $b_i\to b_i + \lambda/\omega_0$ replaces the second term in (\[model\]) by a direct, i.e., boson-unaffected, fermionic hopping term $H_f=-t_f\sum_{\langle i, j \rangle} f_j^{\dagger}f_{i}^{}$. In this way the particle can move freely, but with a renormalized transfer amplitude $t_f=2\lambda t_b/\omega_0$. We note that coherent propagation of a fermion is possible even in the limit $\lambda=t_f=0$, by means of a six-step vacuum-restoring hopping related to an effective next-nearest-neighbor transfer. This process takes place on a strongly reduced energy scale (with weight $\propto t_b^6/\omega_0^5$), and is particularly important in the extreme low-density regime ($n^f\ll 1$), where the Edwards model mimics the motion of a single hole in a quantum antiferromagnet [@EEAF10]. At low-to-intermediate particle densities $n^f \leq 0.3$ the 1D Edwards model system stays metallic. If the fermions here couple to slow (low-energy) bosons ($\omega_0/t_b \lesssim 1)$, the primarily repulsive TLL becomes attractive, and eventually even phase segregation into particle-enriched and particle-depleted regions takes place at small $\lambda$ [@ESBF12]. No such particle attraction is observed, however, for densities $0.3\lesssim n^f\leq 0.5$. Perhaps, in this regime, the repulsive TLL might give way to an insulating state with charge order if the background is “stiff”, i.e., for small $\lambda/t_b$ and fast (high-energy) bosons $\omega_0/t_b >1$. So far, such a correlation-induced TLL-CDW metal-insulator transition has been proven to exist for the half-filled band case ($n^f=0.5$) [@WFAE08; @EHF09].
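The renormalized amplitude $t_f=2\lambda t_b/\omega_0$ quoted above can be checked term by term. Under the displacement $b_i\to b_i+\lambda/\omega_0$ (a routine computation, not spelled out in the references), the three pieces of the Hamiltonian (\[model\]) transform, for a chain of $L$ sites, as

```latex
\begin{aligned}
H_{fb} &\;\to\; H_{fb} + H_f,
 \qquad H_f=-t_f\sum_{\langle i,j\rangle} f_j^{\dagger}f_{i}^{\phantom{\dagger}},
 \quad t_f=\frac{2\lambda t_b}{\omega_0},\\
-\lambda\sum_i \bigl(b_i^{\dagger}+b_i^{\phantom{\dagger}}\bigr)
 &\;\to\; -\lambda\sum_i \bigl(b_i^{\dagger}+b_i^{\phantom{\dagger}}\bigr)
  -\frac{2\lambda^2}{\omega_0}\,L,\\
H_b &\;\to\; H_b+\lambda\sum_i \bigl(b_i^{\dagger}+b_i^{\phantom{\dagger}}\bigr)
  +\frac{\lambda^2}{\omega_0}\,L,
\end{aligned}
```

so the terms linear in the bosons cancel and, up to the constant $-\lambda^2 L/\omega_0$, one is left with $H_{fb}+H_f+H_b$, as stated.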
In the limit $\omega_0/t_b\gg 1 \gg \lambda/t_b$ the Edwards model can be approximated by an effective $t$-$V$ model, $H_{tV}=H_f+V\sum_i n_i^f n_{i+1}^f$, with nearest-neighbor Coulomb interaction $V=t_b^2/\omega_0$ [@NEF13]. The spinless fermion $t$-$V$ model, for its part, can be mapped onto the exactly solvable $XXZ$-Heisenberg model, which exhibits a Kosterlitz-Thouless [@KT73] (TLL-CDW) quantum phase transition at $(V/t_f)_c=2$, i.e., at $(\lambda/t_b)_{tV,c}=0.25$. The critical value is in reasonable agreement with that obtained for the half-filled Edwards model in the limit $\omega_0\to \infty$: $(\lambda/t_b)_c \simeq 0.16$ [@EHF09]. At lower densities, however, for example at $n^f=1/3$, a CDW instability occurs in 1D $t$-$V$-type models only if (substantially large) longer-ranged Coulomb interactions are included, such as a next-nearest-neighbor term $V_2$ [@SW04]. In order to clarify whether the 1D Edwards model by itself shows a metal-to-insulator transition off half-filling at large $\omega_0$, and why phase separation is absent for small $\omega_0$, in this work we investigate the model at one-third band filling, using the density matrix renormalization group (DMRG) technique [@Wh92] combined with the pseudo-site approach [@JW98b; @JF07] and a finite-size analysis. This allows us to determine the ground-state phase diagram of the 1D Edwards model in the complete parameter range. Theoretical approach ==================== To identify the quantum phase transition between the metallic TLL and insulating CDW phases we inspect—by means of DMRG—the behavior of the local fermion/boson densities $n_i^{f/b}$, of the single-particle gap $\Delta_c$, and of the TLL parameter $K_\rho$. In doing so, we take into account up to four pseudo-sites, and ensure that the local boson density of the last pseudo-site is always less than $10^{-7}$ for all real lattice sites $i$.
We furthermore keep up to $m=1200$ density-matrix eigenstates in the renormalization process to guarantee a discarded weight smaller than $10^{-8}$. For a finite system with $L$ sites the single-particle charge gap is given by $$\begin{aligned} \Delta_c(L)=E(N+1)+E(N-1)-2E(N),\end{aligned}$$ where $E(N)$ and $E(N\pm1)$ are the ground-state energies in the $N$- and ($N\pm1$)-particle sectors, respectively. In the CDW state $\Delta_c$ is finite, but it decreases exponentially on approaching the metal-insulator (MI) transition point if the transition is of Kosterlitz-Thouless type, as for the $t$-$V$ model. This hampers an accurate determination of the TLL-CDW transition line. In this respect the TLL parameter $K_\rho$ is more promising. Here bosonization field theory predicts how $K_\rho$ should behave at a quantum critical point. In order to determine $K_\rho$ accurately by DMRG, we first have to calculate the static (charge) structure factor $$\begin{aligned} S_c(q)=\frac{1}{L}\sum_{j,l}e^{{\rm i}q(j-l)} \langle (f_j^\dagger f_j^{\phantom{\dagger}}-n) (f_l^\dagger f_l^{\phantom{\dagger}}-n) \rangle\,, \end{aligned}$$ where the momenta are $q=2\pi m/L$ with integer $0<m<L$ [@EGN05]. The TLL parameter $K_\rho$ is proportional to the slope of $S_c(q)$ in the long-wavelength limit $q\to0^+$: $$\begin{aligned} K_\rho=\pi\lim_{q\to 0}\frac{S_c(q)}{q}\,.\end{aligned}$$ For a spinless-fermion system with one-third band filling, the TLL parameter should be $K_\rho^\ast=2/9$ at the metal-insulator transition point. For an infinitesimally doped three-period CDW insulator, on the other hand, bosonization theory yields $K_\rho^{\rm CDW}=1/9$ [@Sc94; @Gi03]. Numerical results ================= First evidence for the formation of a CDW state in the one-third filled Edwards model comes from the spatial variation of the local densities of fermions $n_i^f\equiv\langle f_i^\dagger f_i^{\phantom{\dagger}}\rangle$ and bosons $n_i^b\equiv\langle b_i^\dagger b_i^{\phantom{\dagger}}\rangle$.
Fixing $\omega_0=2$, we find a modulation of the particle density commensurate with the band filling factor 1/3 for very small $\lambda=0.0125$ (see Fig. \[fig1\], right panel). In the process, working with open boundary conditions (OBC), one of the three degenerate ground states with charge pattern (... 100100100 ...), (... 010010010 ...), or (... 001001001 ...) is picked up by the initialization of the DMRG algorithm. As a result the CDW becomes visible in the local density. Note that also in the metallic state, which is realized already for $\lambda$’s as small as 0.1 (cf. Fig. \[fig1\], left panel), a charge modulation is observed. These modulations, however, can be attributed to Friedel oscillations, which are caused by the OBC and will decay algebraically in the central part of the chain as $L$ increases. Thus, for $\omega_0=2$, a metal-to-insulator transition is expected to occur within the range $10<\lambda^{-1}<80$. ![(Color online) Local fermion ($n_j^f$ – filled blue circles) and boson ($n_j^b$ – open red squares) densities in the central part of an Edwards model chain with $L=120$ sites and OBC. DMRG data shown in the left-hand (right-hand) panel indicate a homogeneous TLL (CDW) state for $n^f=1/3$ and $\lambda^{-1}=10$ ($\lambda^{-1}=80$), where $\omega_0=2$. In what follows all energies are measured in units of $t_b$.[]{data-label="fig1"}](fig1.eps){width="0.8\columnwidth"} To localize the point where—at given $\omega_0$ and $\lambda$—the quantum phase transition takes place, we first compute the single-particle gap $\Delta_c$ and TLL charge exponent $K_\rho$ for finite chains with up to $L=150$ sites and OBC. Then we perform a finite-size scaling as illustrated for $K_\rho$ by Fig. \[fig2\], left panel. Here open symbols give $K_\rho$ as a function of the inverse system size $L^{-1}$. The DMRG data can be extrapolated to the thermodynamic limit by third-order polynomial functions.
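This extrapolation step can be mimicked directly: evaluate at $1/L\to 0$ the third-order polynomial through the finite-size data. A sketch with made-up $K_\rho(L)$ values (the actual DMRG numbers are those of Fig. \[fig2\]):

```python
def poly_extrapolate_to_zero(xs, ys):
    """Evaluate at x = 0 the interpolating polynomial through (xs, ys)
    using Neville's algorithm; with four points this is a cubic."""
    p = list(ys)
    n = len(xs)
    for m in range(1, n):
        for i in range(n - m):
            p[i] = ((0.0 - xs[i + m]) * p[i]
                    + (xs[i] - 0.0) * p[i + 1]) / (xs[i] - xs[i + m])
    return p[0]

# Hypothetical finite-size data K_rho(L); in practice these come from DMRG
# via K_rho(L) = pi * S_c(q_1)/q_1 with q_1 = 2*pi/L.
Ls = [60, 90, 120, 150]
Ks = [0.260, 0.247, 0.240, 0.236]   # illustrative values only
K_inf = poly_extrapolate_to_zero([1.0 / L for L in Ls], Ks)
print(f"K_rho(L -> infinity) ~ {K_inf:.3f}")
```

With four system sizes the cubic interpolates the data exactly; with more sizes one would instead perform a least-squares cubic fit in $1/L$ and read off the intercept.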
Decreasing $\lambda$ at fixed $\omega_0=2$, the value of $K_\rho$ decreases too and becomes equal to $K_\rho^\ast=2/9$ at the Kosterlitz-Thouless transition point $(\lambda^{-1})_c \sim 36$; see Fig. \[fig2\], right panel. For $\lambda^{-1}>36$ the system realizes a $2k_{\rm F}$-CDW insulator with finite charge gap $\Delta_c$. Furthermore, calculating $K_\rho(L)$ for $N=L/3-1$ particles, we can show that the infinitesimally doped CDW insulator has $K_\rho^{\rm CDW}=1/9$ at $n^f=1/3$. Deep in the CDW phase, $K_\rho$ approaches $1/9$ in the thermodynamic limit \[cf. the $\lambda=0.01$ data (filled symbols) in the left panel of Fig. \[fig2\]\]. [cc]{} ![(Color online) Left panel: $K_\rho(L)$ in the one-third filled Edwards model as a function of the inverse system size for various values of $\lambda$ at $\omega_0=2$ (open symbols). The finite-size interpolated DMRG data at the metal-insulator transition point and for the infinitesimally doped CDW insulator \[$n^f=1/3-1/L$ (filled symbols)\] are in perfect agreement with the bosonization results $K_\rho^\ast=2/9$ and $K_\rho^{\rm CDW}=1/9$, respectively. Right panel: $L\to\infty$ extrapolated $K_\rho$ (circles) and $\Delta_{c}$ (squares), as functions of $\lambda^{-1}$ for $\omega_0=2$, indicate a TLL-CDW transition at $\lambda^{-1}\sim 36$.[]{data-label="fig2"}](fig2a.eps){width="0.72\columnwidth"} ![(Color online) Left panel: $K_\rho(L)$ in the one-third filled Edwards model as a function of the inverse system size for various values of $\lambda$ at $\omega_0=2$ (open symbols). The finite-size interpolated DMRG data at the metal-insulator transition point and for the infinitesimally doped CDW insulator \[$n^f=1/3-1/L$ (filled symbols)\] are in perfect agreement with the bosonization results $K_\rho^\ast=2/9$ and $K_\rho^{\rm CDW}=1/9$, respectively.
Right panel: $L\to\infty$ extrapolated $K_\rho$ (circles) and $\Delta_{c}$ (squares), as functions of $\lambda^{-1}$ for $\omega_0=2$, indicate a TLL-CDW transition at $\lambda^{-1}\sim 36$.[]{data-label="fig2"}](fig2b.eps){width="0.95\columnwidth"} Our final result is the ground-state phase diagram of the one-third filled Edwards model shown in Fig. \[fig3\]. The TLL-CDW phase boundary is derived from the $L\to\infty$ extrapolated $K_\rho$ values. Within the TLL region $2/9<K_\rho<1$. Of course, the TLL appears at large $\lambda$, when any distortion of the background medium readily relaxes ($\propto \lambda$), or, in the opposite limit of small $\lambda$, when the rate of the bosonic fluctuations ($\propto\omega_0^{-1}$) is sufficiently high. Below $\omega_{0,c}\simeq 0.93$ the metallic state is stable $\forall \lambda$, because the background medium is easily disturbed and therefore does not hinder the particle’s motion much. Note that this value is smaller than the corresponding one for the half-filled band case, where $\omega_{0,c}\simeq 1.38$. On the other hand, the $2k_{\rm F}$-CDW phase with $\Delta_c>0$ and long-range order appears, at half-filling, for small $\lambda$ and, as a trend, for large $\omega_0$ (see dashed lines); $\lambda_c\simeq 0.16$ for $\omega_0\to\infty$ [@EHF09]. Interestingly, for $n^f=1/3$, we observe that the CDW will be suppressed again if the energy of a background distortion becomes larger than a certain $\lambda$-dependent value (see Fig. \[fig3\], left panel). In stark contrast to the half-filled band case, at $n^f=1/3$, it seems that the TLL is stable $\forall \lambda$, when $\omega_0\to\infty$. This is because in this limit in the corresponding one-third filled $t$-$V$ model not only a nearest-neighbor Coulomb repulsion $V$ but also a substantial next-nearest-neighbor interaction $V_2$ is needed to drive the TLL-to-CDW transition [@SW04].
Again in the limit $\omega_0/t_b\gg 1 \gg \lambda/t_b$, the Edwards model at one-third filling can be described by the effective $t$-$V$-$V_2$ model with $V=2t_b^2/3\omega_0$ and $V_2=8t_b^4/3\omega_0^3$, i.e., $V_2/t_f=4t_b^3/3\lambda \omega_0^2$, which clearly explains the absence of the CDW phase for $\omega_0\gg 1$. [cc]{} ![(Color online) DMRG ground-state phase diagram of the 1D Edwards model at one-third band filling, showing the stability regions of metallic TLL and insulating CDW phases in the $\lambda^{-1}$-$\omega_0^{-1}$ (left panel) and $\lambda$-$\omega_0$ (right panel) plane. The dashed line denotes the MI transition points at half band filling from Ref. [@EHF09].[]{data-label="fig3"}](fig3a.eps){width="0.95\columnwidth"} ![(Color online) DMRG ground-state phase diagram of the 1D Edwards model at one-third band filling, showing the stability regions of metallic TLL and insulating CDW phases in the $\lambda^{-1}$-$\omega_0^{-1}$ (left panel) and $\lambda$-$\omega_0$ (right panel) plane. The dashed line denotes the MI transition points at half band filling from Ref. [@EHF09].[]{data-label="fig3"}](fig3b.eps){width="0.92\columnwidth"} Conclusions =========== To summarize, using an unbiased numerical (density matrix renormalization group) technique, we investigated the one-dimensional fermion-boson Edwards model at one-third band filling. We proved that the model displays a metal-insulator quantum phase transition induced by correlations in the background medium. The metallic phase is a Tomonaga-Luttinger liquid with $2/9<K_\rho<1$. The insulator represents a $2k_{\rm F}$ charge density wave with $K_\rho^{\rm CDW}=1/9$ deep inside the long-range ordered state. Performing a careful finite-size scaling analysis, the phase transition point can be precisely determined by $K_\rho$. 
If the background medium is stiff, we can conclude—by analogy with the ground-state phase diagram of the one-third filled $t$-$V$-$V_2$ model—that the Edwards model incorporates the effects of both effective nearest-neighbor and next-nearest-neighbor Coulomb interactions between the fermionic charge carriers. The effect of the latter one is reduced when the energy of a local distortion in the background is very large, which maintains metallic behavior—different from the half-filled band case—even for weak boson relaxation. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank S. Nishimoto for useful discussions. This work was supported by the Deutsche Forschungsgemeinschaft through SFB 652, project B5. [10]{} G. Gr[ü]{}ner: [*Density Waves in Solids*]{} (Addison Wesley, Reading, MA, 1994). M. Berciu: Physics [**2**]{} (2009) 55. D. M. Edwards: Physica B [**378-380**]{} (2006) 133. A. Alvermann, D. M. Edwards, and H. Fehske: Phys. Rev. Lett. [**98**]{} (2007) 056602. G. Wellein, H. Fehske, A. Alvermann, and D. M. Edwards: Phys. Rev. Lett. [**101**]{} (2008) 136402. S. Ejima, G. Hager, and H. Fehske: Phys. Rev. Lett. [**102**]{} (2009) 106404. S. Ejima and H. Fehske: Phys. Rev. B [**80**]{} (2009) 155101. H. Fehske, S. Ejima, G. Wellein, and A. R. Bishop: J. Phys.: Conference Series [**391**]{} (2012) 012152. S. Ejima, S. Sykora, K. W. Becker, and H. Fehske: Phys. Rev. B [**86**]{} (2012) 155149. D. M. Edwards, S. Ejima, A. Alvermann, and H. Fehske: J. Phys. Condens. Matter [**22**]{} (2010) 435601. S. Nishimoto, S. Ejima, and H. Fehske: Phys. Rev. B [**87**]{} (2013) 045116. J. M. Kosterlitz and D. J. Thouless: J. Phys. C [**6**]{} (1973) 1181. P. Schmitteckert and R. Werner: Phys. Rev. B [**69**]{} (2004) 195115. S. R. White: Phys. Rev. Lett. [**69**]{} (1992) 2863. E. Jeckelmann and S. R. White: Phys. Rev. B [**57**]{} (1998) 6376. E. Jeckelmann and H. Fehske: Rivista del Nuovo Cimento [**30**]{} (2007) 259. S. Ejima, F. 
Gebhard, and S. Nishimoto: Europhys. Lett. [**70**]{} (2005) 492. H. J. Schulz: in [*Strongly Correlated Electronic Materials*]{}, ed. K. S. Bedell, Z. Wang, and D. E. Meltzer (Addison-Wesley, Reading, MA, 1994), pp. 187; cond-mat/9412036. T. Giamarchi: [*Quantum Physics in One Dimension*]{} (Clarendon Press, Oxford, 2003). [^1]: E-mail address: [email protected] [^2]: E-mail address: [email protected]
--- abstract: 'In this article we show that the three-particle GHZ theorem can be reformulated in terms of inequalities, allowing imperfect correlations due to detector inefficiencies. We show quantitatively that taking into account these inefficiencies, the published results of the Innsbruck experiment support the nonexistence of local hidden variables that explain the experimental results.' author: - | **J. Acacio de Barros**[^1] and **Patrick Suppes**[^2]\ CSLI - Ventura Hall\ Stanford University\ Stanford, CA 94305-4115 title: 'Inequalities for dealing with detector inefficiencies in Greenberger-Horne-Zeilinger-type experiments[^3]' --- The issue of the completeness of quantum mechanics has been a subject of intense research for almost a century. Recently, Greenberger, Horne and Zeilinger (GHZ) proposed a new test for quantum mechanics based on correlations between more than two particles [@GHZ]. What makes the GHZ proposal distinct from Bell’s inequalities is that it uses perfect correlations that result in mathematical contradictions. The argument, as stated by Mermin in [@Mermin], goes as follows. We start with a three-particle entangled state $$|\psi \rangle =\frac{1}{\sqrt{2}}(|+\rangle _{1}|+\rangle _{2}|-\rangle _{3}+|-\rangle _{1}|-\rangle _{2}|+\rangle _{3}).$$ This state is an eigenstate of the following spin operators: $$\begin{aligned} \hat{\mathbf{A}} & = & \hat{\sigma }_{1x}\hat{\sigma }_{2y}\hat{\sigma }_{3y},\, \, \, \, \hat{\mathbf{B}}=\hat{\sigma }_{1y}\hat{\sigma }_{2x}\hat{\sigma }_{3y},\nonumber \\ \hat{\mathbf{C}} & = & \hat{\sigma }_{1y}\hat{\sigma }_{2y}\hat{\sigma }_{3x},\, \, \, \, \hat{\mathbf{D}}=\hat{\sigma }_{1x}\hat{\sigma }_{2x}\hat{\sigma }_{3x}.\nonumber \end{aligned}$$ From the above we have that the expected correlations $ E(\hat{\mathbf{A}})=E(\hat{\mathbf{B}})=E(\hat{\mathbf{C}})=1. 
$ However, $ \hat{\mathbf{D}}=\hat{\mathbf{A}}\hat{\mathbf{B}}\hat{\mathbf{C}}, $ and we also obtain that, according to quantum mechanics, $ E(\hat{\mathbf{D}})=E(\hat{\mathbf{A}}\hat{\mathbf{B}}\hat{\mathbf{C}})=-1. $ It is easy to show that these correlations yield a contradiction if we assume that spin values exist independently of the measurement process. GHZ’s proposed experiment, however, has a major problem. How can one verify experimentally predictions based on perfect correlations? This was also a problem in Bell’s original paper. To “avoid Bell’s experimentally unrealistic restrictions”, Clauser, Horne, Shimony and Holt [@Clauseretal] derived a new set of inequalities that would take into account imperfections in the measurement process. A main purpose of this article is to derive a set of inequalities for the experimentally realizable GHZ correlations. We show that the following four inequalities are both necessary and sufficient for the existence of a local hidden variable, or, equivalently [@suppeszannoti], a joint probability distribution of $ \mathbf{A} $, $ \mathbf{B} $, $ \mathbf{C} $, and $ \mathbf{ABC} $, where $ \mathbf{A},\mathbf{B},\mathbf{C} $ are three $ \pm 1 $ random variables. $$\label{inequality1} -2\leq E(\mathbf{A})+E(\mathbf{B})+E(\mathbf{C})-E(\mathbf{ABC})\leq 2,$$ $$-2\leq -E(\mathbf{A})+E(\mathbf{B})+E(\mathbf{C})+E(\mathbf{ABC})\leq 2,$$ $$-2\leq E(\mathbf{A})-E(\mathbf{B})+E(\mathbf{C})+E(\mathbf{ABC})\leq 2,$$ $$\label{inequality4} -2\leq E(\mathbf{A})+E(\mathbf{B})-E(\mathbf{C})+E(\mathbf{ABC})\leq 2.$$ For the necessity argument we assume there is a joint probability distribution consisting of the eight atoms $ abc,\ldots ,\overline{a}\overline{b}\overline{c} $, where $ a $ denotes $ \mathbf{A}=1 $, $ \overline{a} $ denotes $ \mathbf{A}=-1 $, and so on.
Then, $ E(\mathbf{A})=P(a)-P(\overline{a}) $, where $ P(a)=P(abc)+P(a\overline{b}c)+P(ab\overline{c})+P(a\overline{b}\overline{c}) $, and $ P(\overline{a})=P(\overline{a}bc)+P(\overline{a}\overline{b}c)+P(\overline{a}b\overline{c})+P(\overline{a}\overline{b}\overline{c}) $, and similar equations hold for $ E(\mathbf{B}) $ and $ E(\mathbf{C}) $. Next we do a similar analysis of $ E(\mathbf{ABC}) $ in terms of the eight atoms. Corresponding to (\[inequality1\]), we now sum over the probability expressions for the expectations $ F=E(\mathbf{A})+E(\mathbf{B})+E(\mathbf{C})-E(\mathbf{ABC}) $, and obtain $$\begin{aligned} F & = & 2[P(abc)+P(\overline{a}bc)+P(a\overline{b}c)+P(ab\overline{c})]\nonumber \\ & & -2[P(\overline{a}\overline{b}\overline{c})+P(\overline{a}\overline{b}c)+P(\overline{a}b\overline{c})+P(a\overline{b}\overline{c})].\nonumber \end{aligned}$$ Since all the probabilities are nonnegative and sum to $ \leq 1 $, we infer (\[inequality1\]) at once. The derivation of the other three inequalities is similar. To prove the converse, i.e., that these inequalities imply the existence of a joint probability distribution, is slightly more complicated. We restrict ourselves to the symmetric case $ P(a)=P(b)=P(c)\equiv p $, $ P(\mathbf{ABC}=1)\equiv q $ and thus $ E(\mathbf{A})=E(\mathbf{B})=E(\mathbf{C})=2p-1, $ $ E(\mathbf{ABC})=2q-1. $ In this case, (\[inequality1\]) can be written as $ 0\leq 3p-q\leq 2, $ while the other three inequalities yield just $ 0\leq p+q\leq 2 $. Let $ x\equiv P(\overline{a}bc)=P(a\overline{b}c)=P(ab\overline{c}) $, $ y\equiv P(\overline{a}\overline{b}c)=P(\overline{a}b\overline{c})=P(a\overline{b}\overline{c}) $, $ z\equiv P(abc) $ and $ w\equiv P(\overline{a}\overline{b}\overline{c}) $. It is easy to show that on the boundary $ 3p=q $ defined by the inequalities the values $ x=0,y=\frac{q}{3},z=0,w=1-q $ define a possible joint probability distribution, since $ 3x+3y+z+w=1 $. 
On the other boundary, $ 3p=q+2 $, a possible joint distribution is $ x=\frac{(1-q)}{3},y=0,z=q,w=0 $. Then, for any values of $ q $ and $ p $ within the boundaries of the inequality we can take a linear combination of these distributions with weights $ 1-\frac{3p-q}{2} $ and $ \frac{3p-q}{2} $, respectively, and obtain the joint probability distribution $ x=\frac{3p-q}{2}\frac{1-q}{3},y=(1-\frac{3p-q}{2})\frac{q}{3},z=\frac{3p-q}{2}q,w=(1-\frac{3p-q}{2})(1-q) $, which proves that if the inequalities are satisfied a joint probability distribution exists, and therefore a local hidden variable as well. The generalization to the asymmetric case is tedious but straightforward. The correlations present in the GHZ state are so strong that even if we allow for experimental errors, the non-existence of a joint distribution can still be verified. Let (i) : $ E(\mathbf{A})=E(\mathbf{B})=E(\mathbf{C})\geq 1-\epsilon $, (ii) : $ E(\mathbf{ABC})\leq -1+\epsilon $, where $ \epsilon $ represents a decrease of the observed correlations due to experimental errors. To see this, let us bound the value of $ F $ defined above: by (i) and (ii), $ F\geq 3(1-\epsilon )-(-1+\epsilon )=4-4\epsilon . $ But the observed correlations are only compatible with a local hidden variable theory if $ F\leq 2 $, hence only if $ \epsilon \geq \frac{1}{2}. $ Then, in the symmetric case, there cannot exist a joint probability distribution of $ \mathbf{A},\, \mathbf{B} $ and $ \mathbf{C} $ satisfying (i) and (ii) if $ \epsilon <1/2. $ We will give an analysis of what happens to the correlations when the detectors have efficiency $ d\in \lbrack 0,1] $ and a probability $ \gamma $ of detecting a dark photon within the window of observation when no real photon is detected. Our analysis will be based on the experiment of Bouwmeester et al. [@Innsbruck]. In their experiment, an ultraviolet pulse hits a nonlinear crystal, and pairs of correlated photons are created.
There is also a small probability that two pairs are created within a window of observation, making them indistinguishable. When this happens, by restricting to states where only one photon is found on each output channel to the detectors, we obtain the following state, $$\frac{1}{\sqrt{2}}|+\rangle _{T}(|+\rangle _{1}|+\rangle _{2}|-\rangle _{3}+|-\rangle _{1}|-\rangle _{2}|+\rangle _{3}),$$ where the subscripts refer to the detectors and $ + $ and $ - $ to the linear polarization of the photon. Hence, if a photon is detected at the trigger $ T $ (located after a polarizing beam splitter) the three-photon state at detectors $ D_{1},D_{2} $, and $ D_{3} $ is a GHZ-correlated state (see FIG. 1). We will assume that double pairs created have the expected GHZ correlation, and that the probability of triple-pair production, or of a fourfold coincidence being registered when no photon is generated, is negligible. (Our analysis is different from that of Żukowski [@zukowski], who considered only ideal detectors.) Two possibilities are left: i) a pair of photons is created at the parametric down converter; ii) two pairs of photons are created. We will denote by $ p_{1}p_{2} $ the pair creation, and by $ p_{1}...p_{4} $ the two-pair creation. We will assume that the probabilities add to one, i.e. $ P\left( p_{1}\ldots p_{4}\right) +P\left( p_{1}p_{2}\right) =1. $ We start with two photons. $ p_{1}p_{2} $ can reach any of the following combinations of detectors: $ TD_{1}, $ $ TD_{2}, $ $ TD_{3}, $ $ D_{1}D_{1}, $ $ D_{1}D_{2}, $ $ D_{1}D_{3}, $ $ D_{2}D_{2}, $ $ D_{2}D_{3}, $ $ D_{3}D_{3}, $ $ TT $. For an event to be counted as a GHZ state, all four detectors must fire (this conditionalization is equivalent to the enhancement hypothesis). We take as our set of random variables $ \mathbf{T},\mathbf{D}_{1},\mathbf{D}_{2},\mathbf{D}_{3} $ which take values $ 1 $ (if they fire) or $ 0 $ (if they don’t fire).
We will use $ t,d_{1},d_{2},d_{3} $ ($ \overline{t},\overline{d}_{1},\overline{d}_{2},\overline{d}_{3} $) to represent the value 1 (0). We want to compute $ P\left( td_{1}d_{2}d_{3}\mid p_{1}p_{2}\right) , $ the probability that all detectors $ T,D_{1},D_{2},D_{3} $ fire simultaneously given that only a pair of photons has been created at the crystal. We start with the case when the two photons arrive at detectors $ T $ and $ D_{3}. $ Since the efficiency of the detectors is $ d $, the probability that both detectors detect the photons is $ d^{2}, $ the probability that only one detects is $ 2d(1-d) $ and the probability that none of them detect is $ (1-d)^{2}. $ Taking $ \gamma $ into account, the probability that all four detectors fire is $$P\left( td_{1}d_{2}d_{3}\mid p_{1}p_{2}=TD_{3}\right) =\gamma ^{2}\left( d+\gamma (1-d)\right) ^{2},$$ where $ p_{1}p_{2}=TD_{3} $ represents the simultaneous (i.e. within a measurement window) arrival of the photons at the trigger $ T $ and at $ D_{3}. $ Similar computations can be carried out for $ p_{1}p_{2}=TD_{1}, $ $ TD_{2}, $ $ D_{1}D_{3}, $ $ D_{1}D_{2}, $ $ D_{2}D_{3}. $ For $ p_{1}p_{2}=D_{i}D_{i} $ the computation of $ P\left( td_{1}d_{2}d_{3}\mid p_{1}p_{2}=D_{i}D_{i}\right) $ is different. The probability that exactly one of the photons is detected at $ D_{i} $ is $ d(1-d) $ and the probability that none of them is detected is $ (1-d)^{2}. $ Then, it is clear that $$P\left( td_{1}d_{2}d_{3}\mid p_{1}p_{2}=D_{i}D_{i}\right) =d\left( 1-d\right) \gamma ^{3}+(1-d)^{2}\gamma ^{4},$$ and we have at once that $$\begin{aligned} P\left( td_{1}d_{2}d_{3}\mid p_{1}p_{2}\right) & = & 6\gamma ^{2}\left( d+\gamma (1-d)\right) ^{2}\nonumber \\ & & +4\gamma ^{3}\left( 1-d\right) \left( d+\gamma \right) .\nonumber \end{aligned}$$ We note that the events involving $ P\left( td_{1}d_{2}d_{3}\mid p_{1}p_{2}\right) $ have no spin correlation, contrary to GHZ events. We now turn to the case when four photons are created. 
The probability that all four are detected is $ d^{4}, $ that three are detected is $ 4d^{3}(1-d), $ that two are detected is $ 6d^{2}(1-d)^{2}, $ that one is detected is $ 4d(1-d)^{3}, $ and that none is detected is $ (1-d)^{4}. $ If all four are detected, we have a true GHZ-correlated state detected. However, one can again have four detections due to dark counts. We will write $ p_{1}...p_{4}=GHZ $ to represent having the four GHZ photons detected, and $ p_{1}...p_{4}=\overline{GHZ} $ as having the four detections as a non-GHZ state. We can write that $$\label{21} P\left( td_{1}d_{2}d_{3}\mid p_{1}...p_{4}=GHZ\right) =d^{4}+\gamma \left( 1-d\right) d^{3}$$ and $$P\left( td_{1}d_{2}d_{3}\mid p_{1}...p_{4}=\overline{GHZ}\right) =3\gamma d^{3}(1-d)+6\gamma ^{2}d^{2}(1-d)^{2}+4\gamma ^{3}d(1-d)^{3}+\gamma ^{4}(1-d)^{4}.$$ The last term in (\[21\]) comes from the unique role of the trigger $ T, $ that needs to detect a photon but not necessarily one that has a GHZ correlation. How do the non-GHZ detections change the GHZ expectations? What is measured in the laboratory is the conditional correlation $ E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) $, where $ \mathbf{S}_{1}, $ $ \mathbf{S}_{2} $ and $ \mathbf{S}_{3} $ are random variables with values $ \pm 1, $ representing the spin measurement at $ D_{1},D_{2} $ and $ D_{3} $ respectively. 
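For reference, the detection probabilities above can be collected into code. The sketch below simply transcribes the stated formulas (it does not re-derive them); the function names are ours.

```python
def p_fire_given_pair(d, gamma):
    # P(t d1 d2 d3 | p1 p2): the six distinct-detector arrival combinations
    # plus the same-detector combinations, as stated in the text.
    return (6 * gamma**2 * (d + gamma * (1 - d))**2
            + 4 * gamma**3 * (1 - d) * (d + gamma))

def p_fire_given_ghz(d, gamma):
    # Eq. (21): the four GHZ photons detected, with the trigger
    # allowed to fire on a dark count.
    return d**4 + gamma * (1 - d) * d**3

def p_fire_given_not_ghz(d, gamma):
    # Four detections registered but the state is not a GHZ state.
    return (3 * gamma * d**3 * (1 - d)
            + 6 * gamma**2 * d**2 * (1 - d)**2
            + 4 * gamma**3 * d * (1 - d)**3
            + gamma**4 * (1 - d)**4)

d, gamma = 0.5, 6e-7   # illustrative values, used again below
for p in (p_fire_given_pair(d, gamma),
          p_fire_given_ghz(d, gamma),
          p_fire_given_not_ghz(d, gamma)):
    assert 0.0 <= p <= 1.0
```

For these parameter values the GHZ detection probability dominates the spurious ones, as the subsequent estimates require.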
We can write it as $$E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) =\frac{E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\, \, \&\, \, GHZ\right) P(GHZ)}{P(GHZ)+P(\overline{GHZ})},$$ since for non-GHZ states we expect zero correlation for the term $$\frac{E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\, \, \&\overline{GHZ}\right) P(\overline{GHZ})}{P(GHZ)+P(\overline{GHZ})}.$$ Neglecting terms of higher order than $ \gamma ^{2} $, using $ \gamma \ll d $ and $ P(p_{1}p_{2})\gg P(p_{1}...p_{4}), $ we obtain, from $ P(\overline{GHZ})=6P(p_{1}p_{2})\gamma ^{2}d^{2}+3P(p_{1}...p_{4})\gamma (1-d)d^{3} $ and $ P\left( GHZ\right) =P(p_{1}...p_{4})\left[ d^{4}+\gamma \left( 1-d\right) d^{3}\right] , $ that $$\label{18} E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) =\frac{E(\mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\&GHZ)}{\left[ 1+6\frac{P(p_{1}p_{2})}{P(p_{1}...p_{4})}\frac{\gamma ^{2}}{d^{2}}\right] }.$$ This is the corrected expression for the conditional correlations once detector efficiency and dark counts are taken into account. The product of the random variables $ \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3} $ can take only the values $ +1 $ or $ -1. $ Then, if their expectation is $ E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) $ we have $$P\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}=1\mid td_{1}d_{2}d_{3}\right) =\frac{1+E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) }{2}.$$ The variance $ \sigma ^{2} $ of a random variable that assumes only the values $ 1 $ or $ -1 $ is $ 4P(1)\left( 1-P(1)\right) . 
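Equation (\[18\]) is straightforward to evaluate. A minimal sketch of the evaluation, assuming an ideal GHZ correlation $ E(\mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\&GHZ)=1 $ in the numerator:

```python
def corrected_correlation(d, gamma, ratio, e_ghz=1.0):
    """Eq. (18); ratio = P(p1 p2) / P(p1 ... p4)."""
    return e_ghz / (1.0 + 6.0 * ratio * gamma**2 / d**2)

# Parameter estimates used below in the text: d ~ 0.5, gamma ~ 6e-7,
# and P(p1 p2)/P(p1...p4) of order 1e10.
E = corrected_correlation(d=0.5, gamma=6e-7, ratio=1e10)
assert 0.90 < E < 0.93   # close to the value quoted in the text
```

The correction term $ 6\,(P(p_{1}p_{2})/P(p_{1}...p_{4}))\,\gamma^{2}/d^{2} $ is about $ 0.09 $ for these values, so the observed correlation is suppressed by roughly ten percent even for a perfectly GHZ-correlated source.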
$ Hence, in our case the variance is $$\sigma ^{2}=1-\left[ E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) \right] ^{2}.$$ We will estimate the values of $ \gamma $ and $ d $ to see how much $ E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) $ would change due to experimental errors. For that purpose, we will use typical rates of detectors [@EGG] for the frequency used at the Innsbruck experiment, as well as their reported data [@Innsbruck]. First, modern detectors usually have $ d\cong 0.5 $ for the wavelengths used at Innsbruck. We assume a dark-count rate of about $ 3\times 10^{2} $ counts/s. With a time window of coincidence measurement of $ 2\times 10^{-9} $ s, we then have that the probability of a dark count in this window is $ \gamma \cong 6\times 10^{-7}. $ From [@Innsbruck] we use that the ratio $ P(p_{1}p_{2})/P(p_{1}...p_{4}) $ is on the order of $ 10^{10}. $ Substituting these three numerical values in (\[18\]) we have $ E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) \cong 0.9. $ From this expression it is clear that the change in correlation imposed by the dark-count rates is significant for the given parameters. However, it is also clear that the value of the correlation is quite sensitive to changes in the values of both $ \gamma $ and $ d. $ We can now compare the values we obtained with the ones observed by Bouwmeester et al. for GHZ and $ \overline{GHZ} $ states [@Innsbruck]. They claim to have obtained a ratio of $ 1:12 $ between $ \overline{GHZ} $ and GHZ states, for which the correlations are $ E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) \cong 0.92. 
$ It is clear that a detailed analysis of the parameters would be necessary to fit the experimental result to the predicted correlations that take the inefficiencies into account, but at this point one can see that values close to an experimentally measured $ 0.92 $ can be obtained with appropriate choices of the parameters $ d $ and $ \gamma $ (see FIG. 2). This expected correlation also satisfies $$\label{ineq} E\left( \mathbf{S}_{1}\mathbf{S}_{2}\mathbf{S}_{3}\mid td_{1}d_{2}d_{3}\right) >1-\frac{1}{2}.$$ This result is enough to prove the nonexistence of a joint probability distribution. We should note that the standard deviation in this case is $$\sigma \cong \sqrt{\left( 1+0.92\right) \left( 1-0.92\right) }=0.39.$$ As a consequence, since $ 0.92-0.39=0.53, $ the result $ 0.92 $ is bounded away from the classical limit $ 0.5 $ by more than one standard deviation (see FIG. 3). We showed that the GHZ theorem can be reformulated in a probabilistic way to include experimental inefficiencies. The set of four inequalities (\[inequality1\])-(\[inequality4\]) sets lower bounds for the correlations that would prove the nonexistence of a local hidden-variable theory. Not surprisingly, detector inefficiencies and dark-count rates can considerably change the correlations. How do these results relate to previous ones obtained in the large literature on detector inefficiencies in experimental tests of local hidden-variable theories? We start with Mermin’s paper [@Mermin2], where an inequality for $ F $ similar to ours, but for the case of $ n $ correlated particles, is derived. Mermin does not derive a minimum correlation for GHZ’s original setup that would imply the non-existence of a hidden-variable theory, as his main interest was to show that the quantum mechanical results diverge exponentially from a local hidden-variable theory as the number of entangled particles increases. 
Braunstein and Mann [@Braunstein] take Mermin’s results and estimate possible experimental errors that were not considered here. They conclude that for a given efficiency of detectors the noise grows slower than the strong quantum mechanical correlations. Reid and Munro [@Reid] obtained an inequality similar to our first one, but there are sets of expectations that satisfy their inequality and still do not have a joint probability distribution. In fact, as we mentioned earlier, our complete set of inequalities is a necessary and sufficient condition for the existence of a joint probability distribution. We have used an enhancement hypothesis, namely, that we only counted events with all four simultaneous detections, and showed that with the coincidence constraint a joint probability distribution did not exist in the Innsbruck experiment. Enhancement hypotheses have to be used when detector efficiencies are low, but they may lead to loopholes in the arguments about the nonexistence of local hidden-variable theories. Loophole-free requirements for detector inefficiencies are based on the analysis of [@Eberhard] for the Bell case and of [@Larsson] for the GHZ experiment without enhancement. However, in the Innsbruck setup enhancement is necessary, as the ratio of pair to two-pair production is of the order of $ 10^{10} $ [@Innsbruck]. Until experimental methods are found to eliminate the use of enhancement in GHZ experiments, no loophole-free results seem possible. FIG. 3 shows the number of standard deviations, as computed above, by which the bound for the existence of a joint distribution is violated. We can see that if we change the experiment such that the dark-count rate is reduced to 50 per second, instead of the assumed 300, a large improvement in the experimental result would be expected. Detectors with this dark-count rate and the assumed efficiency are available [@EGG]. We emphasize that there are other possible experimental manipulations that would increase the observed correlation, e.g. 
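The improvement from a lower dark-count rate can be sketched numerically. This is our own estimate, assuming as above $ d=0.5 $, a $ 2\times 10^{-9} $ s coincidence window, a pair to two-pair ratio of $ 10^{10} $, and an ideal GHZ correlation in the numerator of Eq. (\[18\]):

```python
import math

def corrected_correlation(d, gamma, ratio):
    # Eq. (18) with an ideal GHZ correlation of 1 in the numerator.
    return 1.0 / (1.0 + 6.0 * ratio * gamma**2 / d**2)

def n_sigma(rate, window=2e-9, d=0.5, ratio=1e10):
    gamma = rate * window             # dark-count probability per window
    E = corrected_correlation(d, gamma, ratio)
    sigma = math.sqrt(1.0 - E**2)     # std. dev. of a +/-1 random variable
    return (E - 0.5) / sigma          # sigmas above the classical limit 0.5

assert n_sigma(300) > 1.0             # already beyond 1 sigma at 300 counts/s
assert n_sigma(50) > n_sigma(300)     # large improvement at 50 counts/s
```

With these assumptions the violation grows from roughly one standard deviation at 300 counts/s to several standard deviations at 50 counts/s, consistent with the improvement anticipated in the text.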
the ratio $ P(p_{1}p_{2})/P(p_{1}...p_{4}), $ but we cannot enter into such details here. The point to keep in mind is that FIG. 3 provides an analysis that can absorb any such changes or other sources of error, not just the dark-count rate, to give a measure of reliability. We would like to thank Prof. Sebastião J. N. de Pádua for comments, as well as the anonymous referees. [10]{} D. M. Greenberger, M. Horne, and A. Zeilinger, in *Bell’s Theorem, Quantum Theory, and Conceptions of the Universe,* edited by M. Kafatos (Kluwer, Dordrecht, 1989). N. D. Mermin, *Am. J. Phys.* **58,** 731 (1990). J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, *Phys. Rev. Lett.* **23,** 880 (1969). P. Suppes and M. Zanotti, *Synthese* **48,** 191 (1981). D. Bouwmeester, J.-W. Pan, M. Daniell, H. Weinfurter, and A. Zeilinger, *Phys. Rev. Lett.* **82,** 1345 (1999). M. Żukowski, “Violation of Local Realism in the Innsbruck Experiment”, quant-ph/9811013. Single photon count module specifications for EG&G’s SPCM-AG series were obtained from EG&G’s web page at `http://www.egginc.com.` S. Braunstein and A. Mann, *Phys. Rev. A* **47,** R2427 (1993). N. D. Mermin, *Phys. Rev. Lett.* **65,** 1838 (1990). M. D. Reid and W. J. Munro, *Phys. Rev. Lett.* **69,** 997 (1992). P. H. Eberhard, *Phys. Rev. A* **47,** R747 (1993). J. Larsson, *Phys. Rev. A* **57,** 3304 (1998); J. Larsson, *Phys. Rev. A* **57,** R3145 (1998). [^1]: E-mail: [email protected]. On leave from Dept. de Física – ICE, UFJF, 36036-330 MG, Brazil. [^2]: E-mail: [email protected]. [^3]: Copyright (2000) by the American Physical Society. To appear in *Phys. Rev. Lett.*
---
abstract: 'Technical proofs are provided in the appendix.'
author:
- |
  Aiyou Chen and Timothy C. Au\
  Google LLC\
  1600 Amphitheatre Pkwy, Mountain View, CA 94043\
  {aiyouchen, timau}@google.com
bibliography:
- 'trimmed\_match.bib'
title: '**Robust Causal Inference for Incremental Return on Ad Spend with Randomized Paired Geo Experiments**'
---

[*Keywords:*]{} causal effect, online advertising, heterogeneity, studentized trimmed mean

Appendix
========

This provides the technical proofs for Theorem 1 (Existence), Theorem 2 (Identifiability), Theorem 3 (Consistency), and Theorem 4 (Asymptotic Normality).

Proof of Theorem 1 (Existence)
------------------------------

Proof of Theorem 2 (Identifiability)
------------------------------------

Some Notation for Asymptotic Analysis
-------------------------------------

Derivative of $F_{\theta}^{-1}$ w.r.t. $\theta$
-----------------------------------------------

Supporting Lemmas for Asymptotics
---------------------------------

Proof of Theorem 3 (Consistency)
--------------------------------

Proof of Theorem 4 (Asymptotic Normality)
-----------------------------------------
---
---

[ **Polymer uncrossing and knotting in protein folding, and their role in minimal folding pathways** ]{}\
Ali R. Mohazab$^{1}$, Steven S. Plotkin$^{1,\ast}$\
**[1]{} Department of Physics and Astronomy, University of British Columbia, Vancouver, B.C. Canada\
$\ast$ E-mail: [email protected]**

Abstract {#abstract .unnumbered}
========

We introduce a method for calculating the extent to which chain non-crossing is important in the most efficient, optimal trajectories or pathways for a protein to fold. This involves recording all unphysical crossing events of a ghost chain, and calculating the minimal uncrossing cost that would have been required to avoid such events. A depth-first tree search algorithm is applied to find minimal transformations to fold $\alpha$, $\beta$, $\alpha/\beta$, and knotted proteins. In all cases, the extra uncrossing/non-crossing distance is a small fraction of the total distance travelled by a ghost chain. Different structural classes may be distinguished by the amount of extra uncrossing distance, and the effectiveness of such discrimination is compared with other order parameters. We find that the non-crossing distance per chain length provides the best discrimination between structural and kinetic classes. The scaling of non-crossing distance with chain length implies an inevitable crossover to entanglement-dominated folding mechanisms for sufficiently long chains. We further quantify the minimal folding pathways by collecting the sequence of uncrossing moves, which generally involve leg, loop, and elbow-like uncrossing moves, and rendering the collection of these moves over the unfolded ensemble as a multiple-transformation “alignment”. The consensus minimal pathway is constructed and shown schematically for representative cases of an $\alpha$, $\beta$, and knotted protein. 
An overlap parameter is defined between pathways; we find that $\alpha$ proteins have minimal overlap, indicating diverse folding pathways, knotted proteins are highly constrained to follow a dominant pathway, and $\beta$ proteins are somewhere in between. Thus we have shown how topological chain constraints can induce dominant pathway mechanisms in protein folding.

Author Summary {#author-summary .unnumbered}
==============

Researchers have long focused on the problem of how to design and predict low-energy protein structures from amino acid sequence, without worrying very much about how those structures can be found without the protein getting tangled up in its own game of Twister gone awry. This problem becomes a serious one for proteins whose folded structures form knots, of which several hundred have now been found. Here, we develop and apply a formalism to find the way a protein would fold up if it could do so with the least amount of motion. We know proteins generally don’t fold up the same way every time. Nevertheless, one can’t help but wonder if the constraints due to the presence of the protein chain itself could, in some cases, be so severe that no matter where the protein started from, essentially a single folding pathway would be induced, akin to navigating a maze. We found that the answer depends on the structure: for a typical $\alpha$-helical protein there are many roads to Rome, while for a knotted protein the solution is much more maze-like: chain non-crossing constraints can induce a mechanism to folding, and a pathway to the folded structure.

Introduction {#introduction .unnumbered}
============

Protein folding is a structural transformation, from a disordered-polymer conformational ensemble to an ordered, well-defined structure. 
Quantifying the dynamical mechanism by which this occurs has been a long-standing problem of interest to both theorists and experimentalists [@WolynesPG92:spinb; @Chan93; @Wolynes95; @Garel96; @Dobson98; @FershtBook00; @EatonWA00; @EnglanderSW00; @Pande00RMP; @Shea2001; @PlotkinSS02:quartrev1; @PlotkinSS02:quartrev2; @SnowCD02; @OlivebergM05rev; @KhatibF11; @Lindorff-Larsen11]. It is currently not possible experimentally to capture the full dynamical mechanism of a folding protein in atomic detail, start to finish. Photon counting analyses of single molecule folding trajectories can now extract the mean transition path time across the distribution of productive folding pathways [@ChungHS12]. Typically however, snapshots of the participation of various residues in the folding transition state are used to infer the relative importance of amino acids in defining the protein folding nucleus [@Fersht92; @AbkevichVI94; @FershtAR95; @DaggettV96; @GianniS03; @Klimov98; @OlivebergM98; @MartinezJC98; @FershtBook00; @ClementiC03jmb; @EjtehadiMR04; @OztopB04; @BodenreiderC05; @SosnickTR06; @WensleyBG09]. An idea of how the nucleus grows as folding proceeds may be gained by exploring the native shift in the transition state as denaturant concentration is increased [@TernstromT99], but ideally the goal is to quantify folding mechanisms under constant environmental conditions. To this end, simulations and theory have proved an invaluable tool [@ChanHS90; @Wolynes97cap; @NymeyerH98:pnas; @DuR98:jcp; @Nymeyer99:PNAS; @Shoemaker99b; @ZhouY99; @PlotkinSS00:pnas; @PlotkinSS02:Tjcp; @PlotkinSS02:quartrev2; @SnowCD02; @FavrinG03; @WeiklTR07], and have in many respects succeeded in reproducing the general features of the folding pathway (see e.g. references [@MaityH05; @WeinkamP05] for cytochrome c). 
One conceptual refinement to arise from theoretical and simulation studies is the study of “good” reaction coordinates that correlate with commitment probability to complete the protein folding reaction [@DellagoC98; @DuR98:jcp; @BolhuisPG02; @HummerG04; @BestRB05; @MaraglianoL06; @vanderVaartA07]. Reaction coordinates must generally take into account the energy surface on which the molecule of interest is undergoing conformational diffusion [@FischerS92; @YangH07; @BranduardiD07], and the Markovian or non-Markovian nature of the diffusion [@PlotkinSS98; @HummerG03]. In a system with many degrees of freedom on a complex energy landscape and obeying nontrivial steric restrictions, finding a best reaction coordinate or even a good reaction coordinate is a difficult task. Finding reaction paths between metastable minima is an old problem, in which many approaches have been developed to account for the underlying complex, multi-dimensional potential energy surface [@CerjanCJ81; @BellS84; @ElberR87; @WalesD93; @WalesD01; @KomatsuzakiT03; @PrentissMC10]. An alternate approach, in the spirit of defining order parameters in statistical and condensed matter physics, is to consider the geometry of the product and reactant in defining a reaction coordinate without reference to the underlying potential energy landscape. The overlap function $q$ of a spin-glass is an example of a geometrically-defined order parameter [@MezardM86], for which the underlying Hamiltonian determines behavior such as the temperature-dependence. We pursue such a geometric approach in this paper. A transformation connecting unfolded states with the native folded state can be considered as a reaction coordinate. A transformation can also be used as a starting point for refinement, by examining commitment probability or other reaction coordinate formalism. 
Several methods have been developed to find transformations between protein conformational pairs without specific reference to a molecular mechanical force field. These include coarse-grained elastic network models [@KimMK02; @KimMK02bj], coarse-grained plastic network models [@MaragakisP05], iterative cluster-normal mode analysis [@SchuylerAD09], restrained interpolation (the Morph server) [@KrebsWG00], the FRODA method [@WellsS05], and geometrical targeting (the geometrical pathways (GP) server) [@FarrellDW10]. In this paper we consider transformations between polymer conformation pairs that would not be viable by a conjugate-gradient type or direct minimization approach, in that dead-ends would inevitably be encountered. We focus specifically on how one might find geometrically optimal transformations that account for polymer non-crossing constraints, which would apply to knotted proteins for example. By a geometrically optimal transformation, we mean a transformation in which every monomer in a polymer, as represented by the $\alpha$-carbon backbone of a protein for example, would travel the least distance in 3-dimensional space in moving from conformation A to conformation B. This is a variational problem, and the equations of motion, along with the minimal transformation and the Euclidean distance covered, have been worked out previously [@PlotkinSS07; @MohazabAR08; @MohazabAR08:bj; @MohazabAR09]. Although minimal transformations have been found for the backbones of secondary structures, and the non-crossing problem has been treated [@MohazabAR08:bj], minimal transformations between unfolded and folded states for full protein chain lengths have not been treated before. The minimal transformation inevitably involves curvilinear motion if bond, angle, or stereochemical constraints are involved [@GrosbergAY04; @PlotkinSS07]. 
Such curvilinear transformations as a result of bond constraints were developed in [@PlotkinSS07; @MohazabAR08; @MohazabAR08:bj; @MohazabAR09]. If such constraints are neglected, the minimal distance corresponding to the minimal transformation reduces to the mean of the root squared distance (MRSD), or the mean of the straight-line distances between pairs of atoms or monomers. This is not the conventional RMSD. For any typical pair of conformations, the MRSD is always less than the RMSD [@MohazabAR08]. When used as an alignment cost function, MRSD yields globally different aligned configurations than RMSD does [@MohazabAR09]. The RMSD can be thought of as a least squares fit between the coordinates defining the two structures. Alternatively, it may also be thought of as the straight-line Euclidean distance between two structures in a high-dimensional space of dimension $3 N$, where $N$ is the number of atoms in the protein, or $C_\alpha$ atoms if the protein is coarse-grained. Fast algorithms have been constructed to align structures using RMSD [@KabschW76; @KabschW78; @KnellerGR91; @FlowerDR99; @CoutsiasEA04; @CoutsiasEA05]. If several intermediate states are known along the pathway of a transformation between a pair of structures, then the RMSD may be calculated consecutively for each successive pair. This notion of RMSD as an order parameter goes back to reaction dynamics papers from the early 1980’s [@CerjanCJ81; @BellS84; @ElberR87; @WalesD93]; however, in these approaches the potential energy governs the most likely reactive trajectories taken by the system, and RMSD is simply accumulated through the transition states. In the absence of a potential surface except for that corresponding to steric constraints, the incremental RMSD may be treated as a cost function and the corresponding transformation between two structures found algorithmically [@FarrellDW10]. 
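The relation between the two measures follows from the Cauchy–Schwarz (or Jensen) inequality: the mean of the per-atom distances never exceeds the root of their mean square. A minimal numerical sketch (our own illustration; no optimal superposition is performed, and the coordinates are random):

```python
import numpy as np

def mrsd(a, b):
    # mean of the straight-line (root squared) per-atom distances
    return np.mean(np.linalg.norm(a - b, axis=1))

def rmsd(a, b):
    # root of the mean squared per-atom distance
    return np.sqrt(np.mean(np.sum((a - b)**2, axis=1)))

rng = np.random.default_rng(0)
a = rng.normal(size=(50, 3))   # two random 50-"atom" conformations
b = rng.normal(size=(50, 3))
assert mrsd(a, b) <= rmsd(a, b)   # MRSD never exceeds RMSD
```

Equality holds only when every atom moves by the same distance, which is why the two cost functions generically select different alignments.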
However, the minimal transformation using RMSD (or $3N$-dimensional Euclidean distance) as a cost function is different from the minimal transformation using $3$-dimensional Euclidean distance (MRSD) as a cost function, and the RMSD-derived transformation does not correspond to the most straight-line trajectories. The RMSD is not equivalent to the total amount of motion a protein or polymer must undergo in transforming between structures, even in the absence of steric constraints enforcing deviations from straight-line motion. Conversely, the transformation corresponding to the MRSD will be curvilinear in the $3N$-dimensional space. In what follows, we develop a computational scheme for describing how difficult it might be for different proteins to reach their folded configuration. The essence is a calculation of how much “effort” the protein chain must expend to avoid having to cross through itself as it tries to realize its folded state. This involves finding the different ways a polymer can uncross or “untangle” itself, and then calculating the corresponding distance for each of the untangling transformations. Since there are typically several avoided crossings during a minimal folding transformation, finding the optimal untangling strategy corresponds to finding the optimal combination of uncrossing operations with minimal total distance cost. After quantifying such a procedure, we apply this to full-length protein backbone chains for several structural classes, including $\alpha$-helical proteins, $\beta$-sheet proteins, $\alpha$-$\beta$ proteins, 2-state and 3-state folders, and knotted proteins. We generate unfolded ensembles for each of the proteins investigated, and calculate minimal distance transformations for each member of the unfolded ensemble to fold. From this calculation, we obtain the mean minimal distance to fold from the unfolded ensemble, for a given structural class. 
We look for differences in the mean minimal distance between structural and kinetic classes, and compare these to differences in other order parameters between the respective classes. The extra non-crossing distance per residue $\mathcal {D}_{nx}/N$ turns out to be the most consistent discriminator between different structural and kinetic classes of proteins. We find the extra distance covered to avoid chain crossing is generally a small fraction ($\sim 1/10$) of the total motion. We also investigate how the various order parameters either correlate with or are independent of each other. We then select three proteins, an $\alpha$-helical, a $\beta$-sheet, and a knotted protein, to further dissect the taxonomy of their minimal folding transformations. We construct what might be called “multiple transformation alignments” that describe the various different ways each protein can fold from an ensemble of unfolded conformations. We find that noncrossing motions of an N- or C-terminal leg are generally obligatory for a knotted protein, and only incidental for an $\alpha$ protein. A consensus minimal folding transformation is constructed for each of the above-mentioned native folds, and rendered schematically. By investigating a “pathway overlap” order parameter, we find that non-crossing constraints, as are prevalent in $\beta$ proteins and pervasive in knotted proteins, explicitly induce a pathway “mechanism” in protein folding, as defined by a common sequence of events independent of the initial unfolded conformation. We finally discuss our results and conclude.

Methods {#methods .unnumbered}
=======

Calculation of the transformation distance {#calculation-of-the-transformation-distance .unnumbered}
------------------------------------------

The value of the uncrossing or non-crossing distance, $\Dnx$, is calculated as follows: The chain transforms from conformation A to conformation B as a ghost chain, so the chain is allowed to pass through itself. 
The beads of the chain follow straight trajectories from initial to final positions. This is an approximation to the actual Euclidean distance $\cal D$ of the transformation, where straight-line transformations of the beads are generally preceded or followed by non-extensive local rotations to preserve the link length connecting the beads as a rigid constraint [@PlotkinSS07; @MohazabAR08]. The instances of self-crossing along with their times are recorded. The associated cost for these crossings is computed retroactively; for example, the distance cost for one arm of the chain to circumnavigate another obstructing part is added to the “ghost” distance to obtain the total distance. The method for calculating the non-crossing distance $\Dnx$ has three major components: evolution of the chain, crossing detection, and crossing cost calculation. Each is described in the subsections below.

### Evolution of the chain {#sec:evolution_of_the_chain .unnumbered}

As mentioned above, the condition of constant link length between residues along the chain is relaxed, so that the non-extensive rotations that would generally contribute to the distance traveled are neglected here. This approximation becomes progressively more accurate for longer chains. Thus the idealized transformations involve only pure straight-line motion. The approximate transformation is carried out in a way to minimize deviations from the true transformation ($\cal D$), such that link lengths are kept as constant as possible, given that all beads must follow straight-line motion. We thus only allow deviations from constant link length when rotations would be necessary to preserve it; this only occurs for a small fraction of the total trajectory, typically either at the beginning or the end of the transformation [@PlotkinSS07; @MohazabAR08]. 
#### A specific example

As an example of the amount of distance neglected by this approximation, consider the pair of configurations in Figure \[fig:method\_delta\], where a chain of 10 residues that is initially horizontal transforms to a vertical orientation as shown in the figure. The distance neglecting rotations (our approximation) is 77.78, in reduced units of the link length, while the exact calculation including rotations [@PlotkinSS07; @MohazabAR08] gives a distance of 78.56. A few intermediate conformations are shown in the figure. In particular, note the link length change (and hence the violation of the constant link length condition) in the fourth link for the gray conformation (conformation F), resulting from our approximation. If the link length is preserved, the transformation consists of local rotations at the boundary points. Also note that when transforming from cyan to magenta the first bead moves less than $\delta$, because it reaches its final destination and “sticks” to the final point, and will not be moved subsequently. A movie of this transformation is provided as Movie 1 in the Supplementary Material.

#### General method {#sec:gen_method}

The algorithm to evolve the chain is as follows. Straight-line paths from the positions of the beads in the initial chain configuration to the corresponding positions of the beads in the final configuration are constructed. The bead furthest away from the destination, i.e. the bead whose path is the longest line, is chosen. Let this bead be denoted by index $b$, where $0\le b \le N$. In the context of figure \[fig:method\_delta\], this bead corresponds to bead number 9 ($b_9$). The bead is then moved toward its destination by a small pre-determined amount $\delta$, and the new position of bead $b$ is recorded. In this way the transformation is divided into, say, $M$ steps: $M=d_{max} / \delta$, where $d_{max}$ is the maximal distance. Let $i$ be the step index, $0 \le i \le M$. 
If initially the chain configuration was at step $i$ (e.g. $i=0$), the spatial position of bead $b$ at step $i$ before the transformation $\delta$ is denoted by ${\bf r}_{b,i}$, and after the transformation by ${\bf r}_{b,i+1}$. The largest step size $\delta$ that still captures the essence of the transformation dynamics differs according to the complexity of the problem. To capture all of the instances of self-crossing, a step size $\delta$ of two percent of the link length sufficed for all cases. The neighboring beads ($b+1$ and $b-1$) must also remain on their corresponding straight-line trajectories. Their new positions on their paths (${\bf r}_{b+1,i+1}$ and ${\bf r}_{b-1,i+1}$) are then calculated from the constant link length constraints. These new positions correspond to moving the beads by $\delta_{b+1,i}$ and $\delta_{b-1,i}$, respectively. Once ${\bf r}_{b+1,i+1}$ and ${\bf r}_{b-1,i+1}$ are calculated, we proceed to calculate ${\bf r}_{b+2,i+1}$ and ${\bf r}_{b-2,i+1}$, and so on until we reach the end points of the chain. As an example consider figure \[fig:method\_delta\], going from conformation B (green) to conformation C (yellow). First, bead number 9, which is the bead farthest away from its final destination, is moved by $\delta$; then, taking the constant link length constraints and straight-line trajectories into account, the new position of bead 8 is calculated, and so on, until all the new bead positions corresponding to the yellow conformation are calculated. If somewhere during the propagation to the endpoints a solution cannot be constructed, or no continuous solution exists, i.e. $\lim_{\delta \to 0} ({\bf r}_{b+m,i+1} - {\bf r}_{b+m,i}) \ne 0$, then we set ${\bf r}_{b+m,i+1} = {\bf r}_{b+m,i}$. That is, the bead remains stationary for a period of time. [^1] Consequently ${\bf r}_{b+n,i+1} = {\bf r}_{b+n,i}$ for all beads with $n > m$ that have not yet reached their final destination.
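Propagating the link-length constraint along straight-line paths reduces, for each neighboring bead, to intersecting its trajectory with a sphere of one link length centered on the just-moved bead; when the resulting quadratic has no admissible root, the bead is held stationary. A minimal sketch in Python (a hypothetical helper with our own naming, not the authors' code):

```python
import math

def advance_neighbor(p_cur, p_fin, r_moved, ell):
    """Place a neighboring bead on its straight-line path from p_cur toward
    p_fin so that it lies at link length ell from the just-moved bead
    r_moved: the intersection of the path with a sphere of radius ell.
    Returns the new position, or None when no solution exists on the forward
    part of the path (the bead is then held stationary, as in the text)."""
    d = [f - c for f, c in zip(p_fin, p_cur)]
    L = math.sqrt(sum(x * x for x in d))
    if L == 0.0:
        return p_cur                  # bead already locked at its destination
    u = [x / L for x in d]            # unit direction along the path
    w = [c - m for c, m in zip(p_cur, r_moved)]
    b = sum(ui * wi for ui, wi in zip(u, w))      # |w + t u|^2 = ell^2
    c0 = sum(wi * wi for wi in w) - ell * ell     #   -> t^2 + 2 b t + c0 = 0
    disc = b * b - c0
    if disc < 0.0:
        return None                   # no continuous solution: hold the bead
    roots = (-b - math.sqrt(disc), -b + math.sqrt(disc))
    ts = [t for t in roots if 0.0 <= t <= L]      # stay on the forward path
    if not ts:
        return None
    t = min(ts)                       # move the bead as little as possible
    return tuple(pc + t * ui for pc, ui in zip(p_cur, u))
```

The outer loop then moves the farthest bead by $\delta$ and calls such a solver outward along both directions of the chain.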
This is because the new position of each bead is calculated from the position of the bead next to it at any particular step $i$. The same recipe is applied when propagating incremental motions $\delta_{b,i+1}$ along the other direction of the chain (going from $b-n$ to $b-n-1$) as well. When a bead that has been held stationary becomes the farthest bead away from its final position, it is moved again. That is, stationary beads can move again at a later time during the transformation if they become the furthest beads away from the final conformational state. Such a scenario does not occur in the simple example of figure \[fig:method\_delta\]; however, Movie 2 in the Supplementary Material shows a transformation of a full protein that involves such a process. During the course of this transformation the viewer will notice that several beads on the chain (in the upper right in the movie) remain stationary for a part of the transformation. For these beads no continuous solution for the motion exists, i.e. as $\delta \rightarrow 0$ the beads in question cannot move without violating the constant link length constraint. At a later time during the transformation, when the beads in the given segment are farthest from the final folded conformation, the beads resume motion. Once the positions of all the beads in step $i+1$ are calculated, the same procedure is repeated for step $i+2$, and so on, until the chain reaches the final configuration. If the position of a given bead $b$ at step $i$ is such that $|{\bf r}_{b,i} - {\bf R}_{b}| < \delta$, where ${\bf R}_{b}$ is the spatial position of bead $b$ in the final conformation, then ${\bf r}_{b,i+1}$ is set to ${\bf R}_{b}$. In other words, we snap the bead to its final position if it is closer than the step size $\delta$. In the context of figure \[fig:method\_delta\], this corresponds to going from conformation D (cyan) to conformation E (magenta).
Bead 0 ($b_0$) is snapped to the final conformation. Once a bead reaches its destination it locks there and never moves again. See conformation F (gray) in figure \[fig:method\_delta\]. Figure \[fig:link-deviation\] shows a histogram of the mean link length over the course of a transformation, for 200 transformations between random initial structures generated by self-avoiding random walks (SAWs) and one pre-specified SAW. The length of the random chains was 9 links. The chains were aligned by minimizing MRSD before the transformation took place [@MohazabAR08; @MohazabAR08:bj; @MohazabAR09], where MRSD stands for the mean root squared distance and is defined by ${1 \over N} \sum_{n=1}^N \sqrt{(\bfr_{A_{n}} - \bfr_{B_{n}})^2} = {1 \over N} \sum_{n=1}^N | \bfr_{A_{n}} - \bfr_{B_{n}} |$. Deviations from the full unperturbed link length are modest: the ensemble-averaged mean link length is 96% of the initial link length.

### Crossing Detection {#sec:crossing_detection .unnumbered}

As stated earlier, during the transformation the chain is initially treated as a ghost chain, and so is allowed to cross itself. To keep track of the crossing instances of the chain, a crossing matrix $\mathbb{X}$ is updated at every time step during the transformation. If the chain has $N$ beads and $N-1$ links, we can define an $(N-1) \times (N-1)$ matrix $\XX$ that contains the crossing properties of a 2D projection of the strand, in analogy with the topological analysis of knots. The element $\XX_{ij}$ is nonzero if link $i$ is crossing link $j$ in the 2D projection at that instant. Without loss of generality we can assume that the projection is onto the XY plane, as in Figure \[fig:3linkchainX\]. We illustrate the independence of our method of the projection plane explicitly for a crossing event in cold-shock protein (1CSP) in the Supplementary Material.
We use the XY plane projection throughout this paper.[^2] We parametrize the chain uniformly and continuously in the direction of ascending link number by a parameter $s$ with range $0 \le s \le N$. For example, the middle of the second link is specified by $s = 3/2$. If the projection of link $i$ is crossing the projection of link $j$, then $|\mathbb{X}_{ij}|$ is the value of $s$ at the crossing point of link $i$ and $|\XX_{ji}|$ is the value of $s$ at the crossing point of link $j$. If link $i$ is over link $j$ (i.e. the corresponding point of the crossing on link $i$ has a higher $z$ value than the corresponding point on link $j$) then $\XX_{ij} > 0$; otherwise $\XX_{ij}<0$. Thus $\mathrm{sign}(\mathbb{X})$ is an antisymmetric matrix. A simple illustrative example of the value of $\XX$ for the 3-link chain in figures \[fig:3linkchainX\]a and \[fig:3linkchainX\]b is \[eq:eqcross\] $$\begin{aligned} \label{eq:eqcross1} \XX \left(t_o \right) &=& \begin{bmatrix} 0 & 0 & -0.29\\ 0 & 0 & 0\\ +2.82& 0& 0 \end{bmatrix}\\ \label{eq:eqcross2} \XX \left(t_o + \delta \right) &=& \begin{bmatrix} 0 & 0 & +0.29\\ 0 & 0 & 0\\ -2.82& 0& 0 \end{bmatrix} \end{aligned}$$ The fact that $\XX_{13}$ is negative at time $t_o$ indicates that at that instant link 1 passes under link 3 in 3D space, above the point of the plane at which the projections of the links cross (green circle in figure \[fig:3linkchainX\]). At each step during the transformation of the chain, the matrix $\XX$ is updated. A true crossing event is detected by comparing $\XX$ for two consecutive conformations. A crossing event occurs when a non-zero element of $\XX$ discontinuously changes sign without passing through zero. Once $\XX_{ij}$ changes sign, $\XX_{ji}$ must change sign as well.
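The construction of an $\XX$ entry (2D segment intersection, the two $s$-values, and the over/under sign) and the detection of a true crossing from two consecutive matrices can be sketched in Python as follows; the function names and the chain encoding (a list of 3-D bead positions) are our own illustrative choices:

```python
def crossing_entry(P, i, j):
    """If the XY projections of links i and j of chain P (a list of 3-D bead
    positions; links are numbered from 1, link k runs from bead k-1 to bead k)
    cross, return (s_i, s_j, sign), where s is the arc-length parameter of
    the text and sign is +1 when link i passes over link j.  Otherwise
    return None."""
    if abs(i - j) <= 1:
        return None                      # adjacent links share a bead
    a0, a1 = P[i - 1], P[i]
    b0, b1 = P[j - 1], P[j]
    dax, day = a1[0] - a0[0], a1[1] - a0[1]
    dbx, dby = b1[0] - b0[0], b1[1] - b0[1]
    denom = dax * dby - day * dbx
    if denom == 0.0:
        return None                      # parallel projections: no crossing
    ex, ey = b0[0] - a0[0], b0[1] - a0[1]
    t = (ex * dby - ey * dbx) / denom    # fractional position along link i
    u = (ex * day - ey * dax) / denom    # fractional position along link j
    if not (0.0 <= t <= 1.0 and 0.0 <= u <= 1.0):
        return None                      # the projected segments do not cross
    z_i = a0[2] + t * (a1[2] - a0[2])    # heights of the two strands
    z_j = b0[2] + u * (b1[2] - b0[2])    #   above the crossing point
    return (i - 1) + t, (j - 1) + u, (1 if z_i > z_j else -1)

def detect_crossings(X_prev, X_next):
    """Compare the crossing matrix at two consecutive steps and return the
    link pairs that truly passed through each other: entries that flip sign
    discontinuously, i.e. are nonzero in both matrices with opposite signs.
    A 2D crossing that opens up and re-forms with the opposite sense passes
    through zero in between and is therefore not reported."""
    events = []
    for i in range(len(X_prev)):
        for j in range(i + 1, len(X_prev)):
            a, b = X_prev[i][j], X_next[i][j]
            if a != 0 and b != 0 and (a > 0) != (b > 0):
                events.append((i + 1, j + 1))     # links numbered from 1
    return events
```

Applied to the two matrices of the 3-link example above, `detect_crossings` reports the single true crossing of links 1 and 3.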
If the chain navigates through a series of conformations that changes the crossing sense, and thus the sign of $\XX_{ij}$, but does not pass through itself in the process, the matrix element $\XX_{ij}$ does not change sign discontinuously but takes the value zero at intermediate times before changing sign. Movie 3 in the Supplementary Material shows the result of applying crossing detection. In the movie of the transformation, whenever an instance of self-crossing is detected, the transformation is halted and the image is rotated to make the location of the crossing easier to visualize.

### Crossing Cost calculation {#sec:crossing_cost_calculation .unnumbered}

Even in the simplest case of crossing, there are multiple ways for the real chain to have avoided crossing itself. The extra distance that the chain must have traveled during the transformation to respect the fact that the chain cannot pass through itself is called the “non-crossing” distance $\Dnx$. If the chain were a ghost chain which could pass through itself, the corresponding distance for the whole transformation would be the MRSD, along with relatively small modifications that account for the presence of a conserved link length. Accounting for non-crossing always introduces extra distance to be traveled. As the chain transforms from conformation A to conformation B as a ghost chain according to the procedure discussed above, a number of self-crossing incidents occur. Figure \[fig:simple\_untangle\] shows a continuous but topologically equivalent version of the crossing event shown in figure \[fig:3linkchainX\](b). Even for this simple case, there are multiple ways for the transformation to have avoided the crossing event, each with a different cost. Furthermore, later crossings can determine the best course of action for the previous crossings.
Figure \[fig:retrospect\_matters\] illustrates how non-crossing distances are non-additive, so that one must look at the whole collection of crossing events. Therefore, to find the optimal way to “untangle” the chain (reverse the sense of the crossings), one must consider all possible uncrossing transformations, in retrospect. The recipe we follow is to evolve the chain as a ghost chain and record all the incidents of self-crossing that happen during the transformation. Then, looking at the global transformation, we find the best untangling movement that the chain could have taken. To compute the extra cost introduced by the non-crossing constraints we proceed as follows. We construct a matrix that we call the cumulative crossing matrix $\YY$. $\YY_{ij}$ is non-zero if link $i$ has truly (in 3D) crossed link $j$ at any time during the transformation. This matrix is thus conceptually different from the matrix $\XX$, which holds only for one instant (one conformation) and which can contain crossings in the 2D projection that are not true crossings during the transformation. The values of the elements of $\YY$ are calculated in the same way as the values of $\XX$. The sign again depends on whether the link was crossed from over to under or from under to over, so a given projection plane is still assumed. The order in which the crossings happened is kept track of in another matrix $\YY_O$. The coordinates of all the beads at the instant of a given crossing are also recorded. For example, if two crossings have happened during the transformation of a chain, then two sets of coordinates for intermediate states are also stored. We next describe a simple concrete example to illustrate the general method.

#### A Concrete Example

Figure \[fig:concrete\_eg\] shows a simple transformation of a 7-link chain. During the transformation the chain crosses itself in two instances. The first instance of self-crossing is between link 5 and link 7.
The second instance is when link 2 crosses link 4. The location of each crossing along the chain is also recorded: i.e. if we assume that the chain is parametrized by $s = 0$ to $N$, then at the instant of the first crossing (link 5 and link 7) $s = 4.4$ (link 5) and $s = 6.9$ (link 7). The second crossing occurs at $s=1.3$ (link 2) and $s=3.8$ (link 4). The full coordinates of all beads are also known: we separately record the full coordinates of all beads at each instant of crossing. The information that indicates which links have crossed and their over-under structure can be aggregated into the cumulative crossing matrix $\YY$. For the example in figure \[fig:concrete\_eg\], the cumulative matrix (up to a minus sign indicating onto what plane the crossing events have been projected) is $$\YY = \begin{bmatrix} 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 1.3& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0\\ 0& -3.8& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& -4.4\\ 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 6.9& 0& 0 \end{bmatrix} \: .$$ $\YY$ tells us which links have truly crossed one another during the whole process of transformation, and what the relative over-under structure was at the time of crossing. For example, by glancing at the matrix we can see that the link pairs (5,7) and (2,4) have crossed one another. We also know from the signs of the elements of $\YY$ that both links 2 and 7 were underneath links 4 and 5 just prior to their respective crossings in the reference frame of the projection. Two links will cross each other at most once during a transformation. If one link, e.g. link $i$, crosses several others during the transformation, the elements $(i,j)$, $(i,k)$, etc., along with their transposes, will be nonzero. The order of crossings can be represented in a similar fashion as a sparse matrix.
$$\YY_O = \begin{bmatrix} 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 2& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0\\ 0& 2& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 1\\ 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0 \end{bmatrix}.$$ Analyzing the structure of the crossings is similar to analyzing the structure of a knot, wherein one studies a knot’s 2D projections, noting the crossings and their over/under nature based on a given directional parameterization of the curve [@AdamsCCKnot; @NechaevSKStatKnot; @WiegelFW86]. One difference here is that we are not dealing with true closed-curve knots (in the mathematical sense), since a knot is an embedding of $S^1$ in $S^3$. Here we treat open curves.

### Crossing substructures {#sec:substructures .unnumbered}

By studying the crossing structure of open-ended pseudo-knots in the most general sense, one can identify a number of substructures that recur in crossing transformations. Any act of reversing the nature of all the crossings of the polymer can be cast as some ordered combination of reversing the crossings of these substructures. We identify three substructures: Leg, Loop, and Elbow.

#### Leg {#sec:leg}

Given any self-crossing point of a chain, a leg is defined from that crossing point to an end of the chain. Therefore for each self-crossing point two legs can be identified, running along the chain from that crossing point to each end—see figure \[fig:leg\_example\]. A single leg structure is shown in figure \[fig:all\_object\](a).

#### Loop {#sec:loop}

As stated earlier, when traveling along the polymer one arrives at each crossing twice. If the two instances of a single crossing are encountered consecutively while traveling along the polymer, with no intermediate crossing, then the substructure traced in between is a loop. See Fig \[fig:all\_object\](b).

#### Elbow {#sec:elbow}

If two consecutive crossings have the same over-under sense, then they form an elbow; see Fig \[fig:all\_object\](c).
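On a single traversal of the chain, loops and elbows can be read off from the ordered list of crossing visits. A sketch in Python, with a hypothetical encoding of our own in which each visit is a (crossing id, over/under sense) pair:

```python
def find_loops_and_elbows(visits):
    """Scan the crossing visits encountered while traversing the chain.
    `visits` is the ordered list of (crossing_id, sense) pairs, with sense
    +1/-1 for over/under.  Two consecutive visits of the same crossing bound
    a loop; two consecutive distinct crossings with the same sense form an
    elbow.  Illustrative bookkeeping only."""
    loops, elbows = [], []
    for (c1, s1), (c2, s2) in zip(visits, visits[1:]):
        if c1 == c2:
            loops.append(c1)          # same crossing met twice in a row
        elif s1 == s2:
            elbows.append((c1, c2))   # same over-under sense
    return loops, elbows
```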
Note that the same two consecutive crossing instances will occur in reverse order on the second visit of the crossings: these form a dual of the elbow. By convention the segment with the longer arc-length between the two consecutive crossings is defined as the elbow. This would be the horseshoe-shaped strand in figure \[fig:all\_object\](c).

### Reversing the crossing nature {#reversing-the-crossing-nature .unnumbered}

The goal of this formalism is to assist in finding a series of movements that will reverse the over-under nature of all the crossings with the least amount of movement required by the polymer. So at this point we introduce basic movements that will reverse the nature of the crossings for the above substructures.

#### Using leg movement {#sec:legmove}

A transformation that reverses the over/under nature of a leg involves the motion of all the beads constituting the leg. Each bead must move to the location of the crossing (the “root” of the leg), and then move back to its original location [@MohazabAR08:bj]. The canonical leg movement is shown schematically in figure \[fig:leg\_movement\]. If more than one crossing occurs on a leg, we can reverse the nature of all of them through a single leg movement (see figure \[fig:leg\_movement\_extra\_cross\]). The move is topologically equivalent to the movement of the free end of the leg along the leg up to the desired crossing, and then moving all the way back to the original position while reversing the nature of the crossings on the way back.

#### Loop twist and loop collapse {#sec:looptwist}

Reversing the crossing of a loop substructure can be achieved by a move that is topologically equivalent to a twist; see figure \[fig:loop\_twist\_all\](a). This type of move is called a Reidemeister type I move in knot theory. However, the optimal motion is generally not a twist or rotation in 3-dimensional space (3D).
Figure \[fig:loop\_twist\_all\](b) shows a move which is topologically equivalent to a twist in 3D but costs a smaller distance, obtained by simply moving the residues inside the loop in straight lines to their final positions, resulting in a “pinching” motion that closes the loop and re-opens it. From now on we refer to the optimal motion simply as a loop twist, because it is topologically equivalent to one, but we keep in mind that the actual optimal physical move, and the distance calculated from it, are different.

#### Elbow moves {#sec:elbowmove}

Reversing the crossings of an elbow substructure can be done by moving the elbow segment in the motion depicted in figure \[fig:elbow\_move\]: each segment moves in a straight line to its corresponding closest point on the obstructing chain, and then moves in a straight line to its final position.

### Operator Notation {#sec:operatornotatoin .unnumbered}

The transformations for leg movement, elbow move, and twist can be expressed very naturally in operator notation, in which the various operators are applied to the chain until the nature of all the self-crossings is reversed. If we uniquely identify each instance of self-crossing by a number, then a topological loop twist at crossing $i$ can be represented by the operator $R(i)$ ([*R*]{} for Reidemeister). An elbow move, for the elbow defined by crossings $i$ and $i+1$, can be represented as $E(i,i+1)$. As discussed above, for each self-crossing two legs can be identified, corresponding to the two termini of the chain. This was exemplified in figure \[fig:leg\_example\] by the red and blue legs. Since we choose a direction of parametrization for the chain, we refer to the two leg movements as the “start leg” movement and the “end leg” movement, and for a generic crossing $i$ we denote them by $L_N(i)$ and $L_C(i)$ respectively. The operators that we defined above are left-acting (similar to matrix multiplication).
So a loop twist at crossing $i$ followed by an elbow move at crossings $j$ and $j+1$ is represented by $E(j,j+1) R(i)$.

#### Example {#sec:operatornotatoin_example}

Figure \[fig:operator\_example\] shows sample configurations before and after untangling. The direction of parametrization is from the red terminus to the cyan terminus. It can be seen that there are several ways to untangle the chain. One example would be $R(3)L_C(2)R(1)$, which consists of a twist of the green loop, followed by the cyan leg movement, followed by a twist of the blue loop. Another path of untangling would be $E(2,3)L_N(1)$, which is a movement of the red leg followed by the magenta elbow move. For the two above transformations the order of operations can be swapped, i.e. they are commutative, and the resulting distance for each of the transformations will be the same. That is, $\mathcal{D}[E(2,3)L_N(1)] = \mathcal{D}[L_N(1)E(2,3)]$. However, $E(2,3)L_N(1)$ is a more efficient transformation than $R(3)L_C(2)R(1)$, i.e. $\mathcal{D}[E(2,3)L_N(1)]< \mathcal{D}[R(3)L_C(2)R(1)]$. Other transformation moves are not commutative in the algorithm; for example, in Figure \[fig:operator\_example\], $L_N(1)R(3)R(2)$ is not allowed, since $R(2)$ acts only on loops defined by two instances of a crossing that are encountered consecutively in traversing the polymer, i.e. with no intermediate crossings. Therefore, even if crossing 2 happens kinetically before crossing 3 during the ghost transformation, only the transformation $L_N(1) R(2) R(3)$ is allowed in the algorithm.

### Minimal uncrossing cost {#sec:minimal_untangling_cost .unnumbered}

For each operator in the above formalism, a transformation distance/cost can be calculated. Hence finding the optimal untangling strategy amounts to finding the set of operator applications with minimal total cost. This is a search in the tree of all possible transformations, as illustrated in Figure \[fig:tree\_of\_poss\].
The optimal application of operators can be computed by applying a version of the depth-first tree search algorithm. According to the algorithm, from any given conformation there are several moves that can be performed, each having an associated cost. The pseudo-code for the search algorithm can be written as follows:

    procedure find_min_cost(moves_so_far = None, cost_so_far = 0,
                            min_total_cost = Infinity):
        optim_moves = NULL_MOVE
        if cost_so_far > min_total_cost:
            return [Infinity, optim_moves]
        endif
        for move in available_moves(moves_so_far):
            [temp_cost, temp_optim_moves] = find_min_cost(moves_so_far + move,
                                                          cost_so_far + cost(move),
                                                          min_total_cost)
            if temp_cost < min_total_cost:
                min_total_cost = temp_cost
                optim_moves = move + temp_optim_moves
            endif
        endfor
        return [min_total_cost, optim_moves]
    endprocedure

The values to the right of the equality signs in the arguments of the procedure are the default values with which the procedure starts. The procedure is called recursively, and returns both the set of optimal uncrossing moves (for a given crossing matrix corresponding to a starting and final conformation) and the distance corresponding to that set of optimal uncrossing moves. The algorithm visits all branches of the tree of possible uncrossing operations until it reaches the end; however, it terminates the search along a branch as soon as the cost of operations exceeds that of a solution already found. See figure \[fig:tree\_of\_poss\] for an illustration of the depth-first search tree algorithm. The above procedure was implemented using both the GNU Octave programming language and C++. To optimize speed by eliminating redundant moves, only one permutation was considered when operators commuted.
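A concrete Python rendering of this branch-and-bound search might look as follows; `available_moves` and `cost` are problem-specific callbacks (hypothetical names of our own), and the pruning bound is threaded through the recursion as in the pseudo-code:

```python
import math

def find_min_cost(available_moves, cost, moves_so_far=(), cost_so_far=0.0,
                  bound=math.inf):
    """Branch-and-bound depth-first search over uncrossing-move sequences,
    in the spirit of the pseudo-code above.  `available_moves(seq)` lists
    the legal next moves after `seq`; `cost(move)` is the move's distance.
    Returns (best_total_cost, best_move_sequence).  A hypothetical Python
    sketch, not the authors' Octave/C++ implementation."""
    if cost_so_far >= bound:
        return math.inf, None                 # prune: already worse than best
    moves = available_moves(moves_so_far)
    if not moves:                             # all crossings reversed: a leaf
        return cost_so_far, moves_so_far
    best_cost, best_seq = bound, None
    for move in moves:
        c, seq = find_min_cost(available_moves, cost,
                               moves_so_far + (move,),
                               cost_so_far + cost(move), best_cost)
        if c < best_cost:
            best_cost, best_seq = c, seq      # tighten the bound
    return best_cost, best_seq
```

For instance, on a toy tree in which crossings 1 and 2 can be reversed either by two loop twists or by a single elbow move, the search returns the cheaper elbow sequence.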
Generating unfolded ensembles {#sec:generate_unfold .unnumbered}
-----------------------------

To generate transformations between unfolded and folded conformations, we adopt an off-lattice coarse-grained $C_\alpha$ model [@ClementiC00:jmb; @SheaJE00], and generate an unfolded structural ensemble from the native structure as follows. For a native structure with $N$ residues, we define three data sets:

- [The set of $C_\alpha$ residue indices $i$, for which $i=1 \cdots N$ ]{}

- [The set of native link angles $\theta_j$ between three consecutive $C_\alpha$ atoms, for which $j=2 \cdots N-1$]{}

- [The set of native dihedral angles $\phi_k$ between four consecutive $C_\alpha$ atoms, i.e. the angle between the planes defined by $C_\alpha$ atoms ($k-1$, $k$, and $k+1$) and ($k$, $k+1$, and $k+2$). The index $k$ runs from $k=2\cdots N-2$.]{}

The distribution of C$_\alpha$-C$_\alpha$ distances in PDB structures is sharply peaked around 3.76Å  ($\sigma=0.09$Å). In practice we took the first C$_\alpha$-C$_\alpha$ distance from the N-terminus as representative, and used that number as the equilibrium link length for all C$_\alpha$-C$_\alpha$ distances in the protein. To generate an unfolded ensemble, we start by selecting at random a $C_\alpha$ atom $n$ ($2 \le n \le N-1$) in the native conformation, and we then perform rotations that change the angle $\theta_n$ centered at that randomly chosen residue $n$, and that change the dihedral $\phi_n$ defined by rotations about the bond $n$-($n$+1). If $n=N-1$ only the angle is changed. The new angle and dihedral are selected at random from the Boltzmann distributions described below. After each rotation, $\theta_n \to \theta_n^{new}$ and $\phi_n \to \phi_n^{new}$. Changing these angles rotates the entire rest of the chain, i.e. all the beads $i$ with $i > n$ are rotated to a new position. This recipe corresponds to an extension of the pivot algorithm [@LalM69; @MadrasN88].
However, we additionally require that the values of each angle and dihedral present in the native structure, $\theta_n^{Nat}$ and $\phi_n^{Nat}$, are more likely to be observed. We implement this criterion in the following way. The new angle $\theta_n$ is chosen from a probability distribution proportional to $\exp{\left(-\beta E \left(\theta_n\right) \right)}$, where $\beta E(\theta_n)$ is computed from: $$\label{eq:EDeltatheta} \beta E(\theta_n) = k_\theta \left(\theta_n - \theta_n^{Nat}\right)^2 \: ,$$ where we have set $k_\theta = 20$. Similarly for the dihedral $\phi_n$, the probability distribution function is proportional to $\exp{\left(-\beta E \left(\phi_n\right)\right)}$, where $\beta E \left(\phi_n\right)$ is computed from $$\label{eq:EDeltaphi} \beta E \left(\phi_n\right) = k_{\phi1} [1 - \cos{( \phi_n - \phi_n^{Nat})}] + k_{\phi3} [1 - \cos{(3 (\phi_n - \phi_n^{Nat}))}] \: ,$$ where $k_{\phi1} = 1$ and $k_{\phi 3} = 0.5$, so that the energy is minimal at the native dihedral. The fact that the $k_\phi$s are much smaller than $k_\theta$ means that for a given temperature, dihedral angles are more uniformly distributed than bond angles. If all $k_\theta$ and $k_\phi$ are set to zero, then all states are equally accessible and the algorithm reduces to the pivot algorithm, i.e. a generator of unbiased self-avoiding random walks. If all $k_\theta$ and $k_\phi$ are set to $\infty$, then the chain behaves as a rigid object and does not deviate from its native state. Each pivot operation results in a new structure that must be checked for steric overlap with itself, i.e. the chain must be self-avoiding. If the new chain conformation has steric overlap, then the attempted move is discarded and a new residue is selected at random for a pivot operation. In practice, we defined steric overlap by first finding an approximate contact or cut-off distance for the coarse-grained model.
The contact distance was taken to be the smaller of either the minimum $C_\alpha$-$C_\alpha$ distance between those residues in native contact (where two residues are defined to be in native contact if any of their heavy atoms are within 4.9Å), or the $C_\alpha$-$C_\alpha$ distance between the first two consecutive residues. For SH3, for example, the minimum $C_\alpha$ distance in native contacts is $4.21$Å and the first link length is $3.77$Å, so for SH3 all non-neighbor beads must be further apart than $3.77$Å for a pivot move to be accepted. Future refinements of the acceptance criteria could involve the use of either the mean C$_\alpha$-C$_\alpha$ distance or other criteria more accurately representing the steric excluded volume of the residue side chains. In our recipe, to generate a single unfolded structure we start with the native structure and implement $\mathcal{N}$ [*successful*]{} pivot moves, where $\mathcal{N}$ is related to the number of residues $N$ by $\mathcal{N} = \ln(0.01)/{\ln[0.99 (N-2)/(N-1)]}$. For the next unfolded structure we start again from the native structure and pivot $\mathcal{N}$ successful times, following the above recipe. Note that $\mathcal{N}$ successful pivots do not generally affect all beads of the chain: typically some beads are chosen several times and some beads are not chosen at all, according to a Poisson distribution. This particular choice of $\mathcal{N}$ means that for polymers with $N<101$, where ${N-2 \over N-1} < 0.99$, the chance that any given link is not pivoted at all during the $\mathcal{N}$ pivot operations is $0.01$. On the other hand, for longer polymers, where ${N-2 \over N-1} > 0.99$, any particular segment of the protein with length $0.01$ of the total length has a $0.01$ chance of having none of its beads pivoted. For any $N$, however, the sheer number of pivot moves generally ensures a large RMSD between the native and generated unfolded structures.
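The biased angle draws and the pivot-move count can be sketched in Python. The helper names are ours; the bond-angle density is the Gaussian implied by the quadratic energy, and we take the dihedral bias to have its minimum at the native value, consistent with the stated requirement that native angles be most likely:

```python
import math
import random

K_THETA, K_PHI1, K_PHI3 = 20.0, 1.0, 0.5

def sample_theta(theta_nat):
    """New bond angle from a density proportional to
    exp(-k_theta (theta - theta_nat)^2): a Gaussian centered on the native
    angle with standard deviation sqrt(1 / (2 k_theta))."""
    return random.gauss(theta_nat, math.sqrt(0.5 / K_THETA))

def sample_phi(phi_nat):
    """New dihedral by rejection sampling from exp(-beta E(phi)), with the
    dihedral bias taken to be minimal at the native value (an assumption of
    this sketch).  The proposal is uniform on [-pi, pi); exp(-E) <= 1
    serves as the acceptance probability."""
    while True:
        phi = random.uniform(-math.pi, math.pi)
        d = phi - phi_nat
        e = K_PHI1 * (1 - math.cos(d)) + K_PHI3 * (1 - math.cos(3 * d))
        if random.random() < math.exp(-e):
            return phi

def n_pivot_moves(N):
    """Number of successful pivot moves per unfolded structure: solves
    p**calN = 0.01 with p = 0.99 (N-2)/(N-1), the per-move probability
    that a given link is not pivoted."""
    p = 0.99 * (N - 2) / (N - 1)
    return math.log(0.01) / math.log(p)
```

By construction, `n_pivot_moves` returns the $\mathcal{N}$ for which the no-pivot probability of a given link is exactly $0.01$.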
Each unfolded structure generally retains small amounts of native-like secondary and tertiary structure, due to the native biases in the angle and dihedral distributions. For example, for SH3 the number of successful pivot moves was 162, and the mean fraction of native contacts in the generated unfolded ensemble was $0.06$.

Protein dataset {#protein-dataset .unnumbered}
---------------

The 45 proteins used in this study are given in Table \[tab:proteins\_used\]. When divided into kinetic classes, they consist of 25 2-state folders, 13 non-knotted 3-state folders, and 7 knotted proteins not used in the kinetic analysis. Structurally there are 11 all-$\alpha$-helix proteins, 14 all-$\beta$-sheet proteins, 13 $\alpha$-$\beta$ proteins, and 5 knotted proteins. These proteins were selected randomly from the datasets in references [@IvankovDN03; @GromihaMM06], where kinetic rate data was available to categorize the proteins into 2-state or 3-state folders. Our dataset contains 27 of the 52 proteins in [@IvankovDN03], and 38 of the 72 proteins in [@GromihaMM06]. The datasets in [@IvankovDN03; @GromihaMM06] do not include knotted proteins, however; the knotted proteins were taken from several additional sources, including references [@MallamAL07] (1NS5), [@MallamAL06] (1MXI), [@KingNP10] (3MLG),  [@vanRoonAMM08] (2K0A), [@BolingerD10] (2EFV), and the protein knot server KNOTS [@KolesovG07] (1O6D, 2HA8). Aside from the Stevedore knot in [@BolingerD10], we did not consider pseudo-knots more complex than the $3_1$ trefoil. Several of these proteins ($\alpha$-amylase inhibitor 2AIT and MerP mercuric ion binding protein 2HQI) have disulfide bonds present in the native structure. These constraints are not used in the current analysis. The folding pathways we obtain may be thought of as relevant to the initial folding event before disulfide bonds are formed, or to a protein of equivalent topology but with a sequence lacking the disulfide bonds.
Lack of preservation of disulfide bonds is a shortcoming of the present algorithm; the development of more accurate computational algorithms for unfolded ensemble generation is a topic of future work. Several of the proteins also have ligands present in the crystal or NMR structures. These include 1A6N and 1HRC (heme ligands), 1RA9 (nicotinamide adenine), 1GXT (sulfate), 1MXI (iodide ion), 2K0A (3 Zn ions), and 2EFV (phosphate ion). Since we have removed energetics in general from our analysis of geometrical pathways, these ligands, and any effect they may have on the folding pathway due to protein-ligand interactions, are not included here. In the folding kinetics analysis of references [@IvankovDN03; @GromihaMM06] they are generally not present either; e.g. the folding rate for 1A6N is actually that for apomyoglobin [@CavagneroS99].

### Structural alignment properties of our protein dataset {#structural-alignment-properties-of-our-protein-dataset .unnumbered}

To categorize proteins as two- or three-state, we have chosen proteins with folding rate data available. This dataset has somewhat different structural alignment statistics than a non-redundant (NR) database, e.g. [@Thiruv2005nh3d]. The TM-score based alignment of Zhang and Skolnick [@ZhangY05] can be used to obtain structural alignment statistics. Their method resolves the problems of outlier and length-dependent artifacts of RMSD-based alignments. Distributions of TM-score for the above NR database, our dataset, and the datasets in references [@IvankovDN03; @GromihaMM06] from which the non-knotted proteins in our dataset were taken, are given in the Supplementary Material, along with a statistical analysis of the distributions. The bulk of our proteins (98%) have TM-scores consistent with the NR database of Thiruv [*et al.*]{} (see Figure S2 in the Supporting Information); however, our dataset and those of [@IvankovDN03; @GromihaMM06] contain a small number of structural homologs not present in the NR dataset, which are tabulated in the Supplementary Material. We do not suspect that this small number of homologs will significantly modify the conclusions derived from the statistical analysis of our dataset; however, expansion and refinement to find the most relevant dataset is a topic for future work.

Calculating distance metrics for the unfolded ensemble {#sec:details_of_method .unnumbered}
------------------------------------------------------

To obtain minimal transformations between unfolded and native structures for a given protein, the $C_\alpha$ backbone was extracted from the PDB native structure, and 200 coarse-grained unfolded structures were generated using the methods described above. The unfolded structures were first aligned using RMSD, and the average (residual) RMSD was calculated. The unfolded structures were then aligned by minimizing MRSD, and the residual MRSD was calculated. The conformations were then further coarse-grained (smoothed) by sampling every other bead, reducing the total number of beads. This further coarse-graining, which is in the spirit of the initial steps of Koniaris–Muthukumar–Taylor reduction [@KoniarisK91; @KoniarisK91-JCP; @TaylorWR00; @VirnauP05], eliminates all instances of potential self-crossing in which the loop size or elbow size is smaller than three links. Each structure was then transformed to the folded state by the algorithm discussed earlier in Methods. The self-crossing instances, along with the coordinates of all the beads, were recorded. Appropriate data structures were formed and the relevant crossing substructures (leg, elbow, and loop) were detected.
With topological data structures at hand, the minimal uncrossing cost was found through a depth-first search of the tree of possible uncrossing operations that was described above. Finally, the minimal uncrossing cost, $\Dnx$, and the total distance, $\mathcal{D}$, are calculated for each unfolded conformation. These differ from one unfolded conformation to the other; the ensemble average is recorded and used below. The ensemble average of MRSD and RMSD are also calculated from the 200 unfolded structures that were generated.\ ### Importance of non-crossing {#importance_of_non_crossing .unnumbered} We define the importance of non-crossing (INX) as the ratio of the extra untangling movement caused by non-crossing constraints to the distance when no such constraints exist, i.e. if the chain behaved as a ghost chain. Mathematically this ratio is defined as $INX = \Dnx / \left( \mbox{\sc mrsd} \times N \right)$. ### Other metrics {#other-metrics .unnumbered} Other metrics investigated include absolute contact order ACO [@Plaxco98], relative contact order RCO [@Plaxco98], long-range order LRO [@GromihaMM01], and chain length $N$ [@GutinAM96:prl; @GalzitskayaOV01]. Following [@GromihaMM01], we define Long-Range Order (LRO) as: $$\label{eq:LRO_define} LRO = \sum_{i<j} n_{ij} / N \,\, \mbox { where } n_{ij} = \begin{cases} 1 & \mbox{if }|i - j | > 12 \\ 0 & \mbox{otherwise} \end{cases}$$ where $i$ and $j$ are the sequence indices for two residues for which the $C_\alpha - C_\alpha$ distance is $\le 8$ Å in the native structure. Likewise, we define Relative Contact Order (RCO) following [@Plaxco98]: $$\label{eq:RCO_define} RCO = {1 \over L \times N} \sum_{i<j}^N \Delta L_{ij},$$ where $N$ is the total number of contacts between nonhydrogen atoms in the protein that are within 6 Å in the native structure, $L$ is the number of residues, and $\Delta L_{ij}$ is the sequence separation between contacts in units of the number of residues.
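As a concrete illustration, the LRO and RCO definitions above can be sketched in a few lines of Python. This is a minimal sketch with our own function and variable names (`ca_coords`, `contact_pairs`), not the implementation used in the paper:

```python
import math

def lro(ca_coords, cutoff=8.0, min_sep=12):
    """Long-Range Order: count residue pairs whose C-alpha distance is
    at most `cutoff` (8 Angstrom in the text) and whose sequence
    separation exceeds `min_sep` (12 in the text), normalized by the
    number of residues."""
    n = len(ca_coords)
    contacts = 0
    for i in range(n):
        for j in range(i + 1, n):
            if j - i > min_sep and math.dist(ca_coords[i], ca_coords[j]) <= cutoff:
                contacts += 1
    return contacts / n

def rco(contact_pairs, n_residues):
    """Relative Contact Order: mean sequence separation of the native
    contacts (in the text, nonhydrogen atoms within 6 Angstrom),
    normalized by the number of residues L; ACO is then RCO * L."""
    n_contacts = len(contact_pairs)
    return sum(abs(j - i) for i, j in contact_pairs) / (n_residues * n_contacts)
```

Here `ca_coords` is a list of $(x,y,z)$ tuples for the $C_\alpha$ trace, and `contact_pairs` is a precomputed list of contacting residue index pairs; for LRO only the $C_\alpha$ positions are needed, while RCO takes the all-atom contact list as given.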
Similarly, Absolute Contact Order (ACO) [@Plaxco98] is defined to be: $$\label{eq:ACO_define} ACO = {1 \over N} \sum_{i<j}^N \Delta L_{ij} = RCO \times L$$ Results {#results .unnumbered} ======= Proteins were classified by several criteria: - [2-state vs. 3-state folders]{} - [$\alpha$-helix dominated, vs. $\beta$-sheet dominated, vs. mixed]{} - [knotted vs. unknotted proteins]{} Several questions are answered for each group of proteins: - [What fraction of the total transformation distance is due to non-crossing constraints?]{} - [How do the different order parameters distinguish between the different classes of proteins?]{} - [How do the different order parameters correlate with each other?]{} ### Order parameters discriminate protein classes {#order-paramaters-discriminate-protein-classes .unnumbered} In Table \[tab:order\_parameters\_for\_various\_classes\], we compare the unfolded ensemble-average of several metrics between different classes of proteins, and perform a p-value analysis based on the Welch t-test. The null hypothesis states that the two samples being compared come from normal distributions that have the same means but possibly different variances. Metrics compared in Table \[tab:order\_parameters\_for\_various\_classes\] are INX, LRO, RCO, ACO, MRSD, RMSD, $\Dnx$, $\Dnx/N$, $\mathcal{D}$, $\mathcal{D}/N$ and $N$. The most obvious check of the general method outlined in the present paper is to compare the non-crossing distance $\Dnx$ between knotted and unknotted proteins. Here we see that knotted proteins traverse about $3.5\times$ the distance of unknotted proteins in avoiding crossings, so that the two classes of proteins are different by this metric. The same conclusion holds for knotted [*vs.*]{} unknotted proteins if we use $\Dnx/N$, $\mathcal{D}$, $\mathcal{D}/N$, or INX.
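The Welch t-test used in this p-value analysis can be sketched as follows. This is a self-contained illustration with hypothetical sample values, not the paper's data; in practice one would obtain the p-value from the t-distribution with the returned degrees of freedom (e.g. via a statistics library):

```python
import math

def welch_t(a, b):
    """Welch's t-statistic and Welch-Satterthwaite degrees of freedom
    for two samples with possibly unequal variances.  The null
    hypothesis is equal means; the p-value follows from the
    t-distribution with df degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                        # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom:
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical metric values for two small groups of proteins:
t, df = welch_t([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

Unlike the standard Student t-test, the denominator and degrees of freedom here do not assume a pooled common variance, which matters when comparing protein classes of different sizes and spreads.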
Of all metrics, the statistical significance is highest when comparing $\mathcal{D}/N$, which is important because the knotted proteins considered here tend to be significantly longer than the unknotted proteins, so that chain length $N$ distinguishes the two classes. Dividing by $N$ partially normalizes the chain-length dependence of $\Dist$; however, $\Dist/N$ still correlates remarkably strongly with $N$ when compared for all proteins ($r=0.824$; see Supplementary Material, Table 8). It was somewhat unusual that MRSD and RMSD distinguished knotted proteins from unknotted proteins better than $\Dist$ (or $\Dnx$), which accounts for non-crossing. All other quantities, including INX, ACO, and RCO, distinguish knotted from unknotted proteins. The only quantity that fails is LRO. The importance of noncrossing $INX$, measuring the ratio of the uncrossing distance $\Dnx$ to the ghost-chain distance $N\times MRSD$, was largest for knotted proteins, followed by $\beta$ proteins, with $\alpha$ proteins having the smallest $INX$. Mixed proteins had an average INX value in between those for $\alpha$ and $\beta$ proteins. In distinguishing all-$\alpha$ and all-$\beta$ proteins, we find that LRO and RCO are by far the best discriminants. Interestingly, INX and $\Dnx/N$ also discriminate these two classes comparably to or better than ACO does. $\Dnx$ is marginal, while all other metrics fail. All metrics except for $N$ and $\Dist$ are able to discriminate $\alpha$ from mixed $\alpha$-$\beta$ proteins, with LRO performing the best by far. Interestingly, none of the above metrics can distinguish $\beta$ proteins from mixed $\alpha$-$\beta$ proteins. It is sensible that energetic considerations would be the dominant distinguishing mechanism between two- and three-state folders. Intermediates are typically stabilized energetically. We can nevertheless investigate whether any geometrical quantity discriminates the two classes. Indeed LRO and RCO fail, as does INX.
This supports the notion that intermediates are not governed by “topological traps” that are undone by uncrossing motion, but rather are energetically driven. ACO performs marginally. Three-state folders tend to be longer than 2-state folders, so that $N$ distinguishes them and in fact provides the strongest discriminant, consistent with previous results [@GalzitskayaOV03]. Interestingly, RMSD, MRSD, and $\Dist$ perform comparably to $N$. However, these measures also correlate strongly with $N$ (see Supplementary Information Table 8). $\Dist/N$, $\Dnx$ and $\Dnx/N$ also perform well, but still correlate with $N$, albeit more weakly than the above metrics. Figure \[fig:cluster\]A shows a scatter plot of all proteins as a function of $\Dnx/N$ [*vs.*]{} LRO. Knotted and unknotted proteins are indicated, as are $\alpha$, $\beta$, and mixed $\alpha$-$\beta$ proteins. Two- and three-state proteins are indicated as triangles and squares, respectively. From the figure, it is easy to visualize how LRO provides a successful discriminant between $\alpha/\beta$ and $\alpha/$(mixed) proteins, but is unsuccessful in discriminating $\beta/$(mixed), knotted and unknotted, and two- and three-state folders. It is also clear from the figure how $\Dnx/N$ discriminates knotted from unknotted proteins. One can also see distribution overlap, but nevertheless successful discrimination, between $\alpha$ and $\beta$ and between $\alpha$ and mixed proteins. Figure \[fig:cluster\]B shows a scatter plot of all proteins as a function of $\Dnx$ [*vs.*]{} $N$, using the same rendering scheme for protein classes as in Figure \[fig:cluster\]A. From the figure, one can see how the metrics correlate with each other, and how they both discriminate knotted from unknotted proteins and 2-state from 3-state proteins.
Moreover, one can see how, despite the significant correlation between $\Dnx$ and $N$, $\Dnx$ can discriminate $\alpha$ proteins from either $\beta$ proteins or mixed $\alpha$/$\beta$ proteins, while $N$ cannot. As a control study for the above metrics, we took random selections of half of the proteins, to see if random partitioning of the proteins into two classes resulted in any of the metrics distinguishing the two sets with statistical significance. No metric in this study had significance: the p-values ranged from about 0.32 to 0.94. Figure \[fig:class\_distinguish\] shows a plot of the statistical significance for all the metrics in Table \[tab:order\_parameters\_for\_various\_classes\] to distinguish various pairs of protein classes: 2-state from 3-state proteins, $\alpha$ from $\beta$, $\alpha$ from mixed $\alpha/\beta$, $\beta$ from mixed, and knotted from unknotted. We can define the most consistent discriminator between protein classes as that metric that is statistically significant for the most classes, and for those classes has the highest statistical significance. By this criterion $\Dnx/N$ is the most consistent discriminator between the general structural and kinetic classes considered here. Interestingly, in all cases, the extra distance introduced by non-crossing constraints is a very small fraction (less than 13%) of the ghost-chain distance $N\times$MRSD, which neglects non-crossing. This was not an obvious result, but it is encouraging evidence for why simple order parameters that neglect an explicit accounting of non-crossing have been so successful historically [@Onuchic96; @Plaxco98; @Baker2000:Nature; @NymeyerH00:pnas; @BestRB05; @DingF05jmb; @ChoSS06]. ### Scaling laws for pathway distances across domains and whole proteins {#scaling-laws-for-pathway-distances-across-domains-and-whole-proteins .unnumbered} Larger proteins will typically have larger MRSD.
A protein of twice the chain length need not have twice the MRSD, however; we plot the unfolded ensemble averaged MRSD of the proteins in our dataset as a function of $N$ in Figure \[figscaling\]A. The plot shows sub-extensive scaling for the straight-line path distance per residue: $MRSD \sim N^{0.65}$. On the other hand, the non-crossing distance per residue, $\Dnx/N$, shows super-extensive scaling: $\Dnx/N \sim N^{1.33}$, indicating that non-crossing induced entanglement becomes progressively more important even on a per-residue basis for longer proteins, and likely polymers in general. In fact, the steeper slope of $\Dnx/N$ indicates a crossover such that when $N$ is larger than about $3600$, chain non-crossing dominates the motion of the minimal folding pathway. It is noteworthy that the scatter in the log-log scaling plot of Figure \[figscaling\]A is much larger for $\Dnx/N$ than for MRSD, illustrating the larger dispersion of $\Dnx/N$ for proteins of the same length but different native topology. The above analysis can be applied to domains within a single protein, to test how autonomous their folding mechanisms are as compared to separate proteins. Run on our dataset, the program DDomain [@ZhouH09ddomain] only finds multiple domains in the methyltransferase domain of human TAR (HIV-1) RNA binding protein (PDB 2HA8) [@WuH2HA8], between residues 20-88 and 89-178 (residue 20 is the first resolved residue in the crystal structure). The domain finding program DHcL [@KoczykG08] also finds domains in this protein between residues 20-83 and 84-178. DHcL also finds domains in several other proteins, some generally accepted as single domain; however, one of these proteins is clearly a repeat protein containing a 36 residue helix-turn-helix motif: tumor suppressor P16INK4A (PDB 2A5E) [@ByeonIJL1998]. For this protein, DHcL finds domains between the 1st and 2nd, and 2nd and 3rd repeating units.
We manually added a domain boundary between the 3rd and 4th repeating units to yield 4 domains containing residues 1-36, 37-72, 73-108, and 109-144. The domains of 2HA8 and 2A5E are illustrated in Figure \[figscaling\]C. Using the above domain structures for 2HA8 and 2A5E, we analyze the scaling of MRSD with chain length $N$ in Figure \[figscaling\]B. In these plots the individual domains are considered as separate proteins, then combined if the domains are contiguous; e.g. for 2A5E we examine proteins consisting of domain 1; domains 1 and 2 together; domains 1, 2, and 3 together; all domains together; and all contiguous combinations therein. This yields the same scaling law for both proteins: $MRSD \sim N^{0.76}$, which has a larger power law than the scaling between proteins above. Chain connectivity constraints apparently induce cross-talk between domains even for MRSD. Likewise, the scaling law for noncrossing distance per residue is $\Dnx/N \sim N^{2.51}$, indicating significant polymer chain interference between domain folding. The individual domains of multidomain proteins apparently show less severe chain constraints than single domain proteins of the same size. ### Quantifying minimal folding pathways {#quantifying-minimal-folding-pathways .unnumbered} The minimum folding pathway gives the most direct way that an unfolded protein conformation can transform by reconfiguration to the native structure. However, different configurations in the unfolded ensemble transform by different sequences of events; for example, one unfolded conformation may require a leg uncrossing move, followed by a Reidemeister move elsewhere on the chain, followed by an uncrossing move of the opposite leg, while another unfolded conformation may require only a single leg uncrossing move. The sequence of moves can be represented as a color-coded bar plot, as shown in Figures \[figtransformsalpha\]-\[figtransformsknot\].
In these figures, the sequence of moves is read from right to left, and the width of a bar indicates the non-crossing distance undertaken by that move. A scale bar is given underneath each figure indicating a distance of 100 in units of the link length. Red bars indicate moves corresponding to the N-terminal leg ($\LN$) of the protein, while green bars indicate moves corresponding to the C-terminal leg ($\LC$). Blue bars indicate Reidemeister “pinch and twist” moves, while cyan bars indicate elbow uncrossing moves. The typical sequence of moves varies depending on the protein. Figure \[figtransformsalpha\] shows the uncrossing transformations of the all-$\alpha$ protein acyl-coenzyme A binding protein (PDB id 2ABD [@AndersenKV93], see Figure \[fig:3protsrender\]A). Panels A and B depict the same set of transformations, but in A they are sorted from largest to smallest values of $\LN$ uncrossing, and in B they are sorted from largest to smallest values of $\LC$ uncrossing. The leg moves in each panel are aligned so that the left ends of the bars corresponding to the moves being sorted are all lined up. Some transformations partway down in panel A do not require an $\LN$ move; these are then ordered from largest to smallest $\LC$ move. The converse is applied in panel B. Some transformations do not require either leg move; these are sorted in decreasing order of the total distance of Reidemeister loop twist moves. Finally, some transformations require only elbow moves; these are sorted from largest to smallest total uncrossing distance. Figure \[figtransformsbeta\] shows the noncrossing transformations for the Src homology 3 (SH3) domain of phosphatidylinositol 3-kinase (PI3K), a largely-$\beta$ protein (about 23% helix, including 3 short $3_{10}$ helical turns; PDB id 1PKS [@KoyamaS93], see Figure \[fig:3protsrender\]B), sorted analogously to Figure \[figtransformsalpha\].
Figure \[figtransformsknot\] shows the uncrossing transformations involved in the minimal folding of the designed knotted protein 2ouf-knot (PDB id 3MLG [@KingNP10], Figure \[fig:3protsrender\]C). Interestingly, for the all-$\alpha$ protein 2ABD, $\approx 12\%$ of the 172 transformations considered did not require any uncrossing moves, and proceeded directly from the unfolded to the folded conformation. These transformations are not shown in Figure \[figtransformsalpha\]. For the $\beta$ protein and knotted protein, every transformation that we considered (195 for 1PKS and 90 for 3MLG) required at least one uncrossing move. As a specific example, the top-most transformation in Figure \[figtransformsknot\] panel B consists of a C-leg move (green) covering $\approx 90\%$ of the non-crossing distance, followed by an N-leg move (red) covering $\approx 7\%$ of the distance, then a short elbow move (cyan), a short Reidemeister loop move (blue), another short elbow move (cyan), and finally a short Reidemeister move (blue). In some cases the elbow and loop moves commute if they involve different parts of the chain, but generally they do not. For this reason we have not made any attempt to cluster loop and elbow moves; rather, we have just represented them in the order they occur. On the other hand, consecutive leg moves commute and can be taken in either order. In Figures \[figtransformsalpha\]-\[figtransformsknot\], one can see that significantly more motion is involved in the leg uncrossing moves than for other types of move. The total distance covered by leg moves is 82% for 3MLG, 69% for 1PKS, and 49% for 2ABD. For 3MLG, the total leg move distance is comprised of 44% $\LN$ moves and 38% $\LC$ moves. For 1PKS, leg move distance is comprised of 18% $\LN$ moves and 51% $\LC$ moves. For 2ABD, distance for the leg moves is roughly symmetric, with 26% $\LN$ and 23% $\LC$.
One difference that can be seen for the all-$\alpha$ protein compared to the $\beta$ and knotted proteins is in the persistence of the leg motion. For 2ABD, only 24% of the transformations require $\LN$ moves and only 30% of the transformations require $\LC$ moves. On the other hand, the persistence of leg moves is greater in the $\beta$ protein and greatest in the knotted protein. For 1PKS, $\LN$ and $\LC$ moves persist in 74% and 66% of the transformations respectively. In 3MLG, $\LN$ and $\LC$ moves persist in 92% and 41% of the transformations respectively. Inspection of the transformations for the $\beta$ protein 1PKS in panels A and B of Figure \[figtransformsbeta\] reveals that uncrossing moves generally cover larger distance than in the $\alpha$ protein 2ABD (the mean uncrossing distance is 136 for 1PKS [*vs.*]{} 77.5 for 2ABD). We also notice that, in contrast to the leg uncrossing moves in 2ABD, both $\LN$ and $\LC$ moves are often required (44% of the transformations require both $\LN$ and $\LC$ moves, compared to 5% for 2ABD). The asymmetry of the protein is manifested in the asymmetry of the leg move distance: the $\LN$ moves are generally shorter than the $\LC$ moves, covering about 1/4 of the total leg move distance. As mentioned above, $\LC$ moves comprise about 51% of the total distance for the 195 transformations in Figure \[figtransformsbeta\], while $\LN$ moves only comprise about 18% of the distance on average. Both $\LN$ and $\LC$ moves are persistent as mentioned above. A leg move of either type is present in 95% of the transformations. Inspection of the transformations in Figure \[figtransformsknot\] reveals that every transformation requires either an $\LN$ or $\LC$ move. This is sensible for a knotted protein, and is in contrast to the transformations for the $\alpha$ protein 2ABD, where many transformations do not require any leg uncrossing at all and consist of only short Reidemeister loop and elbow moves.
In this sense the diversity of folding routes [@PlotkinSS00:pnas; @PlotkinSS02:Tjcp] for the knotted protein 3MLG is the smallest of the proteins considered here, and illustrates the concept that topological constraints induce a pathway-like aspect to the folding mechanism. The N-terminal $\LN$ leg move is the most persistently required uncrossing move, present in about 92% of the transformations. This is generally the terminal end of the protein that we found to be involved in forming the pseudo-trefoil knot. Sometimes, however, the C-terminal end is involved in forming the knot, though this move is less persistent and is present in only 41% of the transformations. However, when an $\LC$ move is undertaken, the distance traversed is significantly greater, as shown in Panel B of Figure \[figtransformsknot\]. This asymmetry is a consequence of the asymmetry already present in the native structure of the protein. ### Consensus minimal folding pathways {#consensus-minimal-folding-pathways .unnumbered} From the transformations described in Figures \[figtransformsalpha\]-\[figtransformsknot\], we see that there are a multitude of different transformations that can fold each protein. The pathways for the $\alpha$ protein 2ABD are more diverse than those for the $\beta$ or knotted proteins. From the ensemble of transformations for each protein, we can average the amount of motion for each uncrossing move to obtain a quantity representing the consensus or most representative minimal folding pathway for that protein. This takes the form of the histograms in Figure \[fig:consensus\_moves\], with the x-axes representing the order of uncrossing/untangling events, right to left, and the y-axes representing the average amount of motion in each type of move.
The ensemble of untangling transformations can be divided into three different classes: transformations in which leg $L_N$ is the largest move, transformations in which leg $L_C$ is the largest move, and transformations in which an elbow E or loop R (for Reidemeister type I) is the largest move. Moreover, if $L_N$ and $L_C$ moves occur consecutively they can be commuted, so without loss of generality we take the $L_N$ move as occurring before the $L_C$ move in the x-axes of Figure \[fig:consensus\_moves\]. The leg moves, if they occur first, are then followed by either elbow (E) and/or loop (R) moves, of which there may be several. In general, the leg moves may both occur before the collection of loop and elbow moves, after them, or may bracket the elbow and loop moves (e.g. 2nd bar in Figure \[figtransformsknot\]b). By the construction of our approximate algorithm, if two $\LN$ moves were encountered during a trajectory (they were encountered only a few times during the course of our studies), they would be aggregated into one $\LN$ move involving the larger of the two motions, in order to remove any possible redundancy of motion. Hence no more than one $\LN$ or $\LC$ move is obtained for all transformations. We found that three pairs of elbow and loop moves were sufficient to describe about 93% of all transformations (see the x-axes of Figure \[fig:consensus\_moves\]). In summary, the sequence $L_N$, $L_C$, $R$, $E$, $R$, $E$, $R$, $E$, $L_N$, $L_C$ (read from left to right) characterized almost all transformations, and so was adopted as a general scheme. Any exceptions simply had more small elbow and loop moves that were of minor consequence; for these transformations we simply accumulated the extra elbow and loop moves into the most appropriate $R$ or $E$ move.
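The slotting of a move sequence into this general scheme can be sketched as follows. This is a simplified sketch under our own naming conventions (`to_scheme`, move kinds `"LN"`, `"LC"`, `"R"`, `"E"`); it commutes the leg moves and applies the first/middle/last placement recipe for loops and elbows, but omits edge cases of the actual procedure:

```python
def to_scheme(moves):
    """Map a transformation's move list into the 10-slot general scheme
    L_N, L_C, R, E, R, E, R, E, L_N, L_C (returned as slot distances,
    left to right).  `moves` is a list of (kind, distance) pairs in the
    order performed.  Leg moves seen before any loop/elbow move fill
    the leading slots; leg moves seen after fill the trailing slots."""
    slots = [0.0] * 10
    lead, trail = {"LN": 0, "LC": 1}, {"LN": 8, "LC": 9}
    seen_re = False          # has a loop or elbow move occurred yet?
    rs, es = [], []          # loop (R) and elbow (E) distances, in order
    for kind, d in moves:
        if kind in ("LN", "LC"):
            slots[(trail if seen_re else lead)[kind]] += d
        elif kind == "R":
            seen_re = True
            rs.append(d)
        else:  # "E"
            seen_re = True
            es.append(d)
    # Place R distances into slots 2, 4, 6 and E distances into 3, 5, 7:
    for ds, first, mid, last in ((rs, 2, 4, 6), (es, 3, 5, 7)):
        if len(ds) == 1:       # a single move is split between first and last
            slots[first] += ds[0] / 2
            slots[last] += ds[0] / 2
        elif len(ds) == 2:     # two moves go first and last
            slots[first] += ds[0]
            slots[last] += ds[1]
        elif len(ds) >= 3:     # middle moves accumulate into the middle slot
            slots[first] += ds[0]
            slots[mid] += sum(ds[1:-1])
            slots[last] += ds[-1]
    return slots
```

For instance, a move list of the form $L_C$, $L_N$, $E_1$, $R_1$, $E_2$, $R_2$ is mapped to $L_N$, $L_C$, $R_1$, $E_1$, $0$, $0$, $R_2$, $E_2$, $0$, $0$, matching the worked examples in the text.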
The general recipe for rendering loops R in Figure \[fig:consensus\_moves\] is as follows: if one R move is encountered (regardless of where), its distance is split in half and placed in the first and last (third) R slots of the general scheme. If two R moves are encountered, they are placed first and last, and if three R moves are encountered, they are simply partitioned in the order they occurred. For four or more R moves, the middle moves are accumulated into the middle slot in the general scheme. The same recipe is applied to elbow moves E. As a specific example, the first bar in Figure \[figtransformsknot\]b consists of $L_C$, $L_N$, $E_1$, $R_1$, $E_2$, $R_2$, which after permutation of the first two leg moves falls into the general scheme above as $L_N$, $L_C$, $R_1$, $E_1$, $0$, $0$, $R_2$, $E_2$, $0$, $0$. The bottom-most transformation in Figure \[figtransformsknot\]B consists of $R_1$, $R_2$, $R_3$, $E_1$, $E_2$, $E_3$, $L_N$, which becomes $0$, $0$, $R_1$, $E_1$, $R_2$, $E_2$, $R_3$, $E_3$, $L_N$, $0$ in the general scheme. Figure \[fig:consensus\_moves\] shows histograms of the minimal folding mechanisms, obtained from the above-described procedure. Note again there are 3 classes of transformation: one where $\LN$ is the largest move, one where $\LC$ is the largest move, and one where either loop R or elbow E is the largest move. Each uncrossing element of the transformation, C-leg, N-leg, Reidemeister loop, or elbow, contributes to the height of the corresponding bar, which represents the average over transformations in that class. The percentage of transformations that fall into each class is given in the legend to panels A-C of Figure \[fig:consensus\_moves\]. Most of the transformations ($71.5\%$) for the $\alpha$-protein 2ABD fall into the class with a dominant loop or elbow move, which itself tends to cover less uncrossing distance than either C- or N-leg uncrossing (ordinates of Panels A-C, Figure \[fig:consensus\_moves\]).
This is a signature of a diverse range of folding pathways: minimal folding pathways need not involve obligatory leg uncrossing constraints. In this sense, the $\beta$ protein 1PKS has a more constrained folding mechanism than the $\alpha$ protein; there is a significantly larger percentage of transformations for which a leg transformation $\LC$ or $\LN$ dominates, though when a leg move does dominate, the mean distance undertaken is comparable for $\LC$, and for $\LN$ is even larger in the $\alpha$ protein. The knotted protein 3MLG has the most constrained minimal folding pathway. A leg move from either end dominates for 91% of the cases. Even for the transformations where loop or elbow moves dominate, there is still relatively significant $\LN$ motion. The dominant pathways for knotting 3MLG involve leg crossing from either the N or C terminus. When the C terminus is involved in the minimal transformation, the motion can be significant (Figure \[fig:consensus\_moves\]B). Among all transformations of a given protein, a transformation can be found that is closest to the average transformation for one of the three classes in Figure \[fig:consensus\_moves\]. This consensus transformation has a sequence of moves that, when mapped to the scheme in Figure \[fig:consensus\_moves\], has minimal deviations from the averages shown there. Further, we can find the transformation that has minimal deviation to any of the three classes in Figure \[fig:consensus\_moves\]. For the knotted protein 3MLG, the best fit transformation is to the class with $\LN$-dominated moves; for the $\alpha$ protein 2ABD, the best fit transformation is to the class with miscellaneous-dominated moves; and for the $\beta$ protein 1PKS, the best fit transformation is to the class with $\LC$-dominated moves.
For the $\alpha$, $\beta$, and knotted proteins, these are the transformations denoted by short arrows to the left of the transformation in panels A and B of Figures \[figtransformsalpha\], \[figtransformsbeta\], and \[figtransformsknot\] respectively. For the $\alpha$, $\beta$, and knotted proteins, the transformations are illustrated schematically in Figures \[fig:2abd\_move\], \[fig:1pks\_move\], and \[fig:3mlg\_move\] respectively. Inspection of the most representative transformation for the all-$\alpha$ protein 2ABD shown in Figure \[fig:2abd\_move\] indicates that the transformation requires remarkably little motion: it contains a negligible leg motion, followed by a loop uncrossing of modest distance, followed by a short elbow move that is also inconsequential: in shorthand $E[9] R[20] \LN[1]$, where the numbers in brackets indicate the cost of the moves in units where the link length is unity. In constructing a schematic of the representative transformation in Figure \[fig:2abd\_move\], we ignore the smaller leg and elbow moves and illustrate the loop move roughly to scale. Although additional crossing points appear from the perspective of the figure, the remainder of the transformation involves simple straight-line motion. Figure \[fig:1pks\_move\] shows the most representative folding transformation for the $\beta$ protein 1PKS. The sequence of events constructed from the most representative minimal transformation, $E[18] R[24] \LC[48]$, consists of a dominant leg move depicted in steps 4 and 5 of the transformation, followed by shorter loop and elbow Reidemeister moves that are neglected in the schematic. Loops and crossing points appear from the perspective of the figure; however, the remainder of the transformation involves simple straight-line motion. Figure \[fig:3mlg\_move\] shows the most representative folding transformation for the knotted protein 3MLG.
The sequence of events constructed from the minimal transformation, $R[21] R[18] \LN[125]$ in the above notation, consists of a dominant leg move depicted in steps 4 and 5 of the transformation, and two relatively short loop moves that are neglected in the schematic as inconsequential. Loops appear from the perspective of the figure, and the crossing points appear to shift in position; however, the remainder of the transformation involves simple straight-line motion. ### Topological constraints induce folding pathways {#topological-constraints-induce-folding-pathways .unnumbered} From Figures \[figtransformsalpha\]-\[figtransformsknot\], one can see that topological non-crossing constraints can induce pathway-like folding mechanisms, particularly for knotted proteins, and to a lesser extent for $\beta$-sheet proteins as well. The locality of interactions, in conjunction with the simple tertiary arrangement of helices in the $\alpha$-helical protein, profoundly affects the nature of the transformations that fold the protein, such that the distribution of minimal folding pathways is diverse. Conversely, the knotted protein, although largely helical, has a non-trivial tertiary arrangement, which is manifested in the persistence of a leg crossing move in the minimal folding pathway. In this way, a folding “mechanism” is induced by the geometry of the native structure. We can quantify this notion by calculating the similarity between minimal folding pathways. To this end we note that, for example, the transformation that is 6 bars from the bottom in Figure \[figtransformsknot\]b, which contains an $\LN$ move followed by 2 short loops and an elbow, should not be fundamentally different from the transformation 10 from the bottom in that figure, which contains a loop and 2 short elbows followed by a larger $\LN$ move.
In general we treat the commonality of the moves as relevant to the overlap, rather than the specific number of residues involved or the order of the moves that arises from the depth-first tree search algorithm. Thus for each transformation pair we define two sequence overlap vectors in the following way. Overlaying the residues involved in moves for each transformation along the primary sequence on top of each other, as in Figure \[fig:sequence\_move\_match\], we count as unity those moves of the same type that overlap in sequence for both transformations; otherwise a given move is assigned a value of zero. So for example in Figure \[fig:sequence\_move\_match\] the result is two vectors of binary numbers, one with 4 elements for transformation $\alpha$ and one with 5 elements for transformation $\beta$, based on the overlap of moves of the same type. That is, the first vector is $\vec{\Delta}^\alpha = (1,1,0,1)$ and the second vector is $\vec{\Delta}^\beta = (1,0,1,0,1)$. To find the pathway overlap, we also record the noncrossing distances of the various transformations, which here would be two vectors of the form $\vec{\Dist}^\alpha = (\Dist_{\LN}^\alpha,\Dist_{R_1}^\alpha,\Dist_{R_2}^\alpha,\Dist_{\LC}^\alpha )^{\intercal}$ and $\vec{\Dist}^\beta = (\Dist_{\LN}^\beta,\Dist_{R_1}^\beta,\Dist_{R_2}^\beta,\Dist_{E_1}^\beta,\Dist_{\LC}^\beta )^{\intercal}$. Square matrices $\mathbb{\Delta}$ are constructed for $\alpha$ and $\beta$, where each row is identical and equal to the vector $\vec{\Delta}$. This matrix then operates on $\vec{\Dist}$ to make a new vector that has distances for the elements that are nonzero in $\vec{\Delta}$, and is the same length for both $\alpha$ and $\beta$.
In the above example shown in Figure \[fig:sequence\_move\_match\], $\mathbb{\Delta}^\alpha \vec{\Dist}^\alpha = (\Dist_{\LN}^\alpha,\Dist_{R_1}^\alpha,\Dist_{\LC}^\alpha )^{\intercal}$ and $\mathbb{\Delta}^\beta \vec{\Dist}^\beta = (\Dist_{\LN}^\beta,\Dist_{R_2}^\beta,\Dist_{\LC}^\beta )^{\intercal}$. The inner product of these vectors is then taken and divided by the product of the norms of $\vec{\Dist}^\alpha$ and $\vec{\Dist}^\beta$ to obtain the overlap $Q^{\alpha\beta}$. In the above example, $Q^{\alpha\beta} = (\Dist_{\LN}^\alpha \Dist_{\LN}^\beta + \Dist_{R_1}^\alpha \Dist_{R_2}^\beta + \Dist_{\LC}^\alpha \Dist_{\LC}^\beta )/\sqrt{ \sum_i \left(\Dist^\alpha\right)^2_i \sum_j (\Dist^\beta )^2_j} $. In general, the formula for the overlap is given by $$Q^{\alpha \beta} = \frac{ ( \mathbb{\Delta}^\alpha \vec{\Dist}^\alpha ) \cdot ( \mathbb{\Delta}^\beta \vec{\Dist}^\beta )}{ \sqrt{ (\vec{\Dist}^\alpha \cdot \vec{\Dist}^\alpha ) (\vec{\Dist}^\beta \cdot \vec{\Dist}^\beta ) } } \: . \label{eq:Qab}$$ When $\alpha=\beta$, $Q^{\alpha \beta} = 1$. In the above example, $Q^{\alpha \beta} < 1$ even if all loops were aligned, because there is no elbow move in transformation $\alpha$. If two transformations have an identical set of moves, $Q^{\alpha \beta} =1$ if all the moves have at least partial overlap with a move of the same type in primary sequence. If a loop move in transformation $\beta$ overlaps two loop moves in transformation $\alpha$, it is assigned to the loop with the larger overlap in primary sequence. For the first two transformations in Figure \[figtransformsknot\]A, $\Qab = 0.988$; for the first two transformations in Figure \[figtransformsknot\]B, $\Qab=0.999$. On the other hand, for the first and last transformations in Figure \[figtransformsknot\]B, $\Qab=0.033$.
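The overlap formula reduces to a sum over matched move pairs, normalized by the product of the norms of the full distance vectors. The Python sketch below computes it directly from the two distance vectors and a list of matched index pairs (the nonzero entries of $\vec{\Delta}$); the distance values in the usage example are invented for illustration.

```python
import math


def pathway_overlap(dist_a, dist_b, matches):
    """Pathway overlap Q^{ab}: sum of products of matched noncrossing
    distances, divided by the product of the norms of the full
    distance vectors of the two transformations.

    matches -- list of (i, j) pairs: move i of transformation a
    overlaps a move j of the same type in transformation b.
    """
    numerator = sum(dist_a[i] * dist_b[j] for i, j in matches)
    norm = math.sqrt(sum(d * d for d in dist_a) * sum(d * d for d in dist_b))
    return numerator / norm


# Hypothetical distances for the worked example: alpha has moves
# (LN, R1, R2, LC), beta has (LN, R1, R2, E1, LC); the matched pairs
# are LN-LN, R1-R2 and LC-LC, as in the text.
q = pathway_overlap([5.0, 2.0, 1.0, 3.0],
                    [4.0, 1.5, 2.5, 1.0, 3.5],
                    [(0, 0), (1, 2), (3, 4)])
```

When the two transformations are identical and every move is matched to itself, the numerator equals the squared norm and the function returns exactly 1, as the text requires.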
Figure \[fig:Qdist\] shows the distributions of overlaps $\Qab$ between all pairs of transformations indicated in Figures \[figtransformsalpha\]-\[figtransformsknot\], for the three proteins shown in Figure \[fig:3protsrender\]. The distributions show a transition from multiple diverse minimal folding pathways for the $\alpha$ protein, to the emergence of a dominant minimal folding pathway for the knotted protein. The mean overlap $Q$ between transformations can be obtained by averaging $\Qab$ in Equation (\[eq:Qab\]) over all pairs of transformations: $Q = \sum_{\alpha < \beta} \Qab / \left(N \left(N-1 \right)/2 \right)$. Mean overlaps for each protein are given in the caption to Figure \[fig:Qdist\]. This illustrates that topological constraints induce mechanistic pathways in protein folding. We elaborate on this in the Discussion section.

Discussion {#discussion .unnumbered}
==========

The Euclidean distance between points can be generalized mathematically to find the distance between polymer curves; this can be used to find the minimal folding transformation of a protein. Here, we have developed a method for calculating approximately minimal transformations between unfolded and folded states that account for polymer non-crossing constraints. The extra motion due to non-crossing constraints was calculated retroactively for all crossing events of a ghost-chain transformation involving straight-line motion of all beads on a coarse-grained model chain containing every other C$\alpha$ atom, from an ensemble of unfolded conformations, to the folded structure as defined by the coordinates in the Protein Data Bank. The distances undertaken by the uncrossing events correspond to straight-line motions of all the beads from the conformation before the crossing event, over and around the constraining polymer, and back to the essentially identical polymer conformation immediately after the crossing event.
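The mean overlap $Q$ is a plain average over the $N(N-1)/2$ unordered pairs. A minimal Python sketch, assuming some pairwise overlap function is supplied (for instance one implementing Equation (\[eq:Qab\])):

```python
from itertools import combinations


def mean_overlap(transformations, overlap):
    """Average a symmetric pairwise overlap Q^{ab} over all
    N(N-1)/2 unordered pairs of transformations.

    transformations -- list of N transformation descriptions
    overlap         -- symmetric function returning Q^{ab} for a pair
    """
    pairs = list(combinations(transformations, 2))
    return sum(overlap(a, b) for a, b in pairs) / len(pairs)
```

The choice of representation for a transformation is left to the caller; only the pairwise overlap function needs to understand it.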
Given a set of chain crossing events, the various ways of undoing the crossings are explored using a depth-first tree search algorithm, and the transformation of least distance is recorded as the minimal transformation. We found that knotted proteins quite sensibly must undergo more noncrossing motion to fold than unknotted proteins. We also find a similar conclusion for transformations between all-$\beta$ and all-$\alpha$ proteins; all-$\alpha$ proteins generally undergo very little uncrossing motion during folding. In fact the uncrossing distance, $\Dnx$, averaged over the unfolded ensemble, can be used as a discrimination measure between various structural and kinetic classes of proteins. Comparing several metrics arising from this work with several common metrics in the literature, such as RMSD, absolute contact order (ACO), and long range order (LRO), we found that the most reliable discriminator between structural classes, as well as between two- and three-state proteins, was $\Dnx$ per residue. Knotted proteins, as compared to unknotted proteins, are the most distinguishable class of those we investigated, in that all metrics we investigated except for LRO significantly differentiated the knotted from unknotted proteins. The differentiation between structural or kinetic classes of proteins as studied here is a separate issue from the question of which order parameters may best correlate with folding rates [@PlaxcoKW00:biochem; @GromihaMM01; @IvankovDN03; @OztopB04; @IstominAY07]; this latter question is an interesting topic of future research. Differentiating native-structure based order parameters that provide good correlates of folding kinetics is a complicated issue, in that different structural classes may correlate better or worse with a given order parameter [@IstominAY07]. Non-crossing distance per residue, $\Dnx/N$, increases more rapidly with chain length than the mean straight-line distance between residue pairs (MRSD).
Considering proteins as separate domains indicates a crossover at long chain length, about $N=3600$. Considering proteins built up by adding successive domains, specifically for two representative multi-domain proteins in our dataset (2HA8 and 2A5E), indicates a crossover to entanglement-dominated folding mechanisms at shorter lengths, about $N=400$. This crossover point may indicate a regime where energetics begins to play a role in folding domains independently, so as to avoid progressively more significant polymer disentanglement during folding. Even for knotted proteins, the motion involved in avoiding non-crossing constraints is only about 13% of the total ghost-chain motion that would have been undertaken had the noncrossing constraints been neglected. This was not an obvious result, to these authors at least. In contrast to melts of long polymers, chain non-crossing and the resultant entanglement do not appear to be a significant factor in protein folding, at least for the structures and ensembles we have studied here. It is tempting to conclude from this that chain non-crossing constraints play a minor role in determining folding mechanisms. It is nevertheless an empirical fact that knotted proteins fold significantly slower than unknotted proteins [@MallamAL08; @KingNP10]. As well, raw percentages of total motion do not take into account the difficulty of certain types of special polymer movement, in particular when the entropy of folding routes is tightly constrained [@PlotkinSS00:pnas; @PlotkinSS02:Tjcp; @PlotkinSS02:quartrev2; @ChavezLL04; @NorcrossTS06; @FergusonA09]. The small percentage of non-crossing motion may offer some explanation, however, as to why simple order parameters, such as absolute contact order, that do not explicitly account for noncrossing in characterizing folding mechanisms have historically been so successful in predicting kinetics.
The non-crossing distance was calculated here for a chain of zero thickness, so that non-crossing is decoupled from steric constraints. Finite-volume steric effects would likely enhance the importance of non-crossing constraints, since the volume of phase space where chains are non-overlapping is reduced, and thus chain motions must be further altered to respect these additional constraints [@BanavarJR05]. Steric constraints may significantly alter the shape of reactive trajectories, and slow kinetics by enforcing entropic bottlenecks. Such constraints may become particularly important for collapsed or semi-collapsed proteins, and for knotted proteins, where they restrict stereochemically-allowed folding pathways. These effects may in principle be treated by extending the present formalism to include non-zero chain thickness, and by extending the minimal folding pathway to a partition function of pathways, with each pathway having weight proportional to the exponential of the distance [@PlotkinSS07]. Such a treatment is an interesting and important topic of future work. One potential issue in the construction of the algorithm used here is that the approximated minimal transformation is generally not equivalent to a kinetically realizable transformation. In the depth-first tree search algorithm illustrated in Figure \[fig:tree\_of\_poss\], the set of crossing points defines a set of uncrossing moves that may be permuted, or combined for example through a compound leg movement as in Figure \[fig:leg\_movement\_extra\_cross\]. However, the kinetic sequence of crossing events, in particular those significantly separated in “time” along the minimal transformation, may not be permutable or combinable physically, at least not without modifying the distance travelled.[^3] Hence the transformations are treated here as approximations to the true minimal transformations that respect non-crossing.
The algorithm as described above may underrepresent the amount of motion involved in noncrossing by allowing kinetically separated moves to be commutable. On the other hand, the motion assumed in the algorithm to be undertaken by a crossing event contains abrupt changes in the direction of the velocity (corners) at the time of the uncrossing event, and so is larger than the true minimal distance. These errors cancel at least in part. It is an interesting topic of future research to develop an improved algorithm that computes minimal transformations, perhaps using these approximate transformations as a starting point for further optimization or modification. The mathematical construction of minimal folding transformations can elucidate folding pathways. To this end we have dissected the morphology of protein structure formation for several different native structures. We found that the folding transformations of knotted proteins, and to a lesser extent $\beta$ proteins, are dominated by persistent leg uncrossing moves, while $\alpha$ proteins have diverse folding pathways dominated simply by loop uncrossing. A pathway overlap function can then be defined, the structure of which is fundamentally different for $\alpha$ proteins than for knotted proteins. While the overlap function supports the notion of a diverse collection of folding pathways for the $\alpha$ protein, the overlap function for the knotted protein indicates that topological polymer constraints can induce “mechanism” into how a protein folds, i.e. these constraints induce a dominant sequence of events in the folding pathway. This effect is observed to some extent in the $\beta$ protein we investigated, but is most pronounced for knotted proteins. Other approaches have been made previously to quantify topological frustration, and to construct folding pathways that minimize such frustration. Norcross and Yeates [@NorcrossTS06] have extended the earlier analysis of Connolly [*et
al.*]{} [@ConnollyM80], to show that edges between consecutive $C_\alpha$ atoms in the coarse-grained primary sequence can be surrounded by a ring of other $C_\alpha$ atoms consisting of the vertices of tetrahedra from Delaunay tessellation. They then find the folding pathways that minimize the number of times a ring forms before its thread is formed within a single-sequence approximation: these indicate topologically-frustrated pathways. As an interesting example in [@NorcrossTS06], strand IV of superoxide dismutase (SOD1) is highly buried by parts of the Zn-binding loop, electrostatic loop, and neighboring strands V and VII. In vitro folding studies [@KayatekinC08; @NordlundA2009] show however that this problem is resolved by Zn-binding after folding of the $\beta$-barrel, which is coupled with structural formation of the Zn-binding and electrostatic loops (loops IV and VII). The apo state is an energetically stressed, metastable intermediate [@DasA12pnas1]. In general, folding coupled to ligand binding could remove topological frustration by inducing unfrustrated pathways in the folding mechanism. Similar schematic “average” folding mechanisms as in Figures \[fig:2abd\_move\]-\[fig:3mlg\_move\], based on minimal folding pathways, were proposed for the complex Stevedore-knotted protein $\alpha$-haloacid dehalogenase by Bölinger [*et al.*]{} [@BolingerD10], based on folding simulation statistics of Gō models. Coarse-grained simulation studies of the reversible folder YibK [@MallamAL06] showed that non-native interactions between the C-terminal end and residues towards the middle of the sequence were a prerequisite for reliable folding to the trefoil-knotted native conformation [@WallinS07], the evolutionary origins of which were supported by hydrophobicity and $\beta$-sheet propensity profiles of the SpoU methyltransferase family.
This suggests a new aspect of evolutionary “design” involving selective non-native interactions, beyond the generic role that non-native interactions may play in accelerating folding rate [@PlotkinSS01:prot; @ClementiC04]. Low kinetic success rates $\sim 1-2\%$ in purely structure-based Gō simulations are also seen in coarse-grained simulation studies of YibK [@SulkowskaJI09] and all-atom simulation studies of the small $\alpha/\beta$ knotted protein MJ0366 [@NoelJK10]. In these studies by Onuchic and colleagues, a “slip-knotting” mechanism driven by native contacts is proposed, rather than the “plug” mechanism in [@WallinS07], which is driven by non-native contacts. Both slip-knotting and plug mechanisms were described by Mohazab and Plotkin as optimal un-crossing motions of protein chains in [@MohazabAR08:bj]. Such mechanisms may be facilitated by flexibility in the protein backbone: highly conserved glycines in the hinge regions of both knotted and slipknotted [@KingNP07] proteins modulate the knotted state of the corresponding subchain of the protein [@SulkowskaJI12]. Further bioinformatic studies that investigate evolutionary selection by strengthening critical native or non-native interactions in knotted proteins are an interesting topic of current and future research. There is certainly a precedent of selection for native interactions that penalize on-pathway intermediates in some proteins such as ribosomal protein S6 [@PlotkinSS00:pnas; @PlotkinSS02:Tjcp; @LindbergM02]. Structural analysis of the deeply buried trefoil knot in acetohydroxy acid isomeroreductase indicates swapping of secondary structural elements across replicated domains likely arising from gene duplication [@TaylorWR00], which argues in favor of knot formation driven by native interactions, through a mechanism apparently distinct from slipknotting. 
Lua and Grosberg have found that, due to enhanced return probabilities originating from finite globule size along with secondary structural preferences, protein chains have a smaller degree of interpenetration than collapsed random walks, and thus fewer knots than would be expected for such collapsed random walks [@LuaRC06], in spite of the fact that collapse dramatically enhances the likelihood of knot formation [@VirnauP05], an effect foreshadowed by the dramatic decrease in the characteristic length for knot formation as solvent quality changes from good to ideal (theta) [@KoniarisK91; @KoniarisK91-JCP]. It is still not definitively answered whether this statistical selection against knots in the protein universe is a cause or a consequence of the above size and structural preferences. Similarly, Mansfield [@MansfieldML94; @MansfieldML97] has suggested that the polar nature of the N- and C-termini of the protein chain energetically penalizes processes that would result in the formation of knots. Conversely, some functional roles may benefit from the presence of knotted topologies. Virnau and colleagues [@VirnauP06] have suggested that the presence of complex knots in proteins involved in the regulation of ubiquitination and proteolysis serves a protective role against incidental proteasome degradation; they also observe evidence for the modulation of function by alteration of an enzymatic binding site through either the presence or absence of a knot in homologues of transcarbamylase. Phylogenetic analysis indicates that the presence of a knot is most likely mediated by a single evolutionary event involving insertions of short segments in the primary sequence [@PotestioR10]. The interplay between sequence-determined energetics and chain connectivity in the folding of proteins with complex or knotted topologies is a topic of much current interest, despite the fact that the number of proteins exhibiting knots or slipknots in their native structures is relatively small.
It will be interesting to see how evolution has optimized sequence or facilitated protein-chaperone interactions to enable folding for these “problem children” of the proteome.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank Atanu Das, Will Guest, and Stephen Toope for helpful and/or supportive discussions. A.R.M. acknowledges Mohammad S. Mashayekhi for computational resource support. We also acknowledge funding from the Faculty of Graduate Studies 4YF Program at the University of British Columbia at Vancouver, and the Natural Sciences and Engineering Research Council for providing funding to defray publication page fees.

[100]{} Wolynes PG (1992) Spin glass ideas and the protein folding problems. In: Stein D, editor, Spin Glasses and Biology. Singapore: World Scientific, pp. 225–259. Chan HS, Dill KA (1993) The protein folding problem. Phys Today 46: 24–32. Wolynes PG, Onuchic JN, Thirumalai D (1995) Navigating the folding routes. Science 267: 1619–1620. Garel T, Orland H, Thirumalai D (1996) Analytical theories of protein folding. In: Elber R, editor, New Developments in theoretical studies of proteins, Singapore: World Scientific. pp. 197–268. Dobson CM, Sali A, Karplus M (1998) Protein folding: A perspective from theory and experiment. Angew Chem Int Ed Engl 37: 868–893. Fersht AL (2000) Structure and Mechanism in Protein Science: A guide to Enzyme Catalysis and Protein Folding. New York: W.H. Freeman and Company. Eaton WA, Munoz V, Hagen SJ, Jas GS, Lapidus LJ, et al. (2000) Fast kinetics and mechanisms in protein folding. Annual Review of Biophysics and Biomolecular Structure 29: 327–359. Englander SW (2000) Protein folding intermediates and pathways studied by hydrogen exchange. Annual Review of Biophysics and Biomolecular Structure 29: 213-238.
Pande VS, Grosberg AY, Tanaka T (2000) Heteropolymer freezing and design: Towards physical models of protein folding. Rev Mod Phys 72: 259–314. Shea J, Brooks [III]{} C (2001) From folding theories to folding proteins: A review and assessment of simulation studies of protein folding and unfolding. Ann Rev Phys Chem 52: 499-535. Plotkin SS, Onuchic JN (2002) Understanding protein folding with energy landscape theory i: Basic concepts. Quart Rev Biophys 35: 111–167. Plotkin SS, Onuchic JN (2002) Understanding protein folding with energy landscape theory ii: Quantitative aspects. Quart Rev Biophys 35: 205–286. Snow CD, Nguyen H, Pande VS, Gruebele M (2002) Absolute comparison of simulated and experimental protein-folding dynamics. Nature 420: 102-106. Oliveberg M, Wolynes PG (2005) The experimental survey of protein-folding energy landscapes. Quarterly Reviews of Biophysics 38: 245-288. Khatib F, Cooper S, Tyka MD, Xu K, Makedon I, et al. (2011) Algorithm discovery by protein folding game players. Proc Natl Acad Sci USA 108: 18949-18953. Lindorff-Larsen K, Piana S, Dror RO, Shaw DE (2011) How fast-folding proteins fold. Science 334: 517-520. Chung HS, McHale K, Louis JM, Eaton WA (2012) Single-molecule fluorescence experiments determine protein folding transition path times. Science 335: 981-984. Fersht AR, Matouschek A, Serrano L (1992) The folding of an enzyme i. theory of protein engineering analysis of stability and pathway of protein folding. J Mol Biol 224: 771–782. Abkevich VI, Gutin AM, Shakhnovich EI (1994) Specific nucleus as the transition state for protein folding: Evidence from the lattice model. Biochemistry 33: 10026–10036. Fersht AR (1995) Optimization of rates of protein folding: the nucleation-condensation mechanism and its implications. Proc Natl Acad Sci USA 92: 10869–10873. Daggett V, Li A, Itzhaki LS, Otzen DE, Fersht AR (1996) Structure of the transition state for folding of a protein derived from experiment and simulation. 
J Mol Biol 257: 430–440. Gianni S, Guydosh NR, Khan F, Caldas TD, Mayor U, et al. (2003) Unifying features in protein-folding mechanisms. Proc Natl Acad Sci USA 100: 13286-13291. Klimov DK, Thirumalai D (1998) Lattice models for proteins reveal multiple folding nuclei for nucleation-collapse mechanism. J Mol Biol 282: 471–492. Oliveberg M, Tan Y, Silow M, Fersht A (1998) The changing nature of the protein folding transition state: Implications for the shape of the free energy profile for folding. J Mol Biol 277: 933–943. Martinez J, Pisabarro M, Serrano L (1998) Obligatory steps in protein folding and the conformational diversity of the transition state. Nature Struct Biol 5: 721-729. Clementi C, Garcia AE, Onuchic JN (2003) Interplay among tertiary contacts, secondary structure formation and side-chain packing in the protein folding mechanism: an all-atom representation study. J Mol Biol 326: 933-954. Ejtehadi MR, Avall SP, Plotkin SS (2004) [Three-body interactions improve the prediction of rate and mechanism in protein folding models]{}. Proc Natl Acad Sci 101: 15088–15093. Oztop B, Ejtehadi MR, Plotkin SS (2004) Protein folding rates correlate with heterogeneity of folding mechanism. Phys Rev Lett 93: 208105. Bodenreider C, Kiefhaber T (2005) Interpretation of protein folding $\psi$ values. Journal of Molecular Biology 351: 393 - 401. Sosnick TR, Krantz BA, Dothager RS, Baxa M (2006) Characterizing the protein folding transition state using $\psi$ analysis. Chemical Reviews 106: 1862-1876. Wensley BG, Gärtner M, Choo WX, Batey S, Clarke J (2009) Different members of a simple three-helix bundle protein family have very different folding rate constants and fold by different mechanisms. Journal of Molecular Biology 390: 1074 - 1085. Fersht AR (1999) From snapshot to movie: $\phi$ analysis of protein folding transition states taken one step further. Proc Natl Acad Sci USA 96: 14854-14859.
Chan HS, Dill KA (1990) The effects of internal constraints on the configurations of chain molecules. J Chem Phys 92: 3118–3135. Wolynes PG (1997) Folding funnels and energy landscapes of larger proteins in the capillarity approximation. Proc Natl Acad Sci USA 94: 6170–6175. Nymeyer H, Garcia AE, Onuchic JN (1998) Folding funnels and frustration in off-lattice minimalist protein landscapes. Proc Natl Acad Sci USA 95: 5921–5928. Du R, Pande VS, Grosberg AY, Tanaka T, Shakhnovich ES (1998) On the transition coordinate for protein folding. J Chem Phys 108: 334–350. Nymeyer H, Socci ND, Onuchic JN (2000) Landscape approaches for determining the ensemble of folding transition states: Success and failure hinge on the degree of frustration. Proc Natl Acad Sci USA . Shoemaker BA, Wang J, Wolynes PG (1999) Exploring structures in protein folding funnels with free energy functionals: the transition state ensemble. J Mol Biol 287: 675–694. Zhou Y, Karplus M (1999) Folding of a model three-helix bundle protein: a thermodynamic and kinetic analysis. Journal of Molecular Biology 293: 917 - 951. Plotkin SS, Onuchic JN (2000) Investigation of routes and funnels in protein folding by free energy functional methods. Proc Natl Acad Sci USA 97: 6509–6514. Plotkin SS, Onuchic JN (2002) Structural and energetic heterogeneity in protein folding i: Theory. J Chem Phys 116: 5263–5283. Favrin G, Irb[ä]{}ck A, Samuelsson B, Wallin S (2003) Two-state folding over a weak free-energy barrier. Biophysical Journal 85: 1457 - 1465. Weikl TR, Dill KA (2007) Transition-states in protein folding kinetics: The structural interpretation of phi values. J Mol Biol 365: 1578–1586. Maity H, Maity M, Krishna MMG, Mayne L, Englander SW (2005) Protein folding: The stepwise assembly of foldon units. Proc Natl Acad Sci USA 102: 4741-4746. Weinkam P, Zong C, Wolynes PG (2005) A funneled energy landscape for cytochrome c directly predicts the sequential folding route inferred from hydrogen exchange experiments. 
Proc Natl Acad Sci USA 102: 12401-12406. Dellago C, Bolhuis PG, Csajka FS, Chandler D (1998) Transition path sampling and the calculation of rate constants. The Journal of Chemical Physics 108: 1964-1977. Bolhuis PG, Chandler D, Dellago C, Geissler PL (2002) Transition path sampling: Throwing ropes over rough mountain passes, in the dark. Ann Rev Phys Chem 53: 291–318. Hummer G (2004) From transition paths to transition states and rate coefficients. The Journal of Chemical Physics 120: 516-523. Best RB, Hummer G (2005) Reaction coordinates and rates from transition paths. Proc Natl Acad Sci USA 102: 6732–6737. Maragliano L, Fischer A, Vanden-Eijnden E, Ciccotti G (2006) String method in collective variables: Minimum free energy paths and isocommittor surfaces. The Journal of Chemical Physics 125: 024106. van der Vaart A, Karplus M (2007) Minimum free energy pathways and free energy profiles for conformational transitions based on atomistic molecular dynamics simulations. The Journal of Chemical Physics 126: 164106. Fischer S, Karplus M (1992) Conjugate peak refinement: an algorithm for finding reaction paths and accurate transition states in systems with many degrees of freedom. Chemical Physics Letters 194: 252 - 261. Yang H, Wu H, Li D, Han L, Huo S (2007) Temperature-dependent probabilistic roadmap algorithm for calculating variationally optimized conformational transition pathways. Journal of Chemical Theory and Computation 3: 17-25. Branduardi D, Gervasio FL, Parrinello M (2007) From a to b in free energy space. The Journal of Chemical Physics 126: 054103. Plotkin SS, Wolynes PG (1998) Non-markovian configurational diffusion and reaction coordinates for protein folding. Phys Rev Lett 80: 5015–5018. Hummer G, Kevrekidis IG (2003) Coarse molecular dynamics of a peptide fragment: Free energy, kinetics, and long-time dynamics computations. The Journal of Chemical Physics 118: 10762-10773. Cerjan CJ, Miller WH (1981) On finding transition states. 
The Journal of Chemical Physics 75: 2800-2806. Bell S, Crighton JS (1984) Locating transition states. The Journal of Chemical Physics 80: 2464-2475. Elber R, Karplus M (1987) A method for determining reaction paths in large molecules: Application to myoglobin. Chemical Physics Letters 139: 375 - 380. Wales DJ (1993) Theoretical study of water trimer. Journal of the American Chemical Society 115: 11180-11190. Wales DJ (2001) A microscopic basis for the global appearance of energy landscapes. Science 293: 2067-2070. Komatsuzaki T, Berry RS (2003) Chemical Reaction Dynamics: Many-Body Chaos and Regularity, John Wiley & Sons, Inc. pp. 79–152. Prentiss MC, Wales DJ, Wolynes PG (2010) The energy landscape, folding pathways and the kinetics of a knotted protein. PLoS Comput Biol 6: e1000835. Mezard M, Parisi G, Virasoro MA (1986) Spin Glass Theory and Beyond. Singapore: World Scientific Press. Kim MK, Chirikjian GS, Jernigan RL (2002) Elastic models of conformational transitions in macromolecules. Journal of Molecular Graphics and Modelling 21: 151 - 160. Kim MK, Jernigan RL, Chirikjian GS (2002) Efficient generation of feasible pathways for protein conformational transitions. Biophysical Journal 83: 1620 - 1630. Maragakis P, Karplus M (2005) Large amplitude conformational change in proteins explored with a plastic network model: Adenylate kinase. Journal of Molecular Biology 352: 807 - 822. Schuyler AD, Jernigan RL, Qasba PK, Ramakrishnan B, Chirikjian GS (2009) Iterative cluster-nma: A tool for generating conformational transitions in proteins. Proteins: Structure, Function, and Bioinformatics 74: 760–776. Krebs WG, Gerstein M (2000) The morph server: a standardized system for analyzing and visualizing macromolecular motions in a database framework. Nucleic Acids Research 28: 1665-1675. Wells S, Menor S, Hespenheide B, Thorpe MF (2005) Constrained geometric simulation of diffusive motion in proteins. Physical Biology 2: S127.
Figure Legends {#figure-legends .unnumbered}
==============

\[fig:loop\_twist\_pinch\]

\[fig:3mlg\_render\]

\[fig:2abd\_LR\]

\[fig:1pks\_LR\]

\[fig:3mlg\_LR\]

\[fig:cons\_Other\]

(Only the figure labels survive in this version; the legend text is not recoverable here.)

Tables {#tables .unnumbered}
======

| PDB | x-State | 2ndry str. | LRO | RCO | ACO | MRSD | RMSD | $\langle\Dnx\rangle$ | $\langle\Dnx\rangle/N$ | $\langle\mathcal{D}\rangle(\times 10^3)$ | $\langle\mathcal{D}\rangle/N$ | $N$ |
|------|---------|----------------|-----|-----|------|------|------|------|-----|------|------|-----|
| 1A6N | 3 | $\alpha$-helix | 1.4 | 0.1 | 14.0 | 26.2 | 29.2 | 285 | 1.9 | 4.24 | 28.1 | 151 |
| 1APS | 2 | Mixed | 4.2 | 0.2 | 21.8 | 22.7 | 25.4 | 201 | 2.1 | 2.43 | 24.8 | 98 |
| 1BDD | 2 | $\alpha$-helix | 0.9 | 0.1 | 5.2 | 14.0 | 14.9 | 76.5 | 1.3 | 0.91 | 15.2 | 60 |
| 1BNI | 3 | Mixed | 2.5 | 0.1 | 12.3 | 20.8 | 22.8 | 209 | 1.9 | 2.46 | 22.8 | 108 |
| 1CBI | 3 | $\beta$-sheet | 2.8 | 0.1 | 18.8 | 25.1 | 27.9 | 286 | 2.1 | 3.70 | 27.2 | 136 |
| 1CEI | 3 | $\alpha$-helix | 1.0 | 0.1 | 9.1 | 16.7 | 18.9 | 71.4 | 0.8 | 1.49 | 17.5 | 85 |
| 1CIS | 2 | Mixed | 3.3 | 0.2 | 10.8 | 15.1 | 16.8 | 99.7 | 1.5 | 1.10 | 16.6 | 66 |
| 1CSP | 2 | $\beta$-sheet | 3.0 | 0.2 | 11.0 | 16.8 | 18.4 | 98.0 | 1.5 | 1.23 | 18.3 | 67 |
| 1EAL | 3 | $\beta$-sheet | 2.5 | 0.1 | 15.7 | 24.9 | 27.9 | 278 | 2.2 | 3.44 | 27.1 | 127 |
| 1ENH | 2 | $\alpha$-helix | 0.4 | 0.1 | 7.4 | 13.5 | 14.9 | 28.0 | 0.5 | 0.76 | 14.1 | 54 |
| 1G6P | 2 | $\beta$-sheet | 3.8 | 0.2 | 11.7 | 16.4 | 18.0 | 83.1 | 1.3 | 1.17 | 17.7 | 66 |
| 1GXT | 3 | Mixed | 3.7 | 0.2 | 18.6 | 21.1 | 23.5 | 148 | 1.7 | 2.03 | 22.8 | 89 |
| 1HRC | 2 | $\alpha$-helix | 2.2 | 0.1 | 11.7 | 19.6 | 22.2 | 126 | 1.2 | 2.17 | 20.8 | 104 |
| 1IFC | 3 | $\beta$-sheet | 2.8 | 0.1 | 17.7 | 25.1 | 27.9 | 284 | 2.2 | 3.58 | 27.3 | 131 |
| 1IMQ | 2 | $\alpha$-helix | 1.7 | 0.1 | 10.4 | 16.1 | 17.9 | 80.7 | 0.9 | 1.46 | 17.0 | 86 |
| 1LMB | 2 | $\alpha$-helix | 1.1 | 0.1 | 7.1 | 17.0 | 18.6 | 76.8 | 0.9 | 1.55 | 17.9 | 87 |
| 1MJC | 2 | $\beta$-sheet | 3.0 | 0.2 | 11.0 | 17.5 | 19.2 | 110 | 1.6 | 1.32 | 19.1 | 69 |
| 1NYF | 2 | $\beta$-sheet | 2.8 | 0.2 | 10.6 | 15.3 | 17.0 | 87.4 | 1.5 | 0.97 | 16.8 | 58 |
| 1PBA | 2 | Mixed | 2.6 | 0.1 | 12.0 | 18.9 | 20.8 | 156 | 1.9 | 1.69 | 20.8 | 81 |
| 1PGB | 2 | Mixed | 2.1 | 0.2 | 9.7 | 14.1 | 15.7 | 25.4 | 0.5 | 0.81 | 14.5 | 56 |
| 1PKS | 2 | $\beta$-sheet | 3.8 | 0.2 | 15.2 | 17.9 | 20.2 | 136 | 1.8 | 1.50 | 19.7 | 76 |
| 1PSF | 3 | $\beta$-sheet | 2.8 | 0.2 | 11.7 | 16.8 | 19.4 | 72.1 | 1.0 | 1.23 | 17.8 | 69 |
| 1RA9 | 3 | Mixed | 3.4 | 0.1 | 22.3 | 25.5 | 28.6 | 402 | 2.5 | 4.46 | 28.1 | 159 |
| 1RIS | 2 | Mixed | 3.0 | 0.2 | 18.4 | 21.5 | 23.9 | 163 | 1.7 | 2.25 | 23.2 | 97 |
| 1SHG | 2 | $\beta$-sheet | 3.0 | 0.2 | 10.9 | 15.1 | 16.7 | 92.3 | 1.6 | 0.95 | 16.7 | 57 |
| 1SRL | 2 | $\beta$-sheet | 3.1 | 0.2 | 11.0 | 14.8 | 16.3 | 94.5 | 1.7 | 0.92 | 16.5 | 56 |
| 1TIT | 3 | $\beta$-sheet | 4.1 | 0.2 | 15.8 | 18.7 | 20.8 | 154 | 1.7 | 1.82 | 20.4 | 89 |
| 1UBQ | 2 | Mixed | 2.4 | 0.2 | 11.5 | 17.0 | 18.9 | 92.1 | 1.2 | 1.39 | 18.2 | 76 |
| 1VII | 2 | $\alpha$-helix | 0.4 | 0.1 | 4.0 | 8.1 | 9.2 | 4.1 | 0.1 | 0.30 | 8.2 | 36 |
| 1WIT | 2 | $\beta$-sheet | 5.0 | 0.2 | 18.9 | 20.4 | 22.7 | 168 | 1.8 | 2.07 | 22.2 | 93 |
| 2A5E | 3 | Mixed | 2.6 | 0.1 | 8.3 | 22.2 | 23.9 | 354 | 2.3 | 3.82 | 24.5 | 156 |
| 2ABD | 2 | $\alpha$-helix | 2.3 | 0.1 | 12.0 | 18.2 | 20.0 | 77.5 | 0.9 | 1.65 | 19.1 | 86 |
| 2AIT | 2 | $\beta$-sheet | 4.1 | 0.2 | 14.4 | 16.9 | 18.7 | 107 | 1.5 | 1.36 | 18.3 | 74 |
| 2CI2 | 2 | Mixed | 2.7 | 0.2 | 10.0 | 15.1 | 16.9 | 78.3 | 1.2 | 1.06 | 16.4 | 65 |
| 2CRO | 3 | $\alpha$-helix | 1.2 | 0.1 | 7.3 | 14.0 | 15.5 | 37.3 | 0.6 | 0.95 | 14.6 | 65 |
| 2HQI | 2 | Mixed | 4.3 | 0.2 | 13.6 | 16.3 | 18.4 | 86.9 | 1.2 | 1.26 | 17.5 | 72 |
| 2PDD | 2 | $\alpha$-helix | 1.0 | 0.1 | 4.8 | 10.6 | 11.5 | 19.9 | 0.5 | 0.48 | 11.0 | 43 |
| 2RN2 | 3 | Mixed | 3.6 | 0.1 | 19.3 | 27.7 | 30.9 | 521 | 3.4 | 4.81 | 31.0 | 155 |
| 1O6D | –${}^\dag$ | Knotted | 3.1 | 0.1 | 18.9 | 26.2 | 28.7 | 515 | 3.5 | 4.36 | 29.7 | 147 |
| 2HA8 | –${}^\dag$ | Knotted | 3.3 | 0.1 | 16.2 | 25.7 | 28.5 | 671 | 4.1 | 4.84 | 29.9 | 162 |
| 2K0A | –${}^\dag$ | Knotted | 3.4 | 0.1 | 14.6 | 22.4 | 24.5 | 369 | 3.4 | 2.81 | 25.8 | 109 |
| 2EFV | –${}^\dag$ | Knotted | 2.1 | 0.2 | 12.6 | 20.0 | 21.8 | 147 | 1.8 | 1.79 | 21.8 | 82 |
| 1NS5 | 3 | Knotted | 2.9 | 0.1 | 18.2 | 27.5 | 30.4 | 503 | 3.3 | 4.71 | 30.8 | 153 |
| 1MXI | 3 | Knotted | 2.8 | 0.1 | 16.7 | 26.1 | 29.0 | 643 | 4.0 | 4.85 | 30.1 | 161 |
| 3MLG | 3 | Knotted | 1.2 | 0.1 | 21.4 | 27.7 | 30.8 | 481 | 2.8 | 5.16 | 30.5 | 169 |

: ${}^\dag$Data not available at present.[]{data-label="tab:proteins_used"}

| Class | **INX** | ${\bf P_{INX}}$ | **LRO** | ${\bf P_{LRO}}$ | **RCO** | ${\bf P_{RCO}}$ |
|---|---|---|---|---|---|---|
| 2-state folders | 7.55e-02 | (3.93e-01) | 2.7 | (9.46e-01) | 1.58e-01 | (5.07e-02) |
| 3-state folders | 8.25e-02 | | 2.6 | | 1.31e-01 | |
| $\alpha$-helix proteins | 5.21e-02 | $\alpha\beta$: 4.01e-05 | 1.2 | $\alpha\beta$: 7.40e-08 | 1.10e-01 | $\alpha\beta$: 3.34e-07 |
| $\beta$-sheet proteins | 9.04e-02 | $\beta\mbox{\sc m}$: (5.71e-01) | 3.3 | $\beta\mbox{\sc m}$: (4.27e-01) | 1.72e-01 | $\beta\mbox{\sc m}$: (2.68e-01) |
| Mixed secondary structure | 8.64e-02 | $\alpha\mbox{\sc m}$: 5.44e-04 | 3.1 | $\alpha\mbox{\sc m}$: 6.20e-07 | 1.56e-01 | $\alpha\mbox{\sc m}$: 3.48e-03 |
| Unknotted proteins | 7.79e-02 | 1.48e-03 | 2.6 | (9.20e-01) | 1.49e-01 | 1.49e-02 |
| knotted proteins | 1.30e-01 | | 2.7 | | 1.24e-01 | |
| **Class** | **ACO** | ${\bf P_{ACO}}$ | **MRSD** | ${\bf P_{MRSD}}$ | **RMSD** | ${\bf P_{RMSD}}$ |
| 2-state folders | 11.4 | 4.50e-02 | 16.4 | 5.89e-04 | 18.1 | 4.88e-04 |
| 3-state folders | 14.7 | | 21.9 | | 24.4 | |
| $\alpha$-helix proteins | 8.5 | $\alpha\beta$: 3.76e-04 | 15.8 | $\alpha\beta$: (1.19e-01) | 17.5 | $\alpha\beta$: (1.14e-01) |
| $\beta$-sheet proteins | 13.9 | $\beta\mbox{\sc m}$: (7.08e-01) | 18.7 | $\beta\mbox{\sc m}$: (4.50e-01) | 20.8 | $\beta\mbox{\sc m}$: (4.73e-01) |
| Mixed secondary structure | 14.5 | $\alpha\mbox{\sc m}$: 1.62e-03 | 19.9 | $\alpha\mbox{\sc m}$: 4.11e-02 | 22.1 | $\alpha\mbox{\sc m}$: 4.16e-02 |
| Unknotted proteins | 12.5 | 5.59e-03 | 18.3 | 1.79e-04 | 20.3 | 3.18e-04 |
| knotted proteins | 16.9 | | 25.1 | | 27.7 | |
| **Class** | $\boldsymbol{\Dnx}/{\bf N}$ | ${\bf P_{\boldsymbol{\Dnx}/N}}$ | $\boldsymbol{\Dnx}$ | ${\bf P_{\boldsymbol{\Dnx}}}$ | $\boldsymbol{\mathcal{D}}$ | ${\bf P_{\boldsymbol{D}}}$ |
| 2-state folders | 1.3 | 1.71e-02 | 94.9 | 3.30e-03 | 1309 | 8.06e-04 |
| 3-state folders | 1.9 | | 238 | | 2924 | |
| $\alpha$-helix proteins | 8.74e-01 | $\alpha\beta$: 1.88e-04 | 80.4 | $\alpha\beta$: 4.50e-02 | | |
| $\beta$-sheet proteins | 1.7 | $\beta\mbox{\sc m}$: (6.65e-01) | 146 | $\beta\mbox{\sc m}$: (2.99e-01) | | |
| Mixed secondary structure | 1.8 | $\alpha\mbox{\sc m}$: 1.56e-03 | 195 | $\alpha\mbox{\sc m}$: 2.30e-02 | | |

: Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical, $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. In each group, the $P$ column gives the probability of the null hypothesis for the comparison of the classes above; a discrimination is deemed statistically significant if this probability is less than $5\%$.[]{data-label="tab:order_parameters_for_various_classes"}
A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & ------ 1450 1802 2274 ------ : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & --------------------------------- $\alpha\beta$:(4.14e-01) $\beta\mbox{\sc m}$:(3.10e-01) $\alpha\mbox{\sc m}$:(1.06e-01) --------------------------------- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} \ -------------------- Unknotted proteins knotted proteins -------------------- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. 
A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & ----- 1.5 3.3 ----- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & 5.33e-04 & ----- 144 476 ----- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & 2.05e-03 & ------ 1862 4074 ------ : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. 
[]{data-label="tab:order_parameters_for_various_classes"} & 2.67e-03\ [|c||c|c||c|c|]{} Class&$\boldsymbol{\mathcal{D}/N}$&${\bf P_{\boldsymbol{D}/N}}$&${\bf N}$&${\bf P_{N}}$\ ----------------- 2-state folders 3-state folders ----------------- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & ------ 17.6 23.8 ------ : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & 8.56e-04 & ------ 71.3 116 ------ : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. 
[]{data-label="tab:order_parameters_for_various_classes"} & 4.17e-04\ --------------------------- $\alpha$-helix proteins $\beta$-sheet proteins Mixed secondary structure --------------------------- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & ------ 16.7 20.4 21.6 ------ : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & -------------------------------- $\alpha\beta$:(6.95e-02) $\beta\mbox{\sc m}$:(4.67e-01) $\alpha\mbox{\sc m}$:2.68e-02 -------------------------------- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. 
A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & ------ 77.9 83.4 98.3 ------ : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & --------------------------------- $\alpha\beta$:(6.57e-01) $\beta\mbox{\sc m}$:(2.49e-01) $\alpha\mbox{\sc m}$:(1.59e-01) --------------------------------- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} \ -------------------- Unknotted proteins knotted proteins -------------------- : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. 
A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & ------ 19.7 28.4 ------ : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & 1.04e-04 & ------ 86.9 140 ------ : Order parameters for various classifications of proteins. The data set of 2- and 3-state folders is the same as the data set for $\alpha$-helical $\beta$-sheet and mixed proteins, and is given in table \[tab:proteins\_used\]. This is also the same data set as the unknotted proteins. Knotted proteins are separately classified, and not included as either 2-state or 3-state proteins. A discrimination is deemed statistically significant if the probability of the null hypothesis is less than $5\%$. []{data-label="tab:order_parameters_for_various_classes"} & 3.54e-03\ [^1]: This in principle may result in a link length change for the corresponding link, and thus constraint violation, in our approximation. An exact algorithm involves local link rotation instead. [^2]: We use the crossings in the projected image as a book-keeping device to detect real 3D crossings. A real crossing event is characterized by a sudden change in the over-under nature of a crossing on a projected plane. 
Since for any 3D crossing, the change in the over-under order of the crossing links is present in any arbitrary projection of choice, keeping track of a single projection is enough to detect 3D crossings (a concrete illustration of the independence of crossing detection on the projection plane is given in the Supplementary Material). Of course, a given projection plane may not be the optimal one for a given crossing; however, if the time step is small enough, any projection plane will be sufficient to detect a crossing. [^3]: As a hypothetical example, suppose at time $t_1$ a crossing event occurs between residue $a$, which is 10 residues in from the N-terminus, and residue $b$ somewhere else along the chain. Then at time $t_2$, the next crossing event involves a residue $c$ that is 20 residues in from the N-terminus, and residue $d$ somewhere along the chain. To avoid redundant motion, the minimal transformation is only taken to involve a leg motion between the residues from $c$ to the N-terminus, about point $d$; this is assumed to encompass the motion in the first leg transformation, even though the crossing events occurred at different times.
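The projected-crossing bookkeeping described in the footnotes above can be sketched as follows. This is our illustration under stated assumptions, not the authors' code: for each pair of chain segments, find where their $xy$-projections intersect and record which strand passes over; a real 3D crossing would then be flagged whenever this sign flips between successive time steps.

```python
import numpy as np

def over_under(p1, p2, q1, q2):
    """Sign of the z-gap between two 3D segments at the point where their
    xy-projections cross; None if the projections do not cross."""
    # Solve p1 + s*(p2-p1) = q1 + t*(q2-q1) in the xy-plane (Cramer's rule).
    d1, d2 = p2[:2] - p1[:2], q2[:2] - q1[:2]
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel projections never cross
    r = q1[:2] - p1[:2]
    s = (r[0] * d2[1] - r[1] * d2[0]) / denom
    t = (r[0] * d1[1] - r[1] * d1[0]) / denom
    if not (0.0 <= s <= 1.0 and 0.0 <= t <= 1.0):
        return None  # projections cross outside the segments
    z1 = p1[2] + s * (p2[2] - p1[2])  # height of first segment at the crossing
    z2 = q1[2] + t * (q2[2] - q1[2])  # height of second segment
    return np.sign(z1 - z2)  # +1: first segment passes over the second
```

A driver loop would store this sign per crossing pair and report a crossing event when the stored value changes sign from one snapshot to the next.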
**Practical Explicitly Invertible Approximation to 4 Decimals of Normal Cumulative Distribution Function Modifying Winitzki’s Approximation of erf**

Alessandro Soranzo$^{\ast}$, Emanuela Epure$^{\star}$\ $^{\ast}$ $^{\star}$

**Abstract.** We give a new explicitly invertible approximation of the normal cumulative distribution function: $\Phi(x) \simeq \frac{1}{2} + \frac{1}{2} \sqrt{ 1-{e}^{-x^2\frac{17+{x}^{2}}{26.694+2x^2}}}$, $\forall x \ge 0$, with absolute error $<4.00\cdot 10^{-5}$ and absolute value of the relative error $<4.53\cdot 10^{-5}$. Being designed essentially for practical use, it is much simpler than a previously published formula and, though less precise, still reaches 4 decimals of precision; its complexity is essentially comparable with that of the approximation of the normal cumulative distribution function $\Phi(x)$ immediately derived from Winitzki’s approximation of erf$(x)$, while reducing the absolute error by about $36\%$ and the relative error by about $28\%$ with respect to that, overcoming the threshold of 4 decimals of precision.\
**2010 Mathematics Subject Classification:** 33B20, 33F05, 65D20, 97N50.\
**Keywords:** normal cdf, $\Phi$, error function, erf, Winitzki, normal quantile, probit, erf$^{-1}$, approximation, non-linear fitting.

This paper is devoted to approximating some special functions, in particular the normal cumulative distribution function.\
Though computers now allow one to compute them with arbitrary precision, such approximations are still worthwhile for several reasons, including *to catch the soul* of the considered functions, allowing one to understand their behaviour at a glance. Let us add that, despite technological progress, these functions – of wide practical use – are not always available on pocket calculators.
In practice, the ancient numerical tables are still widely used; but they give approximations only for some values and, if one linearly interpolates to approximate intermediate values, both precision and simplicity of use are reduced. And surely by computer one may obtain graphs of those functions, but for a mathematician the meaning content of formulas is greater. Furthermore, here we produce an *explicitly invertible* (and, in fact, *simply* so) approximation, which allows one to maintain coherence when working simultaneously with the considered function and its inverse.\ For the special functions [@Wiki1] [@Wolfram1] $$\label{normalCDF} \Phi(x):= \int_{-\infty }^{x} \frac{1}{\sqrt {2\pi}} e^{\frac{-t^2}{2}} dt,$$ and the related error function $erf(x)=2\Phi(x\sqrt 2) - 1$ $\big( \forall x \in I\!\!R \big)$, there are several approximations; in particular see the classical [@Abramowitz] [@Hart] [@Balakrishnan] and the recent [@Dyer] [@SoranzoEpure] [@Winitzki] [@ZogheibHlynka]; approximations for $x \ge 0$ are sufficient because of the symmetry formula $\Phi(-x)=1-\Phi (x) $ $\big ( \forall x \in I\!\!R \big)$.\ Restricting now our attention only to those approximations which are *simply explicitly invertible* – in the sense of being explicitly invertible without requiring the solution of cubic or quartic equations – the most precise appears to be [@SoranzoEpure] $$\label{eq:Wini1.2735457} \Phi(x) \cong \frac{1}{2} + \frac{1}{2} \sqrt{1-{e}^{\frac{-1.2735457x^2-0.0743968x^4}{2+0.1480931x^2+0.0002580x^4}}} \quad \left\{ \begin{array}{ll} |\varepsilon (x)| <1.14\cdot 10^{-5}\\ |\varepsilon_r (x)| <1.78\cdot 10^{-5} \end{array} \forall x \ge 0 \right.$$ which is an improvement preserving (despite the addition of the quartic monomial) the simple explicit invertibility (essentially, solving a biquadratic equation after obvious substitutions) of this approximation of $\Phi$ $$\label{eq:Wini735} \Phi(x) \simeq\frac{1}{2} +\frac{1}{2}\sqrt{1-{e}^{-\frac{{x}^{2}\,\left( \frac{4}{\pi }+0.0735\,{x}^{2}\right) }{2\,\left(1+ 0.0735\,{x}^{2}\right) }}} \quad \left\{ \begin{array}{ll} |\varepsilon (x) |<6.21\cdot 10^{-5} \\ |\varepsilon_r (x) |<6.30\cdot 10^{-5} \end{array} \forall x \ge 0 \right.$$ immediately derived via $\Phi(x)=\frac{1}{2}+\frac{1}{2}erf \Big (\frac{x}{\sqrt 2} \Big )$ $\big (\forall x \ge 0\big )$ from this *Winitzki’s approximation of erf* [@Winitzki] $$\label{eq:Wini147} erf(x)\cong \sqrt{1-e^{-x^2\frac{\frac{4}{\pi}+0.147x^2}{1+0.147x^2}}} \quad \left\{ \begin{array}{ll} |\varepsilon (x) |<1.25\cdot 10^{-4} \\ |\varepsilon_r (x) |<1.28\cdot 10^{-4} \end{array} \forall x \ge 0 \right. .$$ **In this note we give this new (simply) explicitly invertible approximation of the normal cumulative distribution function** $$\label{eq:ourPhiSimple} \begin{tabular}{|c|} \hline $\Phi(x) \simeq \frac{1}{2} + \frac{1}{2} \sqrt{ 1-{e}^{-x^2\frac{17+{x}^{2}}{26.694+2x^2}}} \quad \left\{ \begin{array}{ll} |\varepsilon (x) |<4.00\cdot 10^{-5} \\ |\varepsilon_r (x) |<4.53\cdot 10^{-5} \end{array} \forall x \ge 0 \right. $\\ \hline \end{tabular}$$ **which, being designed essentially for practical use,** $\bullet$ **is much simpler than (\[eq:Wini1.2735457\]) and, though less precise, still reaches 4 decimals of precision;** $\bullet$ **has a complexity essentially comparable with that of (\[eq:Wini735\]), reducing the absolute error by about $36\%$ and the relative error by about $28\%$ with respect to that, overcoming the threshold of 4 decimals of precision.**\ Instead, the corresponding approximation of erf is not so worthwhile because, though it reduces the absolute error of (\[eq:Wini147\]) by about $36\%$, it remains at a precision of 3 decimals, and furthermore the absolute value of the relative error $|\varepsilon_r (x)| <1.79 \cdot 10^{-4}$ is considerably greater than in (\[eq:Wini147\]). See below the graphs (made with Mathematica $\textsuperscript{\textregistered}$) of the approximation, of the absolute error, and of the absolute value of the relative error for $0 \le x \le 7$.
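As a quick numerical illustration of (\[eq:ourPhiSimple\]) and of its simple explicit invertibility, one may sketch it in Python; the function names are ours, and the reference CDF is computed from `math.erf`. The inverse follows by setting $L = -\ln\big(1-(2p-1)^2\big)$ and solving the biquadratic $u^2 + (17-2L)u - 26.694\,L = 0$ for $u = x^2$.

```python
import math

def phi_approx(x):
    """Approximation (eq:ourPhiSimple) of the standard normal CDF, x >= 0.
    Claimed absolute error < 4.00e-5."""
    x2 = x * x
    return 0.5 + 0.5 * math.sqrt(1.0 - math.exp(-x2 * (17.0 + x2) / (26.694 + 2.0 * x2)))

def phi_approx_inv(p):
    """Explicit inverse (quantile) for 0.5 <= p < 1: with L = -ln(1-(2p-1)^2),
    x^2 is the positive root of u^2 + (17 - 2L) u - 26.694 L = 0."""
    L = -math.log(1.0 - (2.0 * p - 1.0) ** 2)
    u = (2.0 * L - 17.0 + math.sqrt((17.0 - 2.0 * L) ** 2 + 4.0 * 26.694 * L)) / 2.0
    return math.sqrt(u)

def phi_exact(x):
    """Reference value of Phi(x) via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

By construction the forward and inverse formulas round-trip to machine precision, since inverting only undoes the elementary operations of the closed form.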
For $x \ge 7$ the trivial approximation $\Phi(x) \simeq 1$ has absolute error and absolute value of the relative error far smaller than $4 \cdot 10^{-5}$ and $4.53 \cdot 10^{-5}$ respectively. (Nevertheless, if interested in a formal proof of the majorization $|\varepsilon (x)| < 4 \cdot 10^{-5}$ for $x \ge 7$, one may follow [@SoranzoEpure].) $ \begin{array}{cc} \includegraphics[width=2.2in,height=1.5in]{Wini26x694xGraph.pdf} & \includegraphics[width=2.2in,height=1.5in]{Wini26x694x0x00004-AbsErrZoom2.pdf} \\ \rm{Fig.\: 1.\: The\: new\: approximation\: (\ref{eq:ourPhiSimple})} & \rm{Fig.\: 4.\: Second\: zoom\: of\: Fig.\: 2.}\\ \includegraphics[width=2.2in,height=1.5in]{WiniArxivSecondAbsErr.pdf} & \includegraphics[width=2.2in,height=1.5in]{Wini26x694xRelErr.pdf} \\ \rm{Fig.\: 2.\: Absolute\: error} & \rm{Fig.\: 5.\: Absolute\: value\: of\: relative\: error} \\ \includegraphics[width=2.2in,height=1.5in]{Wini26x694x0x00004-AbsErrZoom1.pdf} & \includegraphics[width=2.2in,height=1.5in]{Wini26x694x0x0000453-RelErrZoom.pdf} \\ \rm{Fig.\: 3.\: First\: zoom\: of\: Fig.\: 2} & \rm{Fig.\: 6.\: Zoom\: of\: Fig.\: 5.} \end{array}$

[20]{}

Abramowitz, M. Stegun, I.A. (Eds.), (1972). *Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing*. New York: Dover, pp. 932 and 299.

Dyer, S.A. Dyer, J.S. (2007). Approximations to error function, Instrumentation $\&$ Measurement Magazine, IEEE 10, no. 6: 45–48.

Hart, J.F. et al. (1968). *Computer Approximations, SIAM series in applied mathematics*, John Wiley & Sons, Inc., New York - London - Sydney, 140, pp. 288–289.

Johnson, N. Kotz, S. Balakrishnan, N. (1994). *Continuous Univariate Distributions*, Vol. 1, 2nd ed. Boston, MA: Houghton Mifflin.

Soranzo, A. Epure, E. (2012). Simply Explicitly Invertible Approximations to 4 Decimals of Error Function and Normal Cumulative Distribution Function, www.intellectualarchive.com (selecting Mathematics) and http://arxiv.org/abs/1201.1320v1.

Wikipedia.
http://en.wikipedia.org/wiki/Error$\_$function (read 2011, September)

Winitzki, S. (2008). A handy approximation for the error function and its inverse, http://docs.google.com/viewer?a=v$\&$pid=sites$\&$srcid=ZGVmYXVsdGRvbWFpbnx3aW5pdHpraXxneDoxYTUzZTEzNWQwZjZlOWY2 http://sites.google.com/site/winitzki http://sites.google.com/site/winitzki/sergei-winitzkis-files (read 2011, December)

Wolfram Research, Inc. http://functions.wolfram.com/GammaBetaErf/Erf/10/01/ (read 2011, September)

Zogheib, B. Hlynka, M. (2009). Approximations of the Standard Normal Distribution, University of Windsor, Dept. of Mathematics and Statistics. (Legally) available at http://web2.uwindsor.ca/math/hlynka/zogheibhlynka.pdf (read 2012, March).\

Remark. This text has been explicitly placed by the Authors in the Creative Commons Public Domain.
--- abstract: 'Several neutral species (, , , ) have been detected in a weak absorption line system ($W_r(2796) \sim 0.15~$[Å]{}) at $z \sim 0.45$ along the sightline toward HE0001-2340. These observations require extreme physical conditions, as noted in @D''Odorico07. We place further constraints on the properties of this system by running a wide grid of photoionization models, determining that the absorbing cloud that produces the neutral absorption is extremely dense ($\sim 100-1000$ ), cold ($<100$K), and has significant molecular content ($\sim 72-94 \%$). Structures of this size and temperature have been detected in Milky Way CO surveys, and have been predicted in hydrodynamic simulations of turbulent gas. In order to explain the observed line profiles in all neutral and singly ionized chemical transitions, the lines must suffer from unresolved saturation and/or the absorber must partially cover the broad emission line region of the background quasar. In addition to this highly unusual cloud, three other ordinary weak clouds (with densities of $\sim 0.005$  and temperatures of $\sim10000$ K) lie within 500  along the same sightline. We suggest that the “bare molecular cloud”, which appears to reside outside of a galaxy disk, may have had in situ star formation and may evolve into an ordinary weak absorbing cloud.' author: - 'Therese M. Jones, Toru Misawa, Jane C. Charlton, Andrew C. Mshar, Gary J. Ferland' title: 'A Bare Molecular Cloud at $z \sim 0.45$' --- Introduction {#sec:1} ============ Observations of in intervening quasar absorption line systems are vital to the study of the interstellar medium of low-redshift galaxies and their environments, as lies in the optical from $z \sim 0.3-2.4$, and serves as a probe of low-ionization gas. Through photoionization models, it is possible to derive many properties of the absorbing gas.
This includes the line-of-sight extent, density, temperature, and molecular content, which are constrained by absorption in the different ionization states of various chemical elements. @CWC99b and @Rigby found that roughly one-third of weak absorbers at redshifts $0.4 < z < 1.4$, with $W_r(2796) < 0.3$[Å]{}, are in multiple-cloud systems. They propose that many of the weak absorbers contain multi-phase structures with a range of levels of ionization. Such absorbers are generally thought to arise in sub-Lyman limit system environments, with $N({\HI}) < 10^{17.2}~{{\hbox{cm$^{-2}$}}}$, and metallicities log$[Z/Z_{\odot}]> -1$ [@Churchill00]. Strong absorbers ($W_r(2796) > 0.3$ [Å]{}), in contrast, are almost always associated with Lyman limit systems ($N({\HI}) > 10^{17.2}~{{\hbox{cm$^{-2}$}}}$), and many with damped absorbers (DLAs; with $N({\HI}) > 2\times10^{20}~{{\hbox{cm$^{-2}$}}}$). @Rao06 found that $36\%$ of MgII absorbers with $W_r(2796) > 0.5$ [Å]{} and $W_r(2600) > 0.5$ [Å]{} were DLAs in an HST survey for $z<1.65$ systems. In that sample, the average $N({\HI})$ was $9.7 \pm 2.2 \times 10^{18}~{{\hbox{cm$^{-2}$}}}$ for $0.3 < W_r(2796) < 0.6$ [Å]{}, and $3.5 \pm 0.7 \times 10^{20}~{{\hbox{cm$^{-2}$}}}$ for $W_r(2796) > 0.6$ [Å]{}. Most DLAs at low $z$ are thought to be associated with galaxies with a variety of morphological types, from $0.1 L^*$ galaxies to low surface brightness galaxies [@LeBrun97; @RT98; @Bowen01; @RT03]. @Ledoux03 find molecular hydrogen in 13-20% of DLA systems at high redshift, but note that there is no correlation between the detection of molecular hydrogen and column density. Despite this lack of correlation, [@Petitjean06] find molecular hydrogen in 9 out of 18 high metallicity systems (\[X/H\]$>-1.3$) at high redshift. In this paper, we study a multiple-cloud weak system toward HE0001-2340 ($z_{em}=2.28$, @Reimers98) at $z= 0.4524$. The four weak clouds are spread over a velocity range of $\sim 600$ .
and , which are generally only detected in the very strongest absorbers, are found in one of the four clouds comprising this system. , , and are also detected in this cloud; these neutral states have not been reported to exist in any other extragalactic environment, even in most DLAs, and have only been found in several dense galactic molecular clouds [@Welty03]. @D'Odorico07 notes that the ratios of ${\MgI}/{\MgII}$ and ${\CaI}/{\CaII}$ in this system are orders of magnitude higher than in other absorbers, implying a very low ionization state. She also observes that there is an extreme underabundance of Mg with respect to Fe, which cannot be explained by nucleosynthesis or dust depletion, and cannot be reproduced by photoionization models. The metallicity of the system cannot be directly determined from due to the absorption from a Lyman limit system at $z=2.18$ [@Reimers98]. Due to a lack of metallicity constraints, @D'Odorico07 assumes DLA-regime column densities, noting that the observed $N({\MgI})$ of the system is comparable to the sample of 11 DLAs of @DZ06. She further constrains her parameters by comparing to the local cold interstellar clouds of @Welty03, finding that systems with similar amounts of ${\FeI}$ have metallicities $-3.78 < [\rm{Fe/H}] < -2.78$. With such a low assumed metallicity and high ${\HI}$ column density, @D'Odorico07 is unable to reproduce the observed ratios of $\frac{N({\MgI})}{N({\MgII})}, \frac{N({\CaI})}{N({\CaII})}$ and $\frac{N({\FeI})}{N({\FeII})}$ in photoionization models, and is thus unable to draw concrete conclusions about the properties of this cloud. We propose that these noted differences suggest that this unusual cloud is part of a class of systems unrelated to previously observed DLAs, which @D'Odorico07 notes have $N({\MgII})$ two orders of magnitude higher than this system, as well as no previous , , and detections. We therefore explore a range of metallicities and $N({HI})$ in our modeling process.
As it is difficult to reproduce the line ratios in this unusual cloud, we also expand the consideration of parameter space to explore the possibility of unresolved saturation. The effect of partial covering of the background quasar is also explored, since photoionization models of the system suggest the cloud is compact enough to partially cover the broad emission line region of the quasar, with high densities ($n_H=1-34$ ), cold temperatures ($<200$K), and a molecular hydrogen fraction larger than $20\%$. Partial covering has only been observed once before in an intervening quasar absorption line system: in the lensed quasar APM 08279+5255 at $z=3.911$ [@Kobayashi02], one-third of the components of a strong absorber were not detected toward the second image of the QSO, while fits to the components suggested $C_f=0.5$ in the other image, constraining the absorber size to be as small as $200$ pc. We begin, in §\[sec:2\], with a description of the VLT/UVES spectrum of HE $0001-2340$, and display and quantify the observed properties of the $z=0.4524$ multiple cloud weak system. §\[sec:3\] details the Voigt profile fit performed on this four-cloud system, including covering factor analysis of the first cloud. It also describes the photoionization modeling method used to constrain the ionization parameters/densities of the four clouds. §\[sec:4\] gives modeling results for each cloud, §\[sec:5\] discusses the implications of the cloud models for the origin of the gas, and §\[sec:6\] summarizes the findings and considers the properties of the absorption system in the context of broader questions relating to galaxy environments.

The $z=0.4524$ Absorber Toward HE 0001-2340 {#sec:2}
===========================================

A spectrum of HE0001-2340, taken in 2001, was procured from the ESO archive, having been obtained as part of the ESO-VLT Large Programme “The Cosmic Evolution of the IGM” [@Richter05].
This $z_{em}=2.28$ quasar, HE0001-2340 ($V=16.7$), was observed with the Ultraviolet and Visual Echelle Spectrograph (UVES) on the Very Large Telescope (VLT), as detailed in @Richter05. The data were reduced as described in @Kim04. The spectrum has a resolution R $\sim 45,000$ ($\sim 6.6$ ) and covers a range of 3050-10070 [Å]{}. Breaks in wavelength coverage occurred at 5750-5840 [Å]{} and 8500-8660 [Å]{}. The spectrum is of extremely high quality, with $S/N > 100$ per pixel over most of the wavelength coverage. Continuum fitting (with a cubic spline function) was performed using the IRAF SFIT task as described in @Lynch06. The System {#sec:2.1} ---------- In the $z=0.4524$ system, we detect four distinct  $\lambda$2796 subsystems at $>5 \sigma$ levels in the VLT spectrum, and all are confirmed by  $\lambda$2803. Fig. \[fig:clouds\_labeled\] shows the location of these features, with a velocity scale centered so that half of the system optical depth lies blueward of $0$ , at $z=0.452399$. Detections may be noted at $-69$, $0$, $47$, and $507$ in , at $-69$ and $0$ in , and at $-69$ in . The fourth cloud is very weak, with  $\lambda$2803 detected at just over $3 \sigma$; this detection is made possible by the very high S/N of the spectrum. The first cloud also has detected , , , , and , as seen in Figure  \[fig:system\_plot\_unresolved\]. All transitions blueward of $\sim4000$ [Å]{} ($\sim2750$ [Å]{} in the rest frame of the $z=0.4524$ system) are potentially contaminated by forest lines. The only two detected features displayed in Fig. \[fig:system\_plot\_unresolved\] that are not from the $z=0.4524$ system, and are outside of the forest region, occur in the  $\lambda$4228 panel, at $\sim30$ , and in the  $\lambda$3022 panel, at $\sim270$ . The former feature is  $\lambda$2374 from a system at $z=1.587$, while the latter is  $\lambda$1548 from a system at $z=1.838$. 
Lyman series lines for the $z=0.4524$ system are unavailable, due to a full Lyman Limit break from a system at $z=2.187$ [@Richter05], allowing no direct constraints on the metallicity of the system. Rest frame equivalent widths for all transitions detected at $>3 \sigma$ in the spectrum, as determined by Gaussian fits to the unresolved line profiles, are given in Table \[tab:ew\] for the strongest transitions. The column densities and Doppler parameters, using Voigt profile fitting, assuming full coverage, are listed in Table \[tab:Nb\]. Cloud 2 was resolved into two blended components by our fitting procedure. Table \[tab:ew2\] shows equivalent widths for additional transitions detected in Cloud 1. Upper limits are given for blended detections. Since the detection of neutral transitions in a weak absorber is quite surprising, we must consider whether the apparent , , , and detections for Cloud 1 are valid, and not chance superpositions. In Table \[tab:ew2\], the oscillator strengths of the nine detected transitions are listed, along with their equivalent widths. Only three of these detections, $\lambda$2484, $\lambda$2502, and $\lambda$2524, are in the forest. The oscillator strengths of the various transitions are roughly consistent with the relative equivalent widths; however, a detailed analysis suggests that partial covering or unresolved saturation affects the line profiles, as discussed further in § \[sec:3\]. Both and are located outside of the forest, and are detected at $>5{\sigma}$ significance. The  $\lambda$2515 detection is within the forest, and cannot be confirmed by other transitions. However, we expect that it is valid because of the precise alignment with the $v$=-69  cloud. We thus conclude that there is more than sufficient evidence that absorption in neutral species is detected, making Cloud 1 in this system unique among other weak systems [@Anand08]. 
Deriving Physical Conditions {#sec:3} ============================ Oscillator Strengths and Saturation ----------------------------------- The ratio of the  $\lambda$2803 to  $\lambda$2796 equivalent widths in Cloud 1 of the VLT spectrum (Fig. \[fig:clouds\_labeled\]) is not $0.5-0.7$ as expected [@Anand08], and as seen in Clouds 2-4, but is $0.84$; the weaker member is considerably stronger than would be expected. This ratio implies either unresolved saturation of the line profile, or partial covering of the quasar broad emission line region (BELR). If the profile were unresolved and saturated, profile fits to would severely underestimate the column density and overestimate the Doppler parameters of the lines. We take this possibility into account in defining the range of models to be considered. Voigt Profile Fitting --------------------- We initially perform Voigt profile fitting on the using MINFIT [@Churchill03]. We choose to optimize on (to require Cloudy models to reproduce exactly the observed value) because it is the only ion detected in all four clouds, and is the strongest ion detected for this $z=0.4524$ absorber outside of the forest. The Doppler parameters ($b$) and column densities ($N$) given from the fit for Clouds 2-4 are in Table \[tab:Nb\], along with the redshifts of each cloud. The profile for Cloud 2 is asymmetric, and we find that a two component fit provides a significantly better fit than does one component. These components are denoted as 2a and 2b in Table \[tab:Nb\]. We note that the resolution of the spectrum, $\sim 6.6$ , is greater than the value of $b$ for four out of the five Voigt profile components, implying that the clouds may not be fully resolved. 
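The diagnostic power of the doublet ratio can be illustrated with a toy calculation (a sketch, not the MINFIT procedure itself): assume a Gaussian optical-depth profile and the roughly 2:1 oscillator-strength ratio of the doublet, and compute the equivalent-width ratio as a function of line-center optical depth.

```python
import math

def doublet_ew_ratio(tau0, n=4001, vmax=10.0):
    """Equivalent-width ratio W(2803)/W(2796) for a Gaussian optical-depth
    profile. tau0 is the line-center optical depth of the weaker (2803)
    member; the stronger (2796) member is assigned 2*tau0, approximating
    the ~2:1 oscillator-strength ratio of the doublet. Velocities are in
    units of the Doppler width."""
    dv = 2.0 * vmax / (n - 1)
    w_weak = w_strong = 0.0
    for i in range(n):
        v = -vmax + i * dv
        tau = tau0 * math.exp(-v * v)
        w_weak += (1.0 - math.exp(-tau)) * dv
        w_strong += (1.0 - math.exp(-2.0 * tau)) * dv
    return w_weak / w_strong

thin = doublet_ew_ratio(0.01)       # optically thin limit: ratio -> 0.5
saturated = doublet_ew_ratio(10.0)  # saturated lines: ratio approaches 1
```

In the optically thin limit the ratio is 0.5, while a measured ratio of 0.84, as in Cloud 1, already requires line-center optical depths of order several even though the profile looks weak; this is why unresolved saturation (or partial covering) must be considered.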
The best fit to the Cloud 1 VLT spectrum is provided by a model with column density $\log N({\MgII}) = 12.1~[{\rm cm}^{-2}]$ and Doppler parameter $b=3.1$ ; however, this fit overproduces  $\lambda$2796 and underproduces  $\lambda$2803, as expected from the discrepancy between $\frac{W_r({\MgII}\lambda 2796)}{W_r({\MgII}\lambda 2803)}$ and the ratio of the oscillator strengths of the two transitions. Fits to the other clouds were adequate. Because of the likelihood of unresolved saturation, we increase $N$ until the ratio of equivalent widths of the synthetic profile matches that of the observed profile. We also consider smaller Doppler parameters in our modeling process, and for these values we adjust $N$ accordingly. Covering Factor --------------- We also consider the possibility of partial covering of the BELR of HE0001-2340 to explain the observed equivalent widths. The size of the broad emission line region of HE0001-2340 cannot be calculated directly, as the spectrum is not flux calibrated. However, via comparison to a quasar of similar redshift and magnitude with a flux calibrated spectrum, we may approximate the BELR size. With $z=2.28$ and $V=16.7$, HE0001-2340 may be compared to S5 0836+71 with $z=2.172$ and $V=16.5$. @Benz07 gives the following relation for BELR size: $$\log R_{BELR}~[{\rm light~days}]=-22.198+0.539 \log \lambda L_{\lambda} (5100~{\rm \AA})$$ With $\log \lambda L_{\lambda} \sim 47$, we estimate that the BELR size of HE0001-2340 is $\sim 1.0$ pc. Therefore, partial coverage of the BELR requires a very small cloud size. We explore this possibility as an alternative to unresolved saturation, and perform VP fitting to the doublet for Cloud 1, using a modified version of MINFIT [@Churchill03] in which the covering factor, $C_f({\MgII})$, is allowed to vary along with $N$ and $b$. 
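The $\sim 1$ pc figure follows directly from the radius-luminosity relation above; a quick arithmetic check (assuming $\log \lambda L_{\lambda} = 47$ and 1 pc $\approx 1191$ light days):

```python
log_lamL = 47.0                          # assumed log(lambda * L_lambda) at 5100 A [erg/s]
log_R_ld = -22.198 + 0.539 * log_lamL    # radius-luminosity relation quoted above
R_light_days = 10.0 ** log_R_ld          # BELR radius in light days (~1.4e3)
R_pc = R_light_days / (3.2616 * 365.25)  # 1 pc = 3.2616 ly ~ 1191 light days
```

which gives a radius of order 1 pc, consistent with the estimate quoted above.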
A covering factor of $C_f({\MgII})=0.60\pm 0.10$ best fits the profile, as determined by the $\chi^2$ minimization technique described in @Ganguly99, and applied to a large doublet sample in @Misawa07. The $N({\MgII})$ and $b({\rm Mg})$ values for this $C_f$ are $10^{12.1}$  and 3.1 , respectively. Since we would not expect to find evidence for partial covering in an absorber that is not intrinsic to a quasar, we examine how significant an improvement $C_f({\MgII}) \sim 0.6$ provides as compared to other possible values of the covering factor. We force $C_f$ to have various other values and in each case find the best $N$ and $b$ and compute the $\chi^2$ comparing the best model to the data. Figure \[fig:cf\] shows that a clear minimum in $\chi^2$ occurs at $C_f({\MgII}) \sim 0.6$. The $C_f$ measured from VP fitting is an “effective covering factor”, representing the absorption of the total flux at that wavelength, which is a combination of flux from the quasar continuum source and BELR [@Ganguly99]. The different transitions for the intervening $z=0.4524$ system fall at different positions relative to the quasar emission lines, and will therefore absorb different relative fractions of continuum and BELR flux. In general, $C_f = \frac{C_c+{\rm W}C_{elr}}{1+{\rm W}}$, where $C_c$ is the covering factor of the continuum, $C_{elr}$ is the covering factor of the BELR, and $C_f$ is the total covering factor [@Ganguly99]. The value of W, the ratio of the broad emission-line flux to the continuum flux at the wavelength of the narrow intervening absorption line ($F_{elr}/F_c$), can be determined for each transition using a low resolution spectrum [@Tytler04]. However, in order to calculate the effective covering factor, $C_f$, we must make an assumption about the relative covering factors of the continuum and BELR. For , the value $C_f = 0.6$ can correspond to a range of possible $C_c$, $C_{elr}$ pairs. 
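The degeneracy noted above can be made explicit with a small sketch of the Ganguly et al. relation (taking $W = 1.0$ for , as measured from a low resolution spectrum; the specific $C_c$, $C_{elr}$ pairs are illustrative):

```python
def effective_cf(C_c, C_elr, W):
    """Effective covering factor C_f = (C_c + W*C_elr) / (1 + W), where W is
    the ratio of BELR flux to continuum flux at the line's wavelength."""
    return (C_c + W * C_elr) / (1.0 + W)

# Two different (C_c, C_elr) pairs that both reproduce C_f = 0.6 at W = 1.0:
full_continuum = effective_cf(1.0, 0.2, 1.0)  # continuum fully covered
equal_partial = effective_cf(0.6, 0.6, 1.0)   # both regions 60% covered
```

A transition that falls off any emission line ($W = 0$) has $C_f = C_c$, which is why detections of transitions away from emission lines constrain $C_c$ to be near unity.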
However, we know that $C_c$ cannot be very small, as we see detections of many transitions that are not superimposed on an emission line. Since the continuum source is likely to be considerably smaller than the BELR, it is most straightforward to assume that it is fully covered and that the BELR is partially covered. For the W value measured for (${\rm W}=1.0$), and $C_c=1$, we then find $C_{elr}=0.2$. $C_{elr}$ values should be the same for all transitions, if their absorption arises from the same cloud. With this assumption, using the W values of each transition, we can compute the effective covering factors as $C_f = \frac{1+0.2{\rm W}}{1+{\rm W}}$. Table \[tab:ew2\] lists the $C_{f}$ values for all transitions detected from Cloud 1. Many of the detected neutral transitions have $C_{f}$ values that are close to $1$, rendering their absorption stronger relative to the , which absorbs only a fraction of the incident flux due to its position on a broad emission line. Cloudy Photoionization Modeling {#sec:cloudy} ------------------------------- For each model, we begin with a column density, Doppler parameter, and covering factor for . For Cloud 1, we consider a range of $N({\MgII})$, $b({\rm Mg})$ pairs that are consistent with the data, including fits affected by unresolved saturation. Starting with the column density as a constraint, we use the code Cloudy (version 07.02.01) to compute photoionization models [@Ferland98]. For each cloud, we assume a plane-parallel slab with the given $N({\MgII})$ and illuminate it with a Haardt-Madau extragalactic background spectrum, including quasars and star-forming galaxies (with an escape fraction of 0.1) at $z=0.4524$ [@HM96; @HM01]. Given that the absorption is so weak in , it seems unlikely that this absorber is housed in the midst of a galaxy where the local stellar radiation field would be significant. 
We initially run a grid of models for ionization parameters $\log U=\log n_\gamma/n_H=-8.5$ to $-2.5$, and metallicities $\log Z/Z_{\odot}= -3.0$ to $1.0$. All models assume a solar abundance pattern, unless otherwise stated. The Cloudy output includes model column densities for all transitions, as well as an equilibrium temperature, $T$, for the cloud. The turbulent component of the Doppler parameter ($b^2_{turb}=b({\MgII})^2-b_{therm}(\mathrm{Mg})^2$, where the thermal component is $b^2_{therm}=\frac{2kT}{m}$) is calculated from the equilibrium temperature and observed $b({\MgII})$. Given $b_{turb}$ and $b_{therm}$, the Doppler parameter can be computed for all other elements. After running Cloudy for all components from the Voigt profile fit, the $b$ parameters and $C_f$ values are combined with the model column densities to create a synthetic spectrum. This model spectrum is compared to the data in order to constrain the $\log U$ and $\log Z/Z_{\odot}$ values. The modeling method is the same as that used by @Ding05, @Lynch07, and @Misawa07. Physical Conditions of the Absorbing Clouds {#sec:4} =========================================== Cloud 2 {#sec:4.1} ------- Cloud 2 can be fit with $-3.3<\log U<-2.9$; lower values overproduce , while higher values underproduce . Although there are no Lyman series lines to use for direct constraints on the metallicity, we find that if $\log Z/Z_{\odot}< -0.3$, the is overproduced. The temperature of the cloud is found to be $\sim 9000$K for $\log Z/Z_{\odot}= -0.3$, with a size of tens of parsecs, and a density $\sim 4\times10^{-3}$ . Model parameters are listed in Table \[tab:models\], where Clouds $2.1$ and $2.2$ represent the two components of Cloud 2, while the best model is shown in Figure \[fig:system\_plot\_unresolved\]. Clouds 3 and 4 {#sec:4.2} -------------- Clouds 3 and 4 are quite weak, and are detected in the VLT spectrum only because it has such a high S/N. 
For Clouds 3 and 4, a wide range of metallicities and ionization parameters provide an adequate fit to the transitions covered by the spectrum. Low ionization species other than are not detected at this velocity, but because of the weakness of the lines, are also not predicted by models with any reasonable ionization parameter, $\log U > -7.5$. Coverage of higher ionization states would be needed to further constrain the ionization parameter of these clouds. Constraints for these clouds are summarized in Table \[tab:models\], where parameters are given for two acceptable values of $\log U$ and $\log Z/Z_{\odot}$ for each cloud. Cloud 1 {#sec:4.3} ------- ### Unresolved Saturation Model We run a grid of models with varying column densities, metallicities, and ionization parameters, as described in § \[sec:3\], for $b=3.1$ , the Doppler parameter given by a Voigt profile fit, which assumes the line is resolved, unsaturated, and fully covered. We find that detectable amounts of are not produced in any model, and that is always underproduced. For smaller $b$ parameters we would expect that ${\FeI}/{\MgII}$ and ${\FeII}/{\MgII}$ would be larger because and would be on the linear parts of their curves of growth where the corresponding larger $N$ would affect their equivalent widths. In contrast, the would be on the flat part of its curve of growth so that the increased $N$ would have little effect on its equivalent width. We therefore considered $N({\MgII})$, $b({\rm Mg})$ pairs with $b=0.1-0.5$ (giving $\log N({\MgII}) \sim 14$ ) and $\log U < -7.5$. An example model, with $\log Z/Z_{\odot} =-1.0$, $\log U=-8.2$, and $b=0.2$  is shown in Fig. \[fig:system\_plot\_unresolved\]. Due to the seemingly cold nature of this cloud, we opt to add dust grains to the Cloudy [@Ferland98] simulations. 
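The curve-of-growth argument above (saturated lines on the flat part, unsaturated lines on the linear part) can be made concrete with a toy Gaussian-profile calculation (a sketch; the optical depths are illustrative, not fitted values):

```python
import math

def ew(tau0, n=4001, vmax=10.0):
    """Equivalent width (in Doppler-width units) of a line with a Gaussian
    optical-depth profile and line-center optical depth tau0."""
    dv = 2.0 * vmax / (n - 1)
    return sum((1.0 - math.exp(-tau0 * math.exp(-(-vmax + i * dv) ** 2))) * dv
               for i in range(n))

# Linear part of the curve of growth: doubling the column (tau0) ~doubles W.
linear_growth = ew(0.02) / ew(0.01)   # close to 2

# Flat part: doubling the column barely changes W.
flat_growth = ew(200.0) / ew(100.0)   # close to 1
```

So for a small $b$, raising $N$ leaves the saturated equivalent width nearly unchanged, while unsaturated lines such as those of FeI and FeII strengthen roughly in proportion to their columns.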
The primary flaw of this model is the over-production of  $\lambda$2853 by $\sim 1$ dex in column density; the apparent underproduction of some and transitions may be attributed to forest contamination of the observed profiles. Greater metallicities further overproduce , and a small Fe abundance enhancement of $\sim 0.2$ dex is needed to account for observed and profiles. Constraints for this model are given in Table \[tab:models\], under Cloud $1^{a}$. We find a size of 0.01-0.6 pc, T$<100$ K, and $500<n_H<1100$ . With a neutral hydrogen column density of $10^{18.7-20.8}$ (sub-DLA to DLA range), we find $0.72< \frac{2N(H_2)}{2N(H_2)+N(HI)} < 0.91$, $\log\frac{N({\MgII})}{N({\MgI})}\sim 1.0-1.3$, and $\log\frac{N({\FeII})}{N({\FeI})} \sim 1.6-2.2$. ### Partial Covering Model We similarly find that a small $b$ ($<0.5~{{\hbox{km~s$^{-1}$}}}$) is necessary to reproduce the observed with partial coverage models. An example of such a model, with $b({\rm Mg}) = 0.4$ , $\log U = -7.5$, and $\log Z/Z_{\odot} = -1.0$ is given in Fig. \[fig:system\_plot\_partial\]. For this example model, the observed , , , , , and are adequately reproduced within the uncertainties. The absorption at the position of  $\lambda$2484 and $\lambda$2524 is not fully produced; however, the location of these transitions in the forest makes contamination fairly likely. The only discrepancy of this model is the over-production of  $\lambda$2853 by $\sim 1$ dex in column density. In addition to this sample model, a range of ionization parameters and metallicities provide a similar fit. For $\log Z/Z_{\odot} = -1.0$ and $\log U =-7.5$ the cloud size/thickness is 0.2 pc, a size comparable to that of the quasar BELR. The range of models that provide an adequate fit to this cloud have extreme properties. The ionization parameters for successful models range from $-8.0$ to $-7.0$, implying densities of $30 < n_H < 1100$ . 
For the range of possible metallicities, the equilibrium temperatures are low, $< 50$ K. The neutral hydrogen column densities, $\log N({\HI})=18.8-19.9$, are in the sub-DLA range, while $0.76< \frac{2N(H_2)}{2N(H_2)+N(HI)} < 0.94$. Cloud properties for acceptable models are summarized in Table \[tab:models\] under Cloud $1^b$, while a sample model is plotted in Figure \[fig:system\_plot\_partial\]. Discussion {#sec:5} ========== The $z=0.4524$ system toward HE0001-2340, shown in Figures  \[fig:clouds\_labeled\] and \[fig:system\_plot\_unresolved\], significantly differs from typical weak absorption line systems, due to the detection of low ionization states in one of four main absorption components. We consider the environments of these clouds in the context of known structures that could produce these absorption signatures. Clouds 2, 3, and 4 ------------------ Cloud 2 has conditions similar to those of the 100 weak systems modeled by [@Anand08], with $N({\MgII})=10^{12.5}$ , $N({\FeII})=10^{11.85}$ , a size of tens of parsecs, a density of $\sim 0.004 {{\hbox{cm$^{-3}$}}}$, and a temperature of $\sim 9000$K (Table \[tab:models\]). Clouds 3 and 4 also fall within the range of properties exhibited in the [@Anand08] sample, and although their properties are not well-constrained by models, they appear to be of a temperature and density typical of the weakest absorbers studied to date. Detection of such low $N({\MgII})$ absorbers is limited to very high S/N spectra, suggesting that such clouds may often go undetected near what are perceived as single-cloud weak absorbers. [@Rigby] divides weak absorbers at $\log [N({\FeII})/N({\MgII})]=-0.3$ into iron-rich and iron-poor subcategories. While it is not possible to classify Clouds 3 and 4, Cloud 2, with $\log [N({\FeII})/N({\MgII})]=-0.62$, falls into the “iron-poor” subcategory, with corresponding $\log U \sim -3.1$. 
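Using the rounded column densities quoted above for Cloud 2 (the quoted $-0.62$ comes from the unrounded fit values), this classification can be sketched as:

```python
def classify_weak_absorber(logN_FeII, logN_MgII, boundary=-0.3):
    """Split weak absorbers at log[N(FeII)/N(MgII)] = -0.3 into the
    iron-rich and iron-poor subcategories described above."""
    log_ratio = logN_FeII - logN_MgII
    label = "iron-rich" if log_ratio > boundary else "iron-poor"
    return label, log_ratio

# Cloud 2, with the rounded columns log N(FeII) = 11.85 and log N(MgII) = 12.5:
label, log_ratio = classify_weak_absorber(11.85, 12.5)
```

This returns the "iron-poor" label with a log ratio of $-0.65$, in line with the $-0.62$ obtained from the unrounded columns.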
Clouds associated with superwind condensations could be responsible for this iron-poor environment, as Type II supernovae driving the wind will lead to ${\alpha}$-enhancements, as they build up high metallicities. Such systems may be predecessors to the high metallicity ($>0.1~Z_{\odot}$) absorbers of [@Schaye07], which have sizes $\sim 100$ pc. With radii less than the Jeans length for self-gravitating clouds, these high metallicity clouds are likely to be short-lived, with lifetimes on the order of the time-scale for free expansion, $\sim 10^7$ years. In comparison, Cloud 2 would have a lifetime on the order of $\sim 10^6-10^7$ years in its present state, which could precede its cloud phase. Cloud 1 {#cloud-1} ------- Based solely upon its detected absorption features, Cloud 1 is an anomaly. has only been detected along a fraction of Galactic sightlines passing through molecular clouds [@Welty03]. These sightlines pass through clouds that are very cold ($<100$K), and have densities ranging from $\sim 10-300$ , on par with the results given by photoionization models of Cloud 1. Partial covering and unresolved saturation models of Cloud 1 in the VLT spectrum constrain its properties to be similar to those of the dense Galactic clouds. Both require a narrow  $\lambda$2796 profile, with $b<0.5$ , to reproduce the observed , suggesting that regardless of whether the system is partially covering the BELR, it is unresolved. We note that the other properties of Cloud 1 given by Cloudy are similar in both models, as shown in Table \[tab:models\]. Such models are not mutually exclusive, as it would be impossible to distinguish partial covering from unresolved saturation based on a comparison of the members of the doublet. With either model, the size of the absorber is likely to be on the order of the QSO BELR size at the distance of the absorber (parsec-scale), thus partial covering is not unlikely. 
It is important to note that the covering factors of different transitions depend upon their position on broad emission lines, a concept that is key to intrinsic absorption line studies, as partial covering by clouds in the immediate vicinity of the quasar is common. ### Detection Rates {#sec:5.2.1} In a sample of 81 VLT QSOs, with a redshift path length of $\sim 75$ [@Anand07], we have found only one system with detected . However, considering the small size of this object, even this one detection is significant, suggesting that there may be a significant cross section covered by these objects, which lie in the outskirts of galaxies and remain undetected. Assuming the same average redshift pathlength per quasar, $\Delta z = 0.93$, and a Poisson distribution for the detections, we find that there is a $50\%$ chance of one or more such detections in a sample of 56 more quasars, and a $95\%$ chance of one or more detections in a sample of 243 more quasars (which corresponds to 332 more weak absorption line systems). The one system that was detected in a pathlength of $75$ yields $dN/dz=0.013$, and there is a probability of 95% that $dN/dz > 0.0044$ (based on one detection in a pathlength of $243 \times 0.93$). [@Glenn08] finds that, for a range of Schechter luminosity function parameters, a statistical absorption radius of $43 < R_x < 88$ kpc would explain the observed $dN/dz \sim 0.8$ of strong absorbers ($W_r(2796) > 0.3$ [Å]{}) at $0.3 < z < 1.0$. To consider the radius specifically at the redshift of the $z=0.4524$ absorber, we scale those results considering that [@Nestor05] find $dN/dz \sim 0.6$ for strong absorbers at $z \sim 0.45$. This implies a corresponding absorber halo radius of 24-49 kpc, assuming a covering factor of order unity. From the estimated $dN/dz$ of absorbers, we estimate that approximately $2\%$ of the area covered by strong absorbers ($1810$-$7540$ kpc$^2$) is covered by such small molecular clouds. 
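The detection statistics quoted above follow from simple Poisson reasoning; a sketch (assuming one detection over a total pathlength of 75 and $\Delta z = 0.93$ per quasar, as stated):

```python
import math

dN_dz = 1.0 / 75.0   # one detection over a redshift pathlength of 75
dz_per_qso = 0.93    # average redshift pathlength per quasar

def p_at_least_one(n_more_qsos):
    """Poisson probability of one or more detections in n additional quasars."""
    expected = dN_dz * dz_per_qso * n_more_qsos
    return 1.0 - math.exp(-expected)

p50 = p_at_least_one(56)    # ~50% chance of another detection
p95 = p_at_least_one(243)   # ~95% chance

# Geometric cross-sections for the 24-49 kpc absorber halo radii quoted above:
area_lo = math.pi * 24.0 ** 2   # ~1810 kpc^2
area_hi = math.pi * 49.0 ** 2   # ~7540 kpc^2
```

Both sample sizes and the quoted $1810$-$7540$ kpc$^2$ area range are recovered to the stated precision.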
These clouds are likely to take the form of filaments or thin sheets around many galaxies. It is therefore of interest to consider what the covering factor would be of molecules in the Milky Way halo, if we were to observe it from outside the halo. The covering factor of 21-cm high velocity clouds, to a limit of $N(HI) = 7 \times 10^{17}$ , was measured to be 37%, looking out from our vantage point. Although molecules have been detected in several HVCs [@Richter99; @Sembach01; @Richter01a; @Richter01b], the fraction of HVC sightlines with detected H$_2$ is small, e.g. 1/19 in the FUSE survey of @Wakker06. Nonetheless, looking through the Milky Way from outside, we might expect roughly $2 \times 37$%$\times (1/19) \sim 4$% as the covering factor for molecular absorption, consistent with the 2% halo covering factor that we estimated for absorbers at $z \sim 0.45$. An alternative to a large fraction of galaxies producing absorbers with small individual covering factors is a small population of galaxies that have larger individual covering factors. One possibility is starburst galaxies with superwinds, since the neutral species is commonly detected in absorption through these objects [@Heckman00]. Nearby starbursts are found to have strong absorption over regions $1$-$10$ kpc in extent. Weaker absorption, consistent with its expected strength for Cloud 1 in the $z=0.4524$ absorber, would be detected over larger areas. Also, starbursts are more common at $z\sim0.45$ than at the present. These factors could combine to lead to a significant covering factor of (and therefore ) absorption from starburst winds at the redshift of the $z=0.4524$ absorber. ### Origin {#sec:5.2.2} The model gives neutral hydrogen column densities of the absorber that suggest a sub-DLA or DLA environment, with $10^{18.8}< N({\HI})< 10^{20.8}$ . 
Although H$_2$ is frequently detected in higher redshift systems, the molecular hydrogen fraction is only $<10^{-6}$ in $58-75\%$ of the high-z DLAs surveyed by [@Ledoux03], and does not rise above $\sim 0.03$ for sub-DLAs and DLAs. [@Hirashita03] perform simulations of high-z DLAs, and find that the area of a region that would produce DLA absorption that contains molecules is a very small fraction of its overall area. They note that in a high ultraviolet background, the molecular fraction can be $>10^{-3}$, which is large enough to fuel star formation. Small, dense ($n_H \sim 10-1000$), cold ($<100$K) absorbers, with significant molecular fractions, like the $z=0.4524$ absorber toward HE0001-2340, may occur over a small fraction of structures that give rise to sub-DLAs and DLAs, but are rarely intersected by QSOs due to their relatively smaller area. While extremely high molecular hydrogen fractions seem to be rare in sub-DLAs and DLAs, up to $53\%$ of total hydrogen mass was found to be molecular in the survey by [@Leroy09], which examined CO emission from 18 nearby galaxies. In these detections, the H$_2$ mass can be as high as 0.1 times the stellar mass. The ratio of CO $J=2 \rightarrow 1$ intensity to CO $J=1 \rightarrow 0$ intensity suggests that the emitting gas is optically thick with an excitation temperature of $\sim 10$K. Although the exponential scale lengths of the survey targets range only from 0.8-3.2 kpc, they note that with increased sensitivity, objects at larger distances could be detected. Similar high resolution CO observations of structures likely belonging to the Milky Way by [@Heithausen02; @Heithausen04] show fractalized clumpuscules at high galactic latitudes, with sizes of $\sim 100$ AU, masses $\sim 10^{-3} M_{\odot}$, $n($H$_2)=1000-20000~{{\hbox{cm$^{-3}$}}}$, and column densities of $\sim 10^{20}~{{\hbox{cm$^{-2}$}}}$; Heithausen notes that such small structures would not be visible in the inner galaxy, due to overcrowding in normal molecular gas. 
A previous study of high galactic latitude molecular clouds via stellar absorption lines by [@Penprase93] likewise found high molecular hydrogen fractions, high densities, and small sizes. @Pfenniger94a and @Pfenniger94b suggest such small structures may make up a significant percentage of halo matter, and propose that such structures may account for a fraction of missing baryonic dark matter. These structures are believed to be formed by Jeans unstable and isothermal clouds, which fragment into smaller cloudlets. An alternative to this fractalized ISM model is explored through simulations by [@Glover07]. They study H$_2$ formation in turbulent gas, predicting that H$_2$ should rapidly form in any dense turbulent atomic cloud (within 0.05 Myr), regardless of whether or not the cloud is gravitationally bound or whether it is magnetically subcritical or supercritical; up to $40\%$ of the initial atomic hydrogen may be converted to H$_2$ in this process. They find that regions of low density ($n<300 {{\hbox{cm$^{-3}$}}}$) contain more H$_2$ than expected in gas in photodissociation equilibrium due to turbulent transport of H$_2$ from higher density ($n>1000 {{\hbox{cm$^{-3}$}}}$) areas. Hydrodynamic simulations by [@Fujita09] of the blowout of starburst-driven superbubbles from the molecular disk of ULIRGs also show similar small-scale clumps. Thin dense shells are created as the bubbles sweep up dense disk gas; Rayleigh-Taylor instabilities then cause these shells to fragment. Clumpiness is seen down to the limits of the simulation resolution of 0.1 pc. These clumps contain high mass, and have $N({\rm H}) \sim 10^{21}$ . As small molecular structures have been observed both in the galactic neighborhood (on the order of hundreds of AU), and in hydrodynamic simulations (to the limit of their resolutions), one would expect there to exist such structures beyond the local universe, at higher redshifts. 
Because these structures are far too small to detect via imaging of extragalactic environments, absorption lines serve as the only probe of small-scale structure of distant galaxies. The $z=0.4524$ absorber may be a first glimpse of such a cold, dense cloud at an earlier epoch. ### Iron content Cloud 1, with $\log$(/)$\sim 0.5$, is an extreme case of an iron-rich absorber. Iron-rich systems discussed in [@Rigby] were generally found to have high metallicities ($>0.1 Z_{\odot}$), $\log U<-4.0$ ($n_H >0.09$ ), and small sizes ($N$()+$N$()$<10^{18}$ , $R<10$ pc). Solar abundances, and not $\alpha$-enhancement, are needed to explain their similar and column densities, which implies Type Ia event-enrichment of the gas [@Timmes95]. Type Ia enrichment requires a $\sim 1$ billion year delay from the onset of star formation before the elements produced enter the interstellar medium, which could be consistent with gas trapped in potential wells of dwarf galaxies, or with intergalactic star-forming structures in the cosmic web [@Rigby; @Nikola06]. Rapid condensation of turbulent gas from a supernova could explain the small size, high molecular content, and high iron abundance observed in Cloud 1. The System {#the-system} ---------- Weak absorption systems are not often found within 30 kpc of luminous star-forming galaxies, with $L>0.05 L_*$ [@CLB98; @Churchill05; @Mil06]; the astrophysical origins of such systems have not been identified, although they may include extragalactic high velocity clouds [@Anand07], dwarf galaxies [@Lynch06], material expelled by superwinds in dwarf galaxies [@Zonak04; @Stocke04; @Keeney06], and/or massive starburst galaxies and metal-enriched gas in intergalactic star clusters [@Rigby]. [@Schaye07] suggest that at intermediate redshifts ($z \sim 2-3$), weak clouds may have been ejected from starburst supernova-driven winds during an intermediate phase of free expansion, prior to achieving equilibrium with the IGM. 
Supernova-driven winds are believed to have a multi-phase structure, with a cold component ($T \sim 100$K) detected in [@Heckman00; @Rupke02; @Fujita09]. A warm neutral phase ($T \sim 10000$K) may surround this component [@Schwartz06]. This gas will likely fragment through hydrodynamical instabilities as it moves through the halo of the galaxy, and would appear initially as weak absorption, and later as weak absorption associated with lines in the forest [@Zonak04; @Schaye07]. To account for the differing / observed in the different clouds of the same system, it seems necessary to propose different stellar populations in the vicinity of the absorbing structure. Cloud 1, due to its iron-rich nature, must be enhanced by a Type Ia supernova, implying the gas does not originate from a very young stellar population. The density and temperature of this gas, comparable to that of Milky Way molecular clouds, suggests that it is a potential site of star formation. Cloud 2, in contrast, is likely a remnant of a supernova from a massive star recently formed in its vicinity, or in a superwind driven by a collection of massive stars, as suggested by its ${\alpha}$-enhancement. It is not surprising that we would see a mix of different processes and stellar populations in an absorption line system. Some combination of dwarf galaxies, high velocity clouds, superwind and supernova shell fragments, and tidal debris could give rise to such variety. In such a scenario, clouds like Cloud 2 would commonly be grouped together into systems, sometimes with Fe-rich clouds, but something as extreme as Cloud 1 would be rare. Conclusions {#sec:6} =========== Although detections are extremely rare in extragalactic absorbers, photoionization models of Cloud 1 suggest that its small size ($<1$pc) may cause many such absorbers to go undetected; we estimate that two percent of the areas of $z\sim0.45$ halos (24-49 kpc in radius) should be covered by such objects. 
With cold temperatures ($<100$K), high densities ($30<n_H<1100$), and a large molecular hydrogen fraction ($72-94\%$), the properties of Cloud 1 are similar to the dense, small Milky Way clouds at high galactic latitudes observed in CO by [@Heithausen02; @Heithausen04] and @Penprase93, suggesting that pockets of gas like Cloud 1 may be analogs of these Milky Way clouds in the halos of other galaxies. Small scale clumps, with sizes of hundreds of AU, are predicted to exist both as part of the fractalized ISM in the halos of galaxies [@Pfenniger94a; @Pfenniger94b] and as condensates in turbulent gas [@Glover07; @Fujita09]. A Type-Ia supernova may have been responsible for both the turbulence and observed iron abundance in Cloud 1. In contrast, the ${\alpha}$-enhancement observed in Cloud 2, with its low-Fe content, is suggestive of an origin in a Type II supernova-driven superwind. Multiple-cloud weak absorption line systems are generally thought to originate in lines of sight that pass through more than one dwarf galaxy, through sparse regions of luminous galaxies at high impact parameters, or in gas-poor galaxies. In this case, one of the four components happens to be a relatively rare type of cold and dense weak absorber. Via imaging of the HE0001-2340 field it may be possible to further constrain the absorption origin, perhaps by identifying a host galaxy or galaxies, or by finding a nearby galaxy whose environment is being sampled by these absorbers. It is also of interest to find another similar cloud, by searching large numbers of high-$S/N$, high-resolution quasar spectra. Perhaps in another case, the sightline would be clear enough to allow access to the Lyman series lines, so that its metallicity could be directly measured, or to higher ionization absorption lines so that any surrounding, lower density gas phases could be identified. [XXX]{} Bentz, M. C., Denney, K. D., Peterson, B. M., & Pogge, R. W.
2007, The Central Engine of Active Galactic Nuclei, 373, 380 Bowen, D. V., Tripp, T. M., & Jenkins, E. B. 2001, , 121, 1456 Charlton, J. C., Ding, J., Zonak, S. G., Churchill, C. W., Bond, N. A., & Rigby, J. R. 2003, , 589, 111 Churchill, C. W., & Le Brun, V. 1998, , 499, 677 Churchill, C. W., Charlton, J. C., & Vogt, S. S. 1999a, AJ, 118, 59 Churchill, C. W., Rigby, J. R., Charlton, J. C., & Vogt, S. S. 1999b, ApJS, 120, 51 Churchill, C. W., Mellon, R. R., Charlton, J. C., Jannuzi, B. T., Kirhakos, S., Steidel, C. C., & Schneider, D. P. 2000, , 130, 91 Churchill, C. W., Vogt, S. S., & Charlton, J. C. 2003, , 125, 98 Churchill, C. W., Kacprzak, G. G., & Steidel, C. C. 2005, IAU Colloq. 199: Probing Galaxies through Quasar Absorption Lines, 24 Dessauges-Zavadsky, M., Prochaska, J. X., D’Odorico, S., Calura, F., & Matteucci, F. 2006, , 445, 93 Ding, J., Charlton, J. C., Churchill, C. W., & Palma, C. 2003, , 590, 746 Ding, J., Charlton, J. C., Bond, N. A., Zonak, S. G., & Churchill, C. W. 2003, , 587, 551 Ding, Jie, Charlton, Jane C., & Churchill, Christopher W. 2005, ApJ, 621, 615 D’Odorico, V. 2007, , 470, 523 Ellison, S. L., Ibata, R., Pettini, M., Lewis, G. F., Aracil, B., Petitjean, P., & Srianand, R. 2004, , 414, 79 Ferland, G. J., Korista, K. T., Verner, D. A., Ferguson, J. W., Kingdon, J. B., & Verner, E. M. 1998, , 110, 761 Fujita, A., Martin, C. L., Low, M.-M. M., New, K. C. B., & Weaver, R. 2009, , 698, 693 Ganguly, R., Eracleous, M., Charlton, J. C., & Churchill, C. W. 1999, , 117, 2594 Glover, S. C. O., & Mac Low, M.-M. 2007, , 659, 1317 Haardt, F., & Madau, P. 1996, , 461, 20 Haardt, F., & Madau, P. 2001, in Rencontres de Moriond XXXVI, Clusters of Galaxies and the High Redshift Universe Observed in X-rays, ed. D. M. Neumann & J. T. T. Van (Paris: ESA), 64 Heckman, T. M., Lehnert, M. D., Strickland, D. K., & Armus, L. 2000, , 129, 493 Heithausen, A. 2002, , 393, L41 Heithausen, A. 2004, , 606, L13 Hirashita, H., Ferrara, A., Wada, K., & Richter, P.
2003, , 341, L18 Kacprzak, G. G., Churchill, C. W., Steidel, C. C., & Murphy, M. T. 2008, , 135, 922 Keeney, B. A., Stocke, J. T., Rosenberg, J. L., Tumlinson, J., & York, D. G. 2006, , 132, 2496 Kim, T.-S., Viel, M., Haehnelt, M. G., Carswell, R. F., & Cristiani, S. 2004, , 347, 355 Kobayashi, N., Terada, H., Goto, M., & Tokunaga, A. 2002, , 569, 676 Le Brun, V., Bergeron, J., Boisse, P., & Deharveng, J. M. 1997, , 321, 733 Ledoux, C., Petitjean, P., & Srianand, R. 2003, , 346, 209 Leroy, A. K., et al. 2009, , 137, 4670 Lynch, R. S., Charlton, J. C., & Kim, T.-S. 2006, , 640, 81 Lynch, R. S., & Charlton, J. C. 2007, , 666, 64 Masiero, Joseph R., Charlton, Jane C., Ding, Jie, Churchill, Christopher W., & Kacprzak, Glenn, 2005, ApJ, 623, 57 Milutinovi[ć]{}, N., Rigby, J. R., Masiero, J. R., Lynch, R. S., Palma, C., & Charlton, J. C. 2006, , 641, 190 Narayanan, A., Misawa, T., Charlton, J. C., & Kim, T.-S. 2007, , 660, 1093 Narayanan, A., Charlton, J. C., Misawa, T., Green, R. E., & Kim, T.-S. 2008, , 689, 782 Nestor, D. B., Turnshek, D. A., & Rao, S. M. 2005, , 628, 637 Misawa, T., Charlton, J. C., Eracleous, M., Ganguly, R., Tytler, D., Kirkman, D., Suzuki, N., & Lubin, D. 2007, , 171, 1 Penprase, B. E. 1993, , 88, 433 Petitjean, P., Ledoux, C., Noterdaeme, P., & Srianand, R. 2006, , 456, L9 Pfenniger, D., & Combes, F. 1994, , 285, 94 Pfenniger, D., Combes, F., & Martinet, L. 1994, , 285, 79 Rao, S. M., & Turnshek, D. A. 1998, , 500, L115 Rao, S. M., Nestor, D. B., Turnshek, D. A., Lane, W. M., Monier, E. M., & Bergeron, J. 2003, , 595, 94 Rao, S. M., Turnshek, D. A., & Nestor, D. B. 2006, , 636, 610 Reimers, D., Hagen, H.-J., Rodriguez-Pascual, P., & Wisotzki, L. 1998, , 334, 96 Richter, P., de Boer, K. S., Widmann, H., Kappelmann, N., Gringel, W., Grewing, M., & Barnstedt, J. 1999, , 402, 386 Richter, P., Savage, B. D., Wakker, B. P., Sembach, K.
R., & Kalberla, P. M. W. 2001, , 549, 281 Richter, P., Sembach, K. R., Wakker, B. P., & Savage, B. D. 2001, , 562, L181 Richter, P., Ledoux, C., Petitjean, P., & Bergeron, J. 2005, ArXiv Astrophysics e-prints, arXiv:astro-ph/0505340 Rigby, J. R., Charlton, J. C., & Churchill, C. W. 2002, ApJ, 565, 743 Rupke, D. S., Veilleux, S., & Sanders, D. B. 2002, , 570, 588 Schaye, J., Carswell, R. F., & Kim, T.-S. 2007, , 379, 1169 Schwartz, C. M., Martin, C. L., Chandar, R., Leitherer, C., Heckman, T. M., & Oey, M. S. 2006, , 646, 858 Sembach, K. R., Howk, J. C., Savage, B. D., & Shull, J. M. 2001, , 121, 992 Stocke, J. T., Keeney, B. A., McLin, K. M., Rosenberg, J. L., Weymann, R. J., & Giroux, M. L. 2004, , 609, 94 Timmes, F. X., Woosley, S. E., & Weaver, T. A. 1995, , 98, 617 Tytler, D., O’Meara, J. M., Suzuki, N., Kirkman, D., Lubin, D., & Orin, A. 2004, , 128, 1058 Wakker, B. P. 2006, , 163, 282 Welty, D. E., Hobbs, L. M., & Morton, D. C. 2003, , 147, 61 Zonak, S. G., Charlton, J. C., Ding, J., & Churchill, C. W. 2004, , 606, 196 ![Reduced $\chi ^2$ value for Voigt profile fits of various covering factors to Cloud 1. Covering factor values between $0.5$ and $0.7$ provide a significantly better fit than full coverage models.
\[fig:cf\]](cffix.eps) [ccccccc]{}\ $1$ & 0.452061 & $0.0377 \pm 0.0006$ & $0.028 \pm 0.001$ & 0.028 $\pm$ 0.007 & 0.0392 $\pm$ 0.0006 & 0.0128 $\pm$ 0.0006\ $2$ & 0.452387 & 0.0761 $\pm$ 0.0004 & $0.04551 \pm 0.0009$ & $0.010 \pm 0.001$ & $0.0079 \pm 0.0008$ & $<0.005$\ $3$ & 0.452622 & 0.019 $\pm$ 0.001 & $0.008 \pm 0.001$ & $<0.004$ & $<0.004$ & $<0.003$\ $4$ & 0.454864 & 0.009 $\pm$ 0.001 & $0.002 \pm 0.001$ & $<0.003$ & $<0.04$ & $<0.003$ Cloud z log(N(2796)) b () ------- ---------- ------------------ ---------------- -- 1 0.452061 $12.10 \pm 0.04$ $3.1 \pm 0.1$ $2^a$ 0.452387 11.8 $\pm$ 0.4 5 $\pm$ 2 $2^b$ 0.452410 12.4 $\pm$ 0.1 2.77 $\pm$ 0.3 3 0.452622 11.67 $\pm$ 0.02 15 $\pm$ 1 4 0.454864 11.30 $\pm$ 0.05 7 $\pm$ 1 : Cloud redshifts and VP fit-derived column density and Doppler parameters, assuming full coverage \[tab:Nb\] Ion Transition $f_{lu}$ $W_r$ ([Å]{}) W $C_f$ ------- ------------ ---------- --------------------- ------ ------- Mg II 2796 0.6123 0.0377 $\pm$ 0.0006 1.0 0.60 Mg II 2803 0.3054 0.028 $\pm$ 0.001 1.0 0.60 Mg I 2853 1.810000 0.0186 $\pm$ 0.0008 0.33 0.80 Fe I 2484 0.557000 0.013 $\pm$ 0.001 0.21 0.86 2502 0.049600 $<0.097$ 0.11 0.92 2524 0.279000 0.018 $\pm$ 0.001 0.11 0.92 2968 0.043800 0.011 $\pm$ 0.001 0 1.00 2984 0.029049 0.0045 $\pm$ 0.0008 0 1.00 3022 0.10390 0.011 $\pm$ 0.001 0 1.00 3441 0.02362 0.0033 $\pm$ 0.0006 0.86 0.63 3721 0.04105 0.0061 $\pm$ 0.0007 0.27 0.83 3861 0.02164 0.0048 $\pm$ 0.0004 0.07 0.95 Si I 2515 0.162000 0.017 $\pm$ 0.001 0.11 0.92 Fe II 2344 0.109700 0.036 $\pm$ 0.001 0.57 0.71 2374 0.02818 0.014 $\pm$ 0.002 0.43 0.76 2383 0.3006 0.040 $\pm$ 0.001 0.43 0.76 2587 0.064570 0.078 $\pm$ 0.001 0.21 0.86 2600 0.22390 0.0569 $\pm$ 0.0008 0.21 0.86 Ca I 4228 1.7534 0.0051 $\pm$ 0.0008 0.18 0.88 Ca II 3935 0.6346 0.0179 $\pm$ 0.0004 0.13 0.91 Mn II 2577 0.03508 0.011 $\pm$ 0.001 0.11 0.92 2594 0.2710 0.001 $\pm$ 0.0002 0.21 0.86 2606 0.1927 $< 0.028$ 0.21 0.86 : Oscillator strengths and equivalent widths for 
transitions detected in Cloud 1, the ratio of broad emission-line flux to continuum flux, W, and the covering factors, $C_f$ for partial covering model \[tab:ew2\] [ccccccccccccccccc]{}\ $1^{a}$ & $<-1.0$ & $-8.5~\rm{to} -7.5$ & 500-1100 & 0.01-0.6 & $<100$ & $10^{18.5-20.8}$ & $10^{15.9-16.5}$ & $10^{19.2-21.1}$ & $10^{19.3-21.3}$ & 0.72-0.91 & $10^{12.8-13.1}$ & $10^{14.1}$ & $10^{12.4-13.0}$ & $10^{14.6} $& $<0.5$\ $1^{b}$ & $<-0.3$ & $-8.0~\rm{to} -7.0$ & 30-1100 & 0.08-0.19 & $<50$ & $10^{18.8-19.9}$ & $10^{16.1-16.3}$ & $10^{19.0-20.8}$ & $10^{19.2-20.9}$ & 0.76-0.94 & $10^{13.2-13.4}$ & $10^{14.7}$ & $10^{13.0-13.4}$ & $10^{15.3}$ & $<0.5$\ $2.1$ & $-0.3$ & $-3.1$ & 0.004 & 15 & 9200 & $10^{15.14}$ & $10^{17.30}$ & $10^{5.10}$ & $10^{17.30}$ & 0 & $10^{9.83}$ & $10^{11.79}$ & $10^{7.25}$ & $10^{11.17}$ & 0.4\ $2.2$ & -0.3 & $-3.1$ & 0.004 & 57 & 9100 & $10^{15.71}$ & $10^{17.87}$ & $10^{5.68}$ & $10^{17.88}$ & 0 & $10^{10.40}$ & $10^{12.37}$ & $10^{7.83}$ & $10^{11.75}$ & 2.77\ $3^{a}$ & $-1.0^*$ & $-7.5$ & 110 & 0.0004 & $ 3000$ & $10^{17.13}$ & $10^{15.84}$ & $10^{12.60}$ & $10^{17.15}$ & 0 & $10^{10.62}$ & $10^{11.65}$ & $10^{7.71}$ & $10^{11.60}$ & $14.53$\ $3^{b}$ & $-0.3^*$ & $-3.1$ & 0.004 & 110 & 9200 & $10^{15.01}$ & $10^{17.17}$ & $10^{4.98}$ & $10^{17.18}$ & 0 & $10^{9.70}$ & $10^{11.67}$ & $10^{7.12}$ & $10^{11.05}$ & 14.53\ $4^{a}$ & $-1.0^*$ & $-7.5$ & 110 & 0.0002 & 3250 & $10^{16.78}$ & $10^{15.52}$ & $10^{12.07}$ & $10^{16.80}$ & 0 & $10^{10.17}$ & $10^{11.30}$ & $10^{7.35}$ & $10^{11.25}$ & 6.00\ $4^{b}$ & $-0.3^*$ & $-3.1$ & 0.004 & 5 & 9200 & $10^{14.64}$ & $10^{16.80}$ & $10^{4.60}$ & $10^{16.80}$ & 0 & $10^{9.33}$ & $10^{11.29}$ & $10^{6.75}$ & $10^{10.67}$ & 6.00\ \
ENSLAPP xxx/93\ [On Pentagon And Tetrahedron Equations]{}\ [**Jean Michel MAILLET** ]{} [*Laboratoire de Physique Théorique $^{*}$\ ENS Lyon, 46 allée d’Italie 69364 Lyon CEDEX 07 France*]{}\ [**Abstract**]{}\ > We show that solutions of Pentagon equations lead to solutions of the Tetrahedron equation. The result is obtained in the spectral parameter dependent case. ------------------------------------------------------------------------ \ $^{*}$[URA 1436 ENSLAPP du CNRS, associée à l’Ecole Normale Supérieure de Lyon et au Laboratoire d’Annecy de Physique des Particules. ]{}\ \ Ref. ENSLAPP xxx/93\ November 1993 [**1. Introduction**]{}\ \ Yang-Baxter (or triangle, or $2$-simplex) equations [@Yang; @Bax1] play a central role in the theory of two-dimensional Integrable Systems of Field Theory and Statistical Mechanics (see for reviews [@Bax1; @F]). They also lead to the theory of Quantum Groups [@Drin1; @Jim1; @Jim2; @Skl1; @KR1; @FRT] and have important applications in low dimensional Topology. In 1980, A. B. Zamolodchikov [@Zam1; @Zam2] described a generalization of this equation, the Tetrahedron (or $3$-simplex) equation, for three-dimensional Integrable Systems. This equation can be further extended to an arbitrary dimension $d$ and is called the $d$-simplex equation [@BS1]. More recently, the first solution of the Tetrahedron equation, proposed in [@Zam2] (see [@Bax2] for the proof), has been generalized using results from the two-dimensional Chiral Potts models [@BB1; @KMS].\ The purpose of this letter is to give a construction of unitary solutions of the Tetrahedron equation (depending on spectral parameters) in terms of solutions of Pentagon equations. Our starting point is the geometrical interpretation of these equations given in [@JMM1].
It is argued in [@JMM1] that the $d$-simplex equation can be obtained as a special discretized case of a (generalized) zero holonomy equation for transport operators acting in a space of functionals of $(d-1)$-dimensional manifolds. In this picture, an $R$-matrix $R_{d}$ solving the $d$-simplex equation is associated to a $d$-dimensional parallelepipedic cell, and is interpreted as an operator moving a functional of $d$ of its $2d$ faces to a functional of the $d$ other faces. The condition for parallel transport (zero holonomy) is then precisely the $d$-simplex equation. For $d=1$ it gives Lax type equations, for $d=2$ the Quantum Yang-Baxter equation, for $d=3$ the Tetrahedron equation, etc.\ In short, this equation can be described as follows. Let $\Sigma_{d}$ and $\Sigma_{d}^{'}$ be two oriented $d$-dimensional manifolds having the same compact oriented boundary which is divided into two $(d-1)$-dimensional oriented manifolds $\Sigma_{d-1}^{*}$ ($\Sigma^{*}$ meaning the same manifold as $\Sigma$ but with reversed orientation) and $\Sigma_{d-1}^{'}$ having also the same boundary. Then we associate to $\Sigma_{d-1}$ and $\Sigma_{d-1}^{'}$ respectively two vector spaces $V_{\Sigma_{d-1}}$ and $V_{\Sigma_{d-1}^{'}}$. Let $F (\Sigma_{d})$ be a map, $$F (\Sigma_{d})\ :\ V_{\Sigma_{d-1}}\ \longmapsto\ V_{\Sigma_{d-1}^{'}} \label{eq:F}$$ Then $F (\Sigma_{d})$ can be interpreted as a transport operator (depending on the manifold $\Sigma_{d}$) acting on functionals of $(d-1)$-dimensional manifolds . The condition for parallel transport is just, $$F({\Sigma}_{d})\ =\ F({\Sigma}_{d}^{'}) \label{eq:FFn}$$ for any two manifolds $\Sigma_{d}$ and $\Sigma_{d}^{'}$ satisfying the above conditions, in particular, $\partial \Sigma_{d}\ =\ \partial \Sigma_{d}^{'}$.\ However as noticed in [@JMM1], it is also possible to give another discrete version of such a (generalized) zero holonomy equation in terms of operators ${\Phi}_{d}$ attached to $d$-simplices instead of $d$-cells. 
Equations of this type for ${\Phi}_{d}$’s are called the Fundamental $(d+1)$-Simplex Relations (due to the fact that they are written around a $(d+1)$-simplex, each face of it being a $d$-simplex associated to one ${\Phi}_{d}$, and that they realize eq. (\[eq:FFn\]) in the minimal (simplicial) way). Moreover, the operator $R_{d}$ attached to any $d$-cell can be obtained as an (ordered) product of the $d!$ operators ${\Phi}_{d}$ attached to the $d!$ $d$-simplices in which this $d$-cell can be decomposed. Given such a formula, the $d$-simplex equation for $R_{d}$ is a consequence of the fundamental $(d+1)$-simplex relations for the ${\Phi}_{d}$’s.\ For $d=2$ this procedure gives the decomposition of a quantum $R$-matrix in terms of $F$ type objects satisfying quadratic equations (a $3$-simplex possesses four faces, one $F$ being attached to each of them). In that case, from the algebraic point of view, this procedure gives the geometrical interpretation of the construction, used by V. G. Drinfel’d in [@Drin2], of solutions of the Quantum Yang-Baxter equation.\ In [@Drin2], unitary solutions $R_{12}$, namely $R_{12}(u,v)\ R_{21}(v,u)\ = {\bf 1}$, of the Quantum Yang-Baxter equation, $$\begin{aligned} &R_{12}(u,v)\ R_{13}(u,w)\ R_{23}(v,w)\ =\nonumber\\ &=\ R_{23}(v,w)\ R_{13}(u,w) \ R_{12}(u,v) \label{eq:yb}\end{aligned}$$ are obtained in terms of a more fundamental object $F_{12}$ such that, $$R_{12}(u,v)\ =\ F_{21}^{-1}(v,u)\ F_{12}(u,v) \label{eq:RF}$$ ($(u,v)$ being two vector spectral parameters), $R_{12} \in A \otimes A$, $A$ being a Hopf algebra with co-commutative co-product ${\Delta}_{0}$, $F_{12} \in A \otimes A$ and the $F$ objects satisfy the quadratic ($3$-simplex) relation, $$({\Delta}_{0} \otimes {\bf 1})F\ F_{12}\ =\ ({\bf 1} \otimes {\Delta}_{0})F \ F_{23} \label{eq:FF2}$$\ The geometrical setting for this construction and its generalizations to non-unitary cases is given in [@JMM2].
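As a quick consistency check (immediate from eq. (\[eq:RF\]), though worth writing out), the unitarity of $R_{12}$ is automatic in this factorized form: exchanging the labels $1 \leftrightarrow 2$ and the parameters $u \leftrightarrow v$ in eq. (\[eq:RF\]) gives $R_{21}(v,u)\ =\ F_{12}^{-1}(u,v)\ F_{21}(v,u)$, hence $$R_{12}(u,v)\ R_{21}(v,u)\ =\ F_{21}^{-1}(v,u)\ F_{12}(u,v)\ F_{12}^{-1}(u,v)\ F_{21}(v,u)\ =\ {\bf 1}$$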
It is used in [@FM1] to construct from any given classical $r$-matrix $r \in {\cal G} \otimes {\cal G}$ the corresponding universal quantum $R$-matrix as a functional of $r$, together with the quantized Hopf (quasi-triangular) algebra $\cal A$, $R \in {\cal A} \otimes {\cal A}$.\ Then, for $d=3$, the $R$-matrix $R(u,v,w)$ is interpreted as an operator associated to a three-dimensional parallelepipedic cell (depending on three vectors $(u,v,w)$), and acting in a space of functionals of surfaces. The condition for parallel transport is the Tetrahedron equation. A three-dimensional parallelepipedic cell can be decomposed into six tetrahedrons (and one “hat”, see below). We associate one $\Phi$ to each tetrahedron (and one $\Gamma$ to the “hat”), such that the $R$-matrix decomposes as a product of six $\Phi$’s and one $\Gamma$. Then the $4$-simplex relation for $\Phi$ is in fact a Pentagon equation. The zero holonomy requirement also imposes some consistency relations between $\Phi$ and $\Gamma$. These relations for $\Phi$ and $\Gamma$ imply that the $R$-matrix $R(u,v,w)$ satisfies the Tetrahedron equation. This will be our main result.\ All equations will be given here for vertex models, namely for indices attached to surfaces (plaquettes or triangles). A completely similar description exists for variables on links or on points (or for all these possibilities together) and will be described elsewhere, as well as more details on proofs and examples.\ This letter is organized as follows. In section 2, we define the objects $R$, $\Phi$, and $\Gamma$ in the three-dimensional case and give their geometrical meaning together with the decomposition of $R$ in terms of $\Phi$ and $\Gamma$. We also give the Tetrahedron equation for $R$. In section 3, we describe Pentagon equations in this geometrical framework. Our main result is stated in section 4. There we give the skeleton of the proof of the relation between solutions of Pentagon and Tetrahedron equations.
Perspectives and conclusions are given in section 5.\ \ It is a great pleasure to dedicate this paper to L. D. Faddeev on the occasion of his $60^{th}$ birthday.\ \ \ [**2. The Tetrahedron equation**]{}\ \ For vertex models, the Tetrahedron equation can be written as follows [@Zam1; @Zam2; @Bax2; @JMM1; @MN1], $$\begin{aligned} R_{123}(u,v,w)\ R_{145}(u,v,t)\ R_{246}(u,w,t)\ R_{356}(v,w,t)\ =\nonumber \\ =\ R_{356}(v,w,t)\ R_{246}(u,w,t)\ R_{145}(u,v,t)\ R_{123}(u,v,w) \label{eq:t}\end{aligned}$$ where, $R_{ijk} \in End( V_{i} \otimes V_{j} \otimes V_{k} )$, $V_{i}$ being vector spaces of dimensions $N_{i}$ and $u,v,w,t$ are four arbitrary vectors (say elements of ${\bf C}^{n}$) parametrizing the $R$-matrices.\ As sketched in the Introduction, such $R$-matrices can be interpreted as transport operators on a space of functionals of surfaces. So let us first describe this functional space in a discretized case.\ We consider an $n$-dimensional affine space on ${\bf C}$, with origin $O$. We denote by ${\Delta}^{(x)}(u,v)$ (or equivalently ${\Delta}^{(x+u)}(v,-u-v)$ or ${\Delta}^{(x+u+v)}(-u-v,u)$), $(u,v,x)$ being vectors in ${\bf C}^{n}$, the oriented triangle defined by the point $(O+x)$ and its oriented boundary $(u,v,-u-v)$. To such a triangle we associate a vector (which is a functional of this triangle) $h^{(x)}(u,v) \in V_{u}^{(x)} \otimes V_{v}^{(x+u)} \otimes V_{-u-v}^{(x+u+v)} \otimes {\cal A}_{(u,v)}^{(x)}$, where $V_{u}^{(x)}$ is a vector space attached to the oriented link starting at point $(O+x)$ in direction $u$, such that its dual vector space $V_{u}^{(x)*}$ is equal to $V_{-u}^{(x+u)}$, and ${\cal A}_{(u,v)}^{(x)}$ is a vector space attached to the triangle ${\Delta}^{x}(u,v)$. Here also, the dual vector space to ${\cal A}_{(u,v)}^{(x)}$ is ${\cal A}_{(u+v,-v)}^{(x)}$ associated to the same triangle but with reversed orientation. 
Note also that we have, ${\cal A}_{(u,v)}^{(x)} \equiv {\cal A}_{(v,-u-v)}^{(x+u)} \equiv {\cal A}_{(-u-v,u)}^{(x+u+v)}$.\ We define a composition law for two $h$-functionals whenever the two corresponding triangles have (at least) one edge in common with opposite orientation by the evaluation of one $h$ on the other using the duality bracket on the vector spaces attached to the common edge which are dual to one another. For example, to any two-dimensional parallelepiped ${\Box}^{(x)}(u,v)$ starting at point $(O+x)$ with oriented boundary $(u,v,-u,-v)$ we associate a functional $$l^{(x)}(u,v)\ =\ {< h^{(x)}(u,v) , h^{(x)}(u+v,-u) > }_{V^{(x)}_{u+v}} \label{eq:l}$$ where we have used the natural duality bracket between $V^{(x)}_{u+v}$ and its dual vector space denoted by ${< . , . > }_{V^{(x)}_{u+v}}$. There, $l^{(x)}(u,v)$ is an element of the tensor product, $V_{u}^{(x)} \otimes V_{v}^{(x+u)} \otimes V_{-u}^{(x+u+v)} \otimes V_{-v}^{(x+v)} \otimes {\cal A}_{[u,v]}$, where ${\cal A}_{[u,v]}^{(x)}$ stands for ${\cal A}_{(u,v)}^{(x)} \otimes {\cal A}_{(u+v,-u)}^{(x)}$ and we will require for simplicity ${\cal A}_{[u,v]}^{(x)}$ not to depend on the vector $x$. 
Then it is also possible to define the composition law for $l$-functional using their decomposition in terms of the $h$’s.\ As a useful example we consider the functionals $$j^{(x)}(u,v,w)\ =\ < l^{(x)}(v,u) , l^{(x+v)}(w,u) , l^{(x)}(w,v) >_{V_{u}^{(x+v)} \otimes V_{v}^{(x)} \otimes V_{w}^{(x+v)}}$$ and $$k^{(x)}(u,v,w)\ =\ < l^{(x+u)}(w,v) , l^{(x)}(w,u) , l^{(x+w)}(v,u) >_{V_{u}^{(x+w)} \otimes V_{v}^{(x+u+w)} \otimes V_{w}^{(x+u)}}$$ Then, we define the operator $R^{(x)}(u,v,w) \in End(\ {\cal A}_{[v,u]} \otimes {\cal A}_{[w,u]} \otimes {\cal A}_{[w,v]}\ )$ as the map, $$R^{(x)}(u,v,w) : j^{(x)}(u,v,w) \longmapsto k^{(x)}(u,v,w) \label{eq:R}$$ Here $R^{(x)}(u,v,w)$ is a functional of the parallelepipedic three-dimensional cell at point $(O+x)$ defined by the three vectors $(u,v,w)$.\ We further impose a unitarity condition on this operator, namely, that the map $R^{(x+u+v+w)}(-u,-v,-w)$ is the inverse map to $R^{(x)}(u,v,w)$. Then, by considering the two (minimal) ways of mapping the functional, $$\begin{aligned} < l^{(x)}(v,u) , l^{(x+v)}(w,u) , l^{(x)}(w,v) ,\nonumber\\ l^{(x+w+v)}(t,u) ,l^{(x+w)}(t,v) , l^{(x)}(t,w) > \label{eq:s1}\end{aligned}$$ where the duality bracket evaluation is on, $$\begin{aligned} V_{u}^{(x+v)} \otimes V_{v}^{(x)} \otimes V_{w}^{(x+v)} \otimes V_{t}^{(x+v+w)}\nonumber\\ \otimes V_{t}^{(x+w)} \otimes V_{w}^{(x)} \otimes V_{v}^{(x+w)} \otimes V_{u}^{(x+v+w)} \label{eq:v1}\end{aligned}$$ to the functional, $$\begin{aligned} < l^{(x+w+t)}(v,u) , l^{(x+t)}(w,u) , l^{(x+u+t)}(w,v) ,\nonumber\\ l^{(x)}(t,u) , l^{(x+u)}(t,v) , l^{(x+u+v)}(t,w) > \label{eq:s2}\end{aligned}$$ where the duality bracket evaluation is on, $$\begin{aligned} V_{u}^{(x+w+t)} \otimes V_{v}^{(x+u+w+t)} \otimes V_{w}^{(x+u+t)} \otimes V_{t}^{(x+u)}\nonumber\\ \otimes V_{t}^{(x+u+v)} \otimes V_{w}^{(x+u+v+t)} \otimes V_{v}^{(x+u+t)} \otimes V_{u}^{(x+t)} \label{eq:v2}\end{aligned}$$ we obtain the following parallel transport condition on the $R$-matrices, 
$$\begin{aligned} R^{(x+t)}(u,v,w)\ R^{(x)}(u,v,t)\ R^{(x+v)}(u,w,t)\ R^{(x)}(v,w,t)\ =\nonumber\\ =\ R^{(x+u)}(v,w,t)\ R^{(x)}(u,w,t)\ R^{(x+w)}(u,v,t)\ R^{(x)}(u,v,w) \label{eq:tf}\end{aligned}$$ If we consider the simplified case where the operator $R^{(x)}(u,v,w)$ does not depend on the shift $(x)$, we obtain the Tetrahedron equation (\[eq:t\]), the convention being that the vector spaces ${\cal A}_{[v,u]}$ are labeled by numbers ${1,2,3,4,5,6}$ or better here ${(11'),(22'),...}$ with the correspondence, $[v,u] \equiv (11')$ ($(v,u) \equiv (1)$ and $(u,v) \equiv (1')$) and so on, $[w,u] \equiv 22'$, $[w,v] \equiv 33'$, $[t,u] \equiv 44'$, $[t,v] \equiv 55'$ and $[t,w] \equiv 66'$. Then we have, $$R_{11',22',33'}^{(x)}(u,v,w)\ =\ R^{(x)}(u,v,w)$$ The (local) unitarity condition is now, $$R_{11',22',33'}^{(x)}(u,v,w)\ R_{1'1, 2'2, 3'3}^{(x+u+v+w)}(-u,-v,-w)\ =\ {\bf 1} \label{eq:UR}$$ Note here the exchange of spaces $(i)$ and $(i')$. The (local) Tetrahedron equation is given by, $$\begin{aligned} R_{11',22',33'}^{(x+t)}(u,v,w)\ R_{11',44',55'}^{(x)}(u,v,t)\ R_{22',44',66'}^{(x+v)}(u,w,t) \ R_{33',55',66'}^{(x)}(v,w,t)\ =\nonumber \\ =\ R_{33',55',66'}^{(x+u)}(v,w,t)\ R_{22',44',66'}^{(x)}(u,w,t)\ R_{11',44',55'}^{(x+w)}(u,v,t) \ R_{11',22',33'}^{(x)}(u,v,w)\nonumber\\ \label{eq:T}\end{aligned}$$ for any set of vectors $(u,v,w,t,x)$.\ Let us now define two other transport operators $\Phi$ and $\Gamma$ as the mappings, $$\begin{aligned} {\Phi}^{(x)}(u,v,w)\ :\ < h^{(x)}(u,v) , h^{(x)}(u+v,w) >_{V^{(x)}_{u+v}}\nonumber\\ \longmapsto < h^{(x)}(u,v+w) , h^{(x+u)}(v,w) >_{V^{(x+u)}_{v+w}} \label{eq:fi}\end{aligned}$$ and, $$\begin{aligned} {\Gamma}^{(x)}(u,v,w)\ :\ < h^{(x)}(u,v) , h^{(x)}(u+v,-v) >_{V^{(x)}_{u} \otimes V^{(x+u)}_{v}}\nonumber\\ \longmapsto < h^{(x)}(u+v+w,-w) , h^{(x)}(u+v,w) >_{V^{(x)}_{u+v+w} \otimes V^{(x+u+v)}_{-w}} \label{eq:gamma}\end{aligned}$$ where, ${\Phi}^{(x)}(u,v,w)$ is a linear map from ${\cal A}^{(x)}_{(u,v)} \otimes {\cal
A}^{(x)}_{(u+v,w)}$ to ${\cal A}^{(x)}_{(u,v+w)} \otimes {\cal A}^{(x+u)}_{(v,w)}$. Similarly, ${\Gamma}^{(x)}(u,v,w)$ is a map from ${\cal A}^{(x)}_{(u,v)} \otimes {\cal A}^{(x+u+v)}_{(-v,-u)}$ to ${\cal A}^{(x)}_{(u+v+w,-w)} \otimes {\cal A}^{(x)}_{(u+v,w)}$. Moreover, for simplicity, we will make the identifications (in the above formula for $\Phi$ and $\Gamma$), ${\cal A}^{(x)}_{(u,v)} \equiv {\cal A}^{(x)}_{(u,v+w)}$, ${\cal A}^{(x)}_{(u+v,w)} \equiv {\cal A}^{(x+u)}_{(v,w)}$ for $\Phi$ and similarly for $\Gamma$, ${\cal A}^{(x)}_{(u,v)} \equiv {\cal A}^{(x)}_{(u+v,w)}$ and ${\cal A}^{(x+u+v)}_{(-v,-u)} \equiv {\cal A}^{(x)}_{(u+v+w,-w)}$ such that $\Gamma$ contains a permutation operator in its definition.\ Using these operators it is quite easy to decompose the action of the $R$-matrix in terms of $\Phi$ and $\Gamma$.\ For this purpose, we put indices on $\Phi$ and $\Gamma$ to make explicit the vector spaces they are acting upon, namely, using the above conventions and identifications of vector spaces, we obtain for example, ${\Phi}^{(x+v+w)}(u,-u-w,-v) \equiv {\Phi}_{23'}^{(x+v+w)}(u,-u-w,-v)$, and so on.\ We have, $$\begin{aligned} R_{11',22',33'}^{(x)}(u,v,w)\ =\ P_{12}\ P_{13'}\ P_{2'3'}\ P_{13}\ {\Phi}_{31}^{(x)}(w,u+v,-v)\nonumber\\ {\Phi}_{3'1'}^{(x)}(u+w,v,-v-w)\ {\Phi}_{32}^{(x)}(w,v,u)\ P_{1'2'}\ {\Phi}_{2'1'}^{(x)}(u+v+w,-w,-v)\nonumber\\ {\Gamma}_{13'}^{(x)}(v,u+w,-v)\ P_{12'}\ {\Phi}_{12'}^{(x+u+v)}(-u-v,v,u+w)\ {\Phi}_{23'}^{(x+v+w)}(u,-u-w,-v)\nonumber\\ \label{eq:RFG}\end{aligned}$$ Note that in this formula each $\Phi$ is associated to one of the six tetrahedrons decomposing the three-dimensional cell corresponding to the $R$-matrix and having always the two points $(O+x)$ and $(O+x+u+v+w)$ among their four vertices. These six tetrahedrons are labelled by the six possible ordered triplets $(a,b,c)$, $a,b,c \in \{u,v,w\}$. 
Note also that the role of $\Gamma$ is to create the only vertex of the three-dimensional cell $(O+x, u,v,w)$, namely the point $(O+x+u+w)$, not present in the initial surface.\ \ [**3. The Pentagon Equation**]{}\ \ We are now interested in writing the general equation (\[eq:F\]) for the operators $\Phi$ and $\Gamma$.\ Let us first note the useful symmetry relations, $${\Phi}^{(x)}_{ij}(u,v,w)\ =\ {\Phi}^{(x+u+v)}_{ji}(w,-u-v-w,u) \label{eq:sfi}$$ and for $\Gamma$, $${\Gamma}^{(x)}_{ij}(u,v,w)\ =\ {\Gamma}^{(x+u+v)}_{ji}(-v,-u,u+v+w) \label{eq:sg}$$ Then we impose the unitarity relation on $\Phi$, $${\Phi}^{(x+u)}_{ij}(v,w,-u-v-w)\ P_{ij}\ {\Phi}^{(x)}_{ij}(u,v,w)\ =\ {\bf 1} \label{eq:ufi}$$ and on $\Gamma$, $${\Gamma}^{(x)}_{ij}(u,v,w)\ {\Gamma}^{(x)}_{ji}(u+v+w,-w,-v)\ =\ {\bf 1} \label{eq:ug}$$ We also ask for the following composition law, $${\Gamma}^{(x+u+v)}_{ij}(w,-u-v-w,t)\ {\Gamma}^{(x)}_{ij}(u,v,w)\ =\ P_{ij}\ {\Gamma}^{(x)}_{ij}(u,v,t-u-v) \label{eq:cg}$$ It means in particular that ${\Gamma}^{(x)}_{ij}(u,v,-v)\ =\ P_{ij}$.\ To obtain the $4$-simplex fundamental relation on $\Phi$, we consider the two minimal ways of mapping the functional, $$< h^{(x)}(u,v) , h^{(x)}(u+v,w) , h^{(x)}(u+v+w,t) >_{V^{(x)}_{u+v} \otimes V^{(x)}_{u+v+w}}$$ to the functional, $$< h^{(x)}(u,v+w+t) , h^{(x+u)}(v,w+t) , h^{(x+u+v)}(w,t) >_{V^{(x+u)}_{v+w+t} \otimes V^{(x+u+v)}_{w+t}}$$ This gives the following Pentagon equation on $\Phi$, $$\begin{aligned} {\Phi}^{(x)}_{12}(u,v,w+t)\ {\Phi}^{(x)}_{23}(u+v,w,t)\ =\nonumber\\ {\Phi}^{(x+u)}_{23}(v,w,t)\ {\Phi}^{(x)}_{13}(u,v+w,t)\ {\Phi}^{(x)}_{12}(u,v,w) \label{eq:P}\end{aligned}$$ Then using again discretized versions of eq. 
(\[eq:F\]), we obtain the following constraints between $\Phi$ and $\Gamma$, $$\begin{aligned} {\Phi}^{(x+u+v+w)}_{12}(t,-u-v-w-t,u+v)\ {\Gamma}^{(x)}_{13}(u,v+w,t) \nonumber\\ {\Phi}^{(x+u+v)}_{23}(w,-v-w,-u)\ =\nonumber\\ =\ {\Phi}^{(x+u+v)}_{23}(w+t,-t,-u-v-w)\ {\Gamma}^{(x+u+v)}_{12}(-v,v+w,t) \nonumber\\ {\Phi}^{(x+u)}_{13}(v+w,-u-v-w,u+v)\ =\nonumber\\ =\ {\Phi}^{(x+u+v)}_{13}(w,-u-v-w,u+v+w+t)\ {\Gamma}^{(x)}_{23}(u,v,w+t) \nonumber\\ {\Phi}^{(x+u+v+w)}_{12}(-u-v-w,u,v)\nonumber\\ \label{eq:fg}\end{aligned}$$ and similarly, $$\begin{aligned} P_{14}\ P_{23}\ {\Gamma}^{(x)}_{12}(u,v,w)\ {\Gamma}^{(x)}_{34}(u+v+w,-w,-v) \ =\ {\Phi}^{(x+u)}_{23}(-u,u+v+w,-w)\nonumber\\ {\Phi}^{(x+u)}_{24}(-u,u+v,w)\ {\Phi}^{(x)}_{13}(u,v+w,-w)\ {\Phi}^{(x)}_{14}(u,v,w)\nonumber\\ \label{eq:ffg}\end{aligned}$$ This last relation is quite interesting since it allows us to compute the operator $\Gamma$ in terms of $\Phi$’s up to its trace.\ \ [**4. Tetrahedron equation from Pentagon Equation**]{}\ \ We can now state our main result using all the above ingredients,\ [**Theorem :**]{}\ [*Let ${\Phi}_{ij}(u,v,w)$ and ${\Gamma}_{ij}(u,v,w)$ be defined by eqs. (\[eq:fi\],\[eq:gamma\]) and satisfying eqs. (\[eq:sfi\],\[eq:sg\], \[eq:ufi\], \[eq:ug\],\[eq:cg\],\[eq:P\],\[eq:fg\],\[eq:ffg\]). Then the $R$-matrix $R_{11',22',33'}^{(x)}(u,v,w)$ defined in terms of $\Phi$ and $\Gamma$ in eq. (\[eq:RFG\]) satisfies the Tetrahedron equation (\[eq:T\]) and is unitary.*]{}\ The proof of this theorem is quite long and will not be given here. In particular it uses $24$-times the Pentagon equation for $\Phi$. So, instead of giving an explicit proof, let us describe the main idea leading to its construction. In fact this idea holds for any of the $d$-simplex equation with regards to their relation to the corresponding fundamental $(d+1)$-simplex relations when writing the $R$-matrix $R_{d}$ in terms of the simplicial objects ${\Phi}_{d}$. 
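The combinatorics behind this decomposition strategy can be summarized in one line (it reproduces the counts quoted above: the fundamental relation is used $3!=6$ times for the Quantum Yang-Baxter equation and $4!=24$ times for the Tetrahedron equation): $$R_{d}\ =\ \prod_{i=1}^{d!}\ {\Phi}_{d}^{(i)}\ ,\qquad \#\{\text{uses of the fundamental }(d+1)\text{-simplex relation}\}\ =\ (d+1)!$$ where the (ordered) product runs over the $d!$ $d$-simplices decomposing a $d$-cell.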
Hence for simplicity let us explain it first in the two-dimensional case, namely for the Yang-Baxter equation.\ Geometrically, the $(l.h.s)$ of the quantum Yang-Baxter equation (\[eq:yb\]) is associated to three faces of a cube, namely to the surface corresponding to the functional $k^{(x)}(u,v,w)$, while the $(r.h.s)$ is associated to the three other faces of the cube, hence to the surface corresponding to the functional $j^{(x)}(u,v,w)$. So the $(l.h.s)$ and the $(r.h.s)$ of the Quantum Yang-Baxter equation can be viewed as related by the (symbolic) action of $R(u,v,w)$. Then in a similar way, the operator $\Phi$ can be considered in the two-dimensional case as the symbolic action relating the $(l.h.s)$ and the $(r.h.s)$ of eq. (\[eq:FF2\]) for $F$, $\Gamma$ being related to the unitarity relation for $F$. Now the proof of the Quantum Yang-Baxter equation (\[eq:yb\]) from eqs. (\[eq:RF\],\[eq:FF2\]) is precisely given by the decomposition of $R(u,v,w)$ in terms of $\Phi$ and $\Gamma$. Namely, to each $\Phi$ corresponds the use of eq. (\[eq:FF2\]) for two definite $F$’s, and to $\Gamma$ corresponds the use of the unitarity for $F$. Indeed, using eq. (\[eq:FF2\]) six times, in the precise order given by the (non-abelian) decomposition (\[eq:RFG\]), and once the unitarity for $F$, we can achieve the proof of eq. (\[eq:yb\]).\ Generalizing this procedure to the $d=3$ case amounts to decomposing an $R$-matrix attached to a four-dimensional cell into its $24$ $4$-simplices. This gives the precise way to use the Pentagon equation $24$ times to prove the Tetrahedron equation for $R(u,v,w)$. In fact to achieve the proof of the theorem we also need the compatibility conditions (\[eq:fg\],\[eq:ffg\]).
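Since the whole two-dimensional argument turns on eq. (\[eq:yb\]), it may help to see the simplest constant solution checked explicitly. The sketch below is our illustration, not part of the construction above: it verifies the Yang-Baxter equation for the permutation operator $P$ on ${\bf C}^{2} \otimes {\bf C}^{2}$, embedded in the three-fold tensor product.

```python
# Basis of (C^2)^{x3}: the vector |i,j,k> is flattened to index 4*i + 2*j + k.
DIM = 8

def swap_op(a, b):
    """8x8 matrix permuting tensor factors a and b (0-indexed) of (C^2)^{x3}."""
    m = [[0] * DIM for _ in range(DIM)]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                dst = [i, j, k]
                dst[a], dst[b] = dst[b], dst[a]
                m[4 * dst[0] + 2 * dst[1] + dst[2]][4 * i + 2 * j + k] = 1
    return m

def matmul(x, y):
    """Plain 8x8 matrix product."""
    return [[sum(x[r][t] * y[t][c] for t in range(DIM)) for c in range(DIM)]
            for r in range(DIM)]

R12, R13, R23 = swap_op(0, 1), swap_op(0, 2), swap_op(1, 2)
lhs = matmul(matmul(R12, R13), R23)   # R12 R13 R23
rhs = matmul(matmul(R23, R13), R12)   # R23 R13 R12
print(lhs == rhs)  # True: P solves the constant Yang-Baxter equation
```

The same brute-force check applies to any candidate finite-dimensional $R$-matrix by replacing `swap_op` with the matrix under test.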
A more detailed account of this proof will be given elsewhere.\ At this point two remarks are in order.\ First, it was noticed long ago [@MN2] that any solution of the quantum Yang-Baxter equation (\[eq:yb\]) leads to solutions of the Tetrahedron equation (\[eq:T\]) (we consider here the case with no dependence on the shifts $(x)$) as, $$R_{11',22',33'}(u,v,w)\ =\ R_{12}(v,w)\ R_{1'3}(u,w)\ R_{2'3'}(u,v) \label{eq:rt}$$ However, such solutions are degenerate in the sense that the partition function of such a model will decompose into the product of three partition functions of two-dimensional models associated with the planes $(u,v)$, $(u,w)$, $(v,w)$. The solutions we propose here are not of this type since the existence of a non-trivial $\Phi$ ensures precisely that the two-dimensional equations such as (\[eq:FF2\]), and hence (\[eq:yb\]), are broken.\ Second, the restricted Star-Triangle equations proposed in ref. [@BB1] are likely to be very similar to our Pentagon equation. This point deserves further study.\ The next step is of course to find solutions $\Phi$ and $\Gamma$. In fact, if we consider eqs. (\[eq:sfi\],\[eq:sg\],\[eq:ufi\], \[eq:ug\],\[eq:cg\],\[eq:P\],\[eq:fg\],\[eq:ffg\]) for functionals having vector indices on links (and possibly on surfaces), solutions are already at hand. They are given by Conformal Field Theories or by Topological Field Theories in the sense of Turaev and Viro [@TV]. In this case $\Phi$ satisfies a usual Pentagon equation and $\Gamma$ is a product of Kronecker deltas. However, in that case the $R$-matrix turns out to be non-invertible. Moreover the model is topological [@MR1]. In fact in that situation the proof of the Tetrahedron equation is a trivial consequence of the Turaev-Viro theorem. The problem of finding more general solutions (in particular non-topological ones, depending on spectral parameters) is now under study.\ \ [**5. 
Conclusion**]{}\ \ Using a geometrical interpretation of the $R$-matrix solving the Tetrahedron equation as a transport operator acting in a space of functionals of surfaces, we have obtained a decomposition of such an $R$-matrix in terms of more fundamental objects $(\Phi, \Gamma)$, $\Phi$ being the solution of the Pentagon equation (\[eq:P\]). This provides an explicit link between the Pentagon and Tetrahedron equations. We expect such a relation to be fruitful in the construction of new solutions to the Tetrahedron equation. It also opens the possibility of extending the algebraic picture of Quantum Groups as given in [@Drin1] to another algebraic structure suitable for Integrable Systems in three dimensions. Finally, as can be expected from the fundamental relation (\[eq:FFn\]), it also relates three-dimensional Topological Field Theories to a special case of the Tetrahedron equation.\ [99]{} C.N. Yang, Phys. Rev. Lett. [**19**]{} (1967) 1312. R.J. Baxter, [*Exactly Solved Models in Statistical Mechanics*]{}, (Academic Press, London, 1982). L.D. Faddeev, in [*Développements Récents en Théorie des Champs et Mécanique Statistique*]{}, eds. R. Stora and J.B. Zuber, (North-Holland, Amsterdam, 1983), p. 561. V.G. Drinfel’d, [*Quantum Groups*]{}, in Proc. of the Int. Conf. of Mathematicians (Berkeley, 1986), [*and references therein*]{}. M. Jimbo, Lett. Math. Phys. [**10**]{} (1985) 63; ibid. [**11**]{} (1986) 247. M. Jimbo, Commun. Math. Phys. [**102**]{} (1986) 537. E. K. Sklyanin, Funct. Anal. Appl. [**16**]{} (1983) 263; ibid. [**17**]{} (1984) 273. P.P. Kulish and N.Yu. Reshetikhin, J. Sov. Math. [**23**]{} (1983) 2435. L.D. Faddeev, N.Yu. Reshetikhin and L.A. Takhtadzhyan, [*Quantization of Lie Groups and Lie Algebras*]{}, in Algebraic Analysis, vol. I, (Academic Press, 1988), p. 129. A.B. Zamolodchikov, Sov. Phys. JETP [**52**]{} (1980) 325. A.B. Zamolodchikov, Commun. Math. Phys. [**79**]{} (1981) 489. V.V. Bazhanov and Yu.G. Stroganov, Theor. Math. Phys. 
[**52**]{} (1982) 685, Nucl. Phys. [**B230**]{} \[FS10\] (1984) 435. R.J. Baxter, Commun. Math. Phys. [**88**]{} (1983) 185, Phys. Rev. Lett. [**53**]{} (1984) 1795, Physica [**18D**]{} (1986) 321. V. V. Bazhanov and R. J. Baxter, J. Stat. Phys. [**69**]{} (1992) 453, ibid. [**71**]{} (1993) 839. R. M. Kashaev, V. V. Mangazeev, Yu. G. Stroganov, Int. J. Mod. Phys. [**A8**]{} (1993) 587, and ibid. [**A8**]{} (1993) 1399. J. M. Maillet, Nucl. Phys. [**B**]{} (Proc. Suppl.) [**18B**]{} (1990) 212. V.G. Drinfel’d, Sov. Math. Dokl. [**28**]{} (1983) 667. J. M. Maillet, [*Drawing quantum groups*]{}, to appear. L. Freidel and J. M. Maillet, Phys. Lett. [**B 296**]{} (1992) 353. J.M. Maillet and F.W. Nijhoff, Phys. Lett. [**B224**]{} (1989) 389. J.M. Maillet and F.W. Nijhoff, (1987), unpublished. V. Turaev and O. Viro, Topology [**31**]{} (1992) 865. J.M. Maillet and Ph. Roche, (1992), unpublished.
{ "pile_set_name": "ArXiv" }
--- abstract: 'The design and fabrication of phononic crystals (PnCs) hold the key to control the propagation of heat and sound at the nanoscale. However, there is a lack of experimental studies addressing the impact of order/disorder on the phononic properties of PnCs. Here, we present a comparative investigation of the influence of disorder on the hypersonic and thermal properties of two-dimensional PnCs. PnCs with ordered and disordered lattices of circular holes with equal filling fractions are fabricated in free-standing Si membranes. Ultrafast pump-probe spectroscopy (asynchronous optical sampling) and Raman thermometry based on a novel two-laser approach are used to study the phononic properties in the gigahertz (GHz) and terahertz (THz) regime, respectively. Finite element method simulations of the phonon dispersion relation and three-dimensional displacement fields furthermore enable the unique identification of the different hypersonic vibrations. The increase of surface roughness and the introduction of short-range disorder are shown to modify the phonon dispersion and phonon coherence in the hypersonic (GHz) range without affecting the room-temperature thermal conductivity. On the basis of these findings, we suggest a criterion for predicting phonon coherence as a function of roughness and disorder.' author: - 'Markus R. Wagner' - Bartlomiej Graczykowski - Juan Sebastian Reparaz - Alexandros El Sachat - Marianna Sledzinska - Francesc Alzina - 'Clivia M. Sotomayor Torres' title: 'Two-Dimensional Phononic Crystals: Disorder Matters' --- Phononic crystals (PnCs) constitute an attractive class of materials with the potential to manipulate and control the propagation of vibrational energy, i.e., sound and heat. 
Owing to their periodic structure, these materials can exhibit complete acoustic band gaps due to Bragg reflections and local resonances controlled by geometry and material properties [@Sigalas1992; @Sigalas1993; @Kushwaha1993; @Pennec2010; @Still2008; @Khelif2010; @Maldovan2013]. The periodic modulation of their elastic properties [@Khelif2006; @Gorishnyy2007; @Still2008; @Pennec2010; @Schneider2012; @Graczykowski2014a; @Graczykowski2015] and the reduced dimensionality in nanostructures [@Groenen2008; @Cuffe2012; @Chavez-Angel2014] lead to strong modifications of the acoustic phonon dispersion, which directly affects the phonon group velocity, phonon propagation, and, ultimately, sound and heat transport. The continuing miniaturization and progress in nanofabrication techniques have enabled the reduction of the characteristic sizes of PnCs to the nanometer scale and thereby allow the modification and control of phonon propagation and transport properties in the frequency range from hypersonic (GHz) [@Gorishnyy2007; @Schneider2012; @Graczykowski2014a; @Graczykowski2015] to thermal (THz) phonons [@Yu2010; @Hopkins2011; @Maldovan2013; @Maldovan2013a; @Zen2014; @Neogi2015; @Maldovan2015]. The prospect of tailoring the thermal conduction and heat capacity has recently triggered tremendous research activity, and several authors have reported the successful reduction of the room-temperature thermal conductivity in PnCs, impacting potential applications in thermoelectricity [@Song2004; @Tang2010; @Yu2010; @Hopkins2011; @Reinke2011; @Alaie2015; @Nakagawa2015; @Nomura2015]. The ability to modify the phonon dispersion relation in the hypersonic frequency range, and thus the group velocity of acoustic phonons, has paved the way to applications in RF communication technologies and optomechanics [@OlssonIII2009; @Eichenfield2009; @Safavi-Naeini2014; @Gomis-Bresco2014; @Volz2016]. 
However, studies of the GHz phonon dispersion relation have been performed mostly for bulk [@Gorishnyy2007; @Still2008; @Schneider2012; @Gomopoulos2010; @Sato2012; @Parsons2014] and surface PnCs [@Graczykowski2014a; @Mielcarek2012; @Hou2014] including band structure mapping of surface acoustic modes [@Maznev2011; @Veres2012] whereas the effects of hole patterning and pillar growth in combination with second order periodicity in thin membranes were only recently studied [@Graczykowski2015]. In particular, the experimental exploitation of disorder, a key aspect in the design of modern photonics [@Wiersma2013] for the guidance and trapping of light and ultrasound by Anderson localization [@Anderson1958; @Schwartz2007; @Mascheck2012; @Hu2008; @Yu2009], is almost completely unexplored in phononic crystals to the present day [@Maire2015]. ![image](./Fig1.eps){width="\linewidth"} In this work, we investigate the influence of short-range disorder in Si membrane-based 2D phononic crystals on the GHz and THz phononic properties. We use time-resolved femtosecond pump-probe spectroscopy based on asynchronous optical sampling (ASOPS) to measure the zone-center phonon spectrum and phonon dynamics in the time domain. Finite element method (FEM) simulations are applied to calculate the phonon dispersion relation, 3D displacement fields, and amplitudes of the different mechanical modes. The thermal conductivity of the ordered and disordered PnCs is measured by the recently developed contactless technique of 2-laser Raman thermometry, here applied to PnCs for the first time. 2D phononic crystals were fabricated of free-standing silicon membranes (Norcada Inc.) using electron beam lithography and reactive ion etching to generate ordered and disordered hole patterns with equal filling fractions (Fig. \[fig1\](a-c)) [@Sledzinska2016]. The disorder was introduced by random displacements of the holes in $x$ and $y$ direction within the unit cell of the PnC lattice. 
The hole positions of the disordered PnC were defined by $p=p_0\pm\epsilon\cdot s$, where $p$ is the displaced hole position along the two in-plane axes, $p_0$ is the ordered lattice position, $\epsilon$ is a random number between 0 and 1, and $s$ is the maximum displacement, which was set to 45 nm. The level of disorder as a percentage of the period $a=$ 300 nm is then quantified by $n=s/a\cdot 100\% = 15\%$. Figs. \[fig1\](d-f) display schematic illustrations of the unprocessed membrane with a surface roughness of about 1 nm (d), the ordered PnC with a hole wall roughness of about 7 nm (e), and the disordered PnC with an average displacement of the holes from the ordered lattice sites of 22.5 nm in x- and y-direction (f). The coherent acoustic phonon dynamics of the ordered and disordered PnCs with the same filling fractions are investigated using femtosecond pump-probe reflectivity measurements based on asynchronous optical sampling [@Bartels2007; @Hudert2009; @Bruchhausen2011] (see Fig. \[fig1\](g) and section Methods for details). The optical excitation of acoustic vibrations arises from the electronic and thermal stresses induced by the pump pulse which are determined by the generated electron-hole pair density and the temperature-induced lattice deformation, respectively [@Thomsen1986; @Wright1995; @Hudert2009]. The excited out-of-plane (dilatational) oscillations change the optical cavity thickness of the membrane which leads to a modulation of the probed reflectivity by the Fabry-Perot effect [@Hudert2009; @Cuffe2013; @Schubert2014]. ![image](./Fig2.eps){width="\linewidth"} Figs. \[fig2\](a-c) show the time-resolved intensity modulation of the reflected probe laser after subtraction of the electronic contribution and system response correction (see SI) for a 250 nm thick Si membrane before patterning (a), with ordered hole lattice (b) and with disordered holes (c). 
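The randomized hole placement described above ($p=p_0\pm\epsilon\cdot s$ with $s=$ 45 nm and $a=$ 300 nm) can be sketched in a few lines of Python. This is a minimal illustration of the stated rule, not the authors' fabrication script; the grid size and seed are arbitrary choices for the example.

```python
import random

def disordered_lattice(nx, ny, a=300.0, s=45.0, seed=1):
    """Hole centres (in nm) for a disordered PnC lattice.

    Each hole is shifted from its ordered site p0 = (i*a, j*a) by
    +/- eps*s along x and y, with eps drawn uniformly from [0, 1],
    following the rule p = p0 +/- eps*s stated in the text.
    """
    rng = random.Random(seed)
    holes = []
    for i in range(nx):
        for j in range(ny):
            dx = rng.choice((-1.0, 1.0)) * rng.random() * s
            dy = rng.choice((-1.0, 1.0)) * rng.random() * s
            holes.append((i * a + dx, j * a + dy))
    return holes

# level of disorder: n = s/a * 100% = 15% of the period
print(45.0 / 300.0 * 100.0)  # 15.0
```

Since $\epsilon$ is uniform on $[0,1]$, the expected displacement per axis is $s/2=$ 22.5 nm, matching the average displacement quoted for Fig. 1(f).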
The frequencies of the coherent acoustic phonon modes are derived from the time-domain spectra by numerical Fourier transformation as shown in Fig. \[fig2\](d-f), where dotted spectra represent the FFT spectra of the measured time domain spectra and solid lines are the FFT spectra of the corresponding multi-sinusoidal fits in Figs. \[fig2\](a-c). The blue shaded areas indicate the frequency ranges in which coherent acoustic phonons are detected. It should be noted that only phonon modes with amplitudes greater than the noise level are displayed in the fits, thus, the highest phonon frequency is not a strict limit for phonon coherence as indicated by the gradient in the blue shaded range. Using this experimental approach, we can directly obtain the complete zone-center ($q=$ 0) coherent phonon spectrum from the GHz to the THz regime. In the case of the bare membrane (Fig. \[fig2\](d)), the different harmonics in the vibrational spectrum appear as equidistant peaks as a consequence of the confinement of the acoustic modes [@Hudert2009; @Torres2004; @Groenen2008]. The lowest frequency peak at 16.8 GHz thereby corresponds to the first order symmetric ($S_1$) mode and higher frequency modes up to the 9th harmonic at 151 GHz are clearly visible. The absence of even harmonics can be understood taking into account that those modes have only in-plane displacement at the $\Gamma$ point so that no modulation of the optical cavity thickness occurs [@Schubert2014]. Using the value of the longitudinal sound velocity for Si \[001\] of $v_L=8433$ m/s [@McSkimin1964], the thickness of the membrane $d$ determines the frequencies of the observed modes $f_n = nv_L/2d$, where $f_n$ is the frequency of the $n$-th harmonic ($n=1,3,5,...$) of the symmetric (dilatational) mode [@Hudert2009]. In addition, a weak signal at 23.8 GHz is observed (0.03 of the amplitude of $S_1$). 
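The harmonic series $f_n = nv_L/2d$ can be checked numerically with the quoted values $v_L=8433$ m/s and $d=250$ nm; the following short sketch (ours, for illustration) reproduces the measured mode frequencies.

```python
V_L = 8433.0   # longitudinal sound velocity in Si [001], m/s
D = 250e-9     # membrane thickness, m

def harmonic_frequency(n, v_l=V_L, d=D):
    """f_n = n * v_L / (2d) for the n-th dilatational harmonic."""
    return n * v_l / (2.0 * d)

# odd harmonics n = 1..9 in GHz; f_1 ~ 16.9 GHz and f_9 ~ 152 GHz,
# consistent with the measured 16.8 GHz and 151 GHz
print([round(harmonic_frequency(n) / 1e9, 1) for n in (1, 3, 5, 7, 9)])
```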
Using FEM modeling, we identify this peak as the symmetric $S_2$ mode which might be visible due to excitation of propagating in-plane modes with small but nonzero wave numbers and therefore nonzero out-of-plane displacement [@Auld1990]. Following the discussion of the acoustic phonon dynamics of the non-patterned membrane, we now focus on the modification of the frequency spectrum in ordered and disordered PnCs by hole patterning of the original membrane. The time resolved pump-probe reflectivity spectra for the ordered and disordered PnCs are displayed in Fig. \[fig2\](b) and \[fig2\](c), respectively. The periodic signal of the unpatterned membrane in Fig. \[fig2\](a) is replaced by a more complex time response of the reflectivity change in the ordered and disordered PnCs. This indicates a strong modification of the phonon dispersion relation with the appearance of additional acoustic phonon modes that contribute to reflectivity modulations. The corresponding frequency spectra for the ordered and disordered PnCs are shown in Fig. \[fig2\](e) and \[fig2\](f). The relative amplitudes of the different modes, normalized to the most intense mode at about 16 GHz, are displayed in Fig. \[fig2\](g-i) (dots) and compared to the results of FEM simulations (bars). In order to explain the observed differences between the unpatterned membrane and the ordered PnC, we calculate the acoustic phonon dispersion relation and the 3-dimensional displacement fields for the zone-center modes up to 55 GHz by means of FEM modeling as described in Ref. [@Graczykowski2015]. In principle, the reflectivity of the membrane is modulated by the induced mechanical modes which result in non-zero average thickness variations. Here the model assumes a predominant role of the optical cavity thickness mechanism with negligible contribution of the photoelastic effect (PE) [@Hudert2009]. 
The pump-induced variation of the membrane thickness $\Delta d$ is of the order of a picometer and directly proportional to the measured relative change of the reflectivity $\Delta R/R_0$. In the case of the PnCs the mechanical modes are complex and $\Delta d$ is position dependent. We correlate the amplitudes $A_i$ of the modes $\omega_i$ with their corresponding average change of thickness $|\overline{\Delta d}|$ calculated over the whole FEM unit cell using the formula: $$\label{eq1} A_i(\omega_i)\propto\overline{\Delta d}\propto\frac{1}{S\omega_i}\int_S \big(u_i(z=0)-u_i(z=d)\big)\mathrm{d}S,$$ where $u_i(z)$ are the out-of-plane displacement components, and $S$ is the free surface area. The displacement fields of all the FEM solutions are normalized in such a way that all the modes store the same elastic energy and are populated according to the Planck distribution at high temperature. ![image](./Fig3.eps){width="\linewidth"} The acoustic phonon dispersion relations for the membranes before and after hole patterning are displayed in Fig. \[fig3\](a) and \[fig3\](b), respectively. For the unpatterned membrane, the first three symmetric modes $S_1$, $S_2$, and $S_3$ are precisely reproduced by the FEM simulations regarding both amplitude and frequency (Fig. \[fig2\](g) and Fig. \[fig3\](a)). The decreasing amplitude of the higher harmonics can be accurately described by Eq. (\[eq1\]) which in the case of the unpatterned membranes simplifies to a $1/\omega^2$ relation (Fig. \[fig2\](d)) [@Hudert2009]. The fact that the mode amplitudes obey this relation indicates that the phonon coherence is not destroyed by, e.g., surface roughness up to at least 150 GHz. In the case of the ordered PnCs, the strong modifications of the phonon dispersion relation in Fig. \[fig3\](b) compared to the bare membrane (Fig. \[fig3\](a)) arise from band folding and band splitting when the Bragg condition is satisfied [@Graczykowski2015]. 
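For the unpatterned membrane, the text notes that Eq. (1) reduces to a $1/\omega^2$ law; since $\omega_n = n\omega_1$ for the harmonics, the relative amplitudes of the odd modes follow $A_n/A_1 = 1/n^2$. A two-line sketch of this relation (ours, for illustration only):

```python
def relative_amplitude(n):
    """A_n / A_1 for the n-th odd harmonic of the bare membrane.

    Eq. (1) reduces to A ~ 1/omega^2 for the unpatterned membrane,
    and omega_n = n * omega_1, hence A_n / A_1 = 1 / n**2.
    """
    return 1.0 / n ** 2

# S1..S9 (odd harmonics): 1, 1/9, 1/25, 1/49, 1/81
print([round(relative_amplitude(n), 4) for n in (1, 3, 5, 7, 9)])
```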
Considering the multitude of different modes in the dispersion relation, the question arises why there are only nine discrete modes visible in the time domain measurements. The answer becomes clear if we consider the out-of-plane displacement amplitudes of the different modes according to Eq. (\[eq1\]), which are plotted together with the measured zone-center phonon frequencies in Fig. \[fig3\](c). In excellent agreement with the experimental results, the calculations reveal nine discrete modes in the frequency range up to 55 GHz ($\Gamma_1 - \Gamma_9$) with non-vanishing out-of-plane displacement amplitude at the $\Gamma$ point. The corresponding 3-dimensional displacement fields for these modes are displayed in Fig. \[fig3\](d) and \[fig3\](e), enabling an unambiguous identification of the displacement characteristic of each observed mode. Comparing the acoustic phonon spectrum of the ordered PnCs with that of the disordered PnCs, it is apparent that the dispersion relation and zone-center phonon frequencies differ significantly. In a disordered PnC, no individual modes above about 20 GHz can be detected (see Figs. \[fig2\](f) and \[fig2\](i)). This observation demonstrates the importance of translational symmetry to build coherent phonon modes in the hypersonic frequency range. Interestingly, two modes remain observable which are comparable in frequency and amplitude to the lowest frequency modes $\Gamma_1$ and $\Gamma_2$ of the ordered PnC, although significantly broadened. The fact that these modes are largely unaffected by disorder can be understood considering the 3D displacement fields of the lowest frequency modes in the unpatterned membrane ($S_1$) and the ordered PnC ($\Gamma_1$) in Figs. \[fig3\](d) and \[fig3\](e), respectively. Both modes exhibit a large out-of-plane displacement amplitude and similar displacement symmetry indicating that they are mainly governed by the bare membrane and do not depend on the second-order periodicity. 
Consequently, these modes are not significantly affected by the degree of order/disorder in the PnCs. Finally, it should be noted that the frequency of the lowest energy mode in the disordered PnC (14.6 GHz) and ordered PnC (15.5 GHz) is slightly lower than in the reference membrane (16.8 GHz), which is caused by the softening of the material due to the combined effects of mass removal and periodic arrangement. ![image](./Fig4.eps){width="\linewidth"} Up to this point, we have limited our discussion to the hypersonic (GHz) frequency range of the phonon spectrum. Taking into account that no coherent phonon modes in the ordered and disordered PnCs could be observed at frequencies above 55 and 20 GHz, respectively, we use a different approach to investigate the influence of order and disorder on the thermal properties: two-laser Raman thermometry (2LRT) [@Reparaz2014]. The main advantage of this technique with respect to, for example, electrical measurements or time-domain thermoreflectance (TDTR), is given by its contactless nature avoiding the introduction of additional thermal interface resistances. Thus, the thermal conductivity of the PnCs can be directly obtained from the measurements without additional modeling. A spatially fixed heating laser generates a localized steady-state thermal excitation, whereas a low power probe laser measures the spatially-resolved temperature profile with sub-micrometer resolution through the temperature dependent Raman frequency of the optical phonons in the material. Fig. \[fig4\](a) displays the temperature profiles for ordered and disordered PnCs and the unpatterned Si membrane obtained by 2LRT with the experimental arrangement shown schematically in Fig. \[fig1\](h). 
Applying Fourier’s law in two dimensions for a thermally isotropic medium leads to a temperature field $T(r)$ with: $$\label{eq2} T(r)=T_0+\frac{P_{abs}}{2\pi f d \kappa_0}\ln(r/r_0)$$ where $(r_0,T_0)$ is an arbitrary point in the temperature field, $P_{abs}$ is the absorbed power, $f=0.528$ is a correction factor for the missing material due to the holes in the PnCs with a filling fraction of $\phi=0.267$ [@Alaie2015], $d$ is the thickness of the PnC membranes, and $\kappa_0$ is the thermal conductivity. Here, $\kappa_0$ can be treated as temperature independent since the temperature range is sufficiently small ($\approx$ 50 K). We recall that for the case of bulk Si the thermal conductivity changes by about 15% in the range from 350 K to 400 K [@Glassbrenner1964]. This variation represents only an upper (bulk) limit since the temperature dependence is typically reduced as boundary scattering increases. Fig. \[fig4\](b) displays the thermal decays in logarithmic scale according to Eq. (\[eq2\]); thus, the slope of the thermal decay is directly related to $\kappa_0$. The purely linear decay observed in this graph validates the temperature-independent treatment of $\kappa$. A deviation from this linear relation is expected in cases where $\kappa=\kappa(T)$ as discussed in Ref. [@Reparaz2014]. Based on these measurements, we obtain the same value for the thermal conductivity $\kappa_0=14\pm2$ Wm$^{-1}\mathrm{K}^{-1}$ for the ordered and disordered PnCs compared to $\kappa_0=80\pm3$ Wm$^{-1}\mathrm{K}^{-1}$ in case of the unpatterned membrane. Considering the correction factor for the material loss in the holes of the PnCs, we would obtain an effective thermal conductivity of the PnCs in the absence of size effects of $f\kappa_0=46\pm3$ Wm$^{-1}\mathrm{K}^{-1}$. 
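With Eq. (2), $\kappa_0$ follows directly from the slope of $T$ versus $\ln r$. A minimal sketch of that extraction is given below; the helper name, the synthetic data, and the chosen absorbed power are ours for illustration and do not reproduce the paper's actual analysis code.

```python
import math

def kappa_from_profile(r, T, P_abs, d, f=0.528):
    """Extract kappa_0 from a radial temperature profile via Eq. (2).

    Fits T = const + slope * ln(r) by least squares and inverts
    slope = P_abs / (2 pi f d kappa_0); the sign of the slope
    follows the convention of Eq. (2).
    """
    x = [math.log(ri) for ri in r]
    n = len(x)
    mx, mT = sum(x) / n, sum(T) / n
    slope = (sum((xi - mx) * (Ti - mT) for xi, Ti in zip(x, T))
             / sum((xi - mx) ** 2 for xi in x))
    return P_abs / (2.0 * math.pi * f * d * slope)

# a synthetic profile generated with kappa_0 = 14 W/(m K) is recovered
P_abs, d, kappa = 3e-4, 250e-9, 14.0
slope = P_abs / (2.0 * math.pi * 0.528 * d * kappa)
r = [(0.5 + 0.5 * k) * 1e-6 for k in range(10)]
T = [300.0 + slope * math.log(ri) for ri in r]
print(round(kappa_from_profile(r, T, P_abs, d), 3))  # 14.0
```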
The reduction in the thermal conductivity of the PnCs down to $18$% of the value of the unpatterned membrane, about a six-fold reduction, is in accordance with other recent studies of the thermal conductivity of PnCs with comparable dimensions [@Tang2010; @Yu2010; @Zen2014; @Alaie2015; @Nakagawa2015; @Nomura2015]. This drastic reduction cannot be solely explained by the mass loss due to hole patterning. Instead, two aspects need to be considered: (i) diffuse boundary scattering and (ii) phonon coherence. A decrease of thermal conductivity by diffuse boundary scattering is expected due to the increase of the surface area caused by the introduction of holes as well as the hole wall roughness of about 7 nm due to the patterning process (c.f. Figs. \[fig1\](e) and \[fig1\](f)). We address the issue of phonon coherence by plotting in Fig. \[fig4\](c) the phonon frequencies of the measured coherent acoustic phonons (Fig. \[fig2\]) as a function of the characteristic size $R$ limiting in each case the measured phonon frequency range, i.e. surface roughness, hole wall roughness, and average lattice site displacement. Next, we plot the corresponding phonon frequencies for selected specularity parameters $p$ as a function of $R$: $$f(R)|_{p}=\sqrt{\frac{-\ln(p)}{16\pi^3}}\cdot\frac{v_L}{R}$$ where the specularity parameter $p$ expresses a quantitative measure for the percentage of specular scattering at a normal surface with given roughness (1 for purely specular scattering and 0 for purely diffusive scattering). The dependence of $p$ as a function of the phonon wavelength is displayed for given values of $R$ in SI Fig. 5. Attempting to derive a general criterion for the non-coherent phonon regime (fully diffusive phonon scattering), we extrapolate the specularity parameter towards 0 in SI Fig. 5. The corresponding phonon wavelength is given in good approximation by the line for $p=$ 0.01 in Fig. \[fig4\](c). 
By plotting the phonon wavelength $\lambda_{ph}$ as a function of roughness for this specularity parameter, we obtain $\lambda_{ph}\leq10R$ as a simple criterion for the non-coherent phonon regime. Following this approach, we obtain frequency limits of 800 GHz, 115 GHz, and 36 GHz for surface roughness values of 1 nm, 7 nm, and 22.5 nm, respectively. It is important to note that the specularity parameter as introduced by Ziman [@Ziman1962] only considers the surface roughness for a normal incidence wave, not wall roughness or lattice site displacement. However, despite the different types of roughness in our phononic crystals, the computed values reproduce the general tendency of the measured decrease of the high-frequency limit of coherent phonons. In fact, our experimental data suggest that phonon coherence is already affected by roughness corresponding to a specularity parameter between 0.3 and 0.5 (c.f. Fig. \[fig4\](c)). Using the more conservative value of $p=$ 0.5, we find in a rough approximation that $\lambda_{ph}>25R$ constitutes a realistic criterion for the coherent phonon regime. Consequently, we suggest that disorder, quantified by the average hole displacement from the periodic lattice sites, can also be considered as a type of roughness for long wavelength phonons, as can be seen when plotting the measured coherent phonon frequencies of the disordered PnCs for a roughness $R=25$ nm in Fig. \[fig4\](c): the measured frequencies are in good agreement with the calculated frequency for $p=$ 0.3 - 0.5. The absence of any coherent acoustic phonon signal with $f>20$ GHz for the disordered PnCs in Fig. \[fig2\](f) and \[fig2\](i) can then be understood as approaching the frequency limit of the coherent phonon regime for this given level of disorder. 
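The roughness-dependent frequency limit follows from Ziman's specularity $p(\lambda)=\exp(-16\pi^3 R^2/\lambda^2)$. A short numerical check (ours, not part of the paper's analysis) reproduces the quoted frequency limits and both wavelength criteria:

```python
import math

V_L = 8433.0  # m/s, longitudinal sound velocity in Si [001]

def f_limit(R, p, v_l=V_L):
    """Frequency limit for specularity p and roughness R:
    f(R)|_p = sqrt(-ln(p) / (16 pi^3)) * v_L / R.
    """
    return math.sqrt(-math.log(p) / (16.0 * math.pi ** 3)) * v_l / R

# p = 0.01 limits for R = 1, 7 and 22.5 nm (in GHz): ~812, ~116, ~36,
# matching the quoted ~800, 115 and 36 GHz to rounding
print([round(f_limit(R, 0.01) / 1e9) for R in (1e-9, 7e-9, 22.5e-9)])

# wavelength criteria: lambda/R at p = 0.01 is ~10 (non-coherent limit,
# lambda <= 10 R); at p = 0.5 it is ~27 (coherent regime, lambda > 25 R)
print(round(math.sqrt(16.0 * math.pi ** 3 / -math.log(0.01)), 1))
print(round(math.sqrt(16.0 * math.pi ** 3 / -math.log(0.5)), 1))
```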
It now becomes clear why no differences between the thermal conductivity of the ordered and disordered PnCs should be expected due to coherent effects, as confirmed by the equal values of $\kappa$ measured in the 2LRT experiments: On the one hand, the hole wall roughness of the ordered and disordered PnCs of about 7 nm prevents coherent effects for phonons with frequencies above $\approx$100 GHz (see Fig. \[fig4\](c)), whereas on the other hand, the frequencies of the dominant thermal phonons are in the low THz regime. In other words, the wavelengths of thermal phonons at room temperature are below 10 nm and therefore commensurate with the characteristic roughness $R$ in the studied samples, which excludes any coherent effects based on the above-given criteria. Consequently, the thermal conductivity is not affected by phonon coherence for both the ordered and the disordered PnCs. Even for a roughness value as low as 1 nm, we predict phonon coherence only up to about 400 GHz at room temperature. According to Fig. \[fig4\](c), modifications of the thermal conductivity due to phonon coherence will only occur for very smooth surfaces/interfaces where the limit of the coherent phonon regime reaches the THz range or for very low temperatures where the wavelength of the thermal phonons is significantly enlarged and thereby greatly exceeds the characteristic roughness $R$. These results are also in agreement with a recent study of Maire et al. [@Maire2015], who observed pronounced modifications of the thermal conductivity depending on the level of disorder in Si PnCs only for temperatures up to about 10 K. In conclusion, we have addressed the question to what extent disorder influences the phononic properties of 2-dimensional phononic crystals both in the GHz and THz frequency range. 
In a first step, we have shown that patterning of a 2D Si membrane by an ordered phononic crystal lattice strongly modifies the frequencies and dispersion relation of hypersonic vibrations measured by femtosecond time-domain spectroscopy. Using finite element method simulations we have uniquely identified the displacement characteristic of each mode by the calculation of the dispersion relation and the 3-dimensional displacement fields. In particular, we have developed a simple model that accurately predicts the amplitudes of the out-of-plane displacement for all observed modes in the ordered PnCs. The introduction of disorder in the PnCs drastically modifies the hypersonic phonon spectrum resulting in the suppression of coherent acoustic phonon modes. Measurements of the thermal conductivity using a novel two laser Raman thermometry technique have shown that a six-fold reduction of the thermal conductivity occurs for both ordered and disordered PnCs ($\kappa_0=14\pm2$ Wm$^{-1}\mathrm{K}^{-1}$) with respect to the unpatterned membrane ($\kappa_0=80\pm3$ Wm$^{-1}\mathrm{K}^{-1}$). Based on the measured coherent acoustic phonon frequencies for different levels of roughness and disorder we have derived two criteria for the prediction of coherent and non-coherent phonon regimes: (i) phonon coherence is unaffected if the roughness $R$ is smaller than 1/25 of the phonon wavelength and (ii) phonon coherence is destroyed if $R$ is greater than 1/10 of the phonon wavelength. The results reveal the impact of surface roughness and disorder on the observation of coherent effects in phononic crystals and demonstrate that the room temperature thermal conductivity in comparable phononic crystals should not be affected by the change of the phonon dispersion resulting from coherent boundary scattering. Materials and Methods ===================== **Sample preparation:** Commercially available single crystalline silicon (100) membranes (Norcada Inc.) 
with a thickness of 250 nm and window size of 3.2$\times$3.2 mm$^2$ were used to fabricate 2D PnCs [@Sledzinska2016]. PMMA 950k (Allresist) was spun at 4000 rpm for one minute, followed by a 60 min bake at 100$^\circ$C in an oven. Electron beam lithography (Raith 150-TWO) was carried out to pattern ordered and disordered PnCs with a hole size of 175 nm and a pitch of 300 nm for the ordered PnCs and equal filling fraction of $\phi=0.267$ for the disordered PnCs (c.f. Fig. \[fig1\]). The dimensions of the structures were $50\times50$ $\mathrm{\mu m^2}$. After development in 1:3 methyl isobutyl ketone:isopropanol (MIBK:IPA), the samples were post-baked for 1 min at 80$^\circ$C on a hot plate. The pattern was transferred to silicon using the reactive ion etching Bosch process (Alcatel AMS-110DE) and finally the samples were cleaned in an oxygen plasma system (PVA Tepla). **Femtosecond pump-probe reflectivity measurements:** Femtosecond pump-probe reflectivity measurements based on the asynchronous optical sampling (ASOPS) technique [@Bartels2007; @Hudert2009; @Bruchhausen2011] were used to investigate the acoustic phonon dynamics of ordered and disordered PnCs and unpatterned membranes in the hypersonic (GHz) frequency range. The experimental method is based on two asynchronously coupled Ti:sapphire ring cavity lasers with a repetition rate of 1 GHz and a nominal pulse length of about 50 fs. The time delay between the pump and probe pulses is achieved through an actively stabilized frequency offset of 10 kHz between the repetition rates of the two laser oscillators. This allows for a linearly increasing time delay between pump and probe pulses with steps of 10 fs without the need for a mechanical delay line. Owing to the high repetition rate of 1 GHz, an excellent signal-to-noise ratio of above $10^7$ can be achieved for typical acquisition times in the seconds to minutes range. 
Pump-probe reflectivity measurements were performed by focusing both lasers collinearly onto the PnC membranes using a 50$\times$ microscopy objective (Olympus, NA = 0.55) in normal incidence geometry, resulting in a spot size of about 2 $\mu$m, which corresponds to an excitation area in the PnC of about 35 unit cells. The pump laser was tuned to a center wavelength of 770 nm with an average power of 8 mW and the probe laser to 830 nm with a power of 4 mW. The reflected probe laser was spectrally filtered by a long-pass filter at 800 nm to eliminate contributions from the pump laser and recorded with a low-noise photodetector with 125 MHz bandwidth.

**Two-laser Raman thermometry:** Thermal conductivity measurements were conducted using two-laser Raman thermometry, a novel technique recently developed to investigate the thermal properties of suspended membranes [@Reparaz2014]. A spatially fixed heating laser generates a localized steady-state thermal excitation, whereas a low-power probe laser measures the spatially resolved temperature profile with sub-micrometer resolution through the temperature-dependent Raman frequency of the optical phonons in the material. Both lasers were focused on the PnCs using 50$\times$ microscope objectives with numerical apertures of NA = 0.55. The power of the heating laser with a wavelength of $\lambda_{heat}=\mathrm 405$ nm was set to 1 mW and the power of the probe laser with a wavelength of $\lambda_{probe}=\mathrm 488$ nm to 0.1 mW in order to avoid local heating by the probe laser while measuring the temperature field. The absorbed power is measured for each sample as the difference between the incident and the transmitted plus reflected light intensities, probed by a calibrated system based on a non-polarizing cube beam splitter. The measurements were performed at ambient pressure, which introduces heat losses through convective cooling.
This effect accounts for about 30% of the thermal conductivity in our samples, i.e., the measured values for the thermal conductivity of the PnCs were $\kappa_0=21\pm2$ Wm$^{-1}\mathrm{K}^{-1}$. After correcting for the heat transport due to convective cooling we obtained the reported value of $\kappa_0=14\pm2$ Wm$^{-1}\mathrm{K}^{-1}$ for the ordered and disordered PnCs. A detailed discussion of the influence of convective cooling on the experimental values obtained for the thermal conductivity in Si PnCs is being published elsewhere [@Graczykowski2016].

ASSOCIATED CONTENT
==================

**Supporting Information** The Supporting Information is available free of charge on the ACS Publications website at DOI: Description of the finite element modeling simulations; femtosecond pump-probe reflectivity spectra for a bare membrane, an ordered PnC, and a disordered PnC; specularity parameter as a function of phonon wavelength for selected roughness values.

AUTHOR INFORMATION
==================

**Corresponding Author** Email: [email protected]

**Present Addresses** M.R.W.: TU Berlin, Institute of Solid State Physics, Hardenbergstr. 36, 10623 Berlin, Germany

ACKNOWLEDGMENTS
===============

The authors acknowledge financial support from the EU FP7 projects MERGING (Grant No. 309150), NANO-RF (Grant No. 318352) and QUANTIHEAT (Grant No. 604668); the Spanish MICINN projects nanoTHERM (Grant No. CSD2010-0044) and TAPHOR (Grant No. MAT2012-31392); and the Severo Ochoa Program (MINECO, Grant SEV-2013-0295). M.R.W. acknowledges the postdoctoral Marie Curie Fellowship (IEF) HeatProNano (Grant No. 628197).
---
abstract: 'Hateful and toxic content generated by a portion of users on social media is a rising phenomenon that has motivated researchers to dedicate substantial efforts to the challenging task of hateful content identification. We not only need an efficient automatic hate speech detection model based on advanced machine learning and natural language processing, but also a sufficiently large amount of annotated data to train such a model. The lack of a sufficient amount of labelled hate speech data, along with existing biases, has been the main issue in this domain of research. To address these needs, in this study we introduce a novel transfer learning approach based on an existing pre-trained language model called BERT (Bidirectional Encoder Representations from Transformers). More specifically, we investigate the ability of BERT to capture hateful context within social media content by using new fine-tuning methods based on transfer learning. To evaluate our proposed approach, we use two publicly available datasets that have been annotated for racism, sexism, hate, or offensive content on Twitter. The results show that our solution obtains considerable performance on these datasets in terms of precision and recall in comparison to existing approaches. Consequently, our model can capture some biases in the data annotation and collection process and can potentially lead us to a more accurate model.'
author:
- Marzieh Mozafari
- Reza Farahbakhsh
- Noël Crespi
bibliography:
- 'sample-bibliography.bib'
title: 'A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media'
---

Introduction {#sec:Introduction}
============

People are increasingly using social networking platforms such as Twitter, Facebook, YouTube, etc. to communicate their opinions and share information.
Although the interactions among users on these platforms can lead to constructive conversations, they have been increasingly exploited for the propagation of abusive language and the organization of hate-based activities [@BadjatiyaG0V17; @burnap2015], especially due to the mobility and anonymous environment of these online platforms. Violence attributed to online hate speech has increased worldwide. For example, in the UK, there has been a significant increase in hate speech towards immigrant and Muslim communities following the UK’s decision to leave the EU and the Manchester and London attacks[^1]. The US has also seen a marked increase in hate speech and related crime following the Trump election[^2]. Therefore, governments and social network platforms confronting this trend must have tools to detect aggressive behavior in general, and hate speech in particular, as these forms of online aggression not only poison the social climate of the online communities that experience them, but can also provoke physical violence and serious harm [@burnap2015]. Recently, the problem of online abusive language detection has attracted scientific attention. Proof of this is the creation of the third Workshop on Abusive Language Online[^3] and Kaggle’s Toxic Comment Classification Challenge, which gathered 4,551 teams[^4] in 2018 to detect different types of toxicities (threats, obscenity, etc.). In the scope of this work, we mainly focus on the term hate speech as abusive content in social media, since it can be considered a broad umbrella term for numerous kinds of insulting user-generated content. Hate speech is commonly defined as any communication criticizing a person or a group based on some characteristics such as gender, sexual orientation, nationality, religion, race, etc.
Hate speech detection is not a stable or simple target, because misclassification of regular conversation as hate speech can severely affect users’ freedom of expression and reputation, while misclassification of hateful conversations as unproblematic would maintain the status of online communities as unsafe environments [@DavidsonBhattacharya2019]. A large number of scientific studies have been dedicated to detecting online hate speech, using Natural Language Processing (NLP) in combination with Machine Learning (ML) and Deep Learning (DL) methods [@Nobata2016; @mehdad2016; @waseemhovy2016; @gamback2017; @Zhang2018; @BadjatiyaG0V17]. Although supervised machine learning-based approaches have used different text mining-based features such as surface features, sentiment analysis, lexical resources, linguistic features, knowledge-based features or user-based and platform-based metadata [@Fortuna2018; @Davidson2017; @Waseem2018], they necessitate a well-defined feature extraction approach. The trend now seems to be changing direction, with deep learning models being used for both feature extraction and the training of classifiers. These newer models apply deep learning approaches such as Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), etc. [@gamback2017; @BadjatiyaG0V17] to enhance the performance of hate speech detection models; however, they still suffer from a lack of labelled data and limited ability to generalize. Here, we propose a transfer learning approach for hate speech understanding using a combination of the unsupervised pre-trained model BERT [@bert2019] and some new supervised fine-tuning strategies. To the best of our knowledge, this is the first time that such exhaustive fine-tuning strategies have been proposed along with a generative pre-trained language model to transfer learning to low-resource hate speech languages and improve performance of the task.
In summary:

- We propose a transfer learning approach using the pre-trained language model BERT learned on English Wikipedia and BookCorpus to enhance hate speech detection on publicly available benchmark datasets. Toward that end, for the first time, we introduce new fine-tuning strategies to examine the effect of different embedding layers of BERT in hate speech detection.

- Our experimental results show that using the pre-trained BERT model and fine-tuning it on the downstream task by leveraging syntactical and contextual information of all BERT’s transformers outperforms previous works in terms of precision, recall, and F1-score. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. This can be a valuable clue for using the pre-trained BERT model to debias hate speech datasets in future studies.

Previous Works {#sec:Liturature}
==============

Here, the existing body of knowledge on online hate speech and offensive language and transfer learning is presented.

**Online Hate Speech and Offensive Language:** Researchers have been studying hate speech on social media platforms such as Twitter [@Davidson2017], Reddit [@Olteanu2018; @Mittos2019], and YouTube [@Ottoni2018] in the past few years. The features used in traditional machine learning approaches are the main aspect distinguishing different methods, and surface-level features such as bag of words, word-level and character-level $n$-grams, etc. have proven to be the most predictive features [@Nobata2016; @mehdad2016; @waseemhovy2016]. Apart from features, different algorithms such as Support Vector Machines [@Malmasi2018], Naive Bayes [@burnap2015], and Logistic Regression [@waseemhovy2016; @Davidson2017] have been applied for classification purposes. Waseem et al.
[@waseemhovy2016] provided a test with a list of criteria based on work in Gender Studies and Critical Race Theory (CRT) that can annotate a corpus of more than $16k$ tweets as racism, sexism, or neither. To classify tweets, they used a logistic regression model with different sets of features, such as word and character $n$-grams up to 4, gender, length, and location. They found that their best model produces character $n$-grams as the most indicative features, and that using location or length is detrimental. Davidson et al. [@Davidson2017] collected a $24k$ corpus of tweets containing hate speech keywords and labelled the corpus as hate speech, offensive language, or neither by using crowd-sourcing, and extracted different features such as $n$-grams, some tweet-level metadata such as the number of hashtags, mentions, retweets, and URLs, Part Of Speech (POS) tagging, etc. Their experiments with different multi-class classifiers showed that Logistic Regression with L2 regularization performs best at this task. Malmasi et al. [@Malmasi2018] proposed an ensemble-based system that uses some linear SVM classifiers in parallel to distinguish hate speech from general profanity in social media. As one of the first attempts at neural network models, Djuric et al. [@Djuric2015] proposed a two-step method including a continuous bag of words model to extract paragraph2vec embeddings and a binary classifier trained along with the embeddings to distinguish between hate speech and clean content. Badjatiya et al. [@BadjatiyaG0V17] investigated three deep learning architectures, FastText, CNN, and LSTM, in which they initialized the word embeddings with either random or GloVe embeddings. Gambäck et al. [@gamback2017] proposed a hate speech classifier based on a CNN model trained on different feature embeddings such as word embeddings and character $n$-grams. Zhang et al.
[@Zhang2018] used a CNN+GRU (Gated Recurrent Unit network) neural network model initialized with pre-trained word2vec embeddings to capture both word/character combinations (e.g., $n$-grams, phrases) and word/character dependencies (order information). Waseem et al. [@Waseem2018] brought new insight to hate speech and abusive language detection tasks by proposing a multi-task learning framework to deal with datasets across different annotation schemes, labels, or geographic and cultural influences from data sampling. Founta et al. [@Founta2019] built a unified classification model that can efficiently handle different types of abusive language such as cyberbullying, hate, sarcasm, etc. using raw text and domain-specific metadata from Twitter. Furthermore, researchers have recently focused on the bias derived from hate speech training datasets [@WaseemDavidson2017; @DavidsonBhattacharya2019; @wiegand2019]. Davidson et al. [@DavidsonBhattacharya2019] showed that there were systematic and substantial racial biases in five benchmark Twitter datasets annotated for offensive language detection. Wiegand et al. [@wiegand2019] also found that classifiers trained on datasets containing more implicit abuse (tweets conveying abuse through sarcasm, jokes, etc.) are more affected by biases than ones trained on datasets with a high proportion of explicit abuse samples (tweets containing abusive words).

**Transfer Learning:** Pre-trained vector representations of words, embeddings, extracted from vast amounts of text data have been used in almost every language-based task with promising results. Two of the most frequently used context-independent neural embeddings are word2vec and GloVe, extracted from shallow neural networks.
The year 2018 was an inflection point for different NLP tasks thanks to remarkable breakthroughs: Universal Language Model Fine-Tuning (ULMFiT) [@Ruder2018], Embeddings from Language Models (ELMo) [@Matthew_2018], OpenAI’s Generative Pre-trained Transformer (GPT) [@Radford2018], and Google’s BERT model [@bert2019]. Howard et al. [@Ruder2018] proposed ULMFiT, which can be applied to any NLP task by pre-training a universal language model on a general-domain corpus and then fine-tuning the model on target task data using discriminative fine-tuning. Peters et al. [@Matthew_2018] used a bi-directional LSTM trained on a specific task to present context-sensitive representations of words in word embeddings by looking at the entire sentence. Radford et al. [@Radford2018] and Devlin et al. [@bert2019] generated two transformer-based language models, OpenAI GPT and BERT respectively. OpenAI GPT [@Radford2018] is a unidirectional language model, while BERT [@bert2019] is the first deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. BERT has two novel prediction tasks: Masked LM and Next Sentence Prediction. The pre-trained BERT model significantly outperformed ELMo and OpenAI GPT in a series of downstream tasks in NLP [@bert2019]. Identifying hate speech and offensive language is a complicated task due to the lack of undisputed labelled data [@Malmasi2018] and the inability of surface features to capture the subtle semantics in text. To address this issue, we use the pre-trained language model BERT for hate speech classification and fine-tune it on the specific task by leveraging information from different transformer encoders.

Methodology
===========

Here, we analyze the BERT transformer model on the hate speech detection task.
BERT is a multi-layer bidirectional transformer encoder trained on the English Wikipedia and the Book Corpus containing 2,500M and 800M tokens, respectively, and has two model variants named BERT~base~ and BERT~large~. BERT~base~ contains an encoder with 12 layers (transformer blocks), 12 self-attention heads, and 110 million parameters, whereas BERT~large~ has 24 layers, 16 attention heads, and 340 million parameters. Extracted embeddings from BERT~base~ have 768 hidden dimensions [@bert2019]. As the BERT model is pre-trained on general corpora, while for our hate speech detection task we are dealing with social media content, as a crucial step we have to analyze the contextual information extracted from BERT’s pre-trained layers and then fine-tune it using annotated datasets. By fine-tuning we update the weights of an already trained model using a labelled dataset that is new to it. As input and output, BERT takes a sequence of tokens of maximum length 512 and produces a representation of the sequence in a 768-dimensional vector. BERT inserts at most two special tokens into each input sequence, \[CLS\] and \[SEP\]. \[CLS\] is the first token of the input sequence and contains the special classification embedding; we take the final hidden state of the \[CLS\] token as the representation of the whole sequence in the hate speech classification task. The \[SEP\] token separates segments, and we will not use it in our classification task. To perform the hate speech detection task, we use the BERT~base~ model to classify each tweet as Racism, Sexism, Neither or Hate, Offensive, Neither in our datasets. In order to do that, we focus on fine-tuning the pre-trained BERT~base~ parameters. By fine-tuning, we mean training a classifier with different layers of 768 dimensions on top of the pre-trained BERT~base~ transformer to minimize task-specific parameters.
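The input format described above can be illustrated with a minimal sketch (toy token ids; 101, 102, and 0 are the conventional ids of \[CLS\], \[SEP\], and padding in standard BERT vocabularies, and `encode` is a hypothetical helper, not part of the paper's code):

```python
CLS, SEP, PAD = 101, 102, 0  # conventional BERT special-token ids (assumption)

def encode(token_ids, max_len=64):
    """Add [CLS] and [SEP], then truncate or zero-pad to max_len,
    mirroring the input preparation used for short tweets."""
    ids = [CLS] + token_ids[: max_len - 2] + [SEP]
    ids += [PAD] * (max_len - len(ids))
    return ids

seq = encode(list(range(1000, 1010)))  # 10 toy WordPiece ids
print(len(seq), seq[0], seq[11])       # → 64 101 102
```

The classifier then consumes either the final hidden state at position 0 (the \[CLS\] slot) or all 64 positions, depending on the fine-tuning strategy described next.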
Fine-Tuning Strategies
----------------------

Different layers of a neural network can capture different levels of syntactic and semantic information. The lower layers of the BERT model may contain more general information, whereas the higher layers contain task-specific information [@bert2019], and we can fine-tune them with different learning rates. Here, four different fine-tuning approaches are implemented that exploit pre-trained BERT~base~ transformer encoders for our classification task. More information about these transformer encoders’ architectures is presented in [@bert2019]. In the fine-tuning phase, the model is initialized with the pre-trained parameters and then fine-tuned using the labelled datasets. The different fine-tuning approaches for the hate speech detection task are depicted in Figure \[fig:finetuning\_strategies\], in which $X_{i}$ is the vector representation of token $i$ in a tweet sample, and are explained in more detail as follows:

**1. BERT based fine-tuning:** In the first approach, which is shown in Figure \[fig:1\], very few changes are applied to BERT~base~. In this architecture, only the \[CLS\] token output provided by BERT is used. The \[CLS\] output, which is equivalent to the \[CLS\] token output of the 12th transformer encoder, a vector of size 768, is given as input to a fully connected network without a hidden layer. The softmax activation function is applied to the output layer for classification.

**2. Insert nonlinear layers:** Here, the first architecture is upgraded and a more robust classifier is provided, in which instead of a fully connected network without a hidden layer, a fully connected network with two hidden layers of size 768 is used. The first two layers use the Leaky ReLU activation function with negative slope = 0.01, but the final layer, as in the first architecture, uses the softmax activation function, as shown in Figure \[fig:2\].

**3.
Insert Bi-LSTM layer:** Unlike the previous architectures that only use \[CLS\] as input for the classifier, in this architecture all outputs of the last transformer encoder are used: they are given as inputs to a bidirectional recurrent neural network (Bi-LSTM), as shown in Figure \[fig:3\]. After processing the input, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function.

**4. Insert CNN layer:** In this architecture, shown in Figure \[fig:4\], the outputs of all transformer encoders are used instead of only the output of the last transformer encoder. The output vectors of the transformer encoders are concatenated to produce a matrix. A convolutional operation is performed with a window of size (3, hidden size of BERT, which is 768 in the BERT~base~ model) and the maximum value is generated for each transformer encoder by applying max pooling on the convolution output. By concatenating these values, a vector is generated which is given as input to a fully connected network. By applying softmax to this input, the classification operation is performed.

Experiments and Results
=======================

We first introduce the datasets used in our study and then investigate the different fine-tuning strategies for the hate speech detection task. We also include the details of our implementation and error analysis in the respective subsections.

Dataset Description
-------------------

We evaluate our method on two widely-studied datasets provided by Waseem and Hovy [@waseemhovy2016] and Davidson et al. [@Davidson2017]. Waseem and Hovy [@waseemhovy2016] collected $16k$ tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither.
To extend this dataset, Waseem [@waseem2016] also provided another dataset containing $6.9k$ tweets annotated by both expert and crowdsourcing users as racism, sexism, neither, or both. Since the two datasets partially overlap and used the same strategy in defining hateful content, we merged them following Waseem et al. [@Waseem2018] to make our imbalanced data a bit larger. Davidson et al. [@Davidson2017] used the Twitter API to accumulate 84.4 million tweets from 33,458 Twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases from Hatebase.org. To annotate the collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of the CrowdFlower crowdsourcing platform to label them. The detailed distribution of the different classes in both datasets is provided in Subsection \[implementation\].

Pre-Processing
--------------

We find mentions of users, numbers, hashtags, URLs and common emoticons and replace them with the tokens &lt;user&gt;, &lt;number&gt;, &lt;hashtag&gt;, &lt;url&gt;, &lt;emoticon&gt;. We also find elongated words and convert them into their short and standard format; for example, converting yeeeessss to yes. For hashtags that concatenate several tokens without spaces between them, we replace them with their textual counterparts; for example, we convert the hashtag “\#notsexist" to “not sexist". All punctuation marks, unknown unicode characters and extra delimiting characters are removed, but we keep all stop words, because our model trains on the sequence of words in a text directly. We also convert all tweets to lower case.

Implementation and Results Analysis {#implementation}
------------------------------------

For the implementation of our neural network, we used the pytorch-pretrained-bert library containing the pre-trained BERT model, text tokenizer, and pre-trained WordPiece.
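The cleaning pipeline from the Pre-Processing subsection can be sketched as follows (a minimal regex-based sketch; the hashtag-to-words splitting step is omitted, and the exact patterns are illustrative assumptions rather than the authors' code):

```python
import re

def preprocess(tweet):
    """Illustrative cleaning: placeholder tokens, de-elongation,
    punctuation removal, lowercasing (stop words are kept)."""
    t = tweet.lower()
    t = re.sub(r"https?://\S+", "<url>", t)       # URLs
    t = re.sub(r"@\w+", "<user>", t)              # user mentions
    t = re.sub(r"#\w+", "<hashtag>", t)           # hashtags
    t = re.sub(r"\d+", "<number>", t)             # numbers
    t = re.sub(r"[:;]-?[)(dp]", "<emoticon>", t)  # a few common emoticons
    t = re.sub(r"(.)\1{2,}", r"\1", t)            # yeeeessss -> yes
    t = re.sub(r"[^\w<>\s]", "", t)               # remaining punctuation
    return re.sub(r"\s+", " ", t).strip()

print(preprocess("@user Yeeeessss!!! check http://t.co/x #notsexist :)"))
# → <user> yes check <url> <hashtag> <emoticon>
```

Note the ordering matters: URLs must be replaced before punctuation stripping, and de-elongation runs after the placeholder substitutions so it cannot corrupt the inserted tokens.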
As the implementation environment, we use the Google Colaboratory tool, which is a free research tool with a Tesla K80 GPU and 12G RAM. Based on our experiments, we trained our classifier with a batch size of 32 for 3 epochs. The dropout probability is set to 0.1 for all layers. The Adam optimizer is used with a learning rate of 2e-5. As input, we tokenized each tweet with the BERT tokenizer. This includes invalid character removal, punctuation splitting, and lowercasing the words. Following the original BERT [@bert2019], we split words into subword units using WordPiece tokenization. As tweets are short texts, we set the maximum sequence length to 64; any shorter sequence is padded with zero values and any longer one is truncated to the maximum length. We consider 80% of each dataset as training data to update the weights in the fine-tuning phase, 10% as validation data to measure the out-of-sample performance of the model during training, and 10% as test data to measure the out-of-sample performance after training. To prevent overfitting, we use stratified sampling to select 0.8, 0.1, and 0.1 portions of tweets from each class (racism/sexism/neither or hate/offensive/neither) for train, validation, and test. The class distributions of the train, validation, and test datasets are shown in Table \[tab:table1\].
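The stratified 80/10/10 split described above can be sketched as follows (a stand-alone sketch with toy data and a hypothetical `stratified_split` helper, not the paper's implementation):

```python
import random

def stratified_split(samples, labels, seed=42):
    """Take 0.8 / 0.1 / 0.1 of each class separately, so every split
    preserves the class proportions of the full dataset."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    train, val, test = [], [], []
    for items in by_class.values():
        rng.shuffle(items)
        n = len(items)
        a, b = int(0.8 * n), int(0.9 * n)
        train += items[:a]; val += items[a:b]; test += items[b:]
    return train, val, test

data = [f"tweet{i}" for i in range(100)]
labels = ["sexism"] * 20 + ["racism"] * 10 + ["neither"] * 70
tr, va, te = stratified_split(data, labels)
print(len(tr), len(va), len(te))  # → 80 10 10
```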
                   **Racism**   **Sexism**   **Neither**   **Total**
  ---------------- ------------ ------------ ------------- -----------
  **Train**        1693         3337         10787         15817
  **Validation**   210          415          1315          1940
  **Test**         210          415          1315          1940
  **Total**        2113         4167         13417         

  : Waseem-dataset.[]{data-label="class_distribution_waseem"}

                   **Hate**   **Offensive**   **Neither**   **Total**
  ---------------- ---------- --------------- ------------- -----------
  **Train**        1146       15354           3333          19832
  **Validation**   142        1918            415           2475
  **Test**         142        1918            415           2475
  **Total**        1430       19190           4163          

  : Davidson-dataset.[]{data-label="class_distribution_davidson"}

As can be seen from Tables \[tab:table1\]() and \[tab:table1\](), we are dealing with imbalanced datasets with varying class distributions. Since hate speech and offensive language are real phenomena, we did not perform oversampling or undersampling techniques to adjust the class distributions and tried to keep the datasets as realistic as possible. We evaluate the effect of the different fine-tuning strategies on the performance of our model. Table \[bert-fine-tune\] summarizes the obtained results for the fine-tuning strategies along with the official baselines. We use Waseem and Hovy [@waseemhovy2016], Davidson et al. [@Davidson2017], and Waseem et al. [@Waseem2018] as baselines and compare the results with our different fine-tuning strategies using the pre-trained BERT~base~ model. The evaluation results are reported on the test dataset and on three different metrics: precision, recall, and weighted-average F1-score. We consider the weighted-average F1-score as the metric most robust to class imbalance, which gives insight into the performance of our proposed models. According to Table \[bert-fine-tune\], the F1-scores of all BERT-based fine-tuning strategies except BERT + nonlinear classifier on top of BERT are higher than the baselines.
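For reference, the weighted-average F1-score used throughout this section can be computed from a confusion matrix as follows (a stand-alone sketch with a toy matrix; this is the same quantity scikit-learn reports with `average='weighted'`):

```python
def weighted_f1(conf):
    """Weighted-average F1 from a confusion matrix conf[true][pred]:
    each class's F1 is weighted by its support (row sum)."""
    total = sum(map(sum, conf))
    score = 0.0
    for c in range(len(conf)):
        tp = conf[c][c]
        support = sum(conf[c])                      # true samples of class c
        predicted = sum(row[c] for row in conf)     # samples predicted as c
        prec = tp / predicted if predicted else 0.0
        rec = tp / support if support else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += support / total * f1
    return score

# toy 3-class (racism / sexism / neither) confusion matrix
print(round(weighted_f1([[8, 1, 1], [0, 9, 1], [1, 1, 18]]), 3))  # → 0.875
```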
Using the pre-trained BERT model as initial embeddings and fine-tuning the model with a fully connected linear classifier (BERT~base~) outperforms the previous baselines, yielding F1-scores of 81% and 91% for the Waseem and Davidson datasets respectively. Inserting a CNN into the pre-trained BERT model for fine-tuning on the downstream task provides the best results, with F1-scores of 88% and 92% for the Waseem and Davidson datasets, and clearly exceeds the baselines. Intuitively, it makes sense that combining all pre-trained BERT layers with a CNN yields better results: our model then uses all the information included in the different layers of pre-trained BERT during the fine-tuning phase. This information contains both syntactical and contextual features coming from the lower to the higher layers of BERT.

Error Analysis
--------------

Although we have very interesting results in terms of recall, the precision of the model shows the proportion of false detections we have. To better understand this phenomenon, in this section we perform a deeper analysis of the errors of the model. We investigate the test datasets and their confusion matrices resulting from the BERT~base~ + CNN model as the best fine-tuning approach; they are depicted in Figures \[fig:Zaraak\] and \[fig:Davidson\]. According to Figure \[fig:Zaraak\] for the Waseem-dataset, it is obvious that the model can separate sexism from racism content properly. Only two samples belonging to the racism class are misclassified as sexism and none of the sexism samples are misclassified as racism. A large majority of the errors come from misclassifying hateful categories (racism and sexism) as hateless (neither) and vice versa. 0.9% and 18.5% of all racism samples are misclassified as sexism and neither respectively, whereas it is 0% and 12.7% for sexism samples. Almost 12% of neither samples are misclassified as racism or sexism.
As Figure \[fig:Davidson\] makes clear for the Davidson-dataset, the majority of errors are related to the hate class, where the model misclassified hate content as offensive in 63% of the cases. However, only 2.6% and 7.9% of offensive and neither samples are misclassified, respectively.

![[]{data-label="fig:Zaraak"}](Zaraak-eps-converted-to.pdf){width="1.2\linewidth"} ![[]{data-label="fig:Davidson"}](Davidson-eps-converted-to.pdf){width="1.2\linewidth"}

To better understand the items mislabeled by our model, we did a manual inspection on a subset of the data and recorded some of them in Tables \[waseem\_error\] and \[davidson\_error\]. Considering words such as “daughters", “women", and “burka" in the tweets with IDs 1 and 2 in Table \[waseem\_error\], it can be understood that our BERT-based classifier is confused by the contextual semantics between these words in the samples and misclassifies them as sexism because they are mainly associated with femininity. In some cases containing implicit abuse (like subtle insults), such as the tweets with IDs 5 and 7, our model cannot capture the hateful/offensive content and therefore misclassifies them. It should be noted that even for a human it is difficult to recognize this kind of implicit abuse. By examining more samples, and with respect to recent studies [@DavidsonBhattacharya2019; @sap2019; @wiegand2019], it is clear that many errors are due to biases from data collection [@wiegand2019] and rules of annotation [@sap2019] rather than the classifier itself. Since Waseem et al. [@waseemhovy2016] created a small ad-hoc set of keywords and Davidson et al. [@Davidson2017] used a large crowdsourced dictionary of keywords (the Hatebase lexicon) to sample tweets for training, they included some biases in the collected data.
Especially for the Davidson dataset, some tweets with a specific language variety (African American Vernacular English) and geographic restriction (the United States of America) are oversampled, such as tweets containing disparaging words like “nigga", “faggot", “coon", or “queer", resulting in high rates of misclassification. However, these misclassifications do not confirm low performance of our classifier, because annotators tended to annotate many samples containing disrespectful words as hate or offensive without any presumption about the social context of the tweeters, such as the speaker’s identity or dialect, whereas these were just offensive or even neither tweets. Tweets with IDs 6, 8, and 10 are samples containing offensive words and slurs which are not hate or offensive in all cases, and their writers used this type of language in their daily communications. Given these pieces of evidence, and considering the content of the tweets, we can see from tweets with IDs 3, 4, and 9 that our BERT-based classifier can discriminate tweets in which neither and implicit hatred content exist. One explanation of this observation may be the pre-trained general knowledge that exists in our model. Since the pre-trained BERT model is trained on general corpora, it has learned general knowledge from normal textual data without any purposely hateful or offensive language. Therefore, despite the bias in the data, our model can differentiate hate and offensive samples accurately by leveraging the knowledge-aware language understanding that it has, and this can be the main reason for the high rate of misclassification of hate samples as offensive (in reality they are more similar to offensive rather than hate when considering the social context, geolocation, and dialect of the tweeters). Conclusion ========== Conflating hatred content with offensive or harmless language causes online automatic hate speech detection tools to flag user-generated content incorrectly. 
Not addressing this problem may bring about severe negative consequences for both platforms and users, such as a decrease in platforms’ reputation or user abandonment. Here, we propose a transfer learning approach taking advantage of the pre-trained language model BERT to enhance the performance of a hate speech detection system and to generalize it to new datasets. To that end, we introduce new fine-tuning strategies to examine the effect of different layers of BERT in the hate speech detection task. The evaluation results indicate that our model outperforms previous works by exploiting the syntactical and contextual information embedded in different transformer encoder layers of the BERT model using a CNN-based fine-tuning strategy. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. This can be a valuable clue for using the pre-trained BERT model to alleviate bias in hate speech datasets in future studies, by investigating a mixture of contextual information embedded in BERT’s layers and a set of features associated with the different types of biases in the data. [^3]: <https://sites.google.com/view/alw3/home> [^4]: <https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/>
--- abstract: 'We construct dynamical black ring solutions in the five dimensional Einstein-Maxwell system with a positive cosmological constant and investigate their geometrical structure. The solutions describe the physical process in which a thin black ring at early time shrinks and changes into a single black hole as time increases. We also discuss multi-black rings and their coalescence.' author: - 'Masashi Kimura[^1]' title: Dynamical Black Rings with a Positive Cosmological Constant --- [ ]{} Introduction {#sec1} ============ Recently, higher dimensional black holes have attracted much attention in the context of string theory and the brane world scenario. In particular, the black ring solution [@Emparan:2001wn] is one of the most important discoveries, because it shows that the uniqueness theorem does not hold in higher-dimensional space-time, unlike the case of four-dimensional space-time[^2], and that the shape of black objects can take various topologies in higher dimensions. In fact, many solutions which have more complicated structures have been constructed (see also [@Emparan:2006mm; @Emparan:2008eg]). It would be interesting to have a black ring solution with a cosmological constant in the context of the AdS/CFT correspondence. However, attempts to obtain a regular black ring solution with a cosmological constant have not succeeded so far [@Chu:2006pf; @Kunduri:2006uh]. This might be because the co-existence of the two scales, the diameter of the black ring and the cosmological constant, is difficult. In [@Caldarelli:2008pz], Caldarelli et al. constructed solutions for thin black rings in dS and AdS space-times using approximate methods, and they mentioned that a static black ring can exist in the case of a positive cosmological constant because of the force balance between the gravitational force and the repulsive force from the cosmological constant. Unlike other works so far, we consider the possibility that the solution is dynamical due to the existence of a positive cosmological constant. 
In general relativity, it is difficult to obtain dynamical black hole solutions; however, we can easily construct such black hole solutions when the mass equals the charge, as was done by Kastor and Traschen [@Kastor:1992nn].[^3] The Kastor-Traschen solution [@Kastor:1992nn] was generalized to higher dimensions in [@London:1995ib] and to coalescing black holes on the Gibbons-Hawking space [@Gibbons:1979zt] in [@Ishihara:2006ig; @Yoo:2007mq]. In this paper, we discuss dynamical black ring solutions as a variation of the Kastor-Traschen solutions. The organization of the paper is as follows. The method for constructing dynamical black ring solutions is shown in §\[sec2\]. In §\[sec3\], the global structure of the solution in the case of a single black ring is discussed. The event horizons of coalescing multi-black rings are discussed in §\[sec4\]. Summary and discussions are given in §\[sec5\]. Construction of Dynamical Black Ring Solutions {#sec2} ============================================== We consider the five dimensional Einstein-Maxwell system with a positive cosmological constant, which is described by the action $$S=\frac{1}{16\pi G_5}\int dx^5 \sqrt{-g} (R -4\Lambda-F_{\mu\nu}F^{\mu \nu} ),$$ where $R$ is the five dimensional scalar curvature, $F_{\mu\nu}$ is the Maxwell field strength tensor, $\Lambda$ is the positive cosmological constant and $G_5$ is the five-dimensional Newton constant. From this action, we write down the Einstein equation $$R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu} +2g_{\mu \nu}\Lambda = 2\biggl(F_{\mu\lambda}{F_{\nu}}^{\lambda}-\frac{1}{4}g_{\mu\nu}F_{\alpha \beta}F^{\alpha\beta}\biggl), \label{einsteineq}$$ and the Maxwell equation $${F^{\mu \nu}}_{;\nu}=0. 
\label{maxwelleq}$$ In this system, we consider the following metric and gauge 1-form $$\begin{aligned} ds^2 &=& - H^{-2} dt^2 + H e^{-\lambda t} ds^2_{\rm E^4}, \label{met} \\ A &=& \pm \frac{\sqrt{3}}{2} H^{-1} dt, \label{gauge}\end{aligned}$$ where $ds^2_{\rm E^4}$ is a four-dimensional Euclid space, $\lambda = \sqrt{4\Lambda/3}$ and the function $H$ is $$\begin{aligned} H &=& 1 + \frac{1}{e^{-\lambda t}} \Psi,\end{aligned}$$ where the function $\Psi$ is independent of the time coordinate $t$. As shown in [@Kastor:1992nn; @London:1995ib], if the function $\Psi$ is a solution of the Laplace equation on $ds^2_{\rm E^4}$ $$\begin{aligned} \triangle_{\rm E^4} \Psi &=& 0, \label{laplace}\end{aligned}$$ the metric (\[met\]) and the gauge 1-form (\[gauge\]) become solutions of the five-dimensional Einstein-Maxwell system with a positive cosmological constant.[^4] In [@London:1995ib], the function $\Psi$ was chosen as point source harmonics, and then the solution describes the coalescence of multi-black holes with the topology of ${\rm S}^3$. In contrast, in this paper, we focus on the ring source solutions of (\[laplace\]) given by $$\begin{aligned} \Psi &= \sum_i \frac{m_{i}}{\sqrt{(r_1 + a_i)^2 + r_2^2} \sqrt{(r_1 - a_i)^2 + r_2^2}} + \sum_i \frac{n_i}{\sqrt{r_1^2+ (r_2 + b_i)^2 } \sqrt{r_1^2 + (r_2 - b_i)^2 }}, \label{ringsource0}\end{aligned}$$ where we use the coordinates of $ds_{\rm E^4}^2$ as $$\begin{aligned} ds^2_{\rm E^4} &=& dr_1^2 + r_1^2 d\phi_1^2 + dr_2^2 + r_2^2 d\phi_2^2,\end{aligned}$$ and $\Psi$ satisfies $$\begin{aligned} \triangle \Psi &= \sum_i \frac{m_i}{2\pi a_i}\delta(r_1 - a_i)\delta(r_2) + \sum_i \frac{n_i}{2\pi b_i}\delta(r_1)\delta(r_2-b_i).\end{aligned}$$ global structure of single black ring solution {#sec3} ============================================== At first, we focus on a single black ring solution, namely the harmonic function $\Psi$ takes the form $$\begin{aligned} \Psi &= \frac{m}{\sqrt{(r_1 + a)^2 + r_2^2} \sqrt{(r_1 - a)^2 + r_2^2}}. 
\label{ringsource1}\end{aligned}$$ In this case, the solution is dynamical because the topology of the spatial cross section of the event horizon changes from ${\rm S^2} \times {\rm S^1}$ at early time into ${\rm S^3}$ at late time, as shown in the following subsection. Event Horizon ------------- At late time $t \to \infty$ the metric (\[met\]) behaves as $$\begin{aligned} ds^2 &\simeq & -\left(1 + \frac{1}{e^{-\lambda t}}\frac{m}{r_1^2 + r_2^2}\right)^{-2} dt^2 \notag\\ & & + \left(1 + \frac{1}{e^{-\lambda t}}\frac{m}{r_1^2 + r_2^2}\right)e^{-\lambda t} (dr_1^2 + r_1^2 d\phi_1^2 + dr_2^2 + r_2^2 d\phi_2^2).\end{aligned}$$ We can see that the geometry described by (\[met\]) at late time asymptotes to that of the Reissner-Nordström-de Sitter solution. So we can find an event horizon locally at late time if $m < m_{\rm ext} = 16/(27\lambda^2)$. Similar to the discussion in [@Ida:1998qt], by solving null geodesics from each point of the event horizon on a $t={\rm const.}$ surface at late time to the past, we can obtain the null geodesic generators of the event horizon, namely we can find the location of the event horizon approximately. We plot the coordinate values of the event horizon in the $r_1-r_2$ plane at each time in Fig.\[fig:horizon\]. From this, we can see that the topology of the spatial cross section of the event horizon at late time is ${\rm S}^3$ and that at early time is ${\rm S}^1 \times {\rm S}^2$. ![ Time evolution of the event horizon for a single black ring in the $r_1-r_2$ plane. Coordinate values of the event horizon at each time slice are plotted. We set parameters $\lambda =1,~m=1/2,~a=1$.[]{data-label="fig:horizon"}](horizon4.eps "fig:"){width="0.35\linewidth"}     ![](horizon3.eps "fig:"){width="0.35\linewidth"} ![](horizon2.eps "fig:"){width="0.35\linewidth"}     ![](horizon1.eps "fig:"){width="0.35\linewidth"} Early Time Behavior ------------------- From Fig.\[fig:horizon\], at early time, we can see that the event horizon is located near the points of the ring source of the harmonic function, $r_1 = a, r_2 = 0$. So in this subsection, we study the geometrical structure near $r_1 = a,~r_2=0$ analytically. Near the points $r_1 = a, r_2 = 0$, the metric (\[met\]) behaves as $$\begin{aligned} ds^2 &\simeq & -\left(1 + \frac{1}{e^{-\lambda t}} \frac{m}{2a \rho}\right)^{-2}dt^2 \notag\\ && + \left(1 + \frac{1}{e^{-\lambda t}} \frac{m}{2a \rho}\right) e^{-\lambda t}\Big( d\rho^2 + \rho^2(d\theta^2 + \sin^2 \theta d\phi^2) + a^2 d\phi_1^2 \Big), \label{blackstring}\end{aligned}$$ where we introduce new coordinates $\rho,~\theta,~\phi$ as $$\begin{aligned} \rho \sin{\theta} \cos{\phi} &=& r_1 - a, \\ \rho \sin{\theta} \sin{\phi} &=& r_2 \sin \phi_2, \\ \rho \cos{\theta} &=& r_2 \cos \phi_2.\end{aligned}$$ From this, we can see that the early time behavior is like a black string. The metric (\[met\]) describes the physical process in which a thin black ring at early time shrinks and changes into a single black hole as time increases. If we take the $\lambda = 0$ limit of the metric (\[blackstring\]), it reduces to the charged black string solution with the mass equal to the charge [@Horowitz:2002ym]. 
The charged black string solution [@Horowitz:2002ym] is a regular solution which has two horizons if the mass is greater than the charge, but it has a naked singularity at the degenerate horizon if the mass equals the charge. One may worry that the metric (\[blackstring\]) also has a naked singularity. So, we investigate whether the singularities are hidden by the horizon, i.e., whether the null geodesic generators of the event horizon reach $r_1=a, r_2=0$ at a finite past time. To do this we study null geodesics in the metric (\[blackstring\]). The geometry of (\[blackstring\]) has $SO(3) \times U(1)$ symmetry, so it is sufficient to focus on the $t-\rho$ part of the metric (\[blackstring\]). The null geodesics $\rho = \rho(t)$ which go inward from the future to the past satisfy $$\begin{aligned} \frac{d \rho}{dt} = \frac{1}{\sqrt{e^{-\lambda t}}} \left(1 + \frac{1}{e^{-\lambda t}} \frac{m}{2a \rho}\right)^{-3/2}. \label{geodesiceq}\end{aligned}$$ The solution of this equation (\[geodesiceq\]) asymptotes to $$\begin{aligned} \rho(t) \to \frac{1}{e^{-2 \lambda t}}\frac{\lambda^2 m^3}{2 a^3}, \label{BShorizon}\end{aligned}$$ as $t \to -\infty$. So, we can see that the singularity is hidden by the event horizon at least at finite past times. However, in the $t\to -\infty$ limit along the event horizon (\[BShorizon\]), the curvature behaves as $$\begin{aligned} R_{\alpha \beta \mu \nu}R^{\alpha \beta \mu \nu} \sim e^{-4 \lambda t} \to \infty,\end{aligned}$$ but we consider that this singularity is harmless as long as we focus on the region in which the time coordinate $t$ takes finite values. event horizon of multi black ring solution {#sec4} ========================================== In this section, we investigate the event horizons of coalescing multi-black rings. For simplicity, we restrict ourselves to the solution with two black rings. There are two typical situations: one is concentric black rings in a plane, the other is orthogonal black rings. 
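Before turning to the multi-ring case, note that the approach to (\[BShorizon\]) can be checked by integrating (\[geodesiceq\]) backwards in time. The following is a minimal numerical sketch, not the computation behind Fig.\[fig:horizon\]: we take the near-ring harmonic as $H = 1 + e^{\lambda t} m/(2a\rho)$, the parameters $\lambda =1,~m=1/2,~a=1$ of Fig.\[fig:horizon\], and an arbitrary initial condition and step size for a fourth-order Runge-Kutta integration.

```python
import math

lam, m, a = 1.0, 0.5, 1.0  # parameters of Fig. [fig:horizon]

def drho_dt(t, rho):
    """Ingoing null geodesic equation (geodesiceq) in the near-ring metric."""
    H = 1.0 + math.exp(lam * t) * m / (2.0 * a * rho)
    return math.exp(lam * t / 2.0) * H ** (-1.5)

# Integrate backwards from (t, rho) = (0, 0.01) with RK4 steps of size -1e-3.
t, rho, dt = 0.0, 0.01, -1e-3
while t > -10.0:
    k1 = drho_dt(t, rho)
    k2 = drho_dt(t + dt / 2, rho + dt * k1 / 2)
    k3 = drho_dt(t + dt / 2, rho + dt * k2 / 2)
    k4 = drho_dt(t + dt, rho + dt * k3)
    rho += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

C = lam ** 2 * m ** 3 / (2 * a ** 3)    # predicted coefficient of e^{2 lam t}
print(rho / math.exp(2 * lam * t), C)   # the two numbers should be close
```

Numerically, the ratio $\rho(t)/e^{2\lambda t}$ approaches $\lambda^2 m^3/(2a^3)$ regardless of the ingoing initial data, in accordance with (\[BShorizon\]).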
![ Time evolution of the event horizon for concentric black rings in a plane in the $r_1-r_2$ plane. Coordinate values of the event horizon at each time slice are plotted. We set parameters $\lambda =1,~m_1=m_2=1/4,~a_1=1,~a_2=2$.[]{data-label="fig:chorizon"}](chorizon6.eps "fig:"){width="0.33\linewidth"}     ![](chorizon5.eps "fig:"){width="0.33\linewidth"} ![](chorizon4.eps "fig:"){width="0.33\linewidth"}     ![](chorizon3.eps "fig:"){width="0.33\linewidth"} ![](chorizon2.eps "fig:"){width="0.33\linewidth"}     ![](chorizon1.eps "fig:"){width="0.33\linewidth"} ![ Time evolution of the event horizon for orthogonal black rings in the $r_1-r_2$ plane. Coordinate values of the event horizon at each time slice are plotted. We set parameters $\lambda =1,~m=n=1/4,~a=b=1$.[]{data-label="fig:ohorizon"}](ohorizon6.eps "fig:"){width="0.33\linewidth"}     ![](ohorizon5.eps "fig:"){width="0.33\linewidth"} ![](ohorizon4.eps "fig:"){width="0.33\linewidth"}     ![](ohorizon3.eps "fig:"){width="0.33\linewidth"} ![](ohorizon2.eps "fig:"){width="0.33\linewidth"}     ![](ohorizon1.eps "fig:"){width="0.33\linewidth"} concentric black rings in a plane --------------------------------- In this case, $\Psi$ is given by $$\begin{aligned} \Psi &= \frac{m_1}{\sqrt{(r_1 + a_1)^2 + r_2^2} \sqrt{(r_1 - a_1)^2 + r_2^2}} + \frac{m_2}{\sqrt{(r_1 + a_2)^2 + r_2^2} \sqrt{(r_1 - a_2)^2 + r_2^2}}, \label{ringsource3}\end{aligned}$$ which is constructed from two ring sources in a plane. 
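As a sanity check, harmonicity (\[laplace\]) of such ring-source potentials away from the ring can be verified numerically. A minimal finite-difference sketch for a single ring term (the step size $h$ and the sample points are arbitrary choices):

```python
import math

def psi(r1, r2, m=1.0, a=1.0):
    """Single ring-source harmonic on E^4 (independent of phi_1, phi_2)."""
    s_plus = math.sqrt((r1 + a) ** 2 + r2 ** 2)
    s_minus = math.sqrt((r1 - a) ** 2 + r2 ** 2)
    return m / (s_plus * s_minus)

def laplacian_e4(f, r1, r2, h=1e-3):
    """Flat Laplacian on ds^2 = dr1^2 + r1^2 dphi1^2 + dr2^2 + r2^2 dphi2^2
    for an axisymmetric function of (r1, r2):
        (1/r1) d/dr1 (r1 df/dr1) + (1/r2) d/dr2 (r2 df/dr2),
    evaluated with central finite differences."""
    d2r1 = (f(r1 + h, r2) - 2 * f(r1, r2) + f(r1 - h, r2)) / h ** 2
    d1r1 = (f(r1 + h, r2) - f(r1 - h, r2)) / (2 * h)
    d2r2 = (f(r1, r2 + h) - 2 * f(r1, r2) + f(r1, r2 - h)) / h ** 2
    d1r2 = (f(r1, r2 + h) - f(r1, r2 - h)) / (2 * h)
    return d2r1 + d1r1 / r1 + d2r2 + d1r2 / r2

# Away from the ring (r1 = a, r2 = 0) the Laplacian should vanish.
print(laplacian_e4(psi, 2.0, 1.5))  # ~ 0 up to discretization error
```

The delta-function sources on the right-hand side of the last displayed equation appear only on the ring itself; everywhere else the potential is harmonic.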
We can find the location of the event horizon at each time slice, plotted in Fig.\[fig:chorizon\], in the same way as in the case of the single black ring solution. At $t=-0.0489$ there are two black rings near $r_1 = 1,~r_2=0$ and $r_1=2,~r_2=0$, and the topology of the inner black ring changes into ${\rm S}^3$ near $t=0.122$. The black hole with ${\rm S}^3$ topology and the ring near $r_1=2,~r_2=0$ coalesce into a single black hole with ${\rm S}^3$ topology near $t=0.904$. orthogonal black rings ---------------------- In this case, $\Psi$ is given by $$\begin{aligned} \Psi &= \frac{m}{\sqrt{(r_1 + a)^2 + r_2^2} \sqrt{(r_1 - a)^2 + r_2^2}} + \frac{n}{\sqrt{ r_1^2 + (r_2 + b)^2} \sqrt{ r_1^2 + (r_2 - b)^2}}, \label{ringsource4}\end{aligned}$$ which is constructed from two ring sources which are orthogonal. Similar to the above discussion, the event horizon of this geometry is plotted in Fig.\[fig:ohorizon\]. At $t=-0.916$ there are two black rings near $r_1 = 1,~r_2=0$ and $r_1=0,~r_2=1$, and at a time between $t=-0.916$ and $t=-0.811$ a black hole with ${\rm S}^3$ topology appears near $r_1=0,~r_2=0$. Finally they coalesce into a single black hole with ${\rm S}^3$ topology near $t=-0.182$. Summary and Discussion {#sec5} ====================== In this paper, we have discussed dynamical black ring solutions in the five dimensional Einstein-Maxwell system with a positive cosmological constant. Our solutions are constructed by using ring source harmonics on four-dimensional Euclidean space, analogously to the case of the supersymmetric black ring solution [@Gauntlett:2004qy]. In the case of a single ring source harmonic, the solutions describe the physical process in which a thin black ring at early time shrinks and changes into a single black hole as time increases. In general, our solutions can describe the coalescence of multi-black rings. All regular black ring solutions found so far have angular momenta to keep the balance between the gravitational force and the centrifugal force; otherwise there exist some singularities. 
On the other hand, our solutions do not rotate, i.e., do not have angular momentum, but clearly they have no conical singularities because of the way the solutions are constructed. We consider that this is because of the balance between the gravitational force and the electric force. One important point in this paper is that if we set $\lambda = 0$ our solutions are static singular solutions which have curvature singularities at the points where $\Psi$ diverges, but in the case of $\lambda \neq 0$ our solutions become regular in the region in which the time coordinate $t$ takes finite values, since the event horizon encloses the singularities. This suggests that other harmonics which were not focused on so far also give regular solutions with various horizon topologies. It may also be interesting to apply this method to the case of dimensions higher than five. While this paper was being prepared for submission, an interesting paper [@Ida:2009nd] appeared, in which black rings off the nuts on the Gibbons-Hawking space were discussed. Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to thank Hideki Ishihara for useful discussions. This work is supported by the JSPS Grant-in-Aid for Scientific Research No. 20$\cdot$7858. [99]{} R. Emparan and H. S. Reall, Phys. Rev. Lett.  [**88**]{}, 101101 (2002) \[arXiv:hep-th/0110260\]. Y. Morisawa and D. Ida, Phys. Rev.  D [**69**]{}, 124005 (2004) \[arXiv:gr-qc/0401100\]. S. Hollands and S. Yazadjiev, Commun. Math. Phys.  [**283**]{}, 749 (2008) \[arXiv:0707.2775 \[gr-qc\]\]. S. Hollands and S. Yazadjiev, Class. Quant. Grav.  [**25**]{}, 095010 (2008) \[arXiv:0711.1722 \[gr-qc\]\]. Y. Morisawa, S. Tomizawa and Y. Yasui, Phys. Rev.  D [**77**]{}, 064019 (2008) \[arXiv:0710.4600 \[hep-th\]\]. S. Hollands and S. Yazadjiev, arXiv:0812.3036 \[gr-qc\]. A. A. Pomeransky and R. A. Sen’kov, arXiv:hep-th/0612005. H. Elvang and P. Figueras, JHEP [**0705**]{}, 050 (2007) \[arXiv:hep-th/0701035\]. H. 
Iguchi and T. Mishima, Phys. Rev.  D [**75**]{}, 064018 (2007) \[Erratum-ibid.  D [**78**]{}, 069903 (2008)\] \[arXiv:hep-th/0701043\]. J. Evslin and C. Krishnan, arXiv:0706.1231 \[hep-th\]. K. Izumi, Prog. Theor. Phys.  [**119**]{}, 757 (2008) \[arXiv:0712.0902 \[hep-th\]\]. H. Elvang and M. J. Rodriguez, JHEP [**0804**]{}, 045 (2008) \[arXiv:0712.2425 \[hep-th\]\]. H. Elvang, R. Emparan, D. Mateos and H. S. Reall, Phys. Rev. Lett.  [**93**]{}, 211302 (2004) \[arXiv:hep-th/0407065\]. J. P. Gauntlett and J. B. Gutowski, Phys. Rev.  D [**71**]{}, 045002 (2005) \[arXiv:hep-th/0408122\]. H. Ishihara and K. Matsuno, Prog. Theor. Phys.  [**116**]{}, 417 (2006) \[arXiv:hep-th/0510094\]. J. P. Gauntlett, J. B. Gutowski, C. M. Hull, S. Pakis and H. S. Reall, Class. Quant. Grav.  [**20**]{}, 4587 (2003) \[arXiv:hep-th/0209114\]. H. Ishihara, M. Kimura, K. Matsuno and S. Tomizawa, Class. Quant. Grav.  [**23**]{}, 6919 (2006) \[arXiv:hep-th/0605030\]. H. Ishihara, M. Kimura, K. Matsuno and S. Tomizawa, Phys. Rev.  D [**74**]{}, 047501 (2006) \[arXiv:hep-th/0607035\]. S. S. Yazadjiev, Phys. Rev.  D [**76**]{}, 064011 (2007) \[arXiv:0705.1840 \[hep-th\]\]. S. S. Yazadjiev, Phys. Rev.  D [**78**]{}, 064032 (2008) \[arXiv:0805.1600 \[hep-th\]\]. R. Emparan and H. S. Reall, Class. Quant. Grav.  [**23**]{}, R169 (2006) \[arXiv:hep-th/0608012\]. R. Emparan and H. S. Reall, Living Rev. Rel.  [**11**]{}, 6 (2008) \[arXiv:0801.3471 \[hep-th\]\]. C. S. Chu and S. H. Dai, Phys. Rev.  D [**75**]{}, 064016 (2007) \[arXiv:hep-th/0611325\]. H. K. Kunduri, J. Lucietti and H. S. Reall, JHEP [**0702**]{}, 026 (2007) \[arXiv:hep-th/0611351\]. M. M. Caldarelli, R. Emparan and M. J. Rodriguez, JHEP [**0811**]{}, 011 (2008) \[arXiv:0806.1954 \[hep-th\]\]. D. Kastor and J. H. Traschen, Phys. Rev.  D [**47**]{}, 5370 (1993) \[arXiv:hep-th/9212035\]. L. A. J. London, Nucl. Phys.  B [**434**]{}, 709 (1995). S. D. Majumdar, Phys. Rev.  [**72**]{}, 390 (1947). A. Papaetrou, Proc. Roy. Irish Acad. (Sect. 
A) A [**51**]{} (1947) 191. J. B. Hartle and S. W. Hawking, Commun. Math. Phys.  [**26**]{}, 87 (1972). R. C. Myers, Phys. Rev.  D [**35**]{}, 455 (1987). D. L. Welch, Phys. Rev.  D [**52**]{}, 985 (1995) \[arXiv:hep-th/9502146\]. G. N. Candlish and H. S. Reall, Class. Quant. Grav.  [**24**]{}, 6025 (2007) \[arXiv:0707.4420 \[gr-qc\]\]. G. N. Candlish, arXiv:0904.3885 \[hep-th\]. M. Kimura, Phys. Rev.  D [**78**]{}, 047504 (2008) \[arXiv:0805.1125 \[gr-qc\]\]. G. W. Gibbons and S. W. Hawking, Phys. Lett.  B [**78**]{}, 430 (1978). H. Ishihara, M. Kimura and S. Tomizawa, Class. Quant. Grav.  [**23**]{}, L89 (2006) \[arXiv:hep-th/0609165\]. C. M. Yoo, H. Ishihara, M. Kimura, K. Matsuno and S. Tomizawa, Class. Quant. Grav.  [**25**]{}, 095017 (2008) \[arXiv:0708.0708 \[gr-qc\]\]. D. Ida, H. Ishihara, M. Kimura, K. Matsuno, Y. Morisawa and S. Tomizawa, Class. Quant. Grav.  [**24**]{}, 3141 (2007) \[arXiv:hep-th/0702148\]. D. Ida, K. i. Nakao, M. Siino and S. A. Hayward, Phys. Rev.  D [**58**]{}, 121501 (1998). G. T. Horowitz and K. Maeda, Phys. Rev.  D [**65**]{}, 104028 (2002) \[arXiv:hep-th/0201241\]. D. Ida, arXiv:0904.3581 \[gr-qc\]. [^1]: E-mail:[email protected] [^2]: However, some partial achievements were obtained in [@Morisawa:2004tc; @Hollands:2007aj; @Hollands:2007qf; @Morisawa:2007di]. [^3]: If we take the cosmological constant to zero, the solution [@Kastor:1992nn] reduces to the Majumdar-Papapetrou solution [@Majumdar:1947eu; @Papaetrou:1947ib; @Hartle:1972ya], which describes static multi-black holes. The construction of the solution is possible because of a force balance between the gravitational and Coulomb forces. The higher-dimensional generalizations of these multi-black holes are discussed in [@Myers:1986rx; @Gauntlett:2002nw; @Gauntlett:2004qy; @Ishihara:2006iv; @Ishihara:2006pb], and the smoothness of the horizons of higher-dimensional multi-black holes has also been investigated recently in [@Welch:1995dh; @Candlish:2007fh; @Candlish:2009vy; @Kimura:2008cq]. 
[^4]: In [@Ishihara:2006ig; @Ida:2007vi] the case of Gibbons-Hawking base space is discussed.
--- abstract: 'In 1970, based on newly available empirical evidence, a remarkable monotonicity property for $\vert \zeta(z) \vert$ was conjectured by R. Spira. The $\zeta$-monotonicity property can be written as follows: $$\vert \zeta (x_2 + y i ) \vert < \vert \zeta \left ( x_1 +y i \right )\vert \hspace{0.5cm} \textrm {for any } \hspace{0.25cm} x_1 < x_2 \leq 0.5 \textrm{ and } 6.29 <y.$$ In this work we present an experimental study of the monotonicity conjecture, in the course of which new properties of $\zeta(z)$ are discovered: for instance, the spectrum of semi-limits $ \lambda(z) \subset \mathbb{R}$ and the core function $C(z)$, which serves as a non-chaotic simplification of $\zeta(z)$ to the left of the critical line.' author: - Yochay Jerby title: An experimental study of the monotonicity property of the Riemann zeta function --- Introduction - The Riemann hypothesis and monotonicity ====================================================== In 1970, based on newly available empirical evidence, a remarkable monotonicity property for $\vert \zeta(z) \vert$ was conjectured by R. Spira: **The $\zeta$-monotonicity conjecture ([@S1]): For any $y>6.29$ the function $\vert \zeta \left ( x +y i \right )\vert$ is strictly decreasing in the half-line $x<0.5$.** Clearly, monotonicity implies the Riemann hypothesis. In fact, the two conjectures have been shown to be equivalent, see [@S1; @SZ; @MSZ] and [@SC] for an analog for Riemann’s $\xi$-function. Figure 1 illustrates the $\zeta$-monotonicity property in the domain $0<y< 10^4$. This work is devoted to a further, modern, experimental study of the $\zeta$-monotonicity conjecture and related properties. In particular, this study leads to the discovery of various other new fundamental properties of zeta. 
For instance: the *spectrum of semi-limits* of the partial sums, which we view as governing the chaotic part of zeta, and the definition of the *core*, $C(z)$, which serves as a non-chaotic simplification of $\zeta(z)$ to the left of the critical line. A few effective remarks on the monotonicity property {#s:left} ==================================================== Let us consider the function $$\label{eq:4} \eta(y,t) := e^t \left ( \vert \zeta(0.5 \cdot (1-e^{-t}) +yi ) \vert - \vert \zeta(0.5+yi) \vert \right ).$$ The $\zeta$-monotonicity conjecture implies that $\eta(y,t)$ is positive for $y \geq 6.29 $ and $ t \geq 0$ or, equivalently, that the function $log ( \eta(y,t))$ is well-defined. Figure 2 shows, for instance, the values of $log ( \eta(y,t)) $ for $t=0$ and $t=10$ in the domain $6.29 \leq y \leq 2 \cdot 10^3 $: ![Graph of $log (\eta(y,0))$ (a) and $log( \eta(y,10))$ (b) for $6.29 \leq y \leq 2 \cdot 10^3$.](36.jpg) Note that the function $log(\eta(y,0))$ seems to be not only well-defined but also to fluctuate, at its core, around some strictly increasing function. However, as $t$ increases, it becomes less straightforward to discern that $log(\eta(y,t))$ is well-defined. It turns out that the function $log (\eta(y,t))$ also admits the following remarkable property: it can be bounded from below in terms of $log(\vert \zeta(0.5+yi) \vert)$ itself, in the region $y>6.29$ (compare, for instance, Fig. 3)! ![Graph of $log (\eta(y,10))$ and $log (\vert \zeta(0.5+yi) \vert ) -3$ over $0 \leq y \leq 50$. \[overflow\]](6-1.jpg) This leads us to consider, instead of $\eta(y,t)$, the following function $$\label{eq:5} \widetilde{\eta} (y,t):= e^{t} \cdot \left ( \frac{ \vert \zeta (0.5(1-e^{-t})+yi) \vert }{ \vert \zeta(0.5 +yi) \vert } -1 \right ),$$ which is defined for all $(y,t)$ such that $\zeta(0.5+yi) \neq 0 $ and is positive exactly when $\eta(y,t)$ is. 
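The positivity of $\widetilde{\eta}(y,t)$ can be probed numerically. Below is a minimal sketch, using the classical alternating series for $\zeta(z)$ in the critical strip (recalled in Section 3), with one pairwise average of consecutive partial sums to accelerate its slow convergence; the truncation $N$ and the sample point are our choices:

```python
import cmath
import math

def zeta(z, N=100000):
    """Approximate zeta(z) for Re(z) > 0 via the alternating series
    zeta(z) = (1 - 2^{1-z})^{-1} sum_{k>=1} (-1)^{k+1} k^{-z},
    averaging two consecutive partial sums to speed up convergence."""
    z = complex(z)
    s, sign = 0j, 1.0
    for k in range(1, N + 1):
        s += sign * cmath.exp(-z * math.log(k))  # sign * k^{-z}
        sign = -sign
    s_next = s + sign * cmath.exp(-z * math.log(N + 1))
    return ((s + s_next) / 2) / (1 - 2 ** (1 - z))

def eta_tilde(y, t):
    """e^t ( |zeta(0.5(1 - e^{-t}) + yi)| / |zeta(0.5 + yi)| - 1 ), eq. (5)."""
    x = 0.5 * (1 - math.exp(-t))
    return math.exp(t) * (abs(zeta(complex(x, y))) / abs(zeta(complex(0.5, y))) - 1)

# Sanity check of the series at a classical value, then a sample of eq. (5):
print(abs(zeta(2) - math.pi ** 2 / 6))  # should be tiny
print(eta_tilde(20.0, 1.0))             # positive in this range, in accordance with Fig. 4
```

The same routine, scanned over a grid in $(y,t)$, reproduces the qualitative behavior of Figures 2 and 4.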
The advantage of $\widetilde{\eta} (y,t)$ over $\eta(y,t)$ is that, contrary to $\eta(y,t)$, the function $\widetilde{\eta} (y,t)$ appears to be not only strictly positive, but actually seems to be bounded from below by a rather well-behaved, *smooth, non-chaotic, increasing* function $\widetilde{X}(y,t)$, for any given $t$ (as Fig. 4 illustrates). ![Graph of $\widetilde{\eta} (y,t)$ over $0<y<75$ and $t=0,1,2,3$ (a) and $\widetilde{\eta} (y,3)$ over $0 \leq y \leq 10^4$ (b).](7_2.jpg) Recall that the functional equation of $\zeta(z)$ is given by $ \zeta(z) = \chi (z) \cdot \zeta(1-z)$ where $ \chi(z) := 2^z \pi^{z-1} sin \left ( \frac{\pi z }{2} \right ) \Gamma(1-z)$. It turns out that a rather good first order approximation of $\widetilde{X}(y,t)$ can be given in terms of the following function: $$\label{eq:7} X(y,t):= e^{t} \cdot \left ( \frac{ \vert \chi (0.5(1-e^{-t})+yi) \vert }{ \vert \chi(0.5 +yi) \vert } -1 \right ) = e^t \cdot \left ( \vert \chi (0.5(1-e^{-t})+yi) \vert -1 \right ).$$ Figure 5, for instance, shows a graph of $log(\widetilde{\eta} (y,10) ) $ (blue) and $log(X(y,10))-0.75$ (purple) for $0<y<10^4$: ![Graph of $log(\widetilde{\eta}(y,10))$ and $log(X(y,10))-0.75$ over $0<y<10^4$.](9-1.jpg) Our aim is to explain why an increasing non-chaotic $\widetilde{X}(y,t)$ such that $\widetilde{X}(y,t) < \widetilde{\eta} (y,t)$ should continue to exist for any $y \geq 6.29$ and $t \geq 0$. In order to do so, we introduce the spectrum of semi-limits and the core function in Section 3. The spectrum of semi-limits and the core function {#s:left} ================================================= Recall that in the critical strip $0 \leq Re(z) \leq 1$ zeta is given by $\zeta(z)=\frac{1}{1-2^{1-z}} \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^z}$. 
For $n \in \mathbb{N}$ consider the partial sums $$\label{eq:10} S_n(z):=\frac{1}{1-2^{1-z}} \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k^z}.$$ The starting point of this section is the observation of a few special properties of the partial sums $S_n(z)$. For instance, a typical example of the behavior of $S_n(z)$ is presented in Fig. 6:

![Values of $\vert S_n(0.5 \cdot (1-e^{-2}) +2 \cdot 10^4 \cdot i) \vert$ for $n=0,...,2 \cdot 10^4$.](34.jpg)

As one can see, the sequence $S_n(z)$ actually fluctuates around various other values for a while before “starting to approximate” $\zeta(z)$ (purple), and the “surge towards $\zeta(z)$” is made around the “critical” stage $n \approx Im(z)/3$. In fact, as $Im(z)$ grows, the interval $[0,Im(z)/3]$ becomes divided into more and more sub-segments, over each of which $S_n(z)$ fluctuates around a certain fixed *semi-limit*, and the transition between two such *semi-limits* is made by steep surges (as in the picture). We refer to the collection of these semi-limits $\lambda(z) \subset \mathbb{R}$ as the *spectrum of the value $z$*. In particular, in view of the above, for $\alpha \in [0,1]$, we define the *$\alpha$-truncation of zeta*: $$\label{eq:11} \zeta_{\alpha}(z):=\frac{1}{1-2^{1-z}} \sum_{k=1+ [(1-\alpha) \cdot Im(z)]}^{\infty} \frac{(-1)^{k+1}}{k^z},$$ where $[y] \in \mathbb{N}$ stands for the *integer part* of the real number $y$. An example is presented in Fig. 7:

![Values of $\vert \zeta_{\alpha}(0.5 \cdot (1-e^{-2})+2 \cdot 10^4 \cdot i) \vert$ for $0 \leq \alpha \leq 1$.](13.jpg)

It is very interesting to study, in general, the properties of the *spectrum* of *semi-limits* around which the $\alpha$-truncation $\zeta_{\alpha}(z)$ fluctuates (see remark 3.1 below). However, of special interest for us is the first of them (the last being the limit $\zeta(z)$ itself).
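The partial sums above are straightforward to compute; the following is a minimal sketch (the function name `S_n` and the sample points are our own, for illustration only). At a point of absolute convergence such as $z=2$ the sums settle quickly onto $\zeta(2)=\pi^2/6$, while on the critical line one can sample $|S_n(z)|$ for several $n$ and observe the plateau-and-surge behavior described in the text.

```python
import cmath
import math

def S_n(z, n):
    """Partial sum S_n(z) of eq. (10):
    S_n(z) = 1/(1 - 2^(1-z)) * sum_{k=1}^{n} (-1)^(k+1) * k^(-z)."""
    prefactor = 1.0 / (1.0 - cmath.exp((1 - z) * math.log(2)))
    total = sum((-1) ** (k + 1) * cmath.exp(-z * math.log(k))
                for k in range(1, n + 1))
    return prefactor * total

# Sanity check at a point of absolute convergence: S_n(2) -> zeta(2) = pi^2/6.
print(abs(S_n(complex(2, 0), 4000) - math.pi ** 2 / 6))

# On the critical line the partial sums fluctuate around several plateaus
# ("semi-limits") before surging towards zeta(z); sample a few stages:
z = complex(0.5, 50.0)
plateau_samples = [abs(S_n(z, n)) for n in (4, 8, 12, 16, 60)]
```

Plotting `abs(S_n(z, n))` against `n` for a point with large $Im(z)$ reproduces pictures of the kind shown in Fig. 6.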
In particular, truncating from $\zeta(z)$ all the semi-limits except for the first one leads to the following definition:

**Definition 3.1: $C(z):=\vert \zeta_{0.8}(z) \vert$ is the core function of $\zeta(z)$.**

Let us note that the value $\alpha=0.8$ is simply taken to represent the first spectrum value, which occurs for $\zeta_{\alpha}(z)$ around $\alpha \approx 2/3$. We call $C(z)$ the core of $\zeta(z)$ since, to the left of the critical line, the core turns out to serve as a non-chaotic simplification of $\vert \zeta(z) \vert$, as illustrated in Fig. 8:

![Graph of $\log(C(yi))$ and $\log \vert \zeta(yi) \vert$ for $0 \leq y \leq 2 \cdot 10^4$ (a) and $500 \leq y \leq 550$ (b).](15-3.jpg)

Let us note that, since by truncating the semi-limits in the spectrum $\lambda(z)$ we obtained the non-chaotic core function $C(z)$, we view $\lambda(z)$ as encoding the chaotic, random, features of zeta. Moreover, as mentioned, the number of elements $N(z) = \vert \lambda(z) \vert$ grows as $Im(z) \rightarrow \infty$ (see further discussion in section 4). The importance of the core $C(z)$ to the study of the $\zeta$-monotonicity property is that it can be viewed as the part of $\zeta(z)$ that is efficiently approximated in terms of $\vert \chi(z) \vert$. Figure 9 illustrates this for $Re(z)=0$:

![Graph of $\log(C(yi))$ and $\log(2.2 \vert \chi (yi) \vert), \log(0.6 \vert \chi(yi) \vert)$, approximating $\log(C(yi))$ from above and below, for $0 \leq y \leq 2 \cdot 10^4$.](16-1.jpg)

In particular, the “mysterious difference” between $\widetilde{X}(y,t)$ and $X(y,t)$ can be viewed as a result of the re-addition of the chaotic elements of the spectrum $\lambda(z)$. Moreover, recall that the core, $C(z)= \vert \zeta_{0.8}(z) \vert$, by definition, conceptually represents $80 \%$ of the relevant elements of the series defining $\vert \zeta(z) \vert$. Hence, we turn to discuss the addition of the remaining, chaotic, $20 \%$ to $\zeta(z)$ and, mainly, to $\widetilde{\eta} (y,t)$.
For $a \in [0,1]$ set: $$\label{eq:12} C_a(z) :=\left \vert \zeta_{0.8}(z) + \frac{a}{(1-2^{1-z} ) } \cdot \sum_{k=1}^{[0.2Im(z)]} \frac{ (-1)^{k+1}}{k^z} \right \vert,$$ interpolating between the core, $C(z)=C_0(z)$, and zeta itself, $\vert \zeta (z) \vert = C_1(z)$. In view of Section 2 set: $$\label{eq:13} \widetilde{\eta} _a(y,t) := e^t \cdot \left ( \frac{C_a(0.5(1-e^{-t})+yi)}{C_a(0.5+yi)}-1 \right ).$$ Figure 10 shows the remarkably structured way in which $\widetilde{\eta} _a(y,t)$ transitions from $\widetilde{\eta} _0(y,t)$ (blue) to $\widetilde{\eta} (y,t)=\widetilde{\eta} _1(y,t)$ (purple) (contrary to the chaotic transition of $C(z)$ to $\vert \zeta(z) \vert$):

![$\widetilde{\eta}_{k/10} (y,5)$ with $k = 0,...,10$ for $15 \leq y \leq 60$ (a) and $43 \leq y \leq 57$ (b) and of $\widetilde{\eta} _a (y,5)$ over $(y,a) \in [43,57] \times [0,1]$ front (c) rear (d).](17-3.jpg)

The key feature is that, on the one hand, the structured transition from $\widetilde{\eta} _0(y,t)$ to $\widetilde{\eta} _1(y,t)$, described in Fig. 10, is independent of $y$. On the other hand, by definition, the number of elements distinguishing between $\widetilde{\eta} _0(y,t)$ and $\widetilde{\eta} _1(y,t)$ is given by $[0.2 y]$. Hence, conceptually, the pattern in Fig. 10 can be explained in terms of “induction on $[0.2y]$”. Finally, let us note the following remark regarding the poles: Set $$\label{eq:14} \begin{array}{ccc} \eta(y,t) : =e^t ( \vert \zeta(0.5(1-e^{-t})+yi) \vert - \vert \zeta(0.5 +yi) \vert ) & ; & \theta(y):= \vert \zeta (0.5 +yi) \vert. \end{array}$$ In fact, it seems possible to locally *resolve* the poles altogether by replacing $\theta(y)$ with a smooth non-vanishing function $\widetilde{\theta} (y) \neq 0$, coinciding with $\theta(y)$ away from small neighborhoods of the zeros, and keeping the property $\widetilde{X} (y,t) < \widetilde{\widetilde{\eta} }(y,t):= \eta(y,t) / \widetilde{\theta}(y)$.
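The interpolation $C_a(z)$ can be sketched numerically by replacing the infinite tail of the series with a finite cutoff $N$ (the cutoff, the function names and the sample point are our own illustrative choices, not part of the paper). By construction, $a=0$ gives the truncated core $\vert\zeta_{0.8}(z)\vert$, and $a=1$ exactly rebuilds the full truncated series for $\vert\zeta(z)\vert$, since the re-added head consists of precisely the first $[0.2\,Im(z)]$ terms.

```python
import cmath
import math

def eta_terms(z, k_start, k_end):
    """sum_{k=k_start}^{k_end} (-1)^(k+1) * k^(-z)."""
    return sum((-1) ** (k + 1) * cmath.exp(-z * math.log(k))
               for k in range(k_start, k_end + 1))

def C_a(z, a, N=20000):
    """Finite-cutoff sketch of C_a(z) from eq. (12): the core zeta_{0.8}(z)
    (terms with k > [0.2*Im(z)]) plus a fraction `a` of the first
    [0.2*Im(z)] "chaotic" terms, everything truncated at k = N."""
    pref = 1.0 / (1.0 - cmath.exp((1 - z) * math.log(2)))
    m = int(0.2 * z.imag)                 # [0.2 * Im(z)] re-added terms
    core = pref * eta_terms(z, m + 1, N)  # truncated zeta_{0.8}(z)
    head = pref * eta_terms(z, 1, m)      # the first m terms of the series
    return abs(core + a * head)

z = complex(0.5, 100.0)
core_val = C_a(z, 0.0)  # the (truncated) core C(z) of Definition 3.1
full_val = C_a(z, 1.0)  # the full truncated series for |zeta(z)|
```

Sampling `C_a(z, a)` for `a` between 0 and 1 gives the interpolation pictured in Fig. 10.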
In order to understand how the local correction should occur let us consider $\widetilde{y}_1 \approx 14.1347$, the imaginary part of the first zero of zeta on the critical line. Figure 11 shows the behavior of $\eta(y,t)$ and $\theta(y)$ in a small neighborhood of $\widetilde{y}_1$:

![Graph of $\log(\theta(\widetilde{y}_1+ \epsilon)), \log(\eta(\widetilde{y}_1+ \epsilon,t))$ for $\vert \epsilon \vert \leq 0.0001$ and $0 \leq t \leq 20$.](27.jpg)

In particular, we can take $\widetilde{\theta}(y) = \max ( \theta(y) , e^{-20})$ as the required correction in the considered neighborhood of $\widetilde{y}_1$. Let $\widetilde{z}_k = 0.5+\widetilde{y}_k i$ be the $k$-th zero of zeta on the critical line. Empirical verification shows that the typical local behavior of $\log(\eta(y,t))$ and $\log( \theta(y))$ in a neighborhood of $\widetilde{y}_k$ for any $k$ is, in fact, similar to that presented in Fig. 11 for $\widetilde{y}_1$.

**Remark 3.2 (The right-hand side): The function $\log \vert \zeta( x +yi) \vert$ is known to be unbounded, as a function of $y$, for $0.5 \leq x \leq 1$, see, for instance, Theorem 11.9 of [@T]. In view of this it is interesting to point out Fig. 12:**

![Graph of $\log \vert \zeta(0.95+e^{0.0001t} i) \vert$ over $t=0,...,250000$.](19.jpg)

First, as one can see from Fig. 12, even though $\log \vert \zeta(0.95 +yi) \vert$ is guaranteed to be unbounded, it should nevertheless have an extremely slow rate of growth. However, in view of Fig. 12 it also becomes interesting to ask the following more refined global question:

**Question: For $0.5 \leq x \leq 1$ and $s \in \mathbb{R}$ let $Y(s,x)$ be the minimal value of $0 \leq Y$ for which $\log \vert \zeta(x +Yi) \vert = s$. What can be said about the function $Y(s,x)$?**

For instance, Fig. 12 shows that $e^{25}<Y(\pm 2,0.95)$. The above question has, of course, direct bearing on zeros of zeta.
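The proposed local correction $\widetilde{\theta}(y)=\max(\theta(y),e^{-20})$ is simple enough to state directly in code; the sketch below treats the values of $\theta(y)=\vert\zeta(0.5+yi)\vert$ as given numbers (computing them accurately is beside the point here), and the sample values are purely illustrative.

```python
import math

E_FLOOR = math.exp(-20)  # the floor suggested by the behavior in Fig. 11

def theta_tilde(theta_val, floor=E_FLOOR):
    """Regularized theta(y): coincides with theta(y) away from the zeros of
    zeta, but never drops below e^{-20}, so that quotients of the form
    eta(y,t) / theta_tilde(y) stay finite across the zeros."""
    return max(theta_val, floor)

# Away from a zero theta(y) is left untouched; near a zero it is floored:
samples = [2.5, 1e-3, 1e-12, 0.0]
regularized = [theta_tilde(v) for v in samples]
```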
As mentioned, the results of [@T] and, later, the much more general universality theorem of Voronin [@V] imply that $\log \vert \zeta(x+yi) \vert$ is unbounded in $0.5 \leq x \leq 1$. However, these classical results are non-quantitative, in the sense that they do not give quantitative information on $Y(s,x)$ beyond guaranteeing that it is unbounded. In particular, this does not exclude the existence of so-called “ghost zeros”, that is, an infinite number of extremely tiny non-zero values which (a) can hardly be discerned from a real zero and (b) appear practically anywhere to the right of the critical line. However, Fig. 12 shows that this is not exactly the case. Indeed, at least for $0<y<e^{25}$ the size of $\vert \zeta (0.95+yi) \vert$ is globally bounded from below by $e^{-2}$ and, as $y$ grows, it is natural to suggest that it would be possible to extend this bound by a (very slowly) decreasing function of $y$. It is important to note in this context the results of Garunkštis on effective versions of Voronin’s universality theorem, specifically Corollary 2 of [@G], which also seem to suggest very slow asymptotics for $Y(s,x)$. Moreover, in the context of this work, it is interesting to note that the question of the description of $Y(s,x)$ can be viewed as the right-hand side analog of the description of the “mysterious difference” between $\widetilde{X}(y,t)$ and $X(y,t)$.

Concluding remarks {#s:conclusions}
==================

In this work we conducted an experimental study of the $\zeta$-monotonicity conjecture [@BB; @S1; @SZ], which is an equivalent reformulation of the Riemann hypothesis, see [@MSZ; @S1; @SC].
This led us to discover the spectrum of semi-limits $\lambda(z) \subset \mathbb{R}$ (which we view as dominating the chaotic features of zeta) and the existence of the core function $C(z)$, which we view as a non-chaotic simplification of $\vert \zeta(z) \vert$ (to the left of the critical line), obtained by truncating the semi-limits in $\lambda(z)$ aside from the first one. As mentioned, for a given $z \in \mathbb{C}$, the spectrum is a collection of random-like values such that $N(z) = \vert \lambda (z) \vert \rightarrow \infty$ when $Im(z) \rightarrow \infty$. One of the fascinating aspects in the modern study of zeta is the discovery of various relations to quantum chaos, see [@Bog] and references therein. Specifically, there are conjectural relations between statistical properties of the zeros of zeta and statistical properties of $\lambda(M)$, the spectrum of eigenvalues of random $N \times N$ matrices in the GUE (Gaussian Unitary Ensemble), as $N \rightarrow \infty$. Even though there is vast empirical evidence to back up the various quantum chaos conjectures, a conceptual explanation of the observed relation between zeros of zeta and eigenvalues of random matrices is still largely missing. In view of this, it is interesting to ask whether the monotonicity reformulation can be extended to relate the zeros of zeta, the spectrum of semi-limits $\lambda(z)$ and the spectrum of eigenvalues of random matrices $\lambda(M)$.

[10]{} J. Borwein and D. Bailey. Mathematics by Experiment, 2nd Edition: Plausible Reasoning in the 21st Century. Taylor $\&$ Francis, 2004.

E. Bogomolny. Riemann Zeta Function and Quantum Chaos. Progress of Theoretical Physics Supplement, Volume 166, 2007, pages 19–36.

R. Garunkštis. The effective universality theorem for the Riemann zeta function. Proceedings of the session in analytic number theory and Diophantine equations, MPI-Bonn, January–June 2002, Ed. by D. R. Heath-Brown, B. Z.
Moroz, Bonner mathematische Schriften, 360 (2003).

Y. Matiyasevich, F. Saidak and P. Zvengrowski. Horizontal monotonicity of the modulus of the Riemann zeta-function, and related functions. Acta Arith. 166 (2014), 189–200.

J. Sondow and C. Dumitrescu. A monotonicity property of Riemann’s xi function and a reformulation of the Riemann hypothesis. Periodica Mathematica Hungarica, March 2010, Volume 60, Issue 1, 37–40.

R. Spira. Zeros of $\zeta'(s)$ and the Riemann hypothesis. Illinois J. Math. 17 (1973), no. 1, 147–152.

R. Spira. Check Values, Zeros and Fortran Programs for the Riemann Zeta Function and its First Three Derivatives. Report No. 1, University Computation Center, University of Tennessee, Knoxville, Tennessee.

F. Saidak and P. D. Zvengrowski. On the modulus of the Riemann zeta function in the critical strip. Mathematica Slovaca, 2003, Volume 53, Issue 2, pages 145–172.

E. C. Titchmarsh. The Theory of the Riemann Zeta-function (2nd ed., revised by D. R. Heath-Brown). Oxford University Press, 1951 (1986).

S. M. Voronin. Theorem on the Universality of the Riemann Zeta Function. Izv. Akad. Nauk SSSR Ser. Mat., 1975, 475–486.
--- abstract: | The static cylindrically symmetric solutions of the gravitating Abelian Higgs model form a two parameter family. In this paper we give a complete classification of the string-like solutions of this system. We show that the parameter plane is composed of two different regions with the following characteristics: One region contains the standard asymptotically conic cosmic string solutions together with a second kind of solutions with Melvin-like asymptotic behavior. The other region contains two types of solutions with bounded radial extension. The border between the two regions is the curve of maximal angular deficit of $2\pi$.\ author: - 'M. Christensen$^a$[^1]$\:$, A.L. Larsen$^a$[^2] $\:$ and Y. Verbin$^b$[^3]' title: | **Complete Classification of the String-like Solutions\ of the Gravitating Abelian Higgs Model\ ** --- $^a$ *Department of Physics, University of Odense,* *Campusvej 55, 5230 Odense M, Denmark* 0.4cm $^b$ *Department of Natural Sciences, The Open University of Israel,* *P.O.B. 39328, Tel Aviv 61392, Israel* 1.1cm [*PACS: 11.27.+d, 04.20.Jb, 04.40.Nr*]{} Introduction ============ \[secintro\] Cosmic strings [@vilsh; @KibbleH] were introduced into cosmology by Kibble [@Kibble1], Zel’dovich [@Zel] and Vilenkin [@Vil1] as (linear) topological defects, which may have been formed during phase transitions in the early universe. Cosmic strings are considered as possible sources for density perturbations and hence for structure formation in the universe. The simplest and most common field-theoretical model, which is used in order to describe the generation of cosmic strings during a phase transition, is the Abelian Higgs model. This model is known to have (magnetic) flux tube solutions [@NO], whose gravitational field is represented by an asymptotically conic geometry. These are the local cosmic string solutions. 
This conic space-time is a special case of the general static and cylindrically-symmetric vacuum solution of Einstein equations [@exact-sol], the so-called Kasner solution: $$\begin{aligned} ds^{2} = (kr)^{2a}dt^{2} - (kr)^{2c}dz^{2} - dr^{2} - \beta^{2} (kr)^{2(b-1)}r^{2}d{\phi}^2 \label{Kasner1}\end{aligned}$$ where $k$ sets the length scale while $\beta$ represents the asymptotic structure, as will be discussed below. Notice also that $(a, b, c)$ must satisfy the Kasner conditions: $$a + b + c=a^2 + b^2 + c^2=1 \label{Kasner2}$$ The standard conic cosmic string solution [@Vil1; @Garfinkle1] is characterized by an asymptotic behavior given by (\[Kasner1\]) with $a=c=0,\; b=1$, which is evidently locally flat. In this case, the parameter $\beta$ represents a conic angular deficit [@Marder2; @Bonnor], which is also related to the mass distribution of the source. A first approximation to the relation between the angular deficit $\delta\phi = 2 \pi (1-\beta)$ and the “inertial mass" (per unit length) $\tilde{m}$ of a local string was found to be [@Vil1; @Gott; @Hiscock; @Linet1]: $$\delta\phi = 8 \pi G\tilde{m} \label{angdef}$$ Further corrections to (\[angdef\]) were calculated in subsequent papers [@LagMatz; @GarfinkleLag; @LagGarfinkle; @CNV; @Ver]. In order to fully analyse the string-like solutions of the Abelian Higgs system one needs to solve the full system of coupled field equations for the gravitational field and matter fields (scalar + vector). But this is not necessary in order to obtain the asymptotic geometry. It is sufficient to note that for cylindrical symmetry the flux tube has the property ${\cal T}^0_0={\cal T}^{z}_{z}$, where ${\cal T}^\mu_\nu$ is the energy-momentum tensor. This means that the solution will have a symmetry under boosts along the string axis, i.e., $a=c$.
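The first-order relation (\[angdef\]) and the definition $\delta\phi = 2\pi(1-\beta)$ combine into a quick numerical sketch (the value $G\tilde{m}=10^{-6}$ below is purely illustrative, not taken from the paper):

```python
import math

def angular_deficit(G_m_tilde):
    """First-order deficit angle: delta_phi = 8*pi*G*m_tilde,
    with G*m_tilde the dimensionless mass per unit length."""
    return 8 * math.pi * G_m_tilde

def beta_from_deficit(delta_phi):
    """Invert delta_phi = 2*pi*(1 - beta) for the Kasner parameter beta."""
    return 1 - delta_phi / (2 * math.pi)

# Illustrative mass scale (hypothetical): G*m_tilde = 1e-6
d = angular_deficit(1e-6)
b = beta_from_deficit(d)

# At this first order, the maximal deficit of 2*pi, which plays a special
# role in the classification below, corresponds to G*m_tilde = 1/4:
d_max = angular_deficit(0.25)
```

Note that this linear relation only holds for light strings; the corrections cited above modify it well before the maximal deficit is reached.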
The Kasner conditions (\[Kasner2\]) then leave only two options; either the locally flat case: $$a=c=0 \;\;\; , \;\;\; b=1 \label{SL}$$ which we shall refer to as the cosmic string branch, or $$a=c=2/3\;\;\; ,\;\;\;b=-1/3 \label{MW}$$ which is the same behavior as that of the Melvin solution [@Melvin]. We shall therefore refer to (\[MW\]) as the Melvin branch. The Melvin branch does not provide the same characteristics used to describe a “standard" cosmic string, and thus it has been disregarded in previous investigations. However, the Melvin-like solution seems completely well-behaved and one may wonder whether it exists at all (asymptotically), or maybe it is just an artifact of the too general reasoning above. Although several authors [@CNV; @Ver; @Ortiz; @FIU; @Rayc] explicitly mention this possibility of a second Melvin branch of solutions in the Abelian Higgs system, it has never been properly discussed. In this paper we show that indeed the Melvin branch actually exists in the Abelian Higgs model. It turns out that each conic cosmic string solution has a “shadow" in the form of a corresponding solution in the Melvin branch. These two solutions, conic cosmic string and asymptotic Melvin, are found only in a part of the two dimensional parameter space of the system, bounded by the curve of maximal angular deficit of $2\pi$. There also seem to be some open questions in the literature [@LagGarfinkle; @Ortiz] about the nature of the solutions beyond the border of maximal angular deficit of $2\pi$. 
We will see that also in this region two types of solutions coexist, both of them with a finite radial extension.\ General analysis of the Abelian Higgs system ============================================ \[secgeneral\] The action of the gravitating Abelian Higgs model is: $$S = \int d^4 x \sqrt{\mid g\mid } \left({1\over 2}D_{\mu}\Phi ^ {\ast}D^{\mu}\Phi - {{\lambda }\over 4}(\Phi ^{\ast} \Phi - v^2)^2 - {1\over 4}{F}_{\mu \nu}{F}^{\mu \nu} + \frac{1}{16\pi G} {\cal R}\right) \label{higgsaction}$$ where ${\cal R}$ is the Ricci scalar, ${F}_{\mu \nu}$ the Abelian field strength, $\Phi$ is a complex scalar field with vacuum expectation value $v$ and $D_\mu = \nabla _{\mu} - ieA_{\mu}$ is the usual gauge covariant derivative. We use units in which $\hbar=c=1$. Because of the cylindrical symmetry of the source, we will use a line element of the form: $$ds^{2} = N^{2}(r)dt^{2} - d{r}^{2} - L^{2}(r)d{\phi}^2 - K^{2}(r)dz^{2} \label{lineelement}$$ and the usual Nielsen-Olesen ansatz for the +1 flux unit: $$\begin{aligned} \Phi=vf(r)e^{i\phi} \hspace{10 mm} , \hspace{10 mm} A_\mu dx^\mu = {1\over e}(1-P(r))d\phi \label{NOansatz}\end{aligned}$$ This gives rise to the following field equations for the Abelian Higgs flux tube: $$\begin{aligned} \frac{(NKLf')'}{NKL} + \left(\lambda v^2 (1-f^2) - \frac{P^2}{L^2}\right)f = 0 \\ \frac{L}{NK}\left(\frac{NK}{L} P'\right)' - e^2 v^2 f^2 P = 0 \label{fluxtube}\end{aligned}$$ With the line element (\[lineelement\]), the components of the Ricci tensor are: $$\begin{aligned} {\cal R}^{0}_{0} = - \frac{(LKN')'}{NLK} & , & {\cal R}^{r}_{r} = - \frac{N''}{N} - \frac{L''}{L} - \frac{K''}{K} \nonumber \\ {\cal R}^{\phi}_{\phi} = - \frac{(NKL')'}{NLK} &,& {\cal R}^{z}_{z} = - \frac{(NLK')'}{NLK} \label{Ricci}\end{aligned}$$ The source is described by the energy-momentum tensor with the following components: $$\begin{aligned} {\cal T}^{0}_{0} &=& \rho = \varepsilon _s +\varepsilon _v + \varepsilon _{w} + u \nonumber \\ {\cal T}^{r}_{r} &=& 
-p_r = -\varepsilon _s -\varepsilon _v + \varepsilon _{w} + u \nonumber \\ {\cal T}^{\phi}_{\phi} &=& -p_{\phi} = \varepsilon _s - \varepsilon _v -\varepsilon_{w} + u \nonumber \\ {\cal T}^{z}_{z} &=& -p_z = \rho \label{Tmunu}\end{aligned}$$ where: $$\begin{aligned} \varepsilon_s = {v^2 \over 2} f'^2 \hspace{5 mm},\hspace{5 mm} \varepsilon_v = \frac{P'^2}{2e^2 L^2} \hspace{5 mm},\hspace{5 mm} \varepsilon_{w} = \frac{v^2 P^2 f^2}{2L^2} \hspace{5 mm},\hspace{5 mm} u = \frac{\lambda v^4}{4} (1-f^2)^2 \label{densities}\end{aligned}$$ It turns out to be convenient to use Einstein equations in the form (${\cal T}\equiv {\cal T}^{\lambda}_{\lambda}$): $${\cal R}_{\mu\nu} = -8 \pi G({\cal T}_{\mu \nu} - \frac{1}{2} g_{\mu \nu}{\cal T}) \label{EinsteinR}$$ from which one obtains: $$\begin{aligned} \frac{(LKN')'}{NLK} &=& 4 \pi G(\rho +p_r+p_{\phi} +p_z)= 8 \pi G(\varepsilon_v - u) \label{Einst0}\end{aligned}$$ $$\begin{aligned} \frac{(NKL')'}{NLK} &=& -4 \pi G(\rho -p_r+p_{\phi} -p_z) = - 8 \pi G(\varepsilon_v + 2 \varepsilon_{w} + u) \label{Einstphi}\end{aligned}$$ $$\begin{aligned} \frac{(LNK')'}{NLK} &=& -4 \pi G(\rho -p_r-p_{\phi} +p_z)= 8 \pi G(\varepsilon_v - u) \label{Einst3}\end{aligned}$$ and instead of the “radial" part of (\[EinsteinR\]), we take the following combination: $$\begin{aligned} \frac{N'}{N} \frac{L'}{L} + \frac{L'}{L} \frac{K'}{K}+ \frac{K'}{K} \frac{N'}{N} = 8 \pi G p_r = 8 \pi G(\varepsilon _s +\varepsilon _v - \varepsilon _{w} - u) \label{constraint}\end{aligned}$$ which is not an independent equation but serves as a constraint. In vacuum, the right-hand-sides of these equations vanish, and the first three of them are trivially integrated. In this way we may get back Kasner’s line element (\[Kasner1\]). This is therefore the asymptotic form of the metric tensor around any (transversally) localized source and especially around an Abelian Higgs flux tube. 
Moreover, it is easy to get convinced that due to the symmetry under boosts along the string axis, $K=N$. The equations become more transparent if we express all lengths in terms of the scalar characteristic length scale $1/\sqrt{\lambda v^2}$ (the “correlation length" in the superconductivity terminology). We therefore change to the dimensionless length coordinate $x=\sqrt{\lambda v^2}r$ and we introduce the metric component $L(x)=\sqrt{\lambda v^2}L(r)$. We also introduce the two parameters $\alpha=e^2/\lambda$ and $\gamma=8\pi Gv^2$. In terms of these new quantities, we get a two parameter system of four coupled non-linear ordinary differential equations (the prime now denotes $d/dx$): $$\begin{aligned} \frac{(N^2 Lf')'}{N^2 L} + \left(1-f^2 - \frac{P^2}{L^2}\right)f = 0 \label{systemNO1}\\ \frac{L}{N^2}\left(\frac{N^2 P'}{L}\right)' - \alpha f^2 P = 0 \label{systemNO2}\end{aligned}$$ $$\begin{aligned} \frac{(LNN')'}{N^2 L} &=&\gamma\left(\frac{P'^2}{2\alpha L^2} - \frac{1}{4} (1-f^2)^2\right) \label{systemE1}\\ \frac{(N^2 L')'}{N^2 L} &=& - \gamma\left(\frac{P'^2}{2\alpha L^2} + \frac{P^2 f^2}{L^2} + \frac{1}{4} (1-f^2)^2\right) \label{systemE2}\end{aligned}$$ We have also to keep in mind the existence of the constraint (\[constraint\]), which gets the following form: $$\begin{aligned} \frac{N'}{N} \left(2\frac{L'}{L} + \frac{N'}{N}\right) = \gamma\left(\frac{f'^2}{2} + \frac{P'^2}{2\alpha L^2} - \frac{P^2 f^2}{2L^2} - \frac{1}{4}(1-f^2)^2 \right) \label{constraintmod}\end{aligned}$$ In order to get string-like solutions, the scalar and gauge fields should satisfy the following boundary conditions: $$\begin{aligned} f(0)=0 &, & \lim_{x\rightarrow \infty} f(x) = 1 \nonumber \\ P(0)=1 &, & \lim_{x\rightarrow \infty} P(x) = 0 \label{boundarycond}\end{aligned}$$ Moreover, regularity of the geometry on the symmetry axis $x=0$ will be guaranteed by the “initial conditions": $$\begin{aligned} L(0)=0 &, & L'(0) = 1 \nonumber \\ N(0) = 1 &, & N'(0) = 0 
\label{initcond}\end{aligned}$$ The purpose of the present paper is to map the two dimensional $\alpha$-$\gamma$ parameter space, and thereby to classify all the string-like solutions of the Abelian Higgs system. It is well-known that, even in flat space, the field equations can only be solved numerically. However, much can be said about the nature of the solutions even without explicitly solving the field equations [@Garfinkle1; @LagGarfinkle; @CNV; @FIU]. The standard Abelian Higgs string solution, usually considered in the literature, has a vanishing gravitational mass. This means that the spacetime around the string is locally flat except in the core of the string, while there is a non-trivial global effect, namely a conical structure of the space which is quantified by an angular deficit. However, this does not at all exhaust the possibilities of solutions of (\[systemNO1\])-(\[systemE2\]). There are further types of solutions with the same boundary and “initial" conditions, (\[boundarycond\])-(\[initcond\]), which are not asymptotically flat but have interesting physical interpretations. In this paper, we will show that a point in the $\alpha$-$\gamma$ plane always represents two solutions, except at the curve representing angular deficit of $2 \pi$.
The various solutions are distinguished by, among other things, their asymptotic geometries.\ For analysing the solutions and obtaining the above-mentioned features and some additional ones, we introduce the Tolman mass (per unit length), $M$: $$GM=2\pi G\int _{0}^\infty dr\; N^2 L(\rho +p_r+p_{\phi} +p_z)=\frac{\gamma}{2} \int _{0}^\infty dx\; N^2 L \left(\frac{P'^2}{2\alpha L^2} - \frac{1}{4} (1-f^2)^2\right) \label{mass}$$ Using the field equations, one can show that [@Ver]: $$GM=\frac{1}{2} \lim_{x\rightarrow\infty} (LNN') \label{tolmanmass}$$ We also define the magnetic field $B$: $$B = -\frac{1}{eL(r)}\frac{dP(r)}{dr}=-\frac{\gamma}{\alpha}\left( \frac{e}{8\pi G}\right) \frac{P'(x)}{L(x)} \label{magn}$$ and the dimensionless parameter ${\cal B}=8\pi GB(0)/e$. We find that the central value of the magnetic field (its value in the core of the string) can be expressed as [@Ver]: $${\cal B}= 1+2GM-\lim_{x\rightarrow\infty}(N^2 L') \label{centralmag}$$ The asymptotic form of the metric tensor is easily found by direct integration of the two Einstein equations (\[systemE1\])-(\[systemE2\]) using the boundary conditions and the definitions of $M$ and ${\cal B}$. It is of the Kasner form $$\begin{aligned} N(x)=K(x)\sim \kappa x^a \hspace{6 mm},\hspace{6 mm} L(x)\sim \beta x^b \label{asymptKasner1}\end{aligned}$$ with: $$\begin{aligned} a=\frac{2GM}{6GM+1-{\cal B}} \hspace{6 mm},\hspace{6 mm} b=\frac{2GM+1-{\cal B}}{6GM+1-{\cal B}} \hspace{6 mm},\hspace{6 mm} \kappa ^2\beta=6GM+1-{\cal B} \label{asymptKasner2}\end{aligned}$$ The constant $\kappa$ appears free in the asymptotic solution, but it is uniquely fixed in the complete one by the boundary conditions on the metric. Due to (\[constraintmod\]) we get the following relation which is equivalent to the quadratic Kasner condition in Eq. (\[Kasner2\]): $$GM(3GM+1-{\cal B})=0 \label{MB}$$ This immediately points to the possibility of two branches of solutions corresponding to the vanishing of either factor in Eq.
(\[MB\]). These two possibilities of course represent the two branches already discussed in Section 1.\ Consider first the cosmic string branch ($a=c=0,\; b=1$) in a little more detail. In this case, we find a simple physical interpretation of the parameters in Eq. (\[asymptKasner1\]). The constant $\beta=L'(\infty)$ defines the deficit angle $\delta\phi=2\pi(1-\beta)$, while the constant $\kappa=N(\infty)$ is the red/blue shift of time between infinity and the string core. Moreover, relations (\[tolmanmass\]), (\[centralmag\]) give: $$M=0$$ $${\cal B}=1-\kappa ^2 \left( 1-\frac{\delta\phi}{2\pi}\right)$$ That is to say, the Tolman mass vanishes and the central magnetic field is directly expressed in terms of the red/blue shift $\kappa$ and the deficit angle $\delta\phi$.\ Then consider the Melvin branch ($a=c=2/3,\; b=-1/3$). In this case there is no simple physical interpretation of the parameters $\beta$ and $\kappa$ in Eq. (\[asymptKasner1\]). As for the Tolman mass, we notice that it is non-zero, but the equations (\[tolmanmass\]), (\[centralmag\]) lead to a simple relation to the central magnetic field [@Ver]: $${\cal B}=1+3GM$$ For more discussion of these general relations (and some other ones), we refer to Ref. [@Ver]. In the remaining sections of this paper, we shall construct explicitly the various types of solutions to (\[systemNO1\])-(\[systemE2\]) by using a relaxation procedure to solve the four coupled differential equations numerically. The constraint (\[constraintmod\]) has been used for estimating the numerical errors.\

Open solutions: Cosmic strings and Melvin branch
================================================

\[secopen\] The existence of cosmic string solutions in the Abelian Higgs system is very well established. However, the existence of the solutions of the second type, i.e. the Melvin branch type which is implied by (\[MW\]) and (\[MB\]), has not been properly studied.
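Before turning to the numerics, the two-branch structure can be checked directly against Eqs. (\[asymptKasner2\]) and (\[MB\]) with exact rational arithmetic; this is a sketch (the sample values ${\cal B}=1/2$ and $GM=1/10$ are arbitrary illustrative choices satisfying the respective branch conditions).

```python
from fractions import Fraction as F

def kasner_exponents(GM, B):
    """Asymptotic exponents of eq. (asymptKasner2):
    a = 2GM/(6GM+1-B),  b = (2GM+1-B)/(6GM+1-B)."""
    d = 6 * GM + 1 - B
    return 2 * GM / d, (2 * GM + 1 - B) / d

def satisfies_kasner(a, b, c):
    """Kasner conditions a+b+c = a^2+b^2+c^2 = 1 of eq. (Kasner2)."""
    return a + b + c == 1 and a * a + b * b + c * c == 1

# Cosmic string branch: GM = 0 (first factor of eq. (MB) vanishes);
# B = 1/2 is an arbitrary central field value with B != 1.
a1, b1 = kasner_exponents(F(0), F(1, 2))

# Melvin branch: B = 1 + 3GM (second factor vanishes); GM = 1/10 illustrative.
GM = F(1, 10)
a2, b2 = kasner_exponents(GM, 1 + 3 * GM)
```

The first call reproduces $a=c=0$, $b=1$ and the second $a=c=2/3$, $b=-1/3$, for any admissible sample values, since the exponents depend only on which factor of Eq. (\[MB\]) vanishes.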
We have found that for any conic cosmic string solution, an associated solution exists in the Melvin branch. The main difference with respect to the cosmic string branch is the fact that asymptotically the azimuthal circles have vanishing circumference. A related difference is the non-vanishing total mass (per unit length) of the Melvin-like solutions. Figure \[figopn\] shows an example of the conic and the Melvin-like solution at the point $(\alpha ,\gamma )=(2,1.8)$ in parameter space. Only the (square root of the) metric components, $N$ and $L$, are plotted, as the scalar and vector fields for the various solutions deviate only very little from the standard and well examined cosmic string configuration. The two branches are obviously quite different. The cosmic string spacetime is asymptotically flat as $N$ is constant (actually $N=1$ here as $\alpha = 2$ is the Bogomol’nyi limit [@Linet2; @GibbCom]) and $L$ becomes proportional to $x$ far from the core. In contrast hereto, the associated solution in the Melvin branch has $N \propto x^{2/3}$ and $L \propto x^{- 1/3}$ in the asymptotic region. Therefore the circumference of circles lying in planes perpendicular to the core with $x=0$ as center will eventually decrease as one moves to larger $x$. The curves shown in Figure \[figopn\] are representative for the cosmic string and Melvin branches. In the first case, increasing $\gamma$ at fixed $\alpha $ will simply shift $N$ towards a higher constant value in the asymptotic region, whereas $L$ will become less steep (meaning larger angular deficit). An increase of $\alpha $ when $\gamma $ is held constant will have the opposite effect, i.e. shifting $L$ to lower asymptotic values and decreasing the angular deficit. This observation holds for all solutions (including the cylindrical solutions and the closed solutions to be described in Section \[secclosed\]): only the $\gamma / \alpha$ ratio seems to matter for the asymptotic behavior. 
Still, because of the region near the core, no obvious symmetry is observed that could render the parameter space effectively one dimensional. In the case of the Melvin branch, an increased $\gamma$ (again at fixed $\alpha$) tends to flatten $N$, whereas the global maximum of $L$ takes a larger value at higher $x$.\ Moving towards more massive strings in the parameter space (larger $\gamma$ to $\alpha$ ratio), the angular deficit of the conic solution will increase until it reaches a maximum of $2 \pi$. The corresponding solution represents an asymptotically cylindrical manifold with $N$ and $L$ both asymptotically constant. One such solution exists for any $\alpha$ and the curve $\gamma _*(\alpha)$, where $\delta \phi = 2 \pi$, plays a very special role in the classification of the various solutions. First of all, it obeys a power law [@LagGarfinkle]. Secondly, as one approaches a point on this curve in parameter space along the Melvin branch, the solution converges to the same cylindrical solution as for the cosmic string; that is, the conic solutions and the Melvin-like solutions coincide in a single asymptotically cylindrical solution on the curve of maximal angular deficit. Figure \[figmag\] illustrates how the central magnetic field ${\cal B}$ varies throughout the parameter space for both the cosmic string and the Melvin-like universes. Notice how the two surfaces intersect along the curve of maximal angular deficit, where the two branches coincide in the cylindrical solutions.\

Closed solutions: Inverted cone and singular Kasner solutions
=============================================================

\[secclosed\] The issue of “supermassive cosmic strings" in the Abelian Higgs system was discussed in the early days [@LagGarfinkle; @Ortiz]. Two types of closed solutions were found. One [@LagGarfinkle] in which $N(x)$ vanishes at a finite distance from the axis ($x=x_{max}$) while $L(x)$ diverges at that point.
In the other, $N(x)$ stays finite for any $x$ while $L(x)$ decreases linearly outside the core of the string, and vanishes at a finite distance from the axis. The geometry is still conic, but that of an inverted cone whose apex lies at the point where $L=0$ [@Gott; @Ortiz]. Using the same numerical methods as in the previous section, we have found that these two types of solutions are encountered just as the curve $\delta\phi = 2 \pi$ is traversed. But the lack of asymptotic behavior for closed solutions forces us to replace the boundary conditions (\[boundarycond\]) by the conditions $$\begin{aligned} f(0)=0 &, & f(x_{max}) = 1 \nonumber \\ P(0)=1 &, & P(x_{max}) = 0 \label{bc2}\end{aligned}$$ This ensures that the solutions are unit flux tubes, and makes the numerical work simpler. For solutions of large radial extent this technicality is of no relevance, as the variables $f$ and $P$ describing the scalar and vector fields, respectively, converge quite fast to their values at “infinity".\ Figure \[figclo\] shows an example of both kinds of solutions at the point $(\alpha,\gamma)=(2,2.05)$ in parameter space. This is just above the curve of maximal angular deficit, which lies at $\gamma _* (2)=2$ for fixed $\alpha =2$, and the solutions still have considerable sizes. Strictly speaking, these two types of solutions have no asymptotic behavior, as they both represent closed solutions that pinch off at a finite radial extension ($x_{max}$). But for slightly supermassive configurations we have $$\begin{aligned} N(x)=K(x)\sim \kappa (x_{max} -x)^a \hspace{6 mm},\hspace{6 mm} L(x)\sim \beta (x_{max} -x)^b \label{singKasner1}\end{aligned}$$ The inverted cone behaves according to Eq. (\[SL\]) and the singular Kasner solution behaves according to Eq. (\[MW\]). The curves shown in Figure \[figclo\] are representative of the solutions above the curve of maximal angular deficit. 
As $\gamma$ increases ($\alpha $ fixed) both solutions become smaller in radial extent. For the inverted cone $N$ gets shifted to a lower constant value and $L$ intersects zero at smaller $x$. Likewise $x_{max}$ decreases for the singular Kasner solution, causing $L$ to diverge at similarly low $x$. We have found that for any given $\alpha $ and $\gamma > \gamma _* (\alpha)$, the inverted-cone solution has the larger size.\ As mentioned, the solutions shrink when $\gamma $ is increased ($\alpha $ fixed). At some point the spacetime described by the solution becomes smaller than a few times the characteristic core thickness, which scales as $\alpha ^{-1/2}$ [@NO]. At even higher values of $\gamma$ the choice of boundary conditions becomes increasingly important for fundamental questions such as flux quantization and topological stability; the solutions exist mathematically but are probably unphysical. Figure \[figlim\] shows three curves in parameter space which have all been fitted to a power law, $\gamma = c_1 \alpha ^{c_2}$. The lowest one is simply the curve of angular deficit $2 \pi$. This curve separates the two open solutions from the two closed ones, and itself represents a family of asymptotically cylindrical solutions. The middle one represents the curve in parameter space where the singular Kasner-like solutions have a radial extent of $10 \, \alpha ^{-1/2}$, i.e., ten times the characteristic thickness of the core. Similarly, the upper one represents the curve where the inverted cone solutions pinch off at $x_{max} = 10 \, \alpha ^{-1/2}$. Starting on one of these two curves and moving downwards (towards lower $\gamma$ for fixed $\alpha$), the two closed solutions will become more and more similar and will eventually coincide in the cylindrical solution. 
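Fitting $\gamma = c_1 \alpha^{c_2}$ to such curves reduces to ordinary linear regression in log-log space. A minimal sketch of that step (the sample points below are made up for illustration; the actual fits use the numerically computed curves in parameter space):

```python
import math

def fit_power_law(alphas, gammas):
    """Least-squares fit of gamma = c1 * alpha**c2 in log-log space."""
    xs = [math.log(a) for a in alphas]
    ys = [math.log(g) for g in gammas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    # slope of the log-log regression line gives the exponent c2
    c2 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    c1 = math.exp(ybar - c2 * xbar)   # intercept gives the prefactor c1
    return c1, c2

# Exact power-law data recovers the parameters (hypothetical c1 = c2 = 1):
alphas = [0.5, 1.0, 2.0, 4.0]
gammas = [a ** 1.0 for a in alphas]
c1, c2 = fit_power_law(alphas, gammas)
print(round(c1, 6), round(c2, 6))  # 1.0 1.0
```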
Moving upward instead will yield rather small and increasingly unphysical solutions.\ Conclusion ========== Cylindrically symmetric configurations of the gravitating Abelian Higgs model have been examined by application of relaxation techniques to the Einstein equations, which simplify in this case to a system of coupled ordinary differential equations. Everywhere in parameter space two kinds of solutions exist, except at the curve of angular deficit $2\pi $, where only one cylindrical solution exists. The two open solutions are the standard asymptotically conic cosmic string and a solution with Melvin-like asymptotic behavior. The two closed solutions are the inverted cone and a singular Kasner-like solution. Supermassive solutions pinch off very quickly and are probably unphysical.\ Here only the unit flux tube, $n=1$, has been considered, but introduction of an arbitrary $n \neq 0$ is expected to give similar families of solutions, which can be examined in an analogous way.\ Finally, it is worth noting that the singularities of static field configurations generally seem to be lifted when time-dependence is reintroduced [@stringinfl]. Therefore the nature of the singularities of the supermassive strings may be understood in the framework of time-dependent analysis. [123]{} A. Vilenkin and E.P.S. Shellard, [*“Cosmic strings and other Topological Defects"*]{} (Cambridge Univ. Press, Cambridge, 1994). T.W.B. Kibble and M. Hindmarsh, Rep. Progr. Phys. [**58**]{}, 477 (1995). T.W.B. Kibble, J. Phys. [**A9**]{}, 1387 (1976). Ya. B. Zel’dovich, Mon. Not. R. Astron. Soc. [**192**]{}, 663 (1980). A. Vilenkin, Phys. Rev. [**D23**]{}, 852 (1981). H.B. Nielsen and P. Olesen, Nucl. Phys. [**B61**]{}, 45 (1973). D. Kramer, H. Stephani, E. Herlt and M. MacCallum, [*“Exact Solutions of Einstein’s Field Equations"*]{} (Cambridge Univ. Press, Cambridge, England 1980). D. Garfinkle, Phys. Rev. [**D32**]{}, 1323 (1985). L. Marder, Proc. Roy. Soc. London A [**252**]{}, 45 (1959). W.B. 
Bonnor, J. Phys. A [**12**]{}, 847 (1979). J.R. Gott, Astrophys. J. [**288**]{}, 422 (1985). W.A. Hiscock, Phys. Rev. [**D31**]{}, 3288 (1985). B. Linet, Gen. Relativ. Gravit. [**17**]{}, 1109 (1985). P. Laguna-Castillo and R.A. Matzner, Phys. Rev. [**D36**]{}, 3663 (1987). D. Garfinkle and P. Laguna, Phys. Rev. [**D39**]{}, 1552 (1989). P. Laguna and D. Garfinkle, Phys. Rev. [**D40**]{}, 1011 (1989). J. Colding, N.K. Nielsen and Y. Verbin, Phys. Rev. [**D56**]{}, 3371 (1997). Y. Verbin, [*“Cosmic Strings in the Abelian Higgs Model with Conformal Coupling to Gravity"*]{}, preprint hep-th/9809002, to appear in Phys. Rev. [**D**]{}. M.A. Melvin, Phys. Lett. [**8**]{}, 65 (1964). M.E. Ortiz, Phys. Rev. [**D43**]{}, 2521 (1991). V.P. Frolov, W. Israel and W.G. Unruh, Phys. Rev. [**D39**]{}, 1084 (1989). A.K. Raychaudhuri, Phys. Rev. [**D41**]{}, 3041 (1990). B. Linet, Phys. Lett. A [**124**]{}, 240 (1987). A. Comtet and G.W. Gibbons, Nucl. Phys. [**B299**]{}, 719 (1988). I. Cho, Phys. Rev. [**D58**]{}, 103509 (1998). \[figopn\] \[figmag\] \[figclo\] \[figlim\] [^1]: Electronic address: [email protected] [^2]: Electronic address: [email protected] [^3]: Electronic address: [email protected]
--- abstract: 'The extragalactic background light (EBL), exclusive of the cosmic microwave background, consists of the cumulative radiative output from all energy sources in the universe since the epoch of recombination. Most of this energy is released at ultraviolet and optical wavelengths. However, observations show that a significant fraction of the EBL falls in the 10 to 1000 $\mu$m wavelength regime. This provides conclusive evidence that we live in a dusty universe, since only dust can efficiently absorb a significant fraction of the background energy and reemit it at infrared wavelengths. The general role of dust in forming the cosmic infrared background (CIB) is therefore obvious. However, its role in determining the exact spectral shape of the CIB is quite complex. The CIB spectrum depends on the microscopic physical properties of the dust, its composition, abundance, and spatial distribution relative to the emitting sources, and its response to evolutionary processes that can modify all the factors listed above. This paper will present a brief summary of the many ways dust affects the intensity and spectral shape of the cosmic infrared background. In an Appendix we present new limits on the mid-infrared intensity of the CIB using TeV $\gamma$-ray observations of Mrk 501.' author: - Eli Dwek title: The Role of Dust in Producing the Cosmic Infrared Background --- Introduction ============ The extragalactic background light (EBL), exclusive of the cosmic microwave background, is the repository of all radiant energy releases in the universe since the epoch of recombination. 
Radiative sources contributing to the EBL include stars, which derive their energy from the nuclear processing of hydrogen into heavier elements, active galactic nuclei (AGN), which are powered by the release of gravitational energy associated with the accretion of matter onto a central black hole, and various exotic sources, such as decaying particles, primordial black holes, exploding stars, and substellar mass objects. Current limits on the EBL, and the relative contribution of the various energy sources to the EBL are presented in these Symposium Proceedings by Hauser (2001) and in the review paper of Hauser & Dwek (2001). In a dust-free universe the EBL can, in principle, be simply derived from knowledge of the spectrum of the emitting sources and the cosmic history of their energy release. In a dusty universe the total intensity of the EBL is unchanged, but its energy is redistributed over the entire X-ray to far-infrared region of the spectrum. Predicting the EBL spectrum in a dusty universe therefore poses a significant challenge, since the exact frequency distribution of the reradiated emission depends on a large number of factors. On a microscopic level, the emitted spectrum depends on the wavelength dependence of the absorption and scattering properties of the dust, which in turn depend on the dust composition and size distribution. The reradiated spectrum also depends on the dust abundance and the relative spatial distribution of energy sources and absorbing dust. Finally, the cumulative spectrum from all sources depends on various evolutionary factors, including the history of dust formation and processes which destroy the dust, modify it, or re-distribute it relative to the radiant sources. Intergalactic dust, if present in sufficient quantities, can cause an overall dimming of the UV-optical output from distant sources, and produce a truly diffuse infrared background. 
In steady-state models for the universe, dust plays a more significant role, producing the cosmic microwave background via the thermalization of starlight by iron whiskers (Hoyle, Burbidge, & Narlikar 1993). In the following I will examine in more detail the various factors and processes that determine the intensity and spectral energy distribution (SED) of the cosmic infrared background (CIB). Dust Properties =============== The presence of dust in the interstellar medium (ISM) of the Milky Way is manifested in many different ways including the extinction, scattering, and polarization of starlight, the infrared emission, the interstellar depletion, and the presence of isotopic anomalies in meteorites. The most widely accepted interstellar dust model consists of a population of bare silicate and graphite grains with a power law distribution in grain radii extending from a few tens of angstroms to about 0.5 $\mu$m. An additional population of macromolecules, most commonly identified with polycyclic aromatic hydrocarbons (PAHs), must be added to this model. The very small dust particles and macromolecules are stochastically heated by the ambient radiation field, and give rise to the mid-infrared continuum spectrum and mid-infrared emission features observed in the Milky Way and in external galaxies. Figure 1 (left panel) presents the mass absorption coefficient of graphite and silicate dust grains as a function of wavelength. The Figure illustrates the different absorption efficiencies of carbonaceous and silicate dust particles, the latter being significantly more transparent in the UV-visible regions of the spectrum. The efficiency of converting starlight to thermal infrared emission clearly depends on the relative abundance of carbonaceous-to-silicate dust in galaxies. Graphite particles possess a strong absorption feature at 2175 Å seen in the extinction curve towards many stars in the Milky Way (MW) and nearby galaxies. 
The ratio of the 2175 Å-to-continuum extinction can be used to estimate the relative abundance of graphite-to-silicate dust in the interstellar medium towards the extincted source. The right panel of Figure 1 is a schematic representation of the observed average extinction in several local galaxies. The Figure shows an obvious trend of increasing strength of the 2175 Å feature from the SMC, LMC, the MW, and M51. The numbers next to each curve represent the average metallicity in these galaxies in solar units. The Figure suggests a trend of increasing graphite-to-silicate dust ratio with metallicity. The trend may suggest that details of the galactic star formation history and stellar initial mass function play an important role in determining the dust composition and abundance, and hence the opacity, in galaxies. Relative Distribution of the Dust Compared to the Sources ========================================================= Given a dust abundance and composition, the most important parameter that determines the absorption efficiency of starlight and the spectrum of the reradiated infrared emission is the proximity of the dust to the radiation sources. One can distinguish between four different dust environments characterized by the spatial relation between the dust and the radiation sources: the circumstellar environment, consisting of dust that had recently formed out of the stellar ejecta and that is heated by the underlying stellar radiation field; the interstellar environment, consisting of dust residing in the ISM and heated by ionizing stars and/or the general interstellar radiation field; the AGN environment, consisting of a dusty torus, heated by the emission from the accretion disk; and the intergalactic environment, consisting of dust that has been expelled from galaxies and heated by the general diffuse background radiation. 
Circumstellar Dust ------------------ Circumstellar dust dominates the mid–IR emission from intermediate age stellar populations (Bressan, Granato, & Silva 1998), and may be an important source of thermal mid–IR emission in elliptical galaxies (Knapp, Gunn, & Wynn–Williams 1992). However, ISOCAM 4 and 15 $\mu$m images of some elliptical galaxies reveal that the morphology of the 15 $\mu$m dust emission component is significantly different from the 4 $\mu$m stellar emission component (Madden, Vigroux, & Sauvage 1999), suggesting an interstellar origin instead. Dust around young stellar objects will also have a very high efficiency for converting the radiation from the underlying object into infrared emission. However, the lifetime of this embedded phase is very short compared to the main sequence lifetime of these objects. Circumstellar dust therefore plays a minor role in the redistribution of energy in galaxies. Interstellar Dust and the Infrared Spectrum of Galaxies ------------------------------------------------------- Most of the processing of galactic starlight is done by interstellar dust particles that reside in the different phases of the ISM. In fact, the infrared SEDs of all galaxies in the local universe can be constructed from a linear combination of several distinct emission components representing the different phases of the ISM: (1) a cirrus component, representing the emission from dust and carriers of the solid state infrared bands at 3.3, 6.2, 7.7, 8.6, 11.3, and 12.7 $\mu$m, both residing in the diffuse atomic phase of the ISM and heated by the general interstellar radiation field; (2) a cold dust component, representing the emission from dust residing in molecular clouds, and heated by an attenuated interstellar radiation field; and (3) an H II or starburst emission component, representing the emission from dust residing in H II regions and heated by the ionizing radiation field. 
An additional AGN component may be needed to represent the spectra of some of the most luminous infrared galaxies. Using this simple procedure with two or more emission components, one can reproduce the fluxes and colors of [*IRAS*]{} galaxies with luminosities ranging from normal ($L \sim 10^{8.5}\ {\rm L}_{\odot}$) to the most luminous ($L \sim 10^{13}\ {\rm L}_{\odot}$) galaxies, and the observed trend of increasing $S(60\ \mu$m)/$S(100\ \mu$m) and decreasing $S(12\ \mu$m)/$S(25\ \mu$m) flux ratios with increasing infrared luminosity. These empirical models offer a simple way of calculating the infrared spectra of galaxies in the local universe (Malkan & Stecker 1998; Dwek et al. 1998; Guiderdoni et al. 1998; Rowan-Robinson & Crawford 1989). However, to preserve the radiative energy balance of a galaxy, this emission must equal the amount of starlight absorbed by the dust. Calculating the opacity of galaxies poses a significant challenge, since in addition to the microscopic dust properties, the efficiency at which dust absorbs stellar photons depends on the dust abundance, the clumpiness of the ISM, and the relative distribution of stars and dust – all of which are evolving quantities. Various radiative transfer models attempting to represent this complex reality were presented in the workshop on “The Opacity of Spiral Disks" (Davies & Burstein 1995). Models that include a 2–phase ISM consisting of molecular clouds and an intercloud component that calculate not only the attenuation of starlight but the re-radiated IR emission were developed, among others, by Silva et al. (1998), Városi & Dwek (1999), and Misselt et al. (2000). The Városi & Dwek model is analytic, and provides a very good approximation to the Monte–Carlo model of Witt & Gordon (1996) that calculates the attenuation of radiation due to absorption and multiple scattering from clumpy spherical systems. 
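The empirical component-mixing procedure described earlier in this section amounts to a small linear least-squares problem: given template band fluxes for, say, a cirrus and an H II component, solve for the mixing weights that best reproduce the observed IRAS fluxes. A sketch with made-up template numbers (these are illustrative, not real IRAS templates):

```python
def lstsq_2comp(C, H, S):
    """Solve S ≈ a*C + b*H for (a, b) via the 2x2 normal equations."""
    cc = sum(c * c for c in C)
    ch = sum(c * h for c, h in zip(C, H))
    hh = sum(h * h for h in H)
    cs = sum(c * s for c, s in zip(C, S))
    hs = sum(h * s for h, s in zip(H, S))
    det = cc * hh - ch * ch
    return (cs * hh - hs * ch) / det, (hs * cc - cs * ch) / det

# Band fluxes at 12, 25, 60, 100 micron (hypothetical templates):
cirrus = [1.0, 0.6, 2.0, 5.0]
hii    = [0.2, 1.5, 6.0, 4.0]
obs    = [0.7 * c + 0.3 * h for c, h in zip(cirrus, hii)]
a, b = lstsq_2comp(cirrus, hii, obs)
print(round(a, 6), round(b, 6))  # recovers the mixing weights 0.7 0.3
```

With more bands or more components the same normal-equations approach generalizes directly (possibly with a nonnegativity constraint on the weights).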
The models of Silva et al., which include a clumpy interstellar medium, are quite successful in fitting the UV to submillimeter wavelength emission of galaxy types ranging from ellipticals and spirals to starbursts and interacting systems. The observed SED of representative galaxies is characterized by an increasing L$_{IR}$/L$_{opt}$ ratio along the sequence from giant ellipticals, to spirals (NGC 6946), to starbursts (M 82), to mergers (Arp 220). Silva et al. attribute this trend to increasing infrared emission contributions from dust in giant star-forming molecular clouds. In fact, the infrared spectrum of Arp 220 is almost identical to that of an H II region. These models provide a more physical approach to the construction of galaxy spectra, using radiative transfer methods to calculate the contribution of the various ISM dust components to the overall spectrum of the various types of galaxies. Dusty Tori Around Active Galactic Nuclei ----------------------------------------- The energy output from AGN represents the radiative energy budget of the universe exclusive of the CMB and that released in nuclear burning processes. The energy of an AGN is derived from the release of gravitational energy associated with the accretion of matter onto a central black hole (BH) located in the nucleus of a host galaxy. A significant fraction of this energy can be absorbed by the dusty torus around the central BH. The energy output from AGN-dominated galaxies is the sum of the reradiated thermal dust emission and the non-thermal synchrotron emission. The overall spectral energy distribution from these objects depends on the viewing angle. 
AGN that are viewed face-on have a synchrotron dominated power-law spectrum, whereas AGN that are viewed edge-on exhibit a thermal infrared excess consisting of a hot ($\sim$ 50 K) component commonly attributed to emission from the dusty torus, and a cooler ($\sim$ 20 K) component, commonly attributed to reradiated stellar energy (Haas et al. 1998). In principle, the total bolometric contribution of AGN to the EBL could be comparable to the starlight contribution. However, direct observational correlation between X–ray and IR/submillimeter sources suggests that AGN contribute only $\sim$ 10–20% of CIB intensity at 100 and 850 $\mu$m (Barger et al. 2000). Their contribution at shorter wavelengths depends on the exact shape of the IR spectrum and is therefore still uncertain. Intergalactic Dust ------------------ The progressive dimming of the light output of Type Ia supernovae with redshift has been taken as evidence that the mass density of the universe is subcritical, requiring a cosmological constant for closure (Perlmutter et al. 1999). An alternative scenario was suggested by Aguirre (1999) who argued that intergalactic dust could produce the same observational effect, alleviating the need to abandon the concept of a flat universe with a zero cosmological constant. Such intergalactic dust would be heated by the ambient intergalactic radiation field and produce a truly diffuse infrared background. Aguirre & Haiman (2000) calculated the contribution of such dust to the CIB provided it was sufficiently abundant to account for the dimming of the distant supernovae. In particular, they found that intergalactic dust would produce most of the CIB at 850 $\mu$m. However, sources detected by the SCUBA survey account for over 50% of the CIB at this wavelength, leaving little room for any diffuse emission component. 
The IR Evolution of Galaxies and Models for the CIB =================================================== In order to calculate the contribution of galaxies to the CIB, one needs to know how their cumulative spectral energy density in the universe has evolved with time. Various models have been put forward to predict the CIB. These models can be grouped into four general categories: backward evolution, forward evolution, semi analytical, and cosmic chemical evolution models (see review by Hauser & Dwek 2001). They differ in their degree of complexity, physical realism, and ability to account for various observational constraints or to make predictions. Backward evolution models are the simplest. They extrapolate the spectral properties and/or the comoving number density of local galaxies to higher redshifts using some parametric or unphysical form for their evolution. The main disadvantage of these models is that they are not constrained by the physical processes, such as star and metal formation, or radiative transfer processes that go on in the galaxies they represent. Some of the shortcomings in backward evolution models are corrected in forward evolution models. At the heart of these models is a spectral evolution code which evolves the stellar populations and calculates the stellar, gas, and metallicity content and SED of a galaxy as a function of time. Initial conditions and model parameters are adjusted to reproduce the observational properties of galaxies in the local universe. Models for the diffuse interstellar dust emission vary in degree of complexity and physical input. As in the backward evolution models, the IR emission is represented by a sum of two or more components corresponding to the gas phases in which the dust resides and the radiation field it is exposed to. 
The various dust emission components are then evolved backwards in time in a manner that is determined by the evolution of the various physical parameters that determine their [*present*]{} intensity and spectral energy distribution. Detailed spectral evolution models that follow the evolution of the dust composition and abundance, the galactic opacity, and the UV-to-far infrared spectral energy distribution for various stellar birthrate histories were constructed by Dwek (1998) and Dwek, Fioc, & Városi (2000). The results of these calculations for spiral galaxies are shown in Figure 2. The left panel of the Figure shows how the dust-to-metals as well as the carbon-to-silicate mass ratios evolve with time. The evolutionary trends reflect the temporal behavior of the evolution of the different stellar sources (carbon stars, OH/IR stars, supernovae) that give rise to the dust composition. The right panel depicts the evolution of the V-band opacity perpendicular to the plane of the galaxy as a function of time. The Figure shows that a maximum opacity is reached at an epoch of $\sim$ 6 Gyr. Figure 3 shows the various emission components contributing to the SED of a typical spiral galaxy at 12 Gyr (Fioc & Dwek 2000). They include the infrared emission from H II regions and diffuse H I clouds. The contribution to the latter from PAH molecules, carbon and silicate dust is explicitly shown in the Figure. Figure 4 examines the effect of the evolution of the dust composition and abundance on the SED of spiral galaxies, by plotting the spectral ratio of S$_{\nu}$(mw) to S$_{\nu}$(evol), versus wavelength for various epochs. The spectrum S$_{\nu}$(mw) is the SED calculated under the assumption that the dust-to-metal and the graphite-to-silicate mass ratios are constant, and equal to their currently observed Milky Way values of 0.35 and 0.75, respectively. 
The spectrum S$_{\nu}$(evol) includes the detailed evolution of the metallicity dependent dust-to-gas and graphite-to-silicate dust mass ratios, and galactic opacity as depicted in Figures 2 and 3. The Figures show that non-evolving dust models over-predict the infrared emission at early epochs, especially the mid-infrared emission bands. Both models relax to the same spectrum at about 12 Gyr. In forward evolution models galaxies evolve quiescently, with no allowance for any interaction or any stochastic changes in their star formation rate or morphology. In particular these models fail to match the SCUBA 850 $\mu$m galaxy number counts without the ad hoc inclusion of a new population of ultraluminous infrared galaxies. The SCUBA observations of dust enshrouded galaxies at redshifts of $z \approx$ 2 – 3, suggest a quick rise in galactic opacity. Figure 5 (based on the calculations of Dwek 1998; Városi & Dwek 1999; and Dwek, Fioc, & Városi 2000), shows the evolution of gas, metals, and dust mass in a pristine starburst (left panel) and the evolution of the attenuation of starlight (right panel) as a function of time. The Figure shows that starbursts can become effectively opaque in the UV and optical in a mere 100 Myr. Most models for the CIB make similar predictions for the intensity and spectral energy distribution of the CIB, regardless of their degree of complexity or realism (Hauser & Dwek 2001). This should not be surprising, since the CIB is the cumulative sum of energy outputs in the universe, and many details of the emission may be “washed out” in the summation process. The CIB is therefore not a strong discriminator between models, especially since it is still not well determined in the mid-infrared spectral region. Summary ======= The very existence of the CIB provides conclusive evidence that we live in a dusty universe. The CIB is produced by the absorption and reemission of (mostly) starlight by interstellar dust in normal and active galaxies. 
The CIB intensity and spectrum depend on a large number of microscopic and global parameters that affect the optical properties of the dust, its abundance and size distribution, the spatial distribution of the dust relative to the radiation sources, and the clumpiness of the interstellar medium in galaxies. Some physical processes, such as the stochastic heating of very small dust particles and macromolecules, may be responsible for a significant fraction of the CIB in the mid-infrared. In individual galaxies all these microscopic and global properties evolve with time. For example, it is very likely that the clumpiness in galaxies and the clump filling factor evolve in a fashion that depends on the total mass of gas in the galaxy, the supernova rate, and its metal content. Understanding the various physical processes that determine why local galaxies are the way they are, and how they evolved to their present conditions are crucial for predicting their spectral appearance at various redshifts. The CIB represents, however, the time integrated emission from galaxies. There is therefore no unique way to determine the evolution of the various microscopic and global parameters from studying the CIB alone. Evolutionary models show that massive star forming regions rapidly become opaque at visible wavelengths. Consequently, the total energy released by these objects is mostly deposited in the CIB. The CIB can therefore provide a useful integral constraint on the star formation history of the universe. Different wavelength regions of the CIB sample the contribution of galaxies at different redshifts. Studies of the CIB can therefore provide important insights into the nature and evolution of the various sources contributing to the CIB. I thank Mike Hauser for his comments and input on some sections of this paper, Ant Jones for his careful reading of and comments on the manuscript, and Rick Arendt for assistance in preparing the Appendix of the manuscript. 
This work was supported by the NASA Astrophysical Theory program OSS NRA 99-OSS-01. Appendix: Probing the Cosmic Infrared Background with TeV $\gamma$-ray Sources {#appendix-probing-the-cosmic-infrared-background-with-tev-gamma-ray-sources .unnumbered} ============================================================================== Co-authored with Okkie C. de Jager\ [*Potchefstroom University, Potchefstroom 2520, South Africa*]{}\ The TeV spectrum of $\gamma$-ray sources can be attenuated by pair-producing $\gamma$-$\gamma$ interactions with the extragalactic background light (EBL). Using recent detections of the EBL in the UV-optical (UVO) and far-infrared regions, we set limits on the cosmic infrared background (CIB) intensity in the $\sim$5 to 60 $\mu$m region with minimal assumptions on the intrinsic Mrk 501 source spectrum. The results are shown in the four panels below. The upper left panel depicts the TeV spectra of Mrk 501 and Mrk 421. The panel to its right depicts the constraints used to construct possible EBL spectra. These include 3 possible UVO spectra and 100 CIB spectra (5$\times$5$\times$4 intensity combinations at 6, 30, and 60 $\mu$m), together with a fixed far-IR spectrum, for a total of 300 EBL spectra. The intrinsic Mrk 501 source spectra, corrected for attenuation by all EBL combinations, are shown in the lower left panel. Most source spectra exhibit an unphysical rise at $>$ 10 TeV energies. However, some spectra remain flat in E$^2$dN/dE, as shown for select cases in the lower right panel. Two source spectra are shown for each choice of UVO spectrum. The upper curve depicts an unphysical spectral behavior, whereas the lower one corresponds to a dN/dE $\propto$ E$^{-2}$ power law. 
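The wavelength range probed follows from simple kinematics: pair production $\gamma\gamma\rightarrow e^+e^-$ on a background photon of energy $\epsilon$ is possible (for a head-on collision) only when $\epsilon \ge (m_ec^2)^2/E_\gamma$, so 1–20 TeV photons from Mrk 501 interact mainly with mid-infrared background photons. A quick sketch of this threshold estimate (kinematics only, not the full cross-section integral used for the limits):

```python
ME_C2_EV = 0.511e6   # electron rest energy [eV]
HC_UM_EV = 1.2398    # h*c in [micron * eV]

def threshold_wavelength_um(E_TeV):
    """Longest background-photon wavelength a gamma ray of energy
    E_TeV can pair-produce on (head-on collision)."""
    eps_min = ME_C2_EV ** 2 / (E_TeV * 1e12)  # threshold energy [eV]
    return HC_UM_EV / eps_min                 # wavelength [micron]

for E in (1.0, 10.0, 20.0):
    print(f"E = {E:5.1f} TeV -> lambda_max ~ {threshold_wavelength_um(E):5.1f} um")
```

The result, roughly 5 µm at 1 TeV and 95 µm at 20 TeV, matches the 5–60 µm window quoted above.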
Preliminary upper limits on the CIB, derived by excluding EBL spectra giving rise to unphysical source spectra, are 5 nW m$^{-2}$ sr$^{-1}$ in the 6 - 30 $\mu$m interval and $<$ 10  at 60 $\mu$m, for H$_0$ = (Four-panel figure: dwek_poster_fig1.eps, dwek_poster_fig2.eps, dwek_poster_fig3.eps, dwek_poster_fig4.eps.) Aguirre, A. 1999, , 525, 583 Aguirre, A. & Haiman, Z. 2000, , 532, 28 Barger, A. J., Cowie, L. L., Mushotzky, R. F., & Richards, E. A. 2000, , submitted, astro-ph/0007175 Bressan, A., Granato, G. L., & Silva L. 1998, , 332, 135 Davies, J. I., & Burstein D. 1995, The Opacity of Spiral Disks (Dordrecht: Kluwer) Dwek, E. 1998, , 501, 643 Dwek, E., et al. 1998, , 508, 106 Fioc, M., & Dwek, E. 2000, in preparation Dwek, E., Fioc, M., & Városi, F. 2000, ISO Surveys of a Dusty Universe, ed. D. Lemke, M. Stickel, K Wilke (Springer: Berlin), 157 Guiderdoni, B., Hivon, E., Bouchet, F. R., & Maffei, B. 1998, , 295, 877 Haas, M, et al. 1998, , 503, L109 Hauser, M. G., & Dwek, E. 2001, , 39, in press Hauser, M. G. 2001, this volume Hoyle, F., Burbidge, G., & Narlikar, J. V. 1993, , 410, 437 Knapp, G. R., Gunn, J. E., & Wynn-Williams, C. G. 1992, , 399, 76 Madden, S. C., Vigroux, L., & Sauvage, M. 1999, The Universe as Seen by ISO, ed. P. Cox & M. F. Kessler (Noordwijk: ESA Publication), 933 Malkan, M. A., & Stecker, F. W. 1998, , 496, 13 Misselt, K. A., Gordon, K. D., Clayton, G. C., & Wolff, M. J. 2000, , submitted Perlmutter, S., et al. 1999, , 517, 565 Rowan-Robinson, M., & Crawford, J. 1989, , 238, 523 Silva, L., Granato, G. L., Bressan, A., & Danese, L. 1998, , 509, 103 Városi, F., & Dwek, E. 1999, , 523, 265 Witt, A. N., & Gordon, K. D. 
1996, , 463, 681 Discussion Mike Hauser: The shaded region in your figure comparing models with CIB measurements is the same as the $\pm 2\sigma$ region in my talk on the observations of the CIB, earlier in this Symposium. Eli Dwek: Right.
--- abstract: 'For a perfect field ${\kappa}$ of characteristic $p>0$, a positive integer $N$ not divisible by $p$, and an arbitrary subgroup $\Gamma$ of $\operatorname{GL}_2({\ensuremath{\mathbf{Z}}}/N{\ensuremath{\mathbf{Z}}})$, we prove (with mild additional hypotheses when $p\le 3$) that the $U$-operator on the space $M_k({\mathscr{P}}_{\Gamma}/{\kappa})$ of (Katz) modular forms for $\Gamma$ over ${\kappa}$ induces a surjection $U:M_{k}({\mathscr{P}}_{\Gamma}/{\kappa})\twoheadrightarrow M_{k''}({\mathscr{P}}_{\Gamma}/{\kappa})$ for all $k\ge p+2$, where $k''=(k-k_0)/p + k_0$ with $2\le k_0\le p+1$ the unique integer congruent to $k$ modulo $p$. When ${\kappa}={\mathbf{F}}_p$, $p\ge 5$, $N\neq 2,3$, and $\Gamma$ is the subgroup of upper-triangular or upper-triangular unipotent matrices, this recovers a recent result of Dewar [@Dewar].' address: 'University of Arizona, Tucson' author: - Bryden Cais bibliography: - 'mybib.bib' title: 'On the $U_p$ operator in characteristic $p$' --- [^1] Introduction {#intro} ============ Fix a prime $p$, an integer $N>0$ with $p\nmid N$, and a subgroup $\Gamma$ of $\operatorname{GL}_2({\ensuremath{\mathbf{Z}}}/N{\ensuremath{\mathbf{Z}}})$. Let ${\widetilde{\Gamma}}$ be the preimage in $\operatorname{SL}_2({\ensuremath{\mathbf{Z}}})$ of $\Gamma_0:=\Gamma\cap \operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/N{\ensuremath{\mathbf{Z}}})$, and write ${\widetilde{M}}_k({\widetilde{\Gamma}})$ for the space of weight $k$ mod $p$ modular forms for ${\widetilde{\Gamma}}$ (in the sense of Serre [@Serre §1.2]). When $N=1$, a classical result of Serre [@Serre §2.2, Théorème 6] asserts that the $U_p$ operator is a contraction: for $k\ge p+2$, the map $U_p: {\widetilde{M}}_k(\Gamma(1))\rightarrow {\widetilde{M}}_k(\Gamma(1))$ factors through the subspace ${\widetilde{M}}_{k'}(\Gamma(1))$ for some $k'<k$ satisfying $pk' \le k+ p^2-1$.
In fact, Serre’s result may be generalized and significantly sharpened: \[Main\] Let ${\kappa}$ be a perfect field of characteristic $p$ and denote by $M_k({\mathscr{P}}_{\Gamma}/{\kappa})$ the space of weight $k$ Katz modular forms for $\Gamma$ over ${\kappa}$ $($see $\S\ref{Geometry}$$)$. Let $k_0$ be the unique integer between $2$ and $p+1$ congruent to $k$ modulo $p$, and if $p\le 3$, assume that $N > 4$ and that $\Gamma_0$ is a subgroup of the upper-triangular unipotent matrices. Then for $k\ge p+2$, the $U$-operator $($see $\S\ref{Geometry}$$)$ acting on $M_k({\mathscr{P}}_{\Gamma}/{\kappa})$ induces a surjection $U:M_k({\mathscr{P}}_{\Gamma}/{\kappa})\twoheadrightarrow M_{k'}({\mathscr{P}}_{\Gamma}/{\kappa})$, for $k':=(k-k_0)/p +k_0$. When ${\widetilde{\Gamma}}=\Gamma_{\star}(N)$ for $\star=0,1$ and ${\kappa}={\mathbf{F}}_p$, the endomorphism $U$ coincides with the usual Atkin operator $U_p$ (see Corollary \[ClassicalUp\]). In particular, if $p\ge 5$, so ${\widetilde{M}}_k({\widetilde{\Gamma}})\simeq M_{k}({\mathscr{P}}_{\Gamma}/{\mathbf{F}}_p)$ (by Theorems 1.7.1, and 1.8.1–1.8.2 of [@Katz]) and $N\neq 2,3$, Theorem \[Main\] is due to Dewar [@Dewar]. Both Serre’s original result and Dewar’s refinement of it rely on a delicate analysis of the interplay between the operators $U_p$, $V_p$, and $\theta$ acting on mod $p$ modular forms. In the present note, we take an algebro-geometric perspective, and show how Theorem \[Main\] follows immediately from a (trivial extension of a) general theorem of Tango[^2] [@Tango] on the behavior of vector bundles under the Frobenius map. In this optic, the contractivity of $U_p$ in characteristic $p$ is simply an instance of the “Dwork Principle" of analytic continuation along Frobenius. In particular, we use neither the $\theta$-operator, nor the notion of “filtration" of a mod $p$ modular form. 
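The weight bookkeeping in Theorem \[Main\] is elementary arithmetic: $k_0$ is the unique integer in $[2,p+1]$ congruent to $k$ modulo $p$, and $k'=(k-k_0)/p+k_0$. A minimal sketch of this bookkeeping (the function name is ours, not the paper's), which also checks the contraction property $k'<k$ and Serre's bound $pk'\le k+p^2-1$:

```python
def weight_data(p, k):
    """Return (k0, k') as in Theorem [Main]: k0 is the unique integer in
    [2, p+1] congruent to k mod p, and k' = (k - k0)/p + k0."""
    k0 = 2 + (k - 2) % p          # lands in [2, p+1] with k0 = k (mod p)
    kp = (k - k0) // p + k0       # exact division, since p | (k - k0)
    return k0, kp

# Sanity checks for p = 5: U contracts weight 12 to weight 4 (k0 = 2),
# and weight 11 to weight 7 (k0 = 6 = p + 1).
print(weight_data(5, 12))  # (2, 4)
print(weight_data(5, 11))  # (6, 7)
```

Note that $k'<k$ for every $k\ge p+2$ (since $k_0\le p+1<k$), and $pk'=k+(p-1)k_0\le k+p^2-1$, recovering Serre's bound.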
Tango’s Theorem =============== Fix a perfect field ${\kappa}$ of characteristic $p$, and write $\sigma:{\kappa}\rightarrow {\kappa}$ for the $p$-power Frobenius automorphism of ${\kappa}$. Let $X$ be a smooth, proper, and geometrically connected curve over ${\kappa}$ of genus $g$. Attached to $X$ is its [*Tango number*]{}: $$n(X) := \max\left\{ \sum_{x\in X({\overline{{\kappa}}})} \left\lfloor \frac{\operatorname{ord}_{x}(df)}{p}\right\rfloor\ : f\in {\overline{{\kappa}}}(X)\setminus {\overline{{\kappa}}}(X)^p \right\},$$ where ${\overline{{\kappa}}}(X)$ is the function field of $X_{{\overline{{\kappa}}}}$. As in Lemma 10 and Proposition 14 of [@Tango], it is easy to see that $n(X)$ is well-defined and is an integer satisfying $-1\le n(X)\le \lfloor (2g-2)/p\rfloor$, with the lower bound an equality if and only if $g=0$. \[TangoThm\] Let $S\neq X$ be a reduced closed subscheme of $X$ with corresponding ideal sheaf ${\mathscr{I}}_S\subseteq {\mathscr{O}}_X$, and let ${\mathscr{L}}$ be a line bundle on $X$. If $\deg {\mathscr{L}}> n(X)$ then the natural $\sigma$-linear map $$\xymatrix{ {F^*:H^1(X,{\mathscr{L}}^{-1}\otimes {\mathscr{I}}_S)} \ar[r] & {H^1(X,{\mathscr{L}}^{-p}\otimes {\mathscr{I}}_S)} }\label{Frob}$$ induced by pullback by the absolute Frobenius of $X$ is injective, and the natural $\sigma^{-1}$-linear “trace map" $$\xymatrix{ {F_*:H^0(X,\Omega^1_{X/{\kappa}}(S)\otimes{\mathscr{L}}^p)} \ar[r] & {H^0(X,\Omega^1_{X/{\kappa}}(S)\otimes{\mathscr{L}})} }\label{Cartier}$$ given by the Cartier operator $($[@CartierNouvelle], [@SerreTopology §10]$)$ is surjective. First note that the formation of $(\ref{Frob})$ and $(\ref{Cartier})$ is compatible, via $\sigma$- (respectively $\sigma^{-1}$-) linear extension, with any scalar extension ${\kappa}\rightarrow {\kappa}'$ to a perfect field ${\kappa}'$; we may therefore assume that ${\kappa}$ is algebraically closed. 
As the two assertions are dual[^3] by Serre duality [@SerreTopology §10, Proposition 9], it suffices to prove the injectivity of (\[Frob\]). The case $S=\emptyset$ is Tango’s Theorem[^4] [@Tango Theorem 15]. We reduce the general case to this one as follows: using that $\deg({\mathscr{L}}) >0$ and that ${\mathscr{O}}_X/{\mathscr{I}}_S^j$ is a skyscraper sheaf for all $j >0$, one finds a commutative diagram with exact rows $$\xymatrix{ 0 \ar[r] & {H^0(X,{\mathscr{O}}_X/{\mathscr{I}}_S)} \ar[r]\ar[d]_-{F^*} & {H^1(X,{\mathscr{L}}^{-1}\otimes {\mathscr{I}}_S)} \ar[r]\ar[d]_-{F^*} & {H^1(X,{\mathscr{L}}^{-1})} \ar[r]\ar[d]_-{F^*} & 0\\ 0 \ar[r] & {H^0(X,{\mathscr{O}}_X/{\mathscr{I}}_S^p)} \ar[r]\ar[d] & {H^1(X,{\mathscr{L}}^{-p}\otimes {\mathscr{I}}_S^p)} \ar[r]\ar[d] & {H^1(X,{\mathscr{L}}^{-p})} \ar[r]\ar@{=}[d] & 0\\ 0 \ar[r] & {H^0(X,{\mathscr{O}}_X/{\mathscr{I}}_S)} \ar[r] & {H^1(X,{\mathscr{L}}^{-p}\otimes {\mathscr{I}}_S)} \ar[r] & {H^1(X,{\mathscr{L}}^{-p})} \ar[r] & 0 }$$ in which the lower vertical arrows are induced by the inclusion of ideal sheaves ${\mathscr{I}}_S^p\subseteq {\mathscr{I}}_S$. Using that ${\kappa}={\overline{{\kappa}}}$ and identifying $H^0(X,{\mathscr{O}}_X/{\mathscr{I}}_S)$ with ${\kappa}^{S}$, the left vertical composite is easily seen to coincide with the map $\oplus_S \sigma:{\kappa}^S\rightarrow {\kappa}^S$ which is $\sigma$ on each factor; it is therefore injective. As the right vertical composite map is injective by Tango’s Theorem, an easy diagram chase finishes the proof. Modular forms mod p as differentials on the Igusa curve {#Geometry} ======================================================= In order to apply Tango’s Theorem to prove Theorem \[Main\], we must recall Katz’s geometric definition of mod $p$ modular forms, and Serre’s interpretation of them as certain meromorphic differentials on the Igusa curve. 
Let us write[^5] $R_{\Gamma}:=({\ensuremath{\mathbf{Z}}}[\zeta_N])^{\det(\Gamma)}$, and for any $R_{\Gamma}$-algebra $A$ denote by ${\mathscr{P}}_{\Gamma}/A$ the moduli problem $([\Gamma(N)]/\Gamma)^{R_{\Gamma}{\text{-}\mathrm{can}}}\otimes_{R_{\Gamma}} A$ on $(\operatorname{Ell}/A)$ (see §3.1, §7.1, 9.4.2, and 10.4.2 of [@KM]) and by $M_k({\mathscr{P}}_{\Gamma}/A)$ the space of weight $k$ Katz modular forms for ${\mathscr{P}}_{\Gamma}/A$ ([*e.g.*]{} [@Ulmer §6]) that are holomorphic at $\infty$ in the sense of [@Katz §1.2]. Equivalently, $M_k({\mathscr{P}}_{\Gamma}/A)$ is the $A$-submodule of level $N$, weight $k$ modular forms in the sense of [@DR VII.3.6] that are invariant under the natural action of $\Gamma_0$. Viewing ${\mathbf{C}}$ as an $R_{\Gamma}$-algebra via $\zeta_N\mapsto \exp(2\pi i/N)$, we remark that $M_k({\mathscr{P}}_\Gamma/{\mathbf{C}})$ [*is*]{} the “classical" space of weight $k$ modular forms for ${\widetilde{\Gamma}}$ over ${\mathbf{C}}$ defined via the transcendental theory [@DR VII.4]. Now fix a ring homomorphism $R_{\Gamma}\rightarrow \kappa$ with $\kappa$ a perfect field of characteristic $p$. From here until the end of the section we will assume that ${\mathscr{P}}_{\Gamma}/\kappa$ is representable and that $-1$ acts without fixed points on the space of cusp-labels for $\Gamma$ (see [@KM §10.6] and [*c.f.*]{} [@KM 10.13.7–8]). We will later explain how to relax these hypotheses to those of Theorem \[Main\]. We write $Y_\Gamma$ (respectively $X_\Gamma$) for the associated (compactified) moduli scheme; by [@KM 10.13.12], one knows that $X_\Gamma$ is a proper, smooth, and geometrically connected curve over $\kappa$.
Writing ${\rho}:{\mathscr{E}}\rightarrow Y_\Gamma$ for the universal elliptic curve, our hypothesis that $-1$ acts without fixed points ensures that the line bundle $\omega_{\Gamma}:={\rho}_*\Omega^1_{{\mathscr{E}}/Y_\Gamma}$ on $Y_\Gamma$ admits a canonical extension, again denoted $\omega_{\Gamma}$, to a line bundle on $X_\Gamma$ [@KM 10.13.4, 10.13.7]. By definition, $M_k({\mathscr{P}}_\Gamma/{\kappa}) = H^0(X_\Gamma,\omega_{\Gamma}^{k})$. Let $I_{\Gamma}$ be the [*Igusa curve of level $p$ over $X_\Gamma$*]{}; by definition, $I_{\Gamma}$ is the compactified moduli scheme associated to the simultaneous problem $[{\mathscr{P}}_{\Gamma}/{\kappa},[\operatorname{Ig}(p)]]$ on $(\operatorname{Ell}/{\kappa})$ [@KM §12]. By [@KM 12.7.2], the Igusa curve is proper, smooth, and geometrically connected, and the natural map $\pi:I_{\Gamma}\rightarrow X_\Gamma$ is finite étale and Galois with group $({\ensuremath{\mathbf{Z}}}/p{\ensuremath{\mathbf{Z}}})^{\times}$ outside the supersingular points, and totally ramified over every supersingular point. We define $\omega:=\pi^*\omega_{\Gamma}$, and recall [@KM 12.8.2–3] that there is a canonical section $a\in H^0(I_\Gamma,\omega)$ which has $q$-expansion equal to 1, vanishes to order 1 at each supersingular point, and on which $d\in ({\ensuremath{\mathbf{Z}}}/p{\ensuremath{\mathbf{Z}}})^{\times}$ acts (via its action on $I_\Gamma$) through $\chi^{-1}$, for $\chi:({\ensuremath{\mathbf{Z}}}/p{\ensuremath{\mathbf{Z}}})^{\times}={\mathbf{F}}_p^{\times}\hookrightarrow {\mathbf{F}}_p$ the mod $p$ Teichmüller character. The following is a straightforward generalization of a theorem of Serre; see [@KM §12.8] and [*c.f.*]{} Propositions 5.7–5.10 of [@tameness]. \[MFInter\] Fix an integer $k\ge 2$ and let $k_0\le k$ be any integer with $2\le k_0 \le p+1$.
The map $f\mapsto f/a^{k_0-2}$ induces a natural isomorphism of ${\kappa}$-vector spaces $$M_k({\mathscr{P}}_\Gamma/{\kappa})\simeq H^0(I_\Gamma,\Omega^1_{I_\Gamma/{\kappa}}(\operatorname{cusps}+ \delta_{k_0}\cdot{\ensuremath{{\mathrm{ss}}}})\otimes\omega^{k-k_0})(\chi^{k_0-2}), \label{IgusaMF}$$ where $\delta_{k_0}=1$ when $k_0=p+1$ and is zero otherwise; here, ${\ensuremath{{\mathrm{ss}}}}$, $\operatorname{cusps}$ are the reduced supersingular and cuspidal divisors, respectively. The proof is a straightforward adaptation of Propositions 5.7–5.10 of [@tameness]; for the convenience of the reader, we sketch the argument. Thanks to [@KM 10.13.11], the Kodaira-Spencer map [@KM 10.13.10] provides an isomorphism of line bundles $\omega^{2}_{\Gamma}\simeq \Omega^1_{X_\Gamma/{\kappa}}(\operatorname{cusps})$ on $X_\Gamma$ which, after pullback along $\pi$, gives an isomorphism $$\omega^{2}\simeq \Omega^1_{I_\Gamma/{\kappa}}(-(p-2){\ensuremath{{\mathrm{ss}}}}+ \operatorname{cusps})\label{KS}$$ of line bundles on $I_\Gamma$ as $\pi$ is étale outside ${\ensuremath{{\mathrm{ss}}}}$ and totally (tamely) ramified at each supersingular point. Since $a\in H^0(I_{\Gamma},\omega)$ has simple zeroes at the supersingular points, via (\[KS\]) any global section $f$ of $\omega^{k}_{\Gamma}$ induces a global section $\pi^*f/a^{k_0-2}$ of $\Omega^1_{I_\Gamma/{\kappa}}(\operatorname{cusps}+\delta_{k_0}\cdot{\ensuremath{{\mathrm{ss}}}})\otimes \omega^{k-k_0}$ on which $({\ensuremath{\mathbf{Z}}}/p{\ensuremath{\mathbf{Z}}})^{\times}$ acts through $\chi^{k_0-2}$; thus the map (\[IgusaMF\]) is well-defined. Since the $q$-expansion of $a$ is $1$ and $I_{\Gamma}$ is geometrically connected, the $q$-expansion principle then shows that (\[IgusaMF\]) is injective.
To prove surjectivity, observe that by (\[KS\]), a global section of $\Omega^1_{I_\Gamma/{\kappa}}(\operatorname{cusps}+\delta_{k_0}\cdot{\ensuremath{{\mathrm{ss}}}})\otimes \omega^{k-k_0}$ gives a meromorphic section $h$ of $\omega^{k-k_0+2}$ satisfying $\operatorname{ord}_x(h) \ge -(p-1)$ at each supersingular point $x$, with equality possible only when $k_0=p+1$. If $h$ lies in the $\chi^{k_0-2}$-eigenspace of the action of $({\ensuremath{\mathbf{Z}}}/p{\ensuremath{\mathbf{Z}}})^{\times}$, then $f:=a^{k_0-2}h$ descends to a meromorphic section of $\omega^{k}_{\Gamma}$ over $X_\Gamma$ satisfying $$(p-1)\operatorname{ord}_x(f) = \operatorname{ord}_{x}(h) + k_0 -2 \ge k_0-p-1$$ at each supersingular point $x\in X_{\Gamma}({\overline{{\kappa}}})$, with equality possible only when $k_0=p+1$. Since the left side is a multiple of $p-1$ and $k_0 \ge 2$, we must have $\operatorname{ord}_x(f) \ge 0$ in all cases, and $f$ is a global (holomorphic) section of $\omega^{k}_{\Gamma}$ over $X_\Gamma$ with $\pi^*f/a^{k_0-2}=h$. Using Proposition \[MFInter\], the Cartier operator $F_*$ on meromorphic differentials induces, by “transport of structure", a $\sigma^{-1}$-linear endomorphism $U:M_k({\mathscr{P}}_{\Gamma}/{\kappa})\rightarrow M_k({\mathscr{P}}_{\Gamma}/{\kappa})$. If $G$ is any group of automorphisms of $X(\Gamma)$, then the action of $G$ commutes with $F_*$ (ultimately because the $p$-power map in characteristic $p$ commutes with all ring homomorphisms), and we likewise obtain a $\sigma^{-1}$-linear endomorphism $U$ of $M_k({\mathscr{P}}_{\Gamma}/{\kappa})^G$. This allows us to define $U$ even when ${\mathscr{P}}_{\Gamma}/{\kappa}$ is not representable as follows.
Choose a prime $\ell > 3N$, and let $\Gamma'$ be the unique subgroup of $\operatorname{GL}_2({\ensuremath{\mathbf{Z}}}/N\ell{\ensuremath{\mathbf{Z}}})$ projecting to the trivial subgroup of $\operatorname{GL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})$ and to $\Gamma$ in $\operatorname{GL}_2({\ensuremath{\mathbf{Z}}}/N{\ensuremath{\mathbf{Z}}})$. Then for any perfect field ${\kappa}'$ of characteristic $p$ admitting a map from $R_{\Gamma'}$, the moduli problem ${\mathscr{P}}_{\Gamma'}/{\kappa}'$ is representable, there is a natural action of $G:=\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})$ on $M_k({\mathscr{P}}_{\Gamma'}/{\kappa}')$, and one has $M_k({\mathscr{P}}_{\Gamma}/{\kappa}')=M_k({\mathscr{P}}_{\Gamma'}/{\kappa}')^G$ ([*c.f.*]{} [@DR VII.3.3] and [@Katz §1.2]). Since $M_k({\mathscr{P}}_{\Gamma}/{\kappa})\otimes_{{\kappa}}{\kappa}'\simeq M_k({\mathscr{P}}_{\Gamma}/{\kappa}')$, we obtain the desired endomorphism $U$ of $M_k({\mathscr{P}}_{\Gamma}/{\kappa})$ by descent, and it is straightforward to check that it is independent of our initial choices of $\ell$ and ${\kappa}'$. By post-composition with the $\sigma$-linear isomorphism[^6] $M_k({\mathscr{P}}_{\Gamma}/{\kappa})\simeq M_k({\mathscr{P}}_{\Gamma}^{\sigma^{-1}}/{\kappa})$ induced by the “exotic isomorphism" of moduli problems ${\mathscr{P}}_{\Gamma}/{\kappa}\simeq {\mathscr{P}}_{\Gamma}^{\sigma^{-1}}/{\kappa}$ [@KM 12.10.1] we obtain a ${\kappa}$-linear map $U^{\#}:M_k({\mathscr{P}}_{\Gamma}/{\kappa})\rightarrow M_k({\mathscr{P}}_{\Gamma}^{\sigma^{-1}}/{\kappa})$.
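To make the auxiliary choice concrete: in the proof of Theorem \[Main\] (§\[ThPf\]) the prime $\ell>3N$ is further required to satisfy $\ell\not\equiv 0,\pm 1\bmod p$, which for prime $\ell$ is exactly the condition $p\nmid|\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})|=\ell(\ell-1)(\ell+1)$. A hypothetical search for the smallest such $\ell$ (the helper names are ours; we assume $p\ge 5$, so that suitable residue classes mod $p$ exist):

```python
def is_prime(n):
    """Trial-division primality test; n is small here, so this suffices."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def auxiliary_ell(p, N):
    """Smallest prime l > 3N with l not congruent to 0, +1, -1 mod p,
    so that p does not divide |SL_2(Z/lZ)| = l(l-1)(l+1)."""
    l = 3 * N + 1
    while not (is_prime(l) and l % p not in (0, 1, p - 1)):
        l += 1
    return l

print(auxiliary_ell(5, 7))  # 23
```

For instance, with $p=5$ and $N=7$ this returns $\ell=23$: indeed $23>21$ and $|\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/23{\ensuremath{\mathbf{Z}}})|=23\cdot 22\cdot 24=12144$ is prime to $5$, so the group algebra in the proof is semisimple.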
When ${\mathscr{P}}_{\Gamma}$ is defined over[^7] ${\mathbf{F}}_p$ in the sense that $R_{\Gamma}$ admits a (necessarily unique) surjection to ${\mathbf{F}}_p$, one has canonically ${\mathscr{P}}_{\Gamma}/{\mathbf{F}}_p = {\mathscr{P}}_{\Gamma}^{\sigma^{-1}}/{\mathbf{F}}_p$ as problems on $(\operatorname{Ell}/{\mathbf{F}}_p)$, and $U^{\#}$ is an endomorphism of $M_k({\mathscr{P}}_{\Gamma}/{\mathbf{F}}_p)$. The maps $U$ and $U^{\#}$ are natural generalizations of Atkin’s $U_p$-operator: \[UpRel\] Suppose that ${\mathscr{P}}_{\Gamma}/{\kappa}$ is representable and let $c$ be any cusp of $X(\Gamma)$ defined over ${\kappa}$. Then $q^{1/e}$ is a uniformizing parameter at $c$ for some divisor $e$ of $N$, and for any $f\in M_k({\mathscr{P}}_{\Gamma}/{\kappa})$, the formal expansions of $Uf$ at $c$ and of $U^{\#}f$ at $c^{\sigma^{-1}}$ are given by $$Uf = \sum_{n\ge 0}\sigma^{-1}(a_{np}) q^{n/e}\quad\text{and}\quad U^{\#}f = \sum_{n\ge 0}a_{np} q^{n/e}\ \text{respectively, where}\ f = \sum_{n\ge 0} a_n q^{n/e}.$$ Using the well-known local description of the Cartier operator on meromorphic differentials ([*e.g.*]{} [@SerreTopology §10, Proposition 8]), the result follows easily from the arguments of Propositions 2.8 and 5.7 of [@tameness]; see also (the proof of) [@tameness Proposition 5.9]. \[ClassicalUp\] Suppose that ${\widetilde{\Gamma}} = \Gamma_{\star}(N)$ for $\star=0,1$. Then $R_{\Gamma}={\ensuremath{\mathbf{Z}}}$ and the resulting endomorphisms $U$ and $U^{\#}$ of $M_k({\mathscr{P}}_{\Gamma}/{\mathbf{F}}_p)$ coincide with the Atkin operator $U_p$, whether or not ${\mathscr{P}}_{\Gamma}/{\mathbf{F}}_p$ is representable. That $R_{\Gamma}={\ensuremath{\mathbf{Z}}}$ is clear, as $\det(\Gamma)=({\ensuremath{\mathbf{Z}}}/N{\ensuremath{\mathbf{Z}}})^{\times}$. By the discussion above, we may reduce to the representable case, and the result then follows from Proposition \[UpRel\] and the $q$-expansion principle. 
Proof of Theorem \[Main\] {#ThPf} ========================= We now prove Theorem \[Main\]. Fix $k$ and let $k_0$ and $k'$ be as in the statement of Theorem \[Main\]. First suppose that ${\mathscr{P}}_{\Gamma}\otimes_{R_{\Gamma}}{\kappa}$ is representable and that $-1$ acts without fixed points on the cusp-labels of $\Gamma$. Using (\[KS\]) and the fact that $a$ has simple zeroes along ${\ensuremath{{\mathrm{ss}}}}$ we compute ([*c.f.*]{} [@KM 12.9.4]) $$\deg \omega = \frac{2g-2}{p} + \frac{1}{p}\deg(\operatorname{cusps}) > \left\lfloor \frac{2g-2}{p}\right\rfloor \ge n(I_\Gamma)$$ where $g$ is the genus of $I_{\Gamma}$. Applying Proposition \[TangoThm\] with $X=I_\Gamma$, $S=\operatorname{cusps}+\delta_{k_0}\cdot{\ensuremath{{\mathrm{ss}}}}$, and ${\mathscr{L}}=\omega$, we conclude from (\[Cartier\]) and the relation $k-k_0=p(k'-k_0)$ that the Cartier operator $$\xymatrix{ {F_*:H^0(I_\Gamma,\Omega^1_{I_\Gamma/{\kappa}}(\operatorname{cusps}+\delta_{k_0}\cdot{\ensuremath{{\mathrm{ss}}}})\otimes \omega^{k-k_0})} \ar[r] & {H^0(I_\Gamma,\Omega^1_{I_\Gamma/{\kappa}}(\operatorname{cusps}+\delta_{k_0}\cdot{\ensuremath{{\mathrm{ss}}}})\otimes \omega^{k'-k_0})} }$$ is surjective whenever $k-k_0 \ge p$. Passing to $\chi^{k_0-2}$-eigenspaces for $({\ensuremath{\mathbf{Z}}}/p{\ensuremath{\mathbf{Z}}})^{\times}$ and appealing to Proposition \[MFInter\] and Corollary \[ClassicalUp\] then completes the proof in this case. Now when $p\le 3$, the hypotheses $N>4$ and ${\widetilde{\Gamma}}\subseteq \Gamma_1(N)$ of Theorem \[Main\] ensure that ${\mathscr{P}}_{\Gamma}\otimes_R{\kappa}$ is representable (as it maps to the moduli problem $[\Gamma_1(N)]$, which is representable for $N\ge 4$ by [@KM 10.9.6]) and that $-1$ acts without fixed points on the cusp-labels of $\Gamma$ [@KM 10.7.4]. If $p\ge 5$, we may choose a prime $\ell > 3N$ with $\ell\not\equiv 0,\pm 1\bmod p$, so that $p\nmid|\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})|$.
Then for $N':=N\ell$ and $\Gamma':=1\times \Gamma \subseteq \operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})\times \operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/N{\ensuremath{\mathbf{Z}}})=\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/N\ell{\ensuremath{\mathbf{Z}}})$, we have (after passing to an appropriate extension ${\kappa}'$ of ${\kappa}$) that ${\mathscr{P}}_{\Gamma'}\otimes_{R_{\Gamma'}}{\kappa}'$ is representable with $-1$ acting freely on the cusp-labels of $\Gamma'$ [@KM 10.7.1, 10.7.3]. We conclude that the $U$-operator induces a surjection of $\kappa[\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})]$-modules $M_{k}({\mathscr{P}}_{\Gamma'}/{\kappa}')\twoheadrightarrow M_{k'}({\mathscr{P}}_{\Gamma'}/{\kappa}')$. Our choice of $\ell$ ensures that the ring $\kappa[\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})]$ is semisimple, so passing to $\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})$-invariants is exact. As the space of $\operatorname{SL}_2({\ensuremath{\mathbf{Z}}}/\ell{\ensuremath{\mathbf{Z}}})$-invariant weight $k$ modular forms for $\Gamma'$ coincides with $M_{k}({\mathscr{P}}_{\Gamma}/{\kappa}')$ ([*c.f.*]{} the definition of $U$ in §\[Geometry\]), passing to invariants and descending from ${\kappa}'$ to ${\kappa}$ then completes the proof of Theorem \[Main\] in the general case. [^1]: During the writing of this paper, the author was partially supported by an NSA Young Investigator grant (H98230-12-1-0238). We are very grateful to David Zureick-Brown for many helpful conversations. [^2]: Tango’s paper, which appeared the year prior to Serre’s [@Serre], is perhaps not as well-known as it should be. [^3]: Note that $\kappa$-linear duality interchanges $\sigma$-linear maps with $\sigma^{-1}$-linear ones.
[^4]: Strictly speaking, Tango requires $g>0$; however, by tracing through Tango’s argument—or by direct calculation—one sees easily that the result holds when $g=0$ as well. [^5]: Here, we follow the notation of [@KM §9.4]: By definition ${\ensuremath{\mathbf{Z}}}[\zeta_N]$ is the finite free ${\ensuremath{\mathbf{Z}}}$-algebra ${\ensuremath{\mathbf{Z}}}[X]/\Phi_N(X)$, where $\Phi_N$ is the $N$-th cyclotomic polynomial and $\zeta_N$ corresponds to $X$, equipped with its natural Galois action of $({\ensuremath{\mathbf{Z}}}/N{\ensuremath{\mathbf{Z}}})^{\times}$. [^6]: Explicitly, this isomorphism sends $f\in M_k({\mathscr{P}}_{\Gamma}/{\kappa})$ to the modular form $f^{\sigma}$ defined by $f^{\sigma}(E,\alpha):=f(E^{\sigma},\alpha^{\sigma})$. [^7]: A sufficient condition for this to happen is that $\det(\Gamma)$ contain the residue class of $p\bmod N$.
--- abstract: 'We report on the synthesis, crystal structure, and magnetic properties of a previously unreported Co$^{2+}$ $S={3 \over 2}$ compound, (C$_{4}$H$_{12}$N$_{2}$)\[CoCl$_{4}$\], based upon a tetrahedral crystalline environment. The $S={3 \over 2}$ magnetic ground state of Co$^{2+}$, measured with magnetization, implies an absence of spin-orbit coupling and orbital degeneracy. This contrasts with systems based upon octahedral, and even previously known tetrahedral (Ref. ), Co$^{2+}$ environments, where a sizable spin-orbit coupling is measured. The compound is characterized with single crystal x-ray diffraction, magnetic susceptibility, infrared, and ultraviolet/visible spectroscopy. Magnetic susceptibility measurements find no magnetic ordering above 2 K. The results are also compared with the previously known monoclinic hydrated analogue.' author: - 'C. Decaroli' - 'A. M. Arevalo-Lopez' - 'C. H. Woodall' - 'E. E. Rodriguez' - 'J. P. Attfield' - 'S. F. Parker' - 'C. Stock' title: '(C$_{4}$H$_{12}$N$_{2}$)\[CoCl$_{4}$\] - tetrahedrally coordinated Co$^{2+}$ without the orbital degeneracy' --- Introduction ============ Molecular magnets containing 3*d* transition metal ions have played a central role in the creation of new materials and guiding theory owing to the interplay between orbital and spin magnetism and the ability to tune both degrees of freedom through the surrounding ligand field environment.[@Kugel46:92] It is the aim of this work to find new quantum molecular magnets which will aid in the understanding of strongly correlated and quantum phenomena.
Co$^{2+}$ (d$^{7}$) compounds have provided an excellent starting point for the creation of model low dimensional magnets owing to the delicate interplay between orbital properties and the surrounding crystalline electric field environment.[@Schoonevel116:2012; @Banci52:1982; @Jaynes1995:34] Due to the orbital degree of freedom, magnets based upon Co$^{2+}$ in an octahedral field tend to be excellent realizations of low spin magnets which display strong quantum fluctuations. [@Abragram1986] The basic orbital properties of Co$^{2+}$ are summarized in Figure 1 for both octahedral ($Dq>0$, where $10Dq\equiv \Delta$ is the splitting between the $t_{2g}$ and $e_{g}$ levels) and tetrahedral ($Dq<0$) coordination. [@Balhausen1962; @Orgel23:1955; @Pappalardo35:1961] Co$^{2+}$ in an octahedral environment has an orbital degree of freedom resulting in an orbital triplet ground state (with an effective orbital angular moment $\tilde{l}=1$) with a spin $S={3\over 2}$. Spin-orbit coupling splits this twelve-fold degenerate state into a doubly degenerate ground state with an effective $j={1\over2}$ and two excited states with $j={3\over2}$ and ${5\over2}$.[@Cowley13:88; @Sakurai167:68] This doubly degenerate ground state has been exploited to study the magnetic response of low-spin chains in CoNb$_{2}$O$_{6}$ and CsCoX$_{3}$ (with X=Br and Cl) salts where low-energy fluctuations within the doublet $j={1\over2}$ manifold reside well below the $j={3\over2}$ level set by spin-orbit coupling.[@Coldea327:2010; @Nagler27:1983; @Cabrera90:2014] These systems have been used to study the physics of the S=${1 \over 2}$ Ising chain. ![\[matlab\] The Tanabe-Sugano diagram and orbital states for octahedral ($Dq>0$) and tetrahedral ($Dq<0$) crystalline electric field environments. The orbital degeneracy in an octahedral crystalline field results in a spin transition from a $^{4}T_{1}$ (high spin - HS) state to a $^{2}E$ (low spin - LS) state for large values of $Dq$. 
For negative values of $Dq$ (tetrahedral environment), the orbital triplet is replaced by a singlet state ($^{4}A_{2}$) with $S={3 \over 2}$. However, real systems display a coupling between this singlet ground state and the excited orbital triplet resulting in a ground state orbital degeneracy. The diagram was calculated taking the Racah parameters $B$ and $C$ to be the free ion values of 0.12 eV and 0.56 eV respectively (see Ref. for discussion and further definitions). This work is centred around the goal of discovering new $S={3\over 2}$ Co$^{2+}$ magnets based on a tetrahedral crystal field ($Dq<0$, left side of the plot).](tanabe.pdf){width="8.7cm"} It is interesting to search for $S={3\over2}$ low dimensional magnets given that these systems are not obviously classical or quantum mechanical (see for example, Ref. which discusses the relation between spin size and quantum corrections). As reviewed in Figure 1, Co$^{2+}$ in a tetrahedral environment possesses a spin state of $S={3\over2}$ and this can be modelled by reversing the sign of the crystal field splitting. In a tetrahedral environment, the orbital degeneracy and complications due to spin-orbit coupling present in the $^{4}T_{1}$ octahedral state are removed and replaced by an orbital singlet $^{4}A_{2}$ state with $S={3 \over 2}$. Despite this expectation based upon crystal field arguments, Co$^{2+}$ in a tetrahedral environment typically does display an orbital degree of freedom [@Cotton1999; @Horrocks76:98; @Romerosa16:2003] complicating the system and the ability to apply such model magnets to test the theories described above. In this report, we discuss the magnetic and structural properties of Co$^{2+}$ in a series of piperazine based molecular magnets. We will show that the magnetic ground state of these systems lacks a strong measurable orbital degree of freedom, making them candidate model examples of $S={3\over 2}$ magnets. This paper is divided into five sections including this introduction.
We first note the solution-based synthesis of the previously unreported Co$^{2+}$ (C$_{4}$H$_{12}$N$_{2}$)\[CoCl$_{4}$\] compound and compare this with the known growth of the hydrated version. We then present the crystal structure and magnetic susceptibility illustrating the underlying $S={3 \over 2}$ nature of this compound. We finally present spectroscopic data measuring the orbital transitions and compare these with other Co$^{2+}$ tetrahedrally based compounds. Synthesis ========= Motivated by piperazine based Cu$^{2+}$ quantum magnets, we investigated similar Co$^{2+}$ based compounds. [@Stone95:2006] Piperazinium Tetrachlorocobaltate Monohydrate (C$_{4}$H$_{12}$N$_{2}^{2+}\cdot$CoCl$_{4}^{2-}\cdot$H$_{2}$O - denoted as $PTCM$ for the remainder of this paper) was originally reported by Tran Qui and Palacios and is monoclinic (space group P2$_{1}$/a) with $a$=14.017 Å, $b$=12.706 Å, $c$=6.559 Å, and $\beta$=87.21$^{\circ}$.[@Qiu46:90] The structure consists of tetrahedrally coordinated Co$^{2+}$ and piperazinium layers coupled only through hydrogen bonding. The Co$^{2+}$-Co$^{2+}$ distances are large (the drawn bond lengths labelled $J$ in Figure 2 $b)$ are $\sim$ 6-7 Å), particularly in comparison to other Co$^{2+}$ materials such as CoO where the Co$^{2+}$-Co$^{2+}$ distances are $\sim$ 3-4 Å.[@Greenwald53:6] Because of the large distances the magnetic properties were not characterized and the tetrahedra were considered as isolated. Based on evaporation out of solution, we were able to synthesize either $PTCM$ or the previously unreported linear chain variant (C$_{4}$H$_{12}$N$_{2}$)\[CoCl$_{4}$\] (termed $PTC$ in this report). CoCl$_{2}$ and piperazine (C$_{4}$H$_{10}$N$_{2}$) were mixed in a solution in a 1:4 molar ratio. Piperazinium is the conjugate acid of the basic piperazine building block and is formed upon protonation in HCl.
Therefore, both compounds were dissolved separately in concentrated (37%) hydrochloric acid and then mixed in a single solution. ![\[structure\] $a)$ displays the structure of (C$_{4}$H$_{12}$N$_{2}$)\[CoCl$_{4}$\] -“$PTC$" with inter ($J_{1}$) and intra ($J_{2}$) exchange pathways highlighted. $b)$ compares this to the structure of “$PTCM$". The lower panel shows typical crystal morphologies on millimetre scaled paper.](structure.pdf){width="8.7cm"} We initially followed the synthesis procedure for Cu$^{2+}$ quantum magnets based on piperazine and found two classes of molecules depending on the synthesis procedure. The evaporation temperature and hydrated nature of the CoCl$_{2}$ were found to be key to whether crystals of $PTCM$ or $PTC$ were produced. Rapid evaporation (temperatures between 60 $^{\circ}$C and 200 $^{\circ}$C) formed $PTC$ needle crystals. Typical lengths were 3-5 mm; however, crystals up to a few centimetres in length were also synthesized (Figure 2 $a)$). Anhydrous CoCl$_{2}$ was found to always produce $PTC$ crystals while hydrated versions were found to produce $PTCM$ ($PTC$) from low (high) temperature evaporation. The needle-like nature of $PTC$ contrasts with the cuboid nature of $PTCM$ formed at lower temperatures (Figure 2 $b$). Single crystal x-ray diffraction ================================ Single crystals of both $PTCM$ and $PTC$ were isolated and their structures were studied via single crystal x-ray diffraction as summarized in Tables \[table\_frac\] and \[table\_cryst\]. Experimental information is provided in Table \[table\_xray\]. The structure of $PTCM$ was verified, using a similar experimental configuration to that listed in Table \[table\_xray\], to be the same as reported previously.
[@Qiu46:90] $PTC$ was found to be isostructural with the Zn analogue reported by Sutherland *et al*.[@Sutherland65:2009] The spacing between the tetrahedra in the lattice environment shows an anisotropy in one direction which has led us to investigate whether this is a candidate $S={3 \over 2}$ chain. The distance between the tetrahedra for the two nearest neighbours is given in Table \[table\_cryst\], whereas the next largest directions are 8.32 Å and 8.12 Å. The difference between $J_{1}$ and $J_{2}$ defines the growth direction in Figure 2. This contrasts with $PTCM$ which has a definitive three dimensional structure. [c c]{} Space Group & P2$_{1}$2$_{1}$2$_{1}$\ *a* (Å) & 8.2876(6)\ *b* (Å) & 11.1324(9)\ *c* (Å) & 11.9121(9)\ Crystal System & orthorhombic\ Volume (Å$^{3}$) & 1099.02(15)\ Z & 4\ Formula weight & 288.89\ Calculated density (mg/mm$^{3}$) & 1.746\ $\lambda$ Mo K$\alpha$ (Å) & 0.71073\ Monochromator & graphite\ no. of reflections collected & 40260\ no. of independent reflections & 4190\ Absorption coefficient (mm$^{-1}$) & 2.480\ F(000) & 580.0\ R$_{1}$, wR$_{2}$ (%) & 4.35, 7.40\ $R_{1}=\sum||F_{0}|-|F_{c}||/\sum |F_{0}|$ &\ $wR_{2}=\left( \sum \left [w (F_{0}^{2}-F_{c}^{2})^{2} \right ]/[w(F_{0}^{2})^{2}] \right)^{1/2}$ &\ \[table\_xray\] [c c c c c]{} *Atom* & *x* & *y* & *z* & *U(eq)*\ Co1 & 2285.4(3) & 358.0(2) & 5918.4(2) & 32.79(7)\ Cl1 & 4362.5(6) & -977.8(5) & 5941.7(5) & 39.06(11)\ Cl2 & 2378.7(6) & 1419.6(4) & 7541.6(4) & 36.25(11)\ Cl3 & 2513.9(7) & 1501.7(5) & 4360.4(4) & 43.21(13)\ Cl4 & -136.2(6) & -658.4(5) & 5981.7(5) & 38.67(11)\ \[table\_frac\] [c c c c]{} Co-Co($J1$) & 6.043(1) Å& Co-Co($J2$) & 7.743(1)Å\ Co1-Cl1 & 2.2749(6) Å& Co1-Cl2 & 2.2674(6)Å\ Co1-Cl3 & 2.2586(6) Å& Co1-Cl4 & 2.3052(6)Å\ Cl1-Co1-Cl4 & 109.73(2)$^{\circ}$ & Cl2-Co1-Cl1 & 107.73(2)$^{\circ}$\ Cl2-Co1-Cl4 & 104.93(2)$^{\circ}$ & Cl3-Co1-Cl1 & 108.36(2)$^{\circ}$\ Cl3-Co1-Cl2 & 113.83(2)$^{\circ}$ & Cl3-Co1-Cl4 & 112.12(2)$^{\circ}$\ \[table\_cryst\] Magnetic 
susceptibility ======================= Magnetic susceptibility measurements were performed on a Quantum Design MPMS system. Figure 3 $a)$ shows the magnetization as a function of magnetic field applied parallel and perpendicular to the chain axis. The high-field limit tends towards $\sim$ 3 $\mu_{B}$, consistent with expectations based upon Co$^{2+}$ in a tetrahedral crystal field environment with $S={3\over2}$. Panel $b)$ shows the susceptibility and its inverse for $PTC$ and $PTCM$, respectively, showing no sharp anomaly indicative of magnetic order above 2 K. A Curie fit ($\chi \propto {1\over T}$) for $PTC$ gives a moment of 3.7 $\mu_{B}$, in close agreement with the expected value of 3.9 $\mu_{B}$ for $S={3\over2}$. The moment is consistent with weak or negligible mixing with the excited $^{4}T_{2}$ (Fig. 1) state and non-measurable spin-orbit coupling ($\lambda$), which would otherwise be reflected in an enhanced effective moment of $\mu=3.89-1.559 {\lambda \over |Dq|}$ (with $\lambda$ negative). [@Cotton83:1961; @Holm32:1960] The lack of substantial spin-orbit coupling differs from previously studied tetrahalo complexes. [@Holm59:31] ![\[squid\] $a)$ plots the magnetization of PTC as a function of applied field parallel and perpendicular to the chain axis. The dashed lines are fits to a Brillouin function used to cross-check the $g$ factors derived from the temperature dependence. The curves tend to $\sim$ 3 $\mu_{B}$, as expected given the crystalline electric field environment. As confirmed by the fits, the saturation field is larger than the 7 T field range probed. $b)$ shows the susceptibility and inverse susceptibility for both $PTC$ and $PTCM$. The solid lines are fits to a Curie-type susceptibility ($\propto {1\over T} + A$).](chi.pdf){width="8.7cm"} The $g$ factors were derived from fits to the susceptibility (Fig. 3) to be $g_{||}$=2.49 and $g_{\perp}$=1.61. 
[@Kuzian74:2006; @Macfarlane47:1967] The powder average, $\tilde{g}\equiv {1\over 3}g_{||} +{2 \over 3}g_{\perp}$=1.90, is close to the spin-only value of 2 and confirms the absence of significant orbital mixing and spin-orbit coupling. The ratio $g_{||}/g_{\perp}$=1.6 is close to the value of 1.8 derived from the $M$ vs $H$ data (Fig. 3 $a)$) using Brillouin functions. The anisotropy of $g$ may reflect the slight crystallographic distortion of the tetrahedron. As seen in Fig. 3, the susceptibility is described by a Curie term plus a temperature-independent term, $\chi \propto {1\over T} +A$. The results were consistent with a zero or negligible Curie-Weiss temperature, indicating very weak interactions, analogous to Cs$_{3}$CoCl$_{5}$ and Cs$_{3}$CoBr$_{5}$, which contain tetrahedrally coordinated Co$^{2+}$ with similarly large Co$^{2+}$-Co$^{2+}$ distances.[@Stapele66:1966] UV-Vis and IR spectroscopy ========================== ![\[uv\_plus\_IR\] The IR and UV-Vis spectra for PTC illustrating electronic transitions at $\sim$ 0.7 eV and $\sim$ 1.8 eV. Following previous studies based upon Co$^{2+}$ in a tetrahedral environment, we assign both of these transitions as excitations from the ground $^{4}A_{2}$ orbital singlet to the excited $^{4}T_{1}$ orbital levels.](ir_plus_uv.pdf){width="8.7cm"} Given the lack of any spin-orbit coupling, in contrast to previous tetrahedrally coordinated Co$^{2+}$ complexes, we investigated the crystal field excitations to determine the orbital levels and compare them with previous Co$^{2+}$ compounds. To extract the crystal field splitting and determine the energy scale of the orbital transitions for comparison with previous work, we performed both UV-Vis and IR spectroscopy. UV-Vis data were recorded in solution using a JASCO V-670 Series spectrophotometer. 
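The spin-only expectations used in the susceptibility analysis above follow from $\mu_{\rm eff}=g\sqrt{S(S+1)}$ together with the powder average of the fitted $g$ factors; a minimal check, with the fitted values copied from the text:

```python
import math

S = 1.5                      # S = 3/2 for tetrahedral Co2+ (d7 configuration)
g_par, g_perp = 2.49, 1.61   # g factors fitted from the susceptibility above

# Spin-only effective moment for g = 2
mu_spin_only = 2.0 * math.sqrt(S * (S + 1))    # ~3.87 mu_B, quoted as 3.9 mu_B

# Powder average g and anisotropy ratio
g_powder = (g_par + 2 * g_perp) / 3.0          # ~1.90, close to the spin-only value 2
ratio = g_par / g_perp                         # ~1.55, quoted as 1.6

print(f"mu(spin-only) = {mu_spin_only:.2f} mu_B")
print(f"g(powder)     = {g_powder:.2f}")
print(f"g||/g_perp    = {ratio:.2f}")
```

This is an illustrative recomputation only; the quoted experimental uncertainty in the fits is not modeled here.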
Infrared spectra (400 - 8000 cm$^{-1}$) were recorded with a Bruker Vertex 70 FTIR spectrometer (TGS detector, 4 cm$^{-1}$ resolution, 64 scans) using attenuated total internal reflection (ATR) with a Bruker Platinum ATR accessory and a diamond ATR element. The results of both experiments are illustrated in Figure 4. Two broad excitations are observed at $\sim$ 0.7 eV and 1.8 eV. Following the systematic work by Cotton *et al.*  [@Cotton83:1961], we assign these transitions to excitations from the $^{4}A_{2}$ orbital singlet to excited $^{4}T_{1}$ orbital triplets. Based on the energy positions and the Tanabe-Sugano diagram (Fig. 1), we estimate that the crystal field splitting $Dq/B \sim 0.3$, implying $\Delta\equiv 10Dq\sim$ 350 meV ($\sim$ 2800 cm$^{-1}$). It is interesting to note that the crystal field splitting is significantly smaller than in octahedral variants such as NiO and CoO, where $10Dq\sim$ 1 eV. [@Kim84:11; @Cowley13:88; @Haverkort99:2007; @Larson99:2007; @Schoonevel116:2012] The excitations are nearly identical (though slightly lower in energy) to those reported for the chlorine variants [@Cotton83:1961], where mixing between orbital states was implicated as the origin of a spin-orbit coupling resulting in observed magnetic moments of $\sim$ 4 $\mu_{B}$. Similar effects can be seen in other four-coordinate cobalt compounds. [@Wilson4:1985; @Dong11:50] In contrast to these compounds, (C$_{4}$H$_{12}$N$_{2}$)\[CoCl$_{4}$\] is an ideal example of an ionic material in which no orbital mixing is present and the magnetic ground state is an $S={3\over2}$ state with no spin-orbit coupling. Summary ======= We have presented data on (C$_{4}$H$_{12}$N$_{2}$)\[CoCl$_{4}$\] - an $S={3\over 2}$ weakly coupled linear chain based upon Co$^{2+}$ in a tetrahedral environment. The compound displays no strong spin-orbit coupling and accommodates an $S={3\over2}$ ground state. 
While, as noted above, most systems with Co$^{2+}$ in tetrahedral coordination display strong orbital effects, as evidenced through large magnetic moments, there are some counterexamples. Most notably, Co$^{2+}$ in a square planar environment results in a low-spin configuration. [@Everett1965:87] Co$^{2+}$ complexes supported by the \[PhB(CH$_{2}$PPh$_{2}$)$_{3}$\]$^{-}$ ligand also display small magnetic moments and hence small spin-orbit effects. [@Jenkins2002:124] These complexes, similar to the compounds described here, were slightly distorted, indicating the importance of a distortion away from a perfect tetrahedron in accommodating an $S={3\over2}$ ground state. In summary, $PTCM$ and $PTC$ crystals were synthesized using slow and fast evaporation techniques, respectively. Both crystals have magnetic properties determined by the Co$^{2+}$ ion with a d$^{7}$ electronic configuration in a tetrahedral environment. The resulting ligand environment forces the Co$^{2+}$ ion into an orbital singlet state, removing the effects of orbital degeneracy and spin-orbit coupling present in the octahedral counterpart. Susceptibility measurements find no observable magnetic order above 2 K in either $PTC$ or $PTCM$ and are consistent with weak exchange in both systems. It would be interesting to pursue magnetic studies of similar systems which show significant exchange coupling as excellent realizations of model quantum magnets. These compounds might prove to be useful model systems for testing theoretical predictions of quantum magnetism, e.g. measuring quantum fluctuations directly with neutron scattering. We are grateful for funding from the Royal Society of Edinburgh, the Royal Society of London, and the Carnegie Trust for the Universities of Scotland. 
--- author: - 'Jeong-Man Park$^{1,2}$ and Michael W. Deem$^1$' bibliography: - 'turbulence.bib' title: ' A Statistical Mechanics Model of Isotropic Turbulence Well-Defined within the Context of the $\epsilon$ Expansion ' --- Introduction ============ Turbulence remains an outstanding problem in classical mechanics. Early on, several self-consistent or closure-based statistical models were introduced, with perhaps Kraichnan’s Direct Interaction Approximation the best known [@Kraichnan1965]. Within the statistical mechanics literature, attention has focused upon the random force model that De Dominicis and Martin [@Martin] generalized from Forster, Nelson, and Stephen’s model of a randomly stirred equilibrium fluid [@Forster]. This model has received quite a bit of attention, and its transport properties have been further examined by Yakhot and Orszag and by Avellaneda, Majda, and co-workers [@Orszag; @Orszag2; @Majda2]. Various treatments, up to and including field-theoretic $\epsilon$ expansions, have been applied to the random force model of turbulence. As pointed out by Eyink, however, even the $\epsilon$ expansion does not lead to a controlled calculation in the random force model, because the term representing the effects of high Reynolds numbers must be calculated to arbitrarily high order in perturbation theory, even for an $O(\epsilon)$ calculation [@Eyink]. More recently, a variety of novel renormalization group techniques has been applied to the problem of Navier-Stokes turbulence [@Liao1991; @Tomassini1997; @Esser1999; @Giles2001]. The experimentally observed scaling behavior of turbulent energy dissipation, often called the Kolmogorov energy cascade, suggests that there may be a strong analogy between critical phenomena and turbulence. Indeed, the search for such an analogy motivated much of the field-theoretic work on fluid turbulence [@Goldenfeld]. 
Just as there are wild fluctuations in particle density near a critical point, so too are there large fluctuations in the fluid velocity in high-Reynolds-number turbulence. This similarity suggests that the effects of turbulence may be modeled by a random, velocity-dependent force, just as the effects of critical fluctuations can be modeled by a random, density-dependent force in the standard $\phi^4$ model. While there are likely random forces that are independent of the fluid velocity in the context of turbulence, there are also very likely random forces that are velocity dependent. In fact, the turbulent force should really depend on the gradient of the velocity, as it is not simply large velocities that lead to turbulence, but rather regions of high gradient, such as walls or boundaries, that lead to turbulent behavior. Such boundary roughness is one mechanism for breaking the symmetry from laminar to turbulent fluid flow. Indeed, random boundary roughness along the walls generates velocity-gradient-dependent forces in the bulk that are quenched in time. Moreover, in practical, engineering-type calculations, the turbulent forces are often related to velocity gradients by a constitutive relation that contains a “turbulent viscosity” parameter, the simplest of these relations being linear [@Bird]. Following this line of reasoning, we introduce a new statistical mechanics model for turbulence that renormalizes the effect of turbulent stresses into a velocity-gradient-dependent term. This model will turn out to be well-defined within the $\epsilon$ expansion. That is, the renormalization group theory takes into account all physical effects of this model, consistently to order $\epsilon$. Due to its similarity with turbulent-viscosity-type models, this approach may lead to a closer connection with practical turbulence calculations. We introduce our velocity-gradient-dependent random force model in Sec. 2. 
The problem is cast in the framework of time-dependent field theory in Sec. 3. Renormalization group flow equations are also calculated in this section, and three appendices provide details of the renormalization group calculations. The behavior of these flow equations in two and three dimensions, and the predictions for the Kolmogorov energy cascade, are described in Sec. 4. Nontrivial intermittency corrections to the single-time structure functions are calculated by an operator product expansion in Sec. 5. We conclude in Sec. 6. Velocity-Gradient-Dependent Random Force Model ============================================== Our goal is to write a form of the Navier-Stokes equation that contains a random piece, the random piece representing the statistical effects of the turbulence. The Navier-Stokes equation with a random force is $$\partial_t v_i + \sum_k \Pi_{ik} \sum_j v_j \partial_j v_k = \nu \nabla^2 v_i + f_i \ , \label{1}$$ where $\nu = \mu/\rho$ is the kinematic viscosity, and $f_i = \sum_k \Pi_{ik} (F_k - \partial_k P)/\rho$ is the total body force on the fluid. The presence of the projection operator $\hat \Pi_{ik}({\bf k}) = \delta_{ik} - k_i k_k/k^2$ in these formulas ensures that the incompressibility condition $\nabla \cdot {\bf v}=0$ is maintained [@Forster]. The Fourier transform is defined by $\hat f({\bf k}) = \int d {\bf x} f({\bf x}) \exp(i {\bf k} \cdot {\bf x})$. We choose the random force to depend on the gradient of the velocity: $$f_i({\bf x},t) = - \gamma_{ijk}({\bf x}) \partial_j v_k({\bf x},t) \ . \label{2}$$ We assume that $\gamma_{ijk}$ is symmetric under exchange of $j$ and $k$, so that the force looks like a turbulent stress. We average over the statistics of this force using a field-theoretic representation of the Navier-Stokes equation. 
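The projection operator $\hat \Pi_{ik}({\bf k}) = \delta_{ik} - k_i k_k/k^2$ used throughout is idempotent and annihilates the longitudinal component of any mode, which is how incompressibility is enforced wavevector by wavevector. A minimal numerical sketch (the wavevector and velocity amplitude below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=3)                      # arbitrary wavevector
Pi = np.eye(3) - np.outer(k, k) / (k @ k)   # transverse projector Pi_ik(k)

v = rng.normal(size=3)                      # arbitrary velocity amplitude
v_perp = Pi @ v                             # divergence-free (transverse) part

assert np.allclose(Pi @ Pi, Pi)             # idempotent: Pi^2 = Pi
assert abs(k @ v_perp) < 1e-12              # k . v_perp = 0, i.e. incompressibility
assert np.allclose(Pi @ k, 0)               # the longitudinal direction is annihilated
print("projector checks passed")
```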
We choose the correlation function to be $$\langle \hat \gamma_{ijk}({\bf k}_1) \hat \gamma_{lmn}({\bf k}_2) \rangle = (2 \pi)^d \delta({\bf k}_1+{\bf k}_2) \vert {\bf k}_1 \vert^{-y} D_{ijk}^{lmn} \ , \label{3}$$ where $$\begin{aligned} && D_{ijk}^{lmn}= D_\alpha( \delta_{il} \delta_{jm} \delta_{kn}+ \delta_{in} \delta_{jm} \delta_{kl} \nonumber \\ && + \delta_{il} \delta_{jn} \delta_{km} + \delta_{im} \delta_{jl} \delta_{kn} + \delta_{im} \delta_{jn} \delta_{kl} + \delta_{in} \delta_{jl} \delta_{km} ) \nonumber \\ && + D_\beta ( \delta_{ij} \delta_{lm} \delta_{kn} + \delta_{ik} \delta_{jm} \delta_{ln} + \delta_{ik} \delta_{jn} \delta_{lm}+ \delta_{ij} \delta_{km} \delta_{ln} ) \ . \nonumber \\ \label{4}\end{aligned}$$ Initially, we treat this as a mathematical problem, taking $y$ to be arbitrary. Later, we determine $y$ by requiring that the transport properties of turbulence are reproduced. This scaling form of the correlation function, Eq. (\[3\]), applies only in the inertial, Kolmogorov regime, for wavevectors below an upper cutoff related to the inverse of the dissipation length scale and above a lower cutoff related to the inverse of the so-called integral length scale. It is this Kolmogorov scaling regime that is of interest in the present work. Given the form of Eq. \[2\], the parameters $\sqrt D_\alpha$ and $\sqrt D_\beta$ can be viewed as modeling gradients of the turbulent viscosity. Renormalization Group Calculations ================================== We write the Navier-Stokes equation in field-theoretic form so that the renormalization group can be applied systematically within the $\epsilon$ expansion [@Justin]. Within the field-theoretic formalism, any observable can be calculated. The average velocity, for example, is given by $v_i({\bf x},t) = \langle b_i({\bf x},t) \rangle$, where the average over the $b$ field is taken with respect to the weight $\exp(-S)$. Using Eqs. 
\[1\]–\[4\], we arrive at the following action: $$\begin{aligned} S &=& \int_{\bf k} \int d t~ \hat { \bar b}_i(-{\bf k},t) [ \partial_t + \nu k^2 + \delta(t) ] \hat b_i ({\bf k},t) \nonumber \\ &&+ i \lambda \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3} \int d t~ (2 \pi)^d \delta({\bf k}_1 + {\bf k}_2 + {\bf k}_3) \nonumber \\ &&~~~~~~~~~~\times k_{1_j} \hat{\bar b}_k^\perp ({\bf k}_1,t) \hat b_k^\perp ({\bf k}_2,t) \hat b_j^\perp({\bf k}_3,t) \nonumber \\ &&+ \frac{1}{2} \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3 {\bf k}_4} \int d t_1 d t_2 ~ (2 \pi)^d \delta({\bf k}_1 + {\bf k}_2 + {\bf k}_3 + {\bf k}_4 ) \nonumber \\ &&~~~~~~~~~~\times {{\bf k}_2}_j {{\bf k}_4}_m D_{ijk}^{lmn} \vert {\bf k}_1 + {\bf k}_2\vert^{-y} \nonumber \\ &&~~~~~~~~~~\times \hat {\bar b}_i^\perp ({\bf k}_1, t_1) \hat b_k^\perp ({\bf k}_2, t_1) \hat {\bar b}_l^\perp ({\bf k}_3, t_2) \hat b_n^\perp ({\bf k}_4, t_2) \ . \nonumber \\ \label{5}\end{aligned}$$ The notation $\int_{\bf k}$ stands for $\int d {\bf k} / (2 \pi)^d$, the integrals over time are from $t=0$ to some large time $t=t_f$, and the summation convention is implied. This action is written in terms of the divergence-free part of the field, $\hat b_i^\perp({\bf k}) = \sum_k \hat \Pi_{ik}({\bf k}) \hat b_k({\bf k})$. We have used the Feynman gauge, adding in a curl-free component in the quadratic terms to make later calculations easier. Initially $\lambda=1$. We have used the replica trick [@Kravtsov1] to incorporate the statistical disorder, but have suppressed these details since they do not enter in a one-loop calculation. We now apply renormalization group theory to this action. It is important to note that the fields must all have zero average value before the renormalization group is applied [@Nelson1998; @Deem2001], otherwise it would not be correct to truncate perturbation theory at any finite order. 
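The tensor structure $D_{ijk}^{lmn}$ entering the disorder vertex of this action is built so that the correlator respects the assumed $j\leftrightarrow k$ (and $m\leftrightarrow n$) symmetry of $\gamma_{ijk}$ and is symmetric under exchange of the two $\gamma$ factors. These symmetries can be verified directly; a small numpy sketch with the illustrative choice $D_\alpha=D_\beta=1$:

```python
import numpy as np

d = 3
e = np.eye(d)

# D_alpha part: the six pairings coupling (i,j,k) fully to (l,m,n)
Da = (np.einsum('il,jm,kn->ijklmn', e, e, e) + np.einsum('in,jm,kl->ijklmn', e, e, e)
      + np.einsum('il,jn,km->ijklmn', e, e, e) + np.einsum('im,jl,kn->ijklmn', e, e, e)
      + np.einsum('im,jn,kl->ijklmn', e, e, e) + np.einsum('in,jl,km->ijklmn', e, e, e))
# D_beta part: the four pairings with a trace within each gamma factor
Db = (np.einsum('ij,lm,kn->ijklmn', e, e, e) + np.einsum('ik,jm,ln->ijklmn', e, e, e)
      + np.einsum('ik,jn,lm->ijklmn', e, e, e) + np.einsum('ij,km,ln->ijklmn', e, e, e))
D = Da + Db                                 # D_alpha = D_beta = 1 for illustration

assert np.allclose(D, D.transpose(0, 2, 1, 3, 4, 5))  # j <-> k (symmetry of gamma_ijk)
assert np.allclose(D, D.transpose(0, 1, 2, 3, 5, 4))  # m <-> n (symmetry of gamma_lmn)
assert np.allclose(D, D.transpose(3, 4, 5, 0, 1, 2))  # exchange of the two gamma factors
print("D tensor symmetries verified")
```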
If there were an average velocity, the action in (\[5\]) would be different, containing a term of the form $-i {\bf k} \cdot \langle {\bf v} \rangle$ in the propagator. The vertices in the theory are shown in Fig. \[fig1\]. ![(a) Diagram representing the propagator. The arrow points in the direction of increasing time, and double lines represent the bar fields. (b) Convection vertex $\lambda$. (c) Disorder vertex $D_\alpha$ and $D_\beta$.[]{data-label="fig1"}](fig1.eps){height="2in"} From power counting, the upper critical dimension for this theory is $d_{\rm c} = 2+y$. Note that this upper critical dimension is exactly defined once the model is specified [@Justin]. The deviation of the physical dimension from the upper critical dimension is parameterized by $\epsilon = d_{\rm c}-d$. We use the momentum shell procedure, where fields on a shell of differential width $\ln a=dl$ are integrated out, $\Lambda/a < k < \Lambda$. Note that the combination $dl$ invariably means a differential on $l$; in all other cases, the factor $d$ denotes the physical dimension. As usual, we rescale time by the dynamical exponent $t' = a^{-z} t$ and distance by ${\bf k}_\perp' = a {\bf k}_\perp $. The $b$ field is scaled as $\hat{ b}'({\bf k}',t') = a^{z-1-d + \alpha} \hat{ b}( {\bf k},t)$. To maintain dimensional consistency, so that the $b$ field scales as a velocity, one must set $\alpha = 0$ [@Deem2001]. To keep the time derivative in $S$ constant [@Deem1], the $\bar b$ field is scaled as $\hat{\bar b}'({\bf k}',t') = a^{1-z - \alpha} \hat{\bar b}( {\bf k},t)$. 
In the loop calculation, we make use of the relation for the reference system averages $$\begin{aligned} \langle \hat{ \bar b}_i^\perp ({\bf k}_1,t_1) \hat b_j^\perp ({\bf k}_2, t_2) \rangle_0 &=& (2 \pi)^d \delta({\bf k}_1 + {\bf k}_2) \nonumber \\ && \times \hat \Pi_{ij}({\bf k}_2) \hat G_0({\bf k}_2, t_2-t_1) \nonumber \\ \label{5aaa}\end{aligned}$$ where $\hat G_0({\bf k},t) = \exp[-\nu k^2 t] \Theta(t)$, and $\Theta(t) = 1$ if $t > 0$ and 0 otherwise. Note that elimination of modes at one end of the spectrum by perturbation theory is the standard procedure in renormalization group theory [@Justin]. Use of Eq. (\[5aaa\]) does not imply that the system is somehow Gaussian, as the parameters within the renormalized theory are flowing. The critical properties of the Ising model at the non-Gaussian Wilson-Fisher fixed point, for example, are analyzed in exactly this way [@Justin]. We make use of the rotational averages: $\langle k_l k_u \rangle_\Omega = \delta_{lu} k^2 / d$, $\langle k_l k_m k_s k_u \rangle_\Omega = ( \delta_{lm} \delta_{us} + \delta_{lu} \delta_{ms} + \delta_{ls} \delta_{mu}) k^4 / [d(d+2)]$, and $\langle k_l k_m k_n k_s k_t k_u\rangle_\Omega = M_{lmn}^{stu} k^6 / [d (d+2) (d+4)]$, where the function $M_{lmn}^{stu}$ is equal to all possible couplings of pairs of the arguments: $$\begin{aligned} M_{lmn}^{stu} &=& \delta_{lt} \delta_{ms} \delta_{nu} + \delta_{lt} \delta_{mn} \delta_{su} + \delta_{lt} \delta_{mu} \delta_{ns} \nonumber \\ &+& \delta_{lm} \delta_{ts} \delta_{nu} + \delta_{lm} \delta_{nt} \delta_{su} + \delta_{lm} \delta_{tu} \delta_{ns} \nonumber \\ &+& \delta_{ls} \delta_{mt} \delta_{nu} + \delta_{ls} \delta_{mn} \delta_{tu} + \delta_{ls} \delta_{mu} \delta_{nt} \nonumber \\ &+& \delta_{ln} \delta_{mt} \delta_{su} + \delta_{ln} \delta_{ms} \delta_{tu} + \delta_{ln} \delta_{mu} \delta_{st} \nonumber \\ &+& \delta_{lu} \delta_{mt} \delta_{ns} + \delta_{lu} \delta_{mn} \delta_{st} + \delta_{lu} \delta_{ms} \delta_{nt} \ . 
\label{6}\end{aligned}$$ The one loop contributions are shown in Fig. \[fig2\]. ![One-loop diagrams: (a) self-energy diagrams contributing to $\nu$, (b) vertex diagrams contributing to $\lambda$, and (c) vertex diagrams contributing to $D_\alpha$ and $D_\beta$.[]{data-label="fig2"}](fig2.eps){height="4in"} There are 13 diagrams of the form Fig. \[fig2\]a, detailed calculation of which is described in Appendix A. There are 13 similar diagrams of the form Fig. \[fig2\]b, detailed calculation of which is described in Appendix B. Finally, there are 100 diagrams of the form Fig. \[fig2\]c, and a detailed discussion of them is given in Appendix C. These last diagrams require some care in their calculation, as they contribute to the complex tensor structure of $D_{ijk}^{lmn}$ in Eq. (\[5\]). These last diagrams make contributions in the form $$\begin{aligned} &&{{\bf k}_2}_j {{\bf k}_4}_v \hat {\bar b}_i^\perp ({\bf k}_1, t_1) \hat b_k^\perp ({\bf k}_2, t_1) \hat {\bar b}_r^\perp ({\bf k}_3, t_2) \hat b_w^\perp ({\bf k}_4, t_2) \nonumber \\ &&~~~~~~~~~~ \times M_{lmn}^{stu} D_{ijk}^{lmn} D_{rst}^{uvw} \ .\end{aligned}$$ There are 15 terms of the form Fig. \[fig2\]c when the sums over $l,m,n$ and $s,t,u$ are taken. Each of these terms corresponds to one of the 15 terms in Eq. (\[6\]), and each gives a contribution that is exactly of the form in Eq. (\[5\]). The theory is, therefore, self-consistent in that no terms are generated at this order that are not of the original form, and the symmetry of the original theory is maintained. 
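The rotational averages used in evaluating these diagrams can be spot-checked by Monte Carlo over uniformly distributed unit vectors; for $d=3$, for example, $\langle k_l k_u\rangle_\Omega=\delta_{lu}/3$ and the fourth-moment coefficient is $1/[d(d+2)]=1/15$. A sketch (sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 3, 400_000
k = rng.normal(size=(N, d))
k /= np.linalg.norm(k, axis=1, keepdims=True)      # uniform unit vectors on S^{d-1}

# second moment: <k_l k_u>_Omega = delta_lu / d
second = np.einsum('al,au->lu', k, k) / N
assert np.allclose(second, np.eye(d) / d, atol=5e-3)

# fourth moment: <k_l k_m k_s k_u>_Omega = (dd + dd + dd) / [d (d+2)]
fourth = np.einsum('al,am,as,au->lmsu', k, k, k, k) / N
e = np.eye(d)
pred = (np.einsum('lm,us->lmsu', e, e) + np.einsum('lu,ms->lmsu', e, e)
        + np.einsum('ls,mu->lmsu', e, e)) / (d * (d + 2))
assert np.allclose(fourth, pred, atol=5e-3)
print("rotational averages verified")
```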
The flow equations that result from the one-loop calculation are $$\begin{aligned} \frac{d \ln \nu}{d l} &=& z-2 + [(d^2+2d-2) D_\alpha + (d-2) D_\beta] \nonumber \\ &&~~~~~~~~~~\times \frac{y K_d}{\nu^2 d (d+2)} \Lambda^{-\epsilon} \nonumber \\ \frac{d \ln \lambda}{d l} &=& -\alpha - [(d^2+2d-2) D_\alpha + (d-2) D_\beta] \nonumber \\ &&~~~~~~~~~~\times \frac{K_d}{\nu^2 d (d+2)} \Lambda^{-\epsilon} \nonumber \\ \frac{d \ln D_\alpha}{d l} &=& 2z-4 + \epsilon - [2 D_\alpha + 4 D_\beta] \nonumber \\ &&~~~~~~~~~~\times \frac{2 K_d (d^2 + 2 d - 2)}{\nu^2 d (d+2) (d+4)} \Lambda^{-\epsilon} \nonumber \\ \frac{d \ln D_\beta}{d l} &=& 2z-4 + \epsilon - [2 (d^2 + 2 d - 2) D_\alpha^2 / D_\beta \nonumber \\ &&~~~+ (d^3 + 6 d^2 - 8) D_\alpha + (d^2 + 2 d - 8)D_\beta] \nonumber \\ &&~~~~~~~~~~\times \frac{2 K_d }{\nu^2 d (d+2) (d+4)} \Lambda^{-\epsilon} \ . \label{8a}\end{aligned}$$ The constant $K_d = S_d / (2 \pi)^d$, and $S_d = 2 \pi^{d/2} / \Gamma(d/2)$. In solving these equations, we set $\alpha=0$ for dimensional consistency. In the standard model of turbulence [@Martin], terms higher order in $\lambda(l)$ must be kept in the flow equations. In the present model, $\lambda(l)$ flows to zero rapidly, and higher-order terms do not contribute at this level. Results and Discussion ====================== In two dimensions, both $D_\alpha$ and $D_\beta$ are relevant. We find a fixed point of $D_\alpha^* = D_\beta^* = 2 \epsilon \nu^2 \Lambda^\epsilon/ [ 3 K_2 (1+y)]$. The dynamical exponent is given by $z = 2 - y \epsilon / [2 (1+y)]$. The Reynolds number term scales as $\lambda(l) = \lambda_0 e^{-\epsilon l /[2(1+y)]}$. In greater than two dimensions, $D_\beta$ reaches a fixed point, but $D_\alpha$ flows to zero. We find $$\begin{aligned} D_\beta^* &=& \frac{d (d+2) \nu^2 \Lambda^\epsilon }{2 (1+y) (d-2) K_d} \epsilon \nonumber \\ z &=& 2 - \frac{y \epsilon}{2 (1+y)} \nonumber \\ \lambda(l) &=& \lambda_0 e^{-\epsilon l /[2(1+y)]} \ . 
\label{8aa}\end{aligned}$$ Interestingly, the dynamical exponent is the same in two and in higher dimensions, as is the decay of $\lambda(l)$. A particularly beautiful feature of this theory is that $\lambda(l)$ decays exponentially to zero. This property is what makes the theory well defined within the $\epsilon$ expansion. If $\lambda(l)$ had stayed at unity, the vertex in Fig. \[fig1\]b could be inserted arbitrarily many times in the loop expansion, and terminating the expansion at any finite order would not be justified by any small parameter. The present calculation, on the other hand, is a controlled expansion in $\epsilon$ and $\lambda(l)$. Note that the quenched random forces, which mimic the effects of, say, wall roughness, break statistical Galilean invariance, and this allows $\lambda(l)$ to flow, in contrast to the conventional model with random forces delta-correlated in time. Indeed, our explicit calculation shows that $\lambda(l)$ decays exponentially to zero. We now turn to a calculation of the energy spectrum, defined by [@Orszag1a] $$E({\bf k}) = \frac{(d-1)}{2} K_d k^{d-1} \hat C_{11} ({\bf k}) \ ,$$ where the velocity-velocity correlation function is given in the field-theoretic language as $$(2 \pi)^d \delta({\bf k}_1 + {\bf k}_2) \hat \Pi_{ij}({\bf k}_1) \hat C_{ij}({\bf k}_1) = \langle \hat b_i^\perp({\bf k}_1,t) \hat b_j^\perp({\bf k}_2,t) \rangle \ .$$ Under the scaling of time and space that occurs within the renormalization group calculation, this correlation function scales as $$\begin{aligned} \hat C_{ij}({\bf k}) &=& \frac{a^{2 (d+1-z - \alpha)}}{a^d} \frac{\langle {\hat{b}_i^\perp}{'} ({\bf k}_1',t') {\hat{b}_j^\perp} {'}({\bf k}_2',t') \rangle} {(2 \pi)^d \delta({\bf k}_1 ' + {\bf k}_2 ')\hat\Pi_{ij}({\bf k}_1')} \nonumber \\ &=& a^{d+2-2z - 2\alpha} \hat C'_{ij}({\bf k}') \ . \label{12}\end{aligned}$$ Making the assumption that $\hat C_{ij} ({\bf k}) \sim ({\rm const}) k^{-\delta}$, we find from Eq. 
\[12\] that $\delta = d + 2 - 2 z - 2 \alpha$. The energy spectrum, therefore, scales as $$E({\bf k}) \sim ({\rm const}) k^{2z-3 + 2 \alpha} \ . \label{12a}$$ For any isotropic statistical theory of turbulence, then, the dimensional consistency condition of $\alpha=0$ enforces a relation between the dynamical exponent and the exponent of the energy cascade. In particular, the Richardson separation law implies $z = 2/3$, and this result is equivalent to enforcing the Kolmogorov energy cascade: $E({\bf k}) \sim ({\rm const}) k^{-5/3}$. The relation $z=2/3$ implies $y = 3.44152$ in two dimensions and $y = 4.28849$ in three dimensions. To calculate the Kolmogorov constant $C_{\rm K}$ we introduce a source of randomness into the model: $$\delta S = -\frac{D}{2} \sum_i \int dt \int_{\bf k} \hat c(k) \hat{\bar b}_i^\perp (-{\bf k},t) \hat{\bar b}_i^\perp({\bf k},t) \ . \label{14}$$ In the range for which scaling occurs, we set $\hat c(k) \sim k^{3z-2-d}$. For later convenience, we also require that $\lim_{r \to 0} c(r) = 1$. The randomness expressed in this source term drives the model away from the trivial solution $v_i({\bf x},t) \equiv 0$. Note that the randomness parameter scales as $D(l) \equiv D$. This randomness parameter does not contribute to $\nu$ since $\lambda(l)$ flows to zero, and nothing contributes to $D(l)$ at one loop. Since $D(l)$ contributes to physical properties at higher loops only through $\lambda(l)$, $D_\alpha(l)$, and $D_\beta(l)$, all of which are small, the effects of $D$ are controlled within the $\epsilon$ expansion. Using the matching Eq. \[12\], the correlation function is given by $\hat C_{ij}({\bf k}) = [D \Lambda^{z-2} / (2\nu)] k^{-\delta}$. For fully-developed isotropic turbulence $z=2/3$, and using the notation [@Orszag] $$E({\bf k}) \sim C_{\rm K} \varepsilon^{2/3} k^{-5/3} \ , \label{15}$$ we find $C_{\rm K} \varepsilon^{2/3} = (d-1) K_d D \Lambda^{-4/3}/ (4 \nu) $. 
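The renormalization group data quoted above can be cross-checked numerically: imposing $z=2/3$ in $z = 2 - y\epsilon/[2(1+y)]$ with $\epsilon = 2+y-d$ reduces to the quadratic $3y^2-(3d+2)y-8=0$, and the quoted fixed points should annihilate the one-loop flows of $D_\alpha$ and $D_\beta$. A sketch with the arbitrary normalization $\nu=\Lambda=1$:

```python
from math import pi, sqrt, gamma

def K(d):
    # K_d = S_d / (2 pi)^d with S_d = 2 pi^{d/2} / Gamma(d/2)
    return (2 * pi ** (d / 2) / gamma(d / 2)) / (2 * pi) ** d

def flow_D(Da, Db, z, d, y, nu=1.0):
    # one-loop d ln D_alpha/dl and d ln D_beta/dl at Lambda = 1
    eps = 2 + y - d
    c = 2 * K(d) / (nu**2 * d * (d + 2) * (d + 4))
    fa = 2*z - 4 + eps - (2*Da + 4*Db) * (d*d + 2*d - 2) * c
    fb = 2*z - 4 + eps - (2*(d*d + 2*d - 2)*Da*Da/Db
                          + (d**3 + 6*d*d - 8)*Da + (d*d + 2*d - 8)*Db) * c
    return fa, fb

def y_kolmogorov(d):
    # z = 2 - y eps / [2(1+y)] = 2/3  =>  3 y^2 - (3d+2) y - 8 = 0, positive root
    return ((3*d + 2) + sqrt((3*d + 2)**2 + 96)) / 6

for d in (2, 3):
    y = y_kolmogorov(d)
    eps = 2 + y - d
    z = 2 - y * eps / (2 * (1 + y))                  # recovers z = 2/3
    if d == 2:
        Da = Db = 2 * eps / (3 * K(2) * (1 + y))     # D_alpha* = D_beta* in d = 2
    else:
        Da, Db = 0.0, d * (d + 2) * eps / (2 * (1 + y) * (d - 2) * K(d))
    fa, fb = flow_D(Da, Db, z, d, y)
    assert abs(fb) < 1e-12                 # D_beta flow vanishes at the fixed point
    if d == 2:
        assert abs(fa) < 1e-12             # in d = 2 both couplings are at the fixed point
    else:
        assert fa < 0                      # for d > 2, D_alpha is irrelevant
    print(f"d={d}: y={y:.5f}, z={z:.5f}")
```

With this normalization the script reproduces $y=3.44152$ ($d=2$) and $y=4.28849$ ($d=3$).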
Similarly, the wavevector-dependent viscosity considered in the fluid mechanics literature [@Orszag] is given by $\nu({\bf k}) = \nu \Lambda^{4/3} k^{-4/3}$. Using the notation [@Orszag] $$\nu({\bf k}) \sim N \varepsilon^{1/3} k^{-4/3} \ ,$$ we find $N \varepsilon^{1/3} = \nu \Lambda^{4/3}$. The energy dissipation rate is given by $$\varepsilon = \int_0^\Lambda dk~ 2 \nu k^2 E(k) \ . \label{17}$$ Using Eqs. (\[15\]) and (\[17\]), we find $\varepsilon = (27/8) \nu^3 C_{\rm K}^3 \Lambda^4$. Finally, to complete the matching we assume that the wavevector cutoff is one-half the Kolmogorov dissipation wavenumber, $2 \Lambda = k_d \equiv (\varepsilon / \nu^3)^{1/4}$ [@Pao]. Putting these relations together in three dimensions, we find $$\begin{aligned} \varepsilon &=& 0.0380 D \nonumber \\ C_{\rm K} &=& 1.68 \nonumber \\ N &=& 0.397 \ .\end{aligned}$$ Note that field theory cannot calculate non-universal parameters such as these with precision, as these results depend on the assumption in Eq. (\[17\]) and on the relation between $\Lambda$ and $k_d$. A more detailed matching calculation of these values, using the model of turbulence in Eqs. (\[3\]) and (\[14\]), would be of interest. Intermittency ============= We here address the issue of intermittency in our model. That is, we seek to determine the scaling of the single-time structure function $$S_{2n}(r) = \langle \left[ \vert {\bf b}^\perp ({\bf x} + {\bf r}) - {\bf b}^\perp ({\bf x}) \vert^2 \right]^n \rangle \sim ({\rm const}) r^{\zeta_{2n}} \ . \label{18}$$ From simple dimensional analysis, we find $\zeta_{2n} = n z$. From explicit calculation for our model, we find an exponent that differs from this value. This difference is referred to in the fluid mechanics literature as intermittency. 
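The three quoted numbers follow from combining the matching relations above; a sketch with the normalization $\nu=\Lambda=1$ (the dimensionless ratios $C_{\rm K}$, $\varepsilon/D$, and $N$ are independent of this choice):

```python
from math import pi

nu = Lam = 1.0
d = 3
K3 = 1 / (2 * pi**2)                    # K_d = S_d / (2 pi)^d evaluated at d = 3

# (i) cutoff matching: 2 Lambda = k_d = (eps/nu^3)^{1/4}  =>  eps = 16 nu^3 Lambda^4
eps = 16 * nu**3 * Lam**4

# (ii) eps = (27/8) nu^3 C_K^3 Lambda^4 from the energy integral  =>  C_K^3 = 128/27
C_K = (eps / ((27 / 8) * nu**3 * Lam**4)) ** (1 / 3)

# (iii) C_K eps^{2/3} = (d-1) K_d D Lambda^{-4/3} / (4 nu)  =>  solve for D
D = 4 * nu * C_K * eps ** (2 / 3) * Lam ** (4 / 3) / ((d - 1) * K3)

# N eps^{1/3} = nu Lambda^{4/3}
N = nu * Lam ** (4 / 3) / eps ** (1 / 3)

print(f"C_K   = {C_K:.3f}")        # 1.680, quoted as 1.68
print(f"eps/D = {eps / D:.4f}")    # 0.0380, i.e. eps = 0.0380 D
print(f"N     = {N:.3f}")          # 0.397
```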
That $\zeta_{2n} \ne n z$ is an expression of the non-Gaussian nature of the fixed point identified in our model and of the divergence of the single-time structure functions in the limit of an infinite integral length scale. We use the same arguments about scaling of space and time as in Eq. \[12\] to express the original correlation function in terms of the renormalized correlation function: $$S_{2n}(r) \sim e^{n(2-2z)l} S_{2n}(r(l); l) \ . \label{19}$$ Here $r = \exp(l) r(l)$. This equation is applied until $r(l)$ is of the order of the dissipation length scale $r(l^*) = 2 \pi / \Lambda \equiv h$. At this length scale, we then match the correlation function to a perturbation theory result: $$\begin{aligned} S_{2n}(r(l^*); l^*) &\propto& \left[ \frac{h^2}{2 \nu} c(h; l^*) \right]^n \nonumber \\ &=& \left( \frac{h^2}{2 \nu} \right)^n e^{(3 z - 2)n l^*} c^n(h) \nonumber \\ &=& \left( \frac{h^2}{2 \nu} \right)^n e^{(3 z - 2)n l^*} \ , \label{20}\end{aligned}$$ where we have used the scaling of the small-$r$ behavior of the $c(r)$ function in Eq. \[14\] and have used $\lim_{h \to 0} c(h) = 1$. Combining Eqs. \[19\] and \[20\] we find $$\begin{aligned} S_{2n} (r) \sim ({\rm const}) r^{n z} f_{n} (r/L) \ . \label{21}\end{aligned}$$ We have here introduced the fact that traditional renormalization group arguments can determine asymptotic behavior only up to a scaling function of the ratio $r/L$, where $L$ is the macroscopic size of the system. This is because $r$ and $L$ are scaled by the same factor in the renormalization group analysis, and so no dependence on the ratio $r/L$ is detectable. For turbulence, $h$ is the dissipation length scale, and $L$ is the integral length scale. In many applications of renormalization group theory to condensed matter systems, the scaling function $f_n(r/L) \sim ({\rm const})$ as $L \to \infty$, and so it does not play a role. In our case, on the other hand, the scaling function gives the intermittency corrections.
To determine the function $f_n$ we use the operator product expansion [@Justin; @Wilson1969; @Kadanoff1969]. A similar strategy has proven successful in the study of turbulent transport of passive scalars [@Adzhemyan]. The operator product expansion states that $$\langle F({\bf x}_1) F({\bf x}_2) \rangle \sim \sum_\alpha c_\alpha(\vert {\bf x}_1 - {\bf x}_2 \vert) \langle F_\alpha^R({\bf x}) \rangle \ . \label{22}$$ Here ${\bf x} = ({\bf x}_1 + {\bf x}_2)/2$, and $F_\alpha^R$ is the set of all renormalized operators that are generated by the renormalization group flow of $F$. Equation \[22\] is nothing more than a Taylor series expansion, where both the bare terms in the Taylor series and those terms that are generated by the renormalization flow are included. In our particular case, instead of a pair of operators, we have $S_{2n}$, which is a product of $2n$ factors on the left hand side of this equation. The important point about this expansion is that the functions $c_\alpha(r)$ are finite and exhibit no dependence on the system size $L$. Any possible system size dependence of this expansion, therefore, is contained within $\langle F_\alpha^R({\bf x}) \rangle$. By comparison to Eq. \[21\], we see that the scaling function $f_n$ is thus determined by the behavior of these renormalized operators. We first determine the scaling of the renormalized operators: $$\langle F_\alpha^R \rangle = e^{\Delta_\alpha l^* } \langle F_\alpha^R(l^*) \rangle \ . \label{23}$$ We follow the renormalization flows until $L(l^*) = \exp(-l^*) L = r$, a criterion that automatically ensures the functional form specified in Eq. \[21\]. We thus conclude that $$\langle F_\alpha^R \rangle = \left( \frac{r}{L}\right)^{-\Delta_\alpha} \langle F_\alpha^R(l^*) \rangle \ . \label{24}$$ In fact, we determine the scaling of these operator averages by using the generating functional $\delta S = -A_\alpha \int dt \int d^d {\bf x} F_\alpha^R$.
We will find $$A_\alpha(l) \sim e^{(z + d - 2 n z + \gamma_\alpha)l} A_\alpha(0) \ . \label{24a}$$ In terms of $A_\alpha$ and the partition function $Z$, the operator average is given by $$\int dt \int d^d {\bf x} \langle F_\alpha^R \rangle = \left. \frac{d \ln Z} {d A_\alpha(0)} \right\vert_{A_\alpha(0)=0} \ . \label{24aa}$$ This equation makes clear why Eq. \[23\] has the form that it does and identifies $\Delta_\alpha = - 2 n z + \gamma_\alpha$. The value of $\gamma_\alpha$ will be determined by a nontrivial fixed point of the renormalization group flow equations for $A_\alpha$. Combining Eqs. \[24\] and \[24a\], we find $$\langle F_\alpha^R \rangle \sim \left(\frac{r}{L}\right)^{2 n z - \gamma_\alpha} \langle F_\alpha^R(l^*) \rangle \ .$$ The function $\langle F_\alpha^R(l^*) \rangle$ is determined by matching exactly as in Eq. \[20\]. We, thus, find that $$\begin{aligned} \langle F_\alpha^R \rangle &\sim & \left(\frac{r}{L}\right)^{2 n z - \gamma_\alpha} \left(\frac{r}{L}\right)^{(2 - 3 z)n} \nonumber \\ &=& \left(\frac{r}{L}\right)^{n(2-z) - \gamma_\alpha} \ . \label{25}\end{aligned}$$ To make use of this result of the operator product expansion, it remains only to calculate the value of $\gamma_\alpha$. Once we have this value, we find the scaling function to be $$f_{n}(r/L) \sim \left(\frac{r}{L}\right)^{n(2-z) - \gamma_\alpha} \quad {\rm as~} L \to \infty \ . \label{26}$$ In fact, the operator $S_{2n}$ will generate several new operators $F_\alpha$ in the expansion of Eq. \[22\]. The appropriate value of $\gamma_\alpha$ to use in Eq. \[26\] is the largest one. These generated operators may mix upon the renormalization, in which case the appropriate value of $\gamma_\alpha$ is the largest eigenvalue of the flow equation matrix. We first determine the scaling function $f_1$. We limit consideration to the case $d > 2$, where $D_\alpha^* = 0$.
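The exponent algebra in Eqs. (\[24\])--(\[26\]) can be verified symbolically (a sketch, not part of the original calculation):

```python
import sympy as sp

n, z, gamma = sp.symbols('n z gamma')

# Eqs. (24) and (24a): Delta_alpha = -2 n z + gamma_alpha, so
# <F^R> ~ (r/L)^(-Delta_alpha) = (r/L)^(2 n z - gamma_alpha).
exponent_from_flow = 2*n*z - gamma

# Matching as in Eq. (20) contributes a further factor (r/L)^((2 - 3z) n).
exponent_from_matching = (2 - 3*z)*n

# Their sum reproduces the exponent n(2 - z) - gamma_alpha of Eq. (25).
total = sp.expand(exponent_from_flow + exponent_from_matching)
assert total == sp.expand(n*(2 - z) - gamma)
```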
The correlation function $S_{2n}$ is given in a Taylor series as $S_{2n}(r) \sim [r^2 \partial_x b_i \partial_x b_i]^n$. This will generate the symmetrized operator $F = [\partial_j b_i \partial_j b_i + \partial_i b_j \partial_j b_i]^n$, which we consider. For the case $n=1$, we consider the generating functional $$\begin{aligned} \delta S_{II} &=& -\int dt \int_{{\bf k}_1 {\bf k}_2} \bigg[ A^{(1)} {\bf k}_1 \cdot {\bf k}_2 \hat{ b}^\perp ({\bf k}_1,t) \cdot \hat{ b}^\perp({\bf k}_2,t) \nonumber \\ &&+ A^{(0)} {\bf k}_1 \cdot \hat{ b}^\perp ({\bf k}_2,t) {\bf k}_2 \cdot \hat{ b}^\perp({\bf k}_1,t) \bigg] \ . \label{27}\end{aligned}$$ We find $$\frac{d A^{(i)}}{dl} = (z+d-2z) A^{(i)} + \frac {2 (d-1) g}{d} A^{(1)} \ , \label{28}$$ where $g = D_\beta K_d / (\nu^2 \Lambda^\epsilon)$. We, thus, identify $\gamma_1 = 2 (d-1) g^*/d$. Using Eq. \[8aa\] we find in three dimensions $z = 2 - \epsilon/4 + O(\epsilon^2)$ and $g^* = 15 \epsilon/4 + O(\epsilon^2)$ and conclude from Eqs. \[21\] and \[26\] that $$\begin{aligned} S_2(r) &\sim& ({\rm const}) r^z \left( \frac{r}{L} \right)^{2-z - \gamma_1} \nonumber \\ &=& ({\rm const}) r^{2-\epsilon/4} \left( \frac{r}{L} \right)^{\epsilon/4-5 \epsilon} \ . \label{29}\end{aligned}$$ We now determine the scaling function $f_2$. 
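Before doing so, the exponents quoted in Eq. (\[29\]) can be checked against the fixed-point values $z = 2 - \epsilon/4$ and $g^* = 15\epsilon/4$ (a consistency sketch only):

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
d = 3

# Fixed-point values quoted from Eq. (8aa) in three dimensions.
z = 2 - eps/4
g_star = sp.Rational(15, 4)*eps

# Eq. (28): the A^(1) direction picks up gamma_1 = 2 (d-1) g*/d = 5 eps in d = 3.
gamma_1 = sp.Rational(2*(d - 1), d) * g_star
assert sp.simplify(gamma_1 - 5*eps) == 0

# Exponents of Eq. (29): r^z = r^(2 - eps/4) and (r/L)^(2 - z - gamma_1).
assert sp.simplify((2 - z - gamma_1) - (eps/4 - 5*eps)) == 0
```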
We start with the symmetrized generating functional $$\begin{aligned} \delta S^{(2)} &=& -\int dt \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3 {\bf k}_4} \hat{ b}^\perp ({\bf k}_1,t) \cdot \hat{ b}^\perp({\bf k}_2,t) \nonumber \\ &&\times \hat{ b}^\perp ({\bf k}_3,t) \cdot \hat{ b}^\perp({\bf k}_4,t) \bigg[ A_1^{(2)} {\bf k}_1 \cdot {\bf k}_2 {\bf k}_3 \cdot {\bf k}_4 \nonumber \\ && + A_2^{(2)} ({\bf k}_1 \cdot {\bf k}_3 {\bf k}_2 \cdot {\bf k}_4 + {\bf k}_1 \cdot {\bf k}_4 {\bf k}_2 \cdot {\bf k}_3) \bigg] \ .\end{aligned}$$ This term generates two additional generating functionals: $$\begin{aligned} \delta S^{(1)} &=& -\int dt \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3 {\bf k}_4} \hat{ b}^\perp ({\bf k}_1,t) \cdot \hat{ b}^\perp({\bf k}_2,t) \nonumber \\ &&\times \bigg[ A_1^{(1)} {\bf k}_1 \cdot {\bf k}_2 {\bf k}_4 \cdot \hat{ b}^\perp({\bf k}_3,t) {\bf k}_3 \cdot \hat{ b}^\perp({\bf k}_4,t) \nonumber \\ && + A_2^{(1)} \bigg( {\bf k}_1 \cdot {\bf k}_3 {\bf k}_4 \cdot \hat{ b}^\perp({\bf k}_3,t) {\bf k}_2 \cdot \hat{ b}^\perp({\bf k}_4,t) \nonumber \\ && + {\bf k}_1 \cdot {\bf k}_4 {\bf k}_2 \cdot \hat{ b}^\perp({\bf k}_3,t) {\bf k}_3 \cdot \hat{ b}^\perp({\bf k}_4,t) \nonumber \\ && + {\bf k}_2 \cdot {\bf k}_3 {\bf k}_4 \cdot \hat{ b}^\perp({\bf k}_3,t) {\bf k}_1 \cdot \hat{ b}^\perp({\bf k}_4,t) \nonumber \\ && + {\bf k}_2 \cdot {\bf k}_4 {\bf k}_1 \cdot \hat{ b}^\perp({\bf k}_3,t) {\bf k}_3 \cdot \hat{ b}^\perp({\bf k}_4,t) \bigg) \nonumber \\ && + A_3^{(1)} \bigg( {\bf k}_3 \cdot {\bf k}_4 {\bf k}_1 \cdot \hat{ b}^\perp({\bf k}_3,t) {\bf k}_2 \cdot \hat{ b}^\perp({\bf k}_4,t) \nonumber \\ && + {\bf k}_3 \cdot {\bf k}_4 {\bf k}_2 \cdot \hat{ b}^\perp({\bf k}_3,t) {\bf k}_1 \cdot \hat{ b}^\perp({\bf k}_4,t) \bigg) \bigg]\end{aligned}$$ and $$\begin{aligned} \delta S^{(0)} &=& -\int dt \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3 {\bf k}_4} \hat{ b}_i^\perp ({\bf k}_1,t) \hat{ b}_j^\perp({\bf k}_2,t) \hat{ b}_k^\perp ({\bf k}_3,t) \hat{ b}_l^\perp({\bf k}_4,t) \nonumber \\ &&\times 
\bigg[ A_1^{(0)} {{\bf k}_1}_j {{\bf k}_2}_i {{\bf k}_3}_l {{\bf k}_4}_k \nonumber \\ &&+ A_2^{(0)} \bigg( {{\bf k}_1}_j {{\bf k}_2}_l {{\bf k}_3}_i {{\bf k}_4}_k + {{\bf k}_1}_j {{\bf k}_2}_k {{\bf k}_3}_l {{\bf k}_4}_i \nonumber \\ &&+ {{\bf k}_1}_l {{\bf k}_2}_i {{\bf k}_3}_j {{\bf k}_4}_k + {{\bf k}_1}_k {{\bf k}_2}_i {{\bf k}_3}_l {{\bf k}_4}_j \bigg) \nonumber \\ &&+ A_3^{(0)} \bigg( {{\bf k}_1}_k {{\bf k}_2}_l {{\bf k}_3}_i {{\bf k}_4}_j + {{\bf k}_1}_k {{\bf k}_2}_l {{\bf k}_3}_j {{\bf k}_4}_i \nonumber \\ &&+ {{\bf k}_1}_l {{\bf k}_2}_k {{\bf k}_3}_i {{\bf k}_4}_j + {{\bf k}_1}_l {{\bf k}_2}_k {{\bf k}_3}_j {{\bf k}_4}_i \bigg) \bigg]\end{aligned}$$ We will find $A_2^{(1)}(l) = A_3^{(1)}(l)$. Although we have included $A_3^{(0)}$ for generality, we will find that this term is not generated, and $A_3^{(0)}(l) = 0$. A lengthy calculation shows that $$\frac{d {\bf A}}{ d l} = (z+d-4 z)I {\bf A} + \frac{g}{d (d+2) (d+4)} M {\bf A} \label{40}$$ where the vector ${\bf A} = (A_1^{(2)}, A_2^{(2)}, A_1^{(1)}, A_2^{(1)}, A_3^{(1)}, A_1^{(0)}, A_2^{(0)} )$, and the matrix $M$ is given in Table \[matrixM\]. 
$$\begin{aligned} M = \left( \begin{array}{ccccccc} 4 (d^3 + 5 d^2 + 2 d - 6)& 8 (d+3)^2& 8&16&4 (d^2 + 6 d + 10)& 8&16 \\ 2 (d+2)^2& 2 (d^3 + 5 d^2 - 12) & -4 (d+2)& -8(d+2) &d^3 + 4 d^2 - 8 d - 24 &2(d+2)^2&4 (d+2)^2 \\ 4 (d^3 + 5 d^2 + 2 d - 4)& 8 (d^2 + 6 d + 10)& 2 (d^3 + 5 d^2 + 2 d)& 8 (d^2 + 6 d + 12)& 4(d^2 + 6 d + 12) & 16 & 32 \\ 2 d (d+2) & 2 (d^3 + 5 d^2 - 2 d - 16) & 2 d (d+2) & 2(d^3 + 5 d^2 - 6 d - 24) & d^3 + 4 d^2 - 12 d - 32 & 2 d (d +2)& 4 d (d +2) \\ 2d (d +2) & 2 (d^3 + 5 d^2 - 2 d - 16) & 2 d (d +2) & 2 ( d^3 + 5 d^2 - 6 d - 24) & d^3 + 4 d^2 - 12 d - 32 & 2 d (d +2 ) & 4 d (d +2) \\ 8 & 8 & 2 (d^3 + 5 d^2 + 2 d - 4) & 8 (d^2 + 6 d + 10) & 8 & 8 & 16 \\ -2 (d+2) & -2 (d+2) & (d+2)^2 & d^3 + 5 d^2 - 2 d - 16 & -2 (d+2) & -2 (d+2) & -4 (d+2) \end{array} \right)\end{aligned}$$ We diagonalize the matrix $M$, finding the eigenvalues $(0,0,0,0, d^3 + 4 d^2 - 4 d - 16, 2(d^3 + 5 d^2 + 2 d - 8), 4 (d^3 + 7 d^2 + 6 d) )$. The largest eigenvalue is the last one for all $d$. In three dimensions we identify $\gamma_2 = 432 g^* / 105$. Using Eqs.  \[21\] and \[26\] we find that $$S_4(r) \sim ({\rm const}) r^{4-\epsilon/2} \left( \frac{r}{L} \right)^{\epsilon/2-108 \epsilon/7} \ .$$ We have, therefore, derived the intermittency corrections to dimensional analysis for the present model. The corrections are calculated in a controlled fashion and are proportional to $\epsilon$. The coefficients of the correction are not small, and for finite $\epsilon$, higher order terms in the expansion are required for an accurate estimation of the effects of intermittency. One might wonder whether there are any corrections to the Kolmogorov energy cascade, Eq.  \[12a\], that arise from the operator product expansion. More generally, are there any corrections to $\langle [b_i^\perp ({\bf x}, t) b_i^\perp ({\bf x}, t)]^n \rangle$? There are no such corrections. 
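As a consistency check on this eigenvalue bookkeeping (a sketch, not part of the original calculation), the quoted eigenvalues, the identification $\gamma_2 = 432\, g^*/105$ in $d=3$, and the $S_4$ exponent can be verified:

```python
import sympy as sp

d, eps = sp.symbols('d epsilon', positive=True)

# Nonzero eigenvalues of M quoted above.
eigenvalues = [d**3 + 4*d**2 - 4*d - 16,
               2*(d**3 + 5*d**2 + 2*d - 8),
               4*(d**3 + 7*d**2 + 6*d)]

# The last one is the largest for all dimensions checked here.
for dv in range(2, 11):
    vals = [e.subs(d, dv) for e in eigenvalues]
    assert max(vals) == vals[-1]

# In d = 3 it equals 432; with the prefactor g/(d(d+2)(d+4)) = g/105 in Eq. (40)
# and g* = 15 eps/4, this gives gamma_2 = 432 g*/105 = 108 eps/7.
assert eigenvalues[-1].subs(d, 3) == 432
gamma_2 = sp.Rational(432, 105) * sp.Rational(15, 4) * eps
assert sp.simplify(gamma_2 - sp.Rational(108, 7)*eps) == 0

# Exponent of (r/L) in S_4: n(2 - z) - gamma_2 with n = 2 and z = 2 - eps/4.
z = 2 - eps/4
assert sp.simplify(2*(2 - z) - gamma_2 - (eps/2 - sp.Rational(108, 7)*eps)) == 0
```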
This type of operator flows under the renormalization group only to operators with more derivatives, such as $ \partial_l b_j^\perp \partial_m b_k^\perp (b_i^\perp b_i^\perp )^{n-1}$. These operators are less relevant than the original operator, and so they make no contribution to the scaling at leading order. In the language of Eq. \[22\], $F_\alpha^R \equiv 1$, and the scaling function $f_{n}(r/L) \sim ({\rm const})$ as $L \to \infty$.

Conclusion
==========

An alternative, simpler model would have been to take the turbulent forces to be proportional to the velocity, rather than the velocity gradient. The simplest model, moreover, would take the forces to be white noise in time and uncorrelated in each of the spatial dimensions. For this model to be nontrivial, a mean fluid flow must be introduced [@Deem2001]. Interestingly, when this is done for forces that are random in time as well as space, the resulting theory has the same flow equations as the traditional random force model of turbulence [@Martin; @Forster]. Also of interest to note is that a theory with velocity-gradient-dependent random forces that are white noise in time would have no renormalization of any parameter, as the diagrams of Fig. \[fig2\] would all vanish due to the causality of the bare propagator, Eq. (\[5aaa\]). The scaling of the Kolmogorov energy cascade is determined once the value of the dynamical exponent is fixed, *i.e.* $z=2/3$ for isotropic turbulence. In random force models such as the present one, the scaling of the energy cascade simply serves to fix the correlation function of the random forcing. The predictive power of models such as these lies in their ability to provide nontrivial predictions of the intermittency corrections. In the present model, we are able to provide these corrections as a systematic expansion in $\epsilon$. In summary, we have introduced a new statistical mechanics model for isotropic turbulence.
This model makes use of a random, velocity-gradient-dependent force. It is both consistent with practical, engineering-type calculations and well-defined within the renormalization group $\epsilon$ expansion, and it strengthens the analogy between turbulence and critical phenomena. Owing to the irrelevance of the convection terms at the fixed point, our results may alternatively be viewed as an analysis of transport in a new class of random media. This research was supported by the National Science Foundation and by an Alfred P. Sloan Foundation Fellowship to M.W.D.

Appendix A: One-Loop contributions to $\nu$ {#appendix-a-one-loop-contributions-to-nu .unnumbered}
===========================================

We here show how the diagrams of Fig. \[fig2\]a contribute to the propagator, Fig. \[fig1\]a. Each of the terms is associated with one of the $D_{ijk}^{lmn}$ terms in Eq. (\[5\]). In the calculation of the averages on the shell, there are five terms associated with $D_\alpha$ and three associated with $D_\beta$, as the last two terms associated with $D_\alpha$ are identical, and the last two terms associated with $D_\beta$ are also identical.
The first term, associated with $D_\alpha$, is $$\begin{aligned} I_1 &=& 2 \times \frac{D_\alpha}{2} \int d t_1 d t_2 \int_{ {\bf k}_1 {\bf k}_2 {\bf k}_3 {\bf k}_4} \nonumber \\ && \times (2 \pi)^d \delta({\bf k}_1+ {\bf k}_2 + {\bf k}_3 + {\bf k}_4 ) \vert {\bf k}_1 + {\bf k}_2 \vert^{-y} \nonumber \\ && \times \Theta (t_1 - t_2) (2 \pi)^d \delta({\bf k}_2 + {\bf k}_3) e^{-\nu k_2^2 (t_1 - t_2)} \nonumber \\ && \times \left\{ {\bf k}_2 \cdot {\bf k}_4 \hat{ \bar b}_i^\perp({\bf k}_1,t_1) \hat b_j^\perp({\bf k}_4,t_2) \hat \Pi_{ij} ({\bf k}_2) \right\} \nonumber \\ &=& \frac{K_d}{\nu^2} \left[ D_\alpha \frac{y(d+1) }{ d (d+2)} \right] \int_{\Lambda/a}^\Lambda dq q^{d-y-3} \nonumber \\ && \times \int dt \int_{\bf k} \nu k^2 \hat{\bar b}_i^\perp(-{\bf k},t) \hat b_i^\perp({\bf k},t) \label{eqI1}\end{aligned}$$ For the remaining contributions, we list the symmetry factor, terms in braces in the integrand of Eq. (\[eqI1\]) that change, and the final contribution in brackets that change. The contributions are shown in Table \[intI\], where the dependence of the fields upon time has been suppressed. The term $e^{-\nu k_i^2(t_1-t_2)}$, which has the same momentum argument as the $\hat \Pi({\bf k}_i)$ term, has been suppressed. The delta function is also suppressed. Summing all these contributions to $\nu$, we get the first flow equation of Eq. (\[8a\]). 
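The factors of $d(d+2)$ appearing in these results trace back to the isotropic angular averages over the momentum shell, $\langle k_i k_j \rangle = \delta_{ij} k^2/d$ and $\langle k_i k_j k_l k_m \rangle = (\delta_{ij}\delta_{lm} + \delta_{il}\delta_{jm} + \delta_{im}\delta_{jl})\, k^4/(d(d+2))$. A quick exact check of these averages in $d=3$ (an illustrative aside, not part of the original calculation):

```python
import sympy as sp

# Unit vector on the sphere in d = 3 and the normalized solid-angle measure.
theta, phi = sp.symbols('theta phi')
k = sp.Matrix([sp.sin(theta)*sp.cos(phi),
               sp.sin(theta)*sp.sin(phi),
               sp.cos(theta)])
measure = sp.sin(theta) / (4*sp.pi)

def sphere_avg(expr):
    # Exact average over the unit sphere.
    return sp.integrate(sp.integrate(expr*measure, (phi, 0, 2*sp.pi)),
                        (theta, 0, sp.pi))

d = 3
assert sphere_avg(k[0]*k[0]) == sp.Rational(1, d)                   # <k_x^2> = 1/3
assert sphere_avg(k[0]*k[1]) == 0
assert sphere_avg(k[0]**4) == sp.Rational(3, d*(d + 2))             # 3/15 = 1/5
assert sphere_avg(k[0]**2 * k[1]**2) == sp.Rational(1, d*(d + 2))   # 1/15
```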
  Integral   Symmetry       Integrand                                                                                                              Result
  ---------- -------------- ---------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------
  $I_1$      $2$            $\left\{ k_{2_l} k_{4_l} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_j^\perp({\bf k}_4) \hat \Pi_{ij}({\bf k}_2) \right\}$   $\left[ D_\alpha \frac{y(d+1)}{d(d+2)} \right]$
  $I_2$      $2$            $\left\{ k_{2_l} k_{4_l} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_i^\perp({\bf k}_4) \hat \Pi_{jj}({\bf k}_2) \right\}$   $\left[ D_\alpha \frac{y(d-1)}{d} \right]$
  $I_3^a$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_k^\perp({\bf k}_1) \hat b_i^\perp({\bf k}_4) \hat \Pi_{jk}({\bf k}_2) \right\}$   $\left[ -\frac{D_\alpha}{2} \frac{y}{d(d+2)} \right]$
  $I_3^b$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_k^\perp({\bf k}_3) \hat b_j^\perp({\bf k}_2) \hat \Pi_{ik}({\bf k}_4) \right\}$   $\left[ -\frac{D_\alpha}{2} \frac{y}{d(d+2)} \right]$
  $I_4^a$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_j^\perp({\bf k}_1) \hat b_k^\perp({\bf k}_4) \hat \Pi_{ik}({\bf k}_2) \right\}$   $0$
  $I_4^b$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_i^\perp({\bf k}_3) \hat b_k^\perp({\bf k}_2) \hat \Pi_{jk}({\bf k}_4) \right\}$   $0$
  $I_5^a$    $1 \times 2$   $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_j^\perp({\bf k}_1) \hat b_i^\perp({\bf k}_4) \hat \Pi_{kk}({\bf k}_2) \right\}$   $0$
  $I_5^b$    $1 \times 2$   $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_k^\perp({\bf k}_3) \hat b_k^\perp({\bf k}_2) \hat \Pi_{ij}({\bf k}_4) \right\}$   $0$
  $I_6^a$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_k^\perp({\bf k}_4) \hat \Pi_{jk}({\bf k}_2) \right\}$   $\left[ -\frac{D_\beta}{2} \frac{y}{d(d+2)} \right]$
  $I_6^b$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_j^\perp({\bf k}_3) \hat b_k^\perp({\bf k}_2) \hat \Pi_{ik}({\bf k}_4) \right\}$   $\left[ -\frac{D_\beta}{2} \frac{y}{d(d+2)} \right]$
  $I_7$      $2$            $\left\{ k_{2_l} k_{4_l} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_j^\perp({\bf k}_4) \hat \Pi_{ij}({\bf k}_2) \right\}$   $\left[ D_\beta \frac{y(d+1)}{d(d+2)} \right]$
  $I_8^a$    $1 \times 2$   $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_k^\perp({\bf k}_4) \hat \Pi_{jk}({\bf k}_2) \right\}$   $\left[ -D_\beta \frac{y}{d(d+2)} \right]$
  $I_8^b$    $1 \times 2$   $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_k^\perp({\bf k}_3) \hat b_j^\perp({\bf k}_2) \hat \Pi_{ik}({\bf k}_4) \right\}$   $\left[ -D_\beta \frac{y}{d(d+2)} \right]$

Appendix B: One-Loop contributions to $\lambda$ {#appendix-b-one-loop-contributions-to-lambda .unnumbered}
===============================================

We here show how the diagrams of Fig. \[fig2\]b contribute to the convection term, Fig. \[fig1\]b. Each of the terms is associated with one of the $D_{ijk}^{lmn}$ terms in Eq. (\[5\]). It is convenient to define the convection operator $\hat M_{ijk}({\bf k}) = k_j \hat \Pi_{ik}({\bf k}) + k_k \hat \Pi_{ij}({\bf k})$. The convection term of Eq. (\[5\]) then becomes $$\begin{aligned} &+& \frac{i \lambda}{2} \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3} \int d t~ (2 \pi)^d \delta({\bf k}_1 + {\bf k}_2 + {\bf k}_3) \nonumber \\ && \times \hat M_{ijk}({\bf k}_1) \hat{\bar b}_i^\perp ({\bf k}_1,t) \hat b_j^\perp ({\bf k}_2,t) \hat b_k^\perp({\bf k}_3,t) \ .\end{aligned}$$ In the calculation of the averages on the shell, there are again five terms associated with $D_\alpha$ and three associated with $D_\beta$.
The first such term is $$\begin{aligned} J_1 &=& 2 \times \left(-\frac{1}{2!}\right) 2 \left( \frac{i \lambda}{2} \right) \frac{D_\alpha}{2} \int d t_1 d t_2 \nonumber \\ && \times \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3 {\bf k}_4} (2 \pi)^d \delta({\bf k}_1+ {\bf k}_2 + {\bf k}_3 + {\bf k}_4 ) \nonumber \\ && \times \int d t_3 \int_{{\bf k}_5 {\bf k}_6 {\bf k}_7} (2 \pi)^d \delta({\bf k}_5+ {\bf k}_6 + {\bf k}_7) \nonumber \\ && \times \vert {\bf k}_1 + {\bf k}_2 \vert^{-y} \Theta (t_1 - t_3) \Theta (t_3 - t_2) \nonumber \\ && \times (2 \pi)^d \delta({\bf k}_2 + {\bf k}_5) (2 \pi)^d \delta({\bf k}_6 + {\bf k}_3) \nonumber \\ && \times e^{-\nu k_2^2 (t_1 - t_3)} e^{-\nu k_6^2 (t_3 - t_2)} \hat M_{lmn}({\bf k}_5) \nonumber \\ && \times \bigg\{ k_{2_k} k_{4_k} \hat{ \bar b}_i^\perp({\bf k}_1,t_1) \hat b_j^\perp({\bf k}_4,t_2) \hat b_n^\perp({\bf k}_7,t_3) \nonumber \\ && \times \hat \Pi_{jl}({\bf k}_2) \hat \Pi_{im}({\bf k}_6) \bigg\} \nonumber \\ &=& \frac{i \lambda}{2} \frac{K_d}{\nu^2} \left[ - D_\alpha \frac{(d+1) }{ d (d+2)} \right] \int_{\Lambda/a}^\Lambda dq q^{d-y-3} \nonumber \\ && \times \int dt \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3} (2 \pi)^d \delta({\bf k}_1 + {\bf k}_2 + {\bf k}_3) \nonumber \\ && \times \hat M_{ijk}({\bf k}_1) \hat{\bar b}_i^\perp({\bf k}_1,t) \hat b_j^\perp({\bf k}_2,t) \hat b_k^\perp({\bf k}_3,t) \label{eqJ1}\end{aligned}$$ For the remaining contributions, we list the symmetry factor, terms in braces in the integrand of Eq. (\[eqJ1\]) that change, and the final contribution in brackets that change. The contributions are shown in Table \[intJ\], where again the dependence on time has been suppressed. The terms $e^{-\nu k_i^2(t_1-t_3)}$ and $e^{-\nu k_j^2(t_3-t_2)}$, which have the same momentum arguments as the two $\hat \Pi({\bf k}_i)$ terms, have been suppressed. The delta functions have also been suppressed. Summing all these contributions to $\lambda$, we get the second flow equation of Eq. (\[8a\]). 
  Integral   Symmetry       Integrand                                                                                                                                                                      Result
  ---------- -------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------
  $J_1$      $2$            $\left\{ k_{2_k} k_{4_k} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_j^\perp({\bf k}_4) \hat b_n^\perp({\bf k}_7) \hat \Pi_{jl}({\bf k}_2) \hat \Pi_{im}({\bf k}_6) \right\}$        $\left[ -D_\alpha \frac{(d+1)}{d(d+2)} \right]$
  $J_2$      $2$            $\left\{ k_{2_k} k_{4_k} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_i^\perp({\bf k}_4) \hat b_n^\perp({\bf k}_7) \hat \Pi_{jl}({\bf k}_2) \hat \Pi_{mj}({\bf k}_6) \right\}$        $\left[ -D_\alpha \frac{(d-1)}{d} \right]$
  $J_3^a$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_k^\perp({\bf k}_1) \hat b_i^\perp({\bf k}_4) \hat b_n^\perp({\bf k}_7) \hat \Pi_{jl}({\bf k}_2) \hat \Pi_{km}({\bf k}_6) \right\}$        $\left[ +\frac{D_\alpha}{2} \frac{1}{d(d+2)} \right]$
  $J_3^b$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_k^\perp({\bf k}_3) \hat b_j^\perp({\bf k}_2) \hat b_n^\perp({\bf k}_7) \hat \Pi_{il}({\bf k}_4) \hat \Pi_{km}({\bf k}_6) \right\}$        $\left[ +\frac{D_\alpha}{2} \frac{1}{d(d+2)} \right]$
  $J_4^a$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_j^\perp({\bf k}_1) \hat b_k^\perp({\bf k}_4) \hat b_n^\perp({\bf k}_7) \hat \Pi_{lk}({\bf k}_2) \hat \Pi_{im}({\bf k}_6) \right\}$        $0$
  $J_4^b$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_i^\perp({\bf k}_3) \hat b_k^\perp({\bf k}_2) \hat b_n^\perp({\bf k}_7) \hat \Pi_{kl}({\bf k}_4) \hat \Pi_{mj}({\bf k}_6) \right\}$        $0$
  $J_5^a$    $1 \times 2$   $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_j^\perp({\bf k}_1) \hat b_i^\perp({\bf k}_4) \hat b_n^\perp({\bf k}_7) \hat \Pi_{kl}({\bf k}_2) \hat \Pi_{mk}({\bf k}_6) \right\}$        $0$
  $J_5^b$    $1 \times 2$   $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_k^\perp({\bf k}_3) \hat b_k^\perp({\bf k}_2) \hat b_n^\perp({\bf k}_7) \hat \Pi_{il}({\bf k}_4) \hat \Pi_{mj}({\bf k}_6) \right\}$        $0$
  $J_6^a$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_k^\perp({\bf k}_4) \hat b_n^\perp({\bf k}_7) \hat \Pi_{kl}({\bf k}_2) \hat \Pi_{mj}({\bf k}_6) \right\}$        $\left[ +\frac{D_\beta}{2} \frac{1}{d(d+2)} \right]$
  $J_6^b$    $1$            $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_j^\perp({\bf k}_3) \hat b_k^\perp({\bf k}_2) \hat b_n^\perp({\bf k}_7) \hat \Pi_{kl}({\bf k}_4) \hat \Pi_{im}({\bf k}_6) \right\}$        $\left[ +\frac{D_\beta}{2} \frac{1}{d(d+2)} \right]$
  $J_7$      $2$            $\left\{ k_{2_k} k_{4_k} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_j^\perp({\bf k}_4) \hat b_n^\perp({\bf k}_7) \hat \Pi_{il}({\bf k}_2) \hat \Pi_{jm}({\bf k}_6) \right\}$        $\left[ -D_\beta \frac{(d+1)}{d(d+2)} \right]$
  $J_8^a$    $1 \times 2$   $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_i^\perp({\bf k}_1) \hat b_k^\perp({\bf k}_4) \hat b_n^\perp({\bf k}_7) \hat \Pi_{jl}({\bf k}_2) \hat \Pi_{km}({\bf k}_6) \right\}$        $\left[ +D_\beta \frac{1}{d(d+2)} \right]$
  $J_8^b$    $1 \times 2$   $\left\{ k_{2_i} k_{4_j} \hat{\bar b}_k^\perp({\bf k}_3) \hat b_j^\perp({\bf k}_2) \hat b_n^\perp({\bf k}_7) \hat \Pi_{kl}({\bf k}_4) \hat \Pi_{mi}({\bf k}_6) \right\}$        $\left[ +D_\beta \frac{1}{d(d+2)} \right]$

Appendix C: One-Loop contributions to $D_\alpha, D_\beta$ {#appendix-c-one-loop-contributions-to-d_alpha-d_beta .unnumbered}
=========================================================

We here show how the diagram of Fig. \[fig2\]c contributes to the disorder term, Fig. \[fig1\]c. Due to the symmetry of the $D_{ijk}^{lmn}$ term in Eq. (\[4\]), the four possible types of diagrams in Fig. \[fig2\]c contribute the same value.
The result is $$\begin{aligned} L &=& 2^3 \left (- \frac{1}{2!} \right) \left( \frac{1}{2} \right)^2 \nonumber \\ && \times \int d t_1 d t_2 \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3 {\bf k}_4} (2 \pi)^d \delta({\bf k}_1+ {\bf k}_2 + {\bf k}_3 + {\bf k}_4 ) \nonumber \\ && \times \int d t_3 d t_4 \int_{{\bf k}_5 {\bf k}_6 {\bf k}_7 {\bf k}_8} (2 \pi)^d \delta({\bf k}_5+ {\bf k}_6 + {\bf k}_7 + {\bf k}_8 ) \nonumber \\ && \times \vert {\bf k}_1 + {\bf k}_2 \vert^{-y} k_{2_j} k_{4_m} \hat{\bar b}_i^\perp({\bf k}_1,t_1) \hat b_k^\perp({\bf k}_2,t_1) \nonumber \\ && \times \vert {\bf k}_5 + {\bf k}_6 \vert^{-y} k_{6_s} k_{8_v} \hat{\bar b}_r^\perp({\bf k}_5,t_3) \hat b_w^\perp({\bf k}_8,t_4) \nonumber \\ && \times e^{-\nu k_6^2 (t_3-t_2)} \Theta(t_3-t_2) \hat \Pi_{lt}({\bf k}_6) (2 \pi)^d \delta({\bf k}_6 + {\bf k}_3) \nonumber \\ && \times e^{-\nu k_4^2 (t_2-t_4)} \Theta(t_2-t_4) \hat \Pi_{nu}({\bf k}_4) (2 \pi)^d \delta({\bf k}_4 + {\bf k}_7) \nonumber \\ && \times D_{ijk}^{lmn} D_{rst}^{uvw} \nonumber \\ &=& \frac{-K_d}{2 \nu^2} \bigg[ \frac{2}{d} \delta_{ms} \delta_{lt} \delta_{nu} \nonumber \\ && - \frac{2}{d (d+2)} \delta_{nu}\left( \delta_{ms} \delta_{lt} + \delta_{ml} \delta_{st} + \delta_{mt} \delta_{ls} \right) \nonumber \\ && - \frac{2}{d (d+2)} \delta_{lt}\left( \delta_{ms} \delta_{nu} + \delta_{mn} \delta_{su} + \delta_{mu} \delta_{sn} \right) \nonumber \\ && + \frac{2}{d (d+2) (d+4)} M_{lmn}^{stu} \bigg] D_{ijk}^{lmn} D_{rst}^{uvw} \nonumber \\ && \times \int_{\Lambda/a}^\Lambda dq q^{d-y-3} \nonumber \\ && \times \int d t_1 d t_2 \int_{{\bf k}_1 {\bf k}_2 {\bf k}_3 {\bf k}_4} (2 \pi)^d \delta({\bf k}_1+ {\bf k}_2 + {\bf k}_3 + {\bf k}_4 ) \nonumber \\ && \times \vert {\bf k}_1 + {\bf k}_2 \vert^{-y} {{\bf k}_2}_j {{\bf k}_4}_v \nonumber \\ && \times \hat {\bar b}_i^\perp ({\bf k}_1, t_1) \hat b_k^\perp ({\bf k}_2, t_1) \hat {\bar b}_r^\perp ({\bf k}_3, t_2) \hat b_w^\perp ({\bf k}_4, t_2) \label{eqL1}\end{aligned}$$ It is clear that to evaluate this 
expression, we need to evaluate a term such as $$\begin{aligned} \sum_{lmn, stu} && k_{2_j} k_{4_v} \hat{\bar b}_i^\perp({\bf k}_1,t_1) \hat b_k^\perp({\bf k}_2,t_1) \hat{\bar b}_r^\perp({\bf k}_3,t_2) \hat b_w^\perp({\bf k}_4,t_2) \nonumber \\ && \times D_{ijk}^{lmn} D_{rst}^{uvw} \left[ \delta_{lt} \delta_{ms} \delta_{nu} \right] \label{eqL2}\end{aligned}$$ Fourteen other terms need to be evaluated in order to calculate the total contribution from Eq. (\[eqL1\]). These terms each contribute in a form that can be cast as a contribution to $D_\alpha$ and $D_\beta$ in Eq. (\[5\]). Shown in Table \[intL\] are the terms and their contributions.

  Term                                      Contribution to $D_\alpha$            Contribution to $D_\beta$
  ----------------------------------------- ------------------------------------- -------------------------------------------------------------------------
  $\delta_{lt} \delta_{ms} \delta_{nu}$     $2 D_\alpha^2 + 4 D_\alpha D_\beta$   $4 D_\alpha^2 + (4+d) D_\alpha D_\beta + 3 D_\beta^2$
  $\delta_{lt} \delta_{mn} \delta_{su}$     $0$                                   $(4+2d) D_\alpha^2 + (6+2d) D_\alpha D_\beta + 2 D_\beta^2$
  $\delta_{lt} \delta_{mu} \delta_{ns}$     $2 D_\alpha^2 + 4 D_\alpha D_\beta$   $4 D_\alpha^2 + (4+d) D_\alpha D_\beta + 3 D_\beta^2$
  $\delta_{lm} \delta_{ts} \delta_{nu}$     $0$                                   $4 D_\alpha^2 + (6+2d) D_\alpha D_\beta + (2+2d) D_\beta^2$
  $\delta_{lm} \delta_{nt} \delta_{su}$     $0$                                   $(4+2d) D_\alpha^2 + (4+3d+d^2) D_\alpha D_\beta + (1+d) D_\beta^2$
  $\delta_{lm} \delta_{tu} \delta_{ns}$     $0$                                   $(4+2d) D_\alpha^2 + (4+3d+d^2) D_\alpha D_\beta + (1+d) D_\beta^2$
  $\delta_{ls} \delta_{mt} \delta_{nu}$     $2 D_\alpha^2 + 4 D_\alpha D_\beta$   $4 D_\alpha^2 + (4+d) D_\alpha D_\beta + 3 D_\beta^2$
  $\delta_{ls} \delta_{mn} \delta_{tu}$     $0$                                   $(4+2d) D_\alpha^2 + (6+2d) D_\alpha D_\beta + 2 D_\beta^2$
  $\delta_{ls} \delta_{mu} \delta_{nt}$     $2 D_\alpha^2 + 4 D_\alpha D_\beta$   $4 D_\alpha^2 + (4+d) D_\alpha D_\beta + 3 D_\beta^2$
  $\delta_{ln} \delta_{mt} \delta_{su}$     $0$                                   $(4+2d) D_\alpha^2 + (4+3d+d^2) D_\alpha D_\beta + (1+d) D_\beta^2$
  $\delta_{ln} \delta_{ms} \delta_{tu}$     $0$                                   $(4+2d) D_\alpha^2 + (4+3d+d^2) D_\alpha D_\beta + (1+d) D_\beta^2$
  $\delta_{ln} \delta_{mu} \delta_{st}$     $0$                                   $4 D_\alpha^2 + (6+2d) D_\alpha D_\beta + (2+2d) D_\beta^2$
  $\delta_{lu} \delta_{mt} \delta_{ns}$     $2 D_\alpha^2 + 4 D_\alpha D_\beta$   $4 D_\alpha^2 + (4+2d) D_\alpha D_\beta + 2 D_\beta^2$
  $\delta_{lu} \delta_{mn} \delta_{st}$     $0$                                   $4 D_\alpha^2 + 8 D_\alpha D_\beta + 4 D_\beta^2$
  $\delta_{lu} \delta_{ms} \delta_{nt}$     $2 D_\alpha^2 + 4 D_\alpha D_\beta$   $4 D_\alpha^2 + (4+2d) D_\alpha D_\beta + 2 D_\beta^2$

Summing all these contributions to $D_\alpha$ and $D_\beta$, we get the third and fourth flow equations of Eq. (\[8a\]).
---
abstract: 'For a family of second-order parabolic systems with bounded measurable, rapidly oscillating and time-dependent periodic coefficients, we investigate the sharp convergence rates of weak solutions in $L^2$. Both initial-Dirichlet and initial-Neumann problems are studied.'
author:
- 'Jun Geng[^1]'
- 'Zhongwei Shen[^2]'
bibliography:
- 'convergence.bib'
title: |
  Convergence Rates in Parabolic Homogenization\
  with Time-Dependent Periodic Coefficients
---

**Introduction**
================

The primary purpose of this paper is to investigate the sharp convergence rates in $L^2$ for a family of second-order parabolic operators $\partial_t+\mathcal{L}_\varepsilon$ with bounded measurable, rapidly oscillating and time-dependent periodic coefficients. Both the initial-Dirichlet and initial-Neumann boundary value problems are studied. Specifically, we consider $$\label{elliptic operator} \mathcal{L}_\varepsilon=-\text{div}\left(A\big({x}/{\varepsilon},{t}/{\varepsilon^2}\big)\nabla \right),$$ where $\e>0$ and $A(y,s)= \big(a_{ij}^{\alpha\beta} (y,s)\big)$ with $1\le i, j\le d$ and $1\le \alpha, \beta \le m$. Throughout this paper we will assume that the coefficient matrix $A(y,s)$ is real, bounded measurable, and satisfies the ellipticity condition, $$\label{ellipticity} \mu |\xi|^2\le a^{\alpha\beta}_{ij}(y,s)\xi_i^\alpha \xi^\beta_j\leqslant \frac{1}{\mu}|\xi|^2 \quad \text{ for any } \xi=(\xi_i^\alpha ) \in \mathbb{R}^{m\times d} \text{ and a.e. } (y,s)\in \mathbb{R}^{d+1},$$ where $\mu >0$, and the periodicity condition, $$\label{periodicity} A(y+z,s+t)=A(y,s)~~~\text{ for }(z,t)\in \mathbb{Z}^{d+1}\text{ and a.e. }(y,s)\in \mathbb{R}^{d+1}.$$ No additional smoothness condition will be imposed on $A$. Let $\Omega\subset\mathbb{R}^d$ be a bounded domain and $0<T<\infty$.
We are interested in the initial-Dirichlet problem, $$\label{IDP} \left\{\aligned (\partial_t +\mathcal{L}_\e) u_\e &= F &\quad &\text { in } \Omega \times (0, T),\\ u_\e & = g & \quad & \text{ on } \partial\Omega \times (0, T),\\ u_\e &=h &\quad &\text{ on } \Omega \times \{ t=0\}, \endaligned \right.$$ and the initial-Neumann problem, $$\label{INP} \left\{\aligned (\partial_t +\mathcal{L}_\e) u_\e &= F &\quad &\text { in } \Omega \times (0, T),\\ \frac{\partial u_\e}{\partial \nu_\e} & = g & \quad & \text{ on } \partial\Omega \times (0, T),\\ u_\e &=h &\quad &\text{ on } \Omega \times \{ t=0\}, \endaligned \right.$$ where $\displaystyle \left(\frac{\partial u_\e}{\partial \nu_\e}\right)^\alpha =n_i a_{ij}^{\alpha\beta} (x/\e, t/\e^2)\frac{\partial u_\e^\beta}{\partial x_j}$ denotes the conormal derivative of $u_\e$ associated with $\mathcal{L}_\e$ and $n=(n_1, \dots, n_d)$ is the outward normal to $\partial\Omega$. Under suitable conditions on $F$, $g$, $h$ and $\Omega$, it is known that the weak solution $u_\e$ of (\[IDP\]) converges weakly in $L^2(0, T; H^1(\Omega))$ and strongly in $L^2(\Omega_T)$ to $u_0$, where $\Omega_T =\Omega\times (0, T)$. Furthermore, the function $u_0$ is the weak solution of the (homogenized) initial-Dirichlet problem, $$\label{IDP-0} \left\{\aligned (\partial_t +\mathcal{L}_0) u_0 &= F &\quad &\text { in } \Omega \times (0, T),\\ u_0 & = g & \quad & \text{ on } \partial\Omega \times (0, T),\\ u_0 &=h &\quad &\text{ on } \Omega \times \{ t=0\}. \endaligned \right.$$ Similarly, the weak solution $u_\e$ of (\[INP\]) converges weakly in $L^2(0, T; H^1(\Omega))$ and strongly in $L^2(\Omega_T)$ to the weak solution of the (homogenized) initial-Neumann problem, $$\label{INP-0} \left\{\aligned (\partial_t +\mathcal{L}_0) u_0 &= F &\quad &\text { in } \Omega \times (0, T),\\ \frac{\partial u_0}{\partial \nu_0} & = g & \quad & \text{ on } \partial\Omega \times (0, T),\\ u_0 &=h &\quad &\text{ on } \Omega \times \{ t=0\}. 
\endaligned \right.$$ The operator $\mathcal{L}_0$ in (\[IDP-0\]) and (\[INP-0\]), called the homogenized operator, is a second-order elliptic operator with constant coefficients [@bensoussan-1978]. The following are the main results of the paper, which establish the sharp $O(\e)$ convergence rates in $L^2(\Omega_T)$ for both the initial-Dirichlet and the initial-Neumann problems. \[main-theorem-1\] Suppose that the coefficient matrix $A$ satisfies (\[ellipticity\]) and (\[periodicity\]). Let $\Omega$ be a bounded $C^{1,1}$ domain in $\R^d$. Let $u_\e, u_0\in L^2(0, T; H^1(\Omega))$ be weak solutions of (\[IDP\]) and (\[IDP-0\]), respectively, for some $F\in L^2(\Omega_T)$. Assume that $u_0\in L^2(0, T; H^2(\Omega))$. Then $$\label{main-estimate-1} \aligned & \| u_\e -u_0\|_{L^2(\Omega_T)}\\ & \le C \e \left\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\| F\|_{L^2(\Omega_T)} +\sup_{\e^2 <t<T} \left(\frac{1}{\e} \int_{t-\e^2}^t \int_\Omega |\nabla u_0|^2\right)^{1/2} \right\}, \endaligned$$ where $C$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. \[main-theorem-2\] Let $u_\e\in L^2(0, T; H^1(\Omega))$ be a weak solution of (\[INP\]) for some $F\in L^2(\Omega_T)$ and $u_0\in L^2(0, T; H^1(\Omega))$ the weak solution of the homogenized problem (\[INP-0\]). Under the same assumptions as in Theorem \[main-theorem-1\], the estimate (\[main-estimate-1\]) holds. \[remark-1.1\] [ In Theorems \[main-theorem-1\] and \[main-theorem-2\] we do not specify the conditions directly on $g$ and $h$, but rather require $u_0 \in L^2(0, T; H^2(\Omega))$. In the case that either $u_\e=u_0=0$ or $\displaystyle \frac{\partial u_\e}{\partial \nu_\e}=\frac{\partial u_0}{\partial \nu_0}=0$ on $\partial\Omega \times (0, T)$, i.e. $g=0$, the third term in the r.h.s. of (\[main-estimate-1\]) may be bounded by $$C\big\{ \|\partial_t u_0\|_{L^2(\Omega_T)} + \| F\|_{L^2(\Omega_T)} +\| h\|_{L^2(\Omega)}\big\}.$$ See (\[3.10-2\]). 
As a result, we obtain $$\label{main-estimate-2} \| u_\e -u_0\|_{L^2(\Omega_T)} \le C\, \e \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\| F\|_{L^2(\Omega_T)} +\| h\|_{L^2(\Omega)} \Big\},$$ where $C$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. In particular, if $g=0$ and $h=0$, then $$\| u_0\|_{L^2(0, T; H^2(\Omega))} \le C \| F\|_{L^2(\Omega_T)}$$ (see (\[3.10-5\])). It follows that $$\label{main-estimate-3} \| u_\e -u_0\|_{L^2(\Omega_T)} \le C\, \e \| F\|_{L^2(\Omega_T)}.$$ Also, in the case that $g=0$ on $\partial\Omega\times (0, T)$ and $h\in H^1(\Omega)$, it is known that if $\mathcal{L}_0^* =\mathcal{L}_0$, then $$\| u_0\|_{L^2(0, T; H^2(\Omega))} \le C \Big\{ \| F\|_{L^2(\Omega_T)} +\| h\|_{H^1(\Omega)} \Big\}$$ [@Lady]. This gives $$\label{main-estimate-4} \| u_\e -u_0\|_{L^2(\Omega_T)} \le C\, \e \Big\{ \| F\|_{L^2(\Omega_T)} +\| h\|_{H^1(\Omega)} \Big\},$$ where $C$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. ]{} The sharp convergence rate is one of the central issues in quantitative homogenization and has been studied extensively in various settings. For elliptic equations and systems in divergence form with periodic coefficients, related results may be found in the recent work [@Suslina-2012; @Suslina-2013; @KLS2; @KLS3; @KLS4; @Shen-Boundary-2015; @Gu-2015; @SZ-2015] (also see [@bensoussan-1978; @Jikov-1994; @Griso-2004; @Griso-2006; @Onofrei-2007] for references on earlier work). In particular, the order-sharp estimate $$\label{elliptic-estimate} \| u_\e -u_0\|_{L^2(\Omega)} \le C\,\e \| F\|_{L^2(\Omega)}$$ holds if $\mathcal{L}_\e (u_\e)=\mathcal{L}_0 (u_0)=F$ in $\Omega$ and $u_\e =u_0=0$ or $\frac{\partial u_\e}{\partial\nu_\e}=\frac{\partial u_0}{\partial \nu_0}=0$ on $\partial\Omega$ (see [@Suslina-2012; @Suslina-2013; @Gu-2015; @SZ-2015] for $C^{1,1}$ domains and [@KLS2; @KLS4; @Shen-Boundary-2015] for Lipschitz domains). 
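In one space dimension the estimate (\[elliptic-estimate\]) can be seen concretely: for $\mathcal{L}_\e=-\frac{d}{dx}\big(a(x/\e)\frac{d}{dx}\big)$ with $a$ 1-periodic, the flux $a(x/\e)u_\e'$ is affine, so $u_\e$ has a closed quadrature form, and the homogenized coefficient is the harmonic mean $\widehat{a}=\big(\int_0^1 a^{-1}\big)^{-1}$. The following numerical sketch is our own illustration (the coefficient $a(y)=2+\sin 2\pi y$, for which $\widehat{a}=\sqrt{3}$, and the quadrature are ad hoc choices, not taken from the paper); it exhibits the $O(\e)$ rate in $L^2$:

```python
import numpy as np

def solve_exact(eps, n=200000):
    """Solve -(a(x/eps) u')' = 1 on (0,1), u(0)=u(1)=0, by quadrature:
    a u' = c - x, hence u(x) = c*I1(x) - I2(x) with I1 = int 1/a, I2 = int s/a."""
    x = np.linspace(0.0, 1.0, n + 1)
    inv_a = 1.0 / (2.0 + np.sin(2.0 * np.pi * x / eps))
    h = 1.0 / n
    def cumtrapz(f):
        return np.concatenate([[0.0], np.cumsum(h * (f[1:] + f[:-1]) / 2.0)])
    I1, I2 = cumtrapz(inv_a), cumtrapz(x * inv_a)
    c = I2[-1] / I1[-1]                       # enforces u(1) = 0
    return x, c * I1 - I2

a_hat = np.sqrt(3.0)                          # harmonic mean of a(y) = 2 + sin(2*pi*y)

errors = []
for eps in (0.1, 0.05):
    x, u_eps = solve_exact(eps)
    u0 = x * (1.0 - x) / (2.0 * a_hat)        # homogenized solution for F = 1
    errors.append(np.sqrt(np.mean((u_eps - u0) ** 2)))   # discrete L^2 norm

print(errors)
```

Halving $\e$ roughly halves the $L^2$ error, consistent with (\[elliptic-estimate\]).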
For parabolic equations and systems various results are known in the case where the coefficients are time-independent [@Jikov-1994; @Suslina-2004; @Zhikov-2006; @Suslina-2015]. We note that in this case, using the partial Fourier transform in the $t$ variable, it is possible to represent the solution of the parabolic system as an integral of the resolvent of the elliptic operator $\mathcal{L}_\e$ and apply the elliptic estimates. Very few results are known if the coefficients are time-dependent. In fact, to the authors’ best knowledge, the only known estimate in this case is $$\label{parabolic-max} \| u_\e -u_0\|_{L^\infty(\Omega_T)} \le C \e,$$ obtained by the use of the maximum principle, where $C$ depends on $u_0$ and the coefficients are assumed to be smooth [@bensoussan-1978]. Our order-sharp estimates (\[main-estimate-2\])-(\[main-estimate-4\]), which extend (\[elliptic-estimate\]) to the parabolic setting, seem to be the first work in this area beyond the rough estimate (\[parabolic-max\]). We now describe some of the key ideas in the proof of Theorems \[main-theorem-1\] and \[main-theorem-2\]. Although it is not clear how to reduce parabolic systems with time-dependent coefficients to elliptic systems by some simple transformations, our general approach to the estimate (\[main-estimate-1\]) is inspired by the work on elliptic systems mentioned above. We consider the function $$\label{w-1} w_\e =u_\e (x, t)-u_0 (x, t) -\e \chi (x/\e, t/\e^2) K_\e (\nabla u_0) -\e^2 \phi (x/\e, t/\e^2) \nabla K_\e (\nabla u_0),$$ where $\chi (y, s)$ and $\phi(y,s)$ are correctors and dual correctors for the family of operators $\partial_t +\mathcal{L}_\e$, $\e>0$ (see Section 2 for their definitions). In (\[w-1\]) the operator $K_\e: L^2(\Omega_T) \to C_0^\infty(\Omega_T)$ is a parabolic smoothing operator at scale $\e$. We note that in the elliptic case [@Suslina-2012; @Suslina-2013; @Shen-Boundary-2015; @SZ-2015], only the first three terms in the r.h.s. of (\[w-1\]) are used. 
By computing $ (\partial_t +\mathcal{L}_\e) w_\e$, we are able to show that $$\label{1.1-dual} \aligned &\Big|\int_0^T \langle \partial_t w_\e, \psi\rangle +\iint_{\Omega_T} A^\e\nabla w_\e \cdot \nabla \psi \Big|\\ & \le C \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\e^{-1/2}\| \nabla u_0\|_{L^2(\Omega_{T, \e})} \Big\}\\ & \qquad \qquad \qquad \qquad \cdot \Big\{ \e \| \nabla \psi\|_{L^2(\Omega_T)} +\e^{1/2} \|\nabla \psi\|_{L^2(\Omega_{T, \e})} \Big\} \endaligned$$ for any $\psi \in L^2(0, T; H^1_0(\Omega))$ in the case of Dirichlet condition (\[IDP\]), and for any $\psi \in L^2(0, T; H^1(\Omega))$ in the case of the Neumann condition (\[INP\]), where $\Omega_{T, \e}$ denotes the set of points in $\Omega_T$ whose (parabolic) distances to the boundary of $\Omega_T$ are less than $\e$ (see Section 3 for details). By taking $\psi =w_\e$ in (\[1.1-dual\]) we obtain an $O(\sqrt{\e})$ error estimate in $L^2(0, T; H^1(\Omega))$, $$\label{H-1} \| \nabla w_\e\|_{L^2(\Omega_T)} \le C \sqrt{\e} \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\e^{-1/2} \| \nabla u_0\|_{L^2(\Omega_{T, \e})} \Big\},$$ which is more or less sharp, for both the initial-Dirichlet and the initial-Neumann problems. Finally, with (\[1.1-dual\]) at our disposal, we give the proof of Theorems \[main-theorem-1\] and \[main-theorem-2\] in Section 4. This is done by a dual argument, inspired by [@Suslina-2012; @Suslina-2013]. We point out that results on convergence rates are useful in the study of regularity estimates that are uniform in $\e>0$ [@Armstrong-Smart-2014; @Armstrong-Shen-2016; @Shen-Boundary-2015]. For solutions of $(\partial_t +\mathcal{L}_\e)u_\e=F$, the uniform boundary Hölder and interior Lipschitz estimates were proved in [@Geng-Shen-2015] by a compactness method, introduced to the study of homogenization problems in [@AL-1987]. 
The results obtained in this paper should allow us to establish the boundary Lipschitz estimates as well as Rellich estimates at large scale for parabolic systems in a manner similar to that in [@Shen-Boundary-2015] for elliptic systems of linear elasticity. We plan to carry this out in a separate study. We end this section with some notations that will be used throughout the paper. A function $h=h(y,s)$ in $\R^{d+1}$ is said to be $1$-periodic if $h$ is periodic with respect to $\mathbb{Z}^{d+1}$. We will use the notation $$h^\e (x,t)= h (x/\e, t/\e^2)$$ for $\e>0$, and the summation convention that the repeated indices are summed. Finally, we use $C$ to denote constants that depend at most on $d$, $m$, $\mu$, $T$ and $\Omega$, but never on $\e$. **Correctors and dual correctors** ================================== Let $\mathcal{L}_\varepsilon=-\text{div}\left(A^\e(x,t) \nabla\right)$, where $A^\e (x,t)=A(x/\e, t/\e^2)$ and $A(y, s)$ is 1-periodic and satisfies the ellipticity condition (\[ellipticity\]). For $1\leq j\leq d$ and $1\le \beta\le m$, the corrector $\chi_j^\beta=\chi_j^\beta (y,s)=(\chi_{j}^{\alpha\beta} (y, s))$ is defined as the weak solution of the following cell problem: $$\label{corrector} \begin{cases} \big(\partial_s +\mathcal{L}_1\big) (\chi_j^\beta) =-\mathcal{L}_1(P_j^\beta ) ~~~\text{in}~~Y, \\ \chi_j^\beta =\chi^\beta_j(y,s)~~ \text{is } \text{1-periodic in } (y,s),\\ \int_{Y} \chi_j^\beta = 0, \end{cases}$$ where $Y=[0,1)^{d+1}$, $P_j^\beta (y)=y_j e^\beta$, and $e^\beta=(0, \dots, 1, \dots, 0 )$ with $1$ in the $\beta^{th}$ position. 
Note that $$(\partial_s+\mathcal{L}_1)(\chi_j^\beta +P_j^\beta )=0~~~\text{in}~~\mathbb{R}^{d+1}.$$ By the rescaling property of $\partial_t +\mathcal{L}_\e$, one obtains that $$(\partial_t+\mathcal{L}_\varepsilon)\left\{\varepsilon\chi_j^\beta(x/\varepsilon,t/\varepsilon^2)+P^\beta_j(x)\right\} =0~~~\text{in}~~\mathbb{R}^{d+1}.$$ Let $\widehat{A}=(\widehat{a}^{\alpha\beta}_{ij})$, where $1\leq i,j\leq d$, $1\le \alpha, \beta\le m$, and $$\begin{aligned} \label{A} \widehat{a}^{\alpha\beta}_{ij}=\dashint_{Y}\left[a^{\alpha\beta}_{ij}+a^{\alpha\gamma} _{i k}\frac{\partial}{\partial y_k}\chi^{\gamma\beta}_{j}\right];\end{aligned}$$ that is $$\widehat{A}=\dashint_Y \Big\{ A +A\nabla \chi \Big\}.$$ It is known that the constant matrix $\widehat{A}$ satisfies the ellipticity condition, $$\mu |\xi|^2 \le \widehat{a}_{ij}^{\alpha\beta} \xi_i^\alpha \xi_j^\beta \le \mu_1 |\xi|^2 \qquad \text{ for any } \xi=(\xi_j^\beta) \in \R^{m\times d},$$ where $\mu_1>0$ depends only on $d$, $m$ and $\mu$ [@bensoussan-1978]. Denote $\mathcal{L}_0=-\text{div}(\widehat{A}\nabla)$. Then $\partial_t+\mathcal{L}_0$ is the homogenized operator for the family of parabolic operators $\partial_t+\mathcal{L}_\varepsilon$, $\e>0$. To introduce the dual correctors, we consider the 1-periodic matrix-valued function $$\label{B} B= A + A\nabla \chi -\widehat{A}.$$ More precisely, $B=B(y, s)= \big( b_{ij}^{\alpha\beta}\big)$, where $1\le i, j\le d$, $1\le \alpha, \beta\le m$, and $$\label{b} b_{ij}^{\alpha\beta}=a_{ij}^{\alpha\beta}+a_{ik}^{\alpha\gamma} \frac{\partial \chi^{\gamma\beta}_j}{\partial y_k}-\widehat{a}^{\alpha\beta}_{ij}.$$ \[1.15\] Let $1\leq j\leq d$ and $1\le \alpha, \beta\le m$. 
Then there exist 1-periodic functions $\phi_{kij}^{\alpha\beta}(y,s)$ in $\R^{d+1}$ such that $\phi_{kij}^{\alpha\beta}\in H^1(Y)$, $$\label{1.10} b_{ij}^{\alpha\beta}=\frac{\partial}{\partial y_k}(\phi^{\alpha\beta}_{kij})~~\text{ and }~~\phi^{\alpha\beta}_{kij} =-\phi_{ikj}^{\alpha\beta},$$ where $1\le k, i\le d+1$, $b_{ij}^{\alpha\beta}$ is defined by (\[b\]) for $1\le i\le d$, $b_{(d+1)j}^{\alpha\beta}=-\chi_j^{\alpha\beta}$, and we have used the notation $y_{d+1}=s$. Observe that by (\[corrector\]) and (\[A\]), $b_{ij}^{\alpha\beta} \in L^2 (Y)$ and $$\label{1.8-1} \int_{Y} b_{ij}^{\alpha \beta}=0$$ for $1\le i\le d+1$. It follows that there exist $f_{ij}^{\alpha\beta} \in H^2 (Y)$ such that $$\label{1.8-2} \begin{cases} \Delta_{d+1} f_{ij}^{\alpha\beta}=b^{\alpha\beta}_{ij}~~~~\text{ in } \R^{d+1}, \\ f_{ij}^{\alpha\beta} ~~\text{is 1-periodic}~~~\text{ in } \R^{d+1}, \end{cases}$$ where $\Delta_{d+1}$ denotes the Laplacian in $\R^{d+1}$. Write $$\begin{aligned} \label{1.11} b_{ij}^{\alpha\beta}= \frac{\partial}{\partial y_k}\left\{\frac{\partial}{\partial y_k}f_{ij}^{\alpha\beta} -\frac{\partial}{\partial y_i}f_{kj}^{\alpha\beta}\right\}+\frac{\partial}{\partial y_i}\left\{\frac{\partial}{\partial y_k}f_{kj}^{\alpha\beta}\right\},\end{aligned}$$ where the index $k$ is summed from $1$ to $d+1$. Note that by (\[corrector\]), $$\label{div} \sum_{i=1}^{d+1} \frac{\partial b^{\alpha\beta}_{ij}}{\partial y_i} =\sum_{i=1}^d \frac{\partial}{\partial y_i}b^{\alpha\beta}_{ij}-\frac{\partial}{\partial s}\chi_j^{\alpha\beta}=0.$$ In view of (\[1.8-2\]) this implies that $$\sum_{i=1}^{d+1}\frac{\partial }{\partial y_i} f_{ij}^{\alpha\beta}$$ is harmonic in $\R^{d+1}$. Since it is 1-periodic, it must be constant. 
Consequently, by (\[1.11\]), we obtain $$\begin{aligned} \label{1.13} b_{ij}^{\alpha\beta}=\frac{\partial}{\partial y_k}(\phi_{kij}^{\alpha\beta}),\end{aligned}$$ where $$\begin{aligned} \label{1.14} \phi_{kij}^{\alpha\beta} =\frac{\partial}{\partial y_k}f_{ij}^{\alpha\beta} -\frac{\partial}{\partial y_i}f_{kj}^{\alpha\beta}\end{aligned}$$ is 1-periodic and belongs to $H^1(Y)$. It is easy to see that $\phi_{kij}^{\alpha\beta}=-\phi_{ikj}^{\alpha\beta}$. This completes the proof. The 1-periodic functions $(\phi_{kij}^{\alpha\beta})$ given by Lemma \[1.15\] are called dual correctors for the family of parabolic operators $\partial_t +\mathcal{L}_\e$, $\e>0$. As in the elliptic case [@Jikov-1994; @KLS2], they play an important role in the study of the problem of convergence rates. Indeed, to establish the main results of this paper, we shall consider the function $w_\e = ( w_\e^\alpha)$, where $$\label{w} \aligned w_\e ^\alpha (x, t) = u_\e^\alpha (x, t) -u_0^\alpha (x, t) & -\e \chi_j^{\alpha\beta} (x/\e, t/\e^2) K_\e \left(\frac{\partial u_0^\beta}{\partial x_j}\right)\\ &-\e^2 \phi_{(d+1) ij}^{\alpha\beta} (x/\e, t/\e^2) \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0^\beta}{\partial x_j}\right), \endaligned$$ and $K_\e : L^2(\Omega_T) \to C_0^\infty(\Omega_T)$ is a linear operator to be chosen later. The repeated indices $i, j$ in (\[w\]) are summed from $1$ to $d$. \[Theorem-2.1\] Let $\Omega$ be a bounded Lipschitz domain in $\R^d$ and $0<T<\infty$. Let $u_\e \in L^2(0, T; H^1(\Omega))$ and $u_0\in L^2(0, T; H^2(\Omega))$ be solutions of the initial-Dirichlet problems (\[IDP\]) and (\[IDP-0\]), respectively. Let $w_\e$ be defined by (\[w\]). 
Then for any $\psi \in L^2(0, T; H^1_0(\Omega))$, $$\label{L-w} \aligned \int_0^T &\big\langle \partial_t w_\e, \psi\big\rangle_{H^{-1}(\Omega)\times H^1_0(\Omega)} +\iint_{\Omega_T} A^\e \nabla w_\e \cdot \nabla \psi\\ =& \iint_{\Omega_T} (\widehat{a}_{ij} -a_{ij}^\e ) \left( \frac{\partial u_0}{\partial x_j} - K_\e \left( \frac{\partial u_0}{\partial x_j}\right) \right) \frac{\partial \psi}{\partial x_i}\\ & - \e \iint_{\Omega_T} a_{ij}^\e \cdot \chi_k^\e \cdot \frac{\partial }{\partial x_j} K_\e \left(\frac{\partial u_0}{\partial x_k} \right) \cdot \frac{\partial \psi}{\partial x_i} \\ & -\e \iint_{\Omega_T} \phi_{kij}^\e \cdot \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j}\right) \cdot \frac{\partial \psi}{\partial x_k}\\ &-\e^2 \iint_{\Omega_T} \phi_{k(d+1)j}^\e \cdot \partial_t K_\e \left(\frac{\partial u_0}{\partial x_j}\right)\cdot\frac{\partial \psi}{\partial x_k} \\ &+ \e \iint_{\Omega_T} a_{ij}^\e \cdot \left(\frac{\partial}{\partial x_j} ( \phi_{(d+1) \ell k} ) \right)^\e \cdot \frac{\partial}{\partial x_\ell} K_\e \left(\frac{\partial u_0}{\partial x_k} \right) \cdot \frac{\partial \psi}{\partial x_i} \\ &+\e^2 \iint_{\Omega_T} a_{ij}^\e \cdot \phi_{(d+1) \ell k } ^\e \cdot \frac{\partial^2 }{\partial x_j \partial x_\ell} K_\e \left(\frac{\partial u_0}{\partial x_k}\right) \cdot \frac{\partial \psi}{\partial x_i}, \endaligned$$ where we have suppressed superscripts $\alpha, \beta$ for the simplicity of presentation. The repeated indices $i, j, k, \ell$ are summed from $1$ to $d$. 
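Before turning to the proof we note that the construction in Lemma \[1.15\], namely solving $\Delta_{d+1} f_{ij}=b_{ij}$ by Fourier series on the cell and antisymmetrizing as in (\[1.14\]), is easy to test numerically. The sketch below is our own illustration (not from the paper): we take $d+1=2$ and a hand-picked 1-periodic field $b$ that is mean-zero and divergence-free, the two properties (\[1.8-1\]) and (\[div\]) actually used in the proof, and verify the representation (\[1.10\]):

```python
import numpy as np

# Unit cell with d+1 = 2; spectral derivatives on the torus.
N = 64
y = np.arange(N) / N
Y1, Y2 = np.meshgrid(y, y, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
K1, K2 = np.meshgrid(k, k, indexing="ij")
K = [K1, K2]

def dy(u, Kj):
    """Spectral partial derivative of a 1-periodic grid function."""
    return np.real(np.fft.ifft2(1j * Kj * np.fft.fft2(u)))

def poisson(u):
    """Solve Delta f = u on the torus (u mean-zero), normalized by mean(f) = 0."""
    lap = -(K1**2 + K2**2)
    lap[0, 0] = 1.0               # dummy value; the mean mode is zeroed below
    uh = np.fft.fft2(u) / lap
    uh[0, 0] = 0.0
    return np.real(np.fft.ifft2(uh))

# A stand-in for b: mean-zero and divergence-free, cf. (1.8-1) and (div).
psi = np.sin(2.0 * np.pi * Y1) * np.cos(2.0 * np.pi * Y2)
b = [dy(psi, K2), -dy(psi, K1)]

f = [poisson(bi) for bi in b]     # Delta f_i = b_i, as in (1.8-2)

# Dual correctors phi_{ki} = d_k f_i - d_i f_k, as in (1.14).
phi = [[dy(f[i], K[kk]) - dy(f[kk], K[i]) for i in range(2)] for kk in range(2)]

# Skew-symmetry and the representation b_i = d_k phi_{ki} from (1.10).
for i in range(2):
    rec = dy(phi[0][i], K1) + dy(phi[1][i], K2)
    assert np.max(np.abs(rec - b[i])) < 1e-10
```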
Using (\[IDP\]) and (\[IDP-0\]), we see that $$\aligned \big(\partial_t +\mathcal{L}_\e) w_\e &=(\mathcal{L}_0 -\mathcal{L}_\e) u_0 -(\partial_t +\mathcal{L}_\e) \left\{ \e \chi_j^\e K_\e \left(\frac{\partial u_0}{\partial x_j}\right)\right\}\\ &\qquad \qquad \qquad -(\partial_t +\mathcal{L}_\e ) \left\{ \e^2 \phi_{(d+1) ij}^\e \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\}\\ &=-\frac{\partial}{\partial x_i} \left\{ (\widehat{a}_{ij} -a_{ij}^\e ) \left( \frac{\partial u_0}{\partial x_j} - K_\e \left( \frac{\partial u_0}{\partial x_j}\right) \right) \right\}\\ &\qquad -\frac{\partial}{\partial x_i} \left\{ (\widehat{a}_{ij} -a_{ij}^\e) K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\} -(\partial_t +\mathcal{L}_\e) \left\{ \e \chi_j^\e K_\e \left(\frac{\partial u_0}{\partial x_j}\right)\right\}\\ &\qquad \qquad \qquad -(\partial_t +\mathcal{L}_\e ) \left\{ \e^2 \phi_{(d+1) ij}^\e \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\}. \endaligned$$ By computing the third term in the r.h.s. of the equalities above and using (\[b\]), we obtain $$\aligned \big(\partial_t +\mathcal{L}_\e) w_\e &=-\frac{\partial}{\partial x_i} \left\{ (\widehat{a}_{ij} -a_{ij}^\e ) \left( \frac{\partial u_0}{\partial x_j} - K_\e \left( \frac{\partial u_0}{\partial x_j}\right) \right) \right\}\\ &\qquad +\frac{\partial}{\partial x_i} \left\{ b_{ij}^\e K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\} +\e \frac{\partial}{\partial x_i} \left\{ a_{ij}^\e \cdot \chi_k^\e \cdot \frac{\partial}{\partial x_j} K_\e \left(\frac{\partial u_0}{\partial x_k}\right) \right\}\\ &\qquad - \e \partial_t \left\{ \chi_j^\e K_\e \left(\frac{\partial u_0}{\partial x_j}\right)\right\} -(\partial_t +\mathcal{L}_\e ) \left\{ \e^2 \phi_{(d+1) ij}^\e \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\}. 
\endaligned$$ In view of (\[div\]) this gives $$\label{2.2-10} \aligned \big(\partial_t +\mathcal{L}_\e) w_\e &=-\frac{\partial}{\partial x_i} \left\{ (\widehat{a}_{ij} -a_{ij}^\e ) \left( \frac{\partial u_0}{\partial x_j} - K_\e \left( \frac{\partial u_0}{\partial x_j}\right) \right) \right\}\\ & \qquad +\e \frac{\partial}{\partial x_i} \left\{ a_{ij}^\e \cdot \chi_k^\e \cdot \frac{\partial}{\partial x_j} K_\e \left(\frac{\partial u_0}{\partial x_k}\right) \right\} +b_{ij}^\e\cdot \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\\ &\qquad -\e \chi_j^\e \partial_t K_\e \left(\frac{\partial u_0}{\partial x_j}\right) -(\partial_t +\mathcal{L}_\e ) \left\{ \e^2 \phi_{(d+1) ij}^\e \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\}. \endaligned$$ Next, by Lemma \[1.15\], we may write $$\aligned & b_{ij}^\e\cdot \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right) -\e \chi_j^\e \partial_t K_\e \left(\frac{\partial u_0}{\partial x_j}\right)\\ & \qquad =\e \frac{\partial}{\partial x_k} \Big(\phi_{kij}^\e\Big)\cdot \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j}\right) +\e^2 \partial_t \Big(\phi_{(d+1) ij}^\e \Big) \cdot \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\\ &\qquad \qquad +\e^2 \frac{\partial}{\partial x_k} \Big( \phi_{k (d+1) j }^\e\Big )\cdot \partial_t K_\e \left(\frac{\partial u_0}{\partial x_j} \right), \endaligned$$ where we have also used the fact $\phi_{(d+1)(d+1) j}=0$. 
Furthermore, by the skew-symmetry in (\[1.10\]), we see that $$\aligned & b_{ij}^\e\cdot \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right) -\e \chi_j^\e \partial_t K_\e \left(\frac{\partial u_0}{\partial x_j}\right)\\ & \qquad =\e \frac{\partial}{\partial x_k} \left\{ \phi_{kij}^\e \cdot \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j}\right)\right\} +\e^2 \partial_t \left\{ \phi_{(d+1) ij}^\e \cdot \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\} \\ &\qquad \qquad +\e^2 \frac{\partial}{\partial x_k} \left\{ \phi_{k (d+1) j }^\e\cdot \partial_t K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\}. \endaligned$$ This, combined with (\[2.2-10\]), gives the desired equation (\[L-w\]). The next theorem is concerned with the initial-Neumann problem. \[Theorem-2.2\] Let $\Omega$ be a bounded Lipschitz domain in $\R^d$ and $0<T<\infty$. Let $u_\e \in L^2(0, T; H^1(\Omega))$ and $u_0\in L^2(0, T; H^2(\Omega))$ be solutions of the initial-Neumann problems (\[INP\]) and (\[INP-0\]), respectively. Let $w_\e$ be defined by (\[w\]). Then the equation (\[L-w\]) holds for any $\psi \in L^2(0, T; H^1(\Omega))$, if $\langle, \rangle$ in its l.h.s. denotes the pairing between $H^1(\Omega)$ and its dual. It follows from (\[INP\]) and (\[INP-0\]) that $$\int_0^T \big\langle \partial_t u_\e, \psi \big\rangle +\iint_{\Omega_T} A^\e \nabla u_\e \cdot \nabla \psi =\int_0^T \big\langle \partial_t u_0, \psi\big \rangle +\iint_{\Omega_T} \widehat{A} \nabla u_0 \cdot \nabla \psi$$ for any $\psi \in L^2(0, T; H^1(\Omega))$. 
This gives $$\aligned \int_0^T &\big \langle \partial_t w_\e, \psi\big\rangle +\iint_{\Omega_T} A^\e \nabla w_\e \cdot \nabla \psi\\ &=\iint_{\Omega_T} (\widehat{A}-A^\e )\nabla u_0 \cdot \nabla \psi -\int_0^T \Big\langle (\partial_t +\mathcal{L}_\e) \left\{ \e \chi_j^\e K_\e \left(\frac{\partial u_0}{\partial x_j}\right)\right\}, \psi \Big\rangle \\ & \qquad -\int_0^T \Big\langle (\partial_t +\mathcal{L}_\e ) \left\{ \e^2 \phi_{(d+1) ij}^\e \frac{\partial}{\partial x_i} K_\e \left(\frac{\partial u_0}{\partial x_j} \right)\right\}, \psi \Big\rangle, \endaligned$$ where we have used the fact $K_\e (\nabla u_0)\in C_0^\infty(\Omega_T)$. The rest of the proof is similar to that of Theorem \[Theorem-2.1\]. We omit the details. Error estimates in $L^2(0, T; H^1(\Omega))$ =========================================== We begin by introducing a parabolic smoothing operator. Fix a nonnegative function $\theta=\theta (y,s) \in C_0^\infty(B(0,1))$ such that $\int_{\mathbb{R}^{d+1}}\theta=1$. Define $$\label{1.18} \aligned S_\varepsilon(f)(x,t)&=\frac{1}{\varepsilon^{d+2}}\int_{\mathbb{R}^{d+1}}f(x-y,t-s) \theta(y/\varepsilon,s/\varepsilon^2)\, dyds \\&=\int_{\mathbb{R}^{d+1}}f(x-\varepsilon y,t-\varepsilon^2 s)\theta(y,s)\, dyds. \endaligned$$ \[lemma-S-1\] Let $S_\varepsilon$ be defined as in (\[1.18\]). Then $$\begin{aligned} \label{1.19-2} \| S_\varepsilon (f)\|_{L^2({\mathbb{R}^{d+1}})}\leq \| f \|_{L^2({\mathbb{R}^{d+1}})},\end{aligned}$$ $$\begin{aligned} \label{1.19-3} \e\, \| \nabla S_\varepsilon (f)\|_{L^2({\mathbb{R}^{d+1}})} +\e^2 \|\nabla^2 S_\e (f)\|_{L^2(\R^{d+1})} \leq C\, \| f \|_{L^2({\mathbb{R}^{d+1}})},\end{aligned}$$ $$\begin{aligned} \label{1.19-4} \e^2 \| \partial_t S_\varepsilon (f)\|_{L^2({\mathbb{R}^{d+1}})}\leq C\, \| f \|_{L^2({\mathbb{R}^{d+1}})},\end{aligned}$$ where $C$ depends only on $d$. This follows easily from the Plancherel Theorem. \[lemma-S-2\] Let $S_\e$ be defined as in (\[1.18\]). 
Then $$\label{S-approx} \| \nabla S_\e (f) -\nabla f \|_{L^2(\R^{d+1})} \le C \e \Big\{ \| \nabla^2 f \|_{L^2(\R^{d+1})} +\| \partial_t f \|_{L^2(\R^{d+1})} \Big\},$$ where $C$ depends only on $d$. By the Plancherel Theorem it suffices to show that $$|\xi_i \widehat{\theta} (\e \xi^\prime, \e^2 \xi_{d+1}) -\xi_i \widehat{\theta}(0, 0)| \le C\e \big\{ |\xi^\prime|^2 +|\xi_{d+1}| \big\},$$ where $1\le i\le d$ and $\xi^\prime=(\xi_1, \dots, \xi_d)\in \R^d$. Furthermore, by a change of variables, one may assume that $\e=1$. In this case, if $|\xi^\prime|\ge 1$, then $$|\xi_i \widehat{\theta} ( \xi^\prime, \xi_{d+1}) -\xi_i \widehat{\theta}(0, 0)| \le C |\xi^\prime|\le C (|\xi^\prime|^2 +|\xi_{d+1}|).$$ If $|\xi^\prime|\le 1$, we have $$|\xi_i \widehat{\theta} ( \xi^\prime, \xi_{d+1}) -\xi_i \widehat{\theta}(0, 0)| \le C |\xi^\prime| ( |\xi^\prime| +|\xi_{d+1}|) \le C ( |\xi^\prime|^2 +|\xi_{d+1}|).$$ This completes the proof. \[lemma-S-3\] Let $g=g(y,s)$ be a 1-periodic function in $(y,s)$. Then $$\begin{aligned} \label{1.22} \| g^\e S_\varepsilon (f)\|_{L^p({\mathbb{R}^{d+1}})}\leq C\, \| g \|_{L^p(Y)} \| f \|_{L^p({\mathbb{R}^{d+1}})}\end{aligned}$$ for any $1\le p<\infty$, where $g^\e (x, t)= g(x/\e, t/\e^2)$ and $C$ depends only on $d$ and $p$. Note that $S_\e (f) (x, t)= S_1 (f_\e ) (\e^{-1} x, \e^{-2} t)$, where $f_\e (x, t)= f(\e x , \e^2 t)$. As a result, by a change of variables, it suffices to consider the case $\e=1$. In this case we first use $\int_{\R^{d+1}} \theta =1$ and Hölder’s inequality to obtain $$|S_1 (f) (x, t)|^p \le \int_{\R^{d+1}} |f(y, s)|^p \, \theta (x-y, t-s)\, dyds.$$ It follows by Fubini’s Theorem that $$\aligned \int_{\R^{d+1}} |g(x,t)|^p | S_1 (f)(x, t)|^p\, dx dt &\le \sup_{(y, s)\in \R^{d+1}} \int_{B((y,s), 1)} | g(x, t)|^p \, dx dt \int_{\R^{d+1}} | f(y, s)|^p\, dyds\\ &\le C\, \| g \|^p_{L^p(Y)} \| f \|_{L^p(\R^{d+1})}^p, \endaligned$$ where $C$ depends only on $d$. This gives (\[1.22\]) for the case $\e=1$. 
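The bound (\[1.19-2\]) is simply Young's inequality, $\|f*\theta_\e\|_{L^2}\le \|\theta_\e\|_{L^1}\|f\|_{L^2}=\|f\|_{L^2}$, and it survives discretization verbatim. The sketch below is our own illustration (one spatial variable only, treated periodically, with an ad hoc bump $\theta$, so it is only an analogue of the parabolic operator (\[1.18\])): the discrete $L^2$ norm never increases under smoothing, while $S_\e(f)\to f$ for smooth $f$:

```python
import numpy as np

N = 4096
h = 1.0 / N
x = np.arange(N) * h

def smooth(f, eps):
    """Discrete one-variable analogue of S_eps: circular convolution of f with
    the rescaled bump theta(y) = exp(-1/(1-y^2)) supported in (-1, 1)."""
    j = np.arange(-N // 2, N // 2)
    r = j * h / eps
    theta = np.where(np.abs(r) < 1.0,
                     np.exp(-1.0 / (1.0 - np.minimum(r * r, 0.999999))), 0.0)
    theta /= theta.sum()                      # discrete normalization: sum = 1
    # circular convolution via FFT (f is taken 1-periodic in the sketch)
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(np.fft.ifftshift(theta))))

f = np.sin(2.0 * np.pi * x) + 0.5 * np.cos(6.0 * np.pi * x)
l2 = lambda g: np.sqrt(h * np.sum(g * g))

errs = []
for eps in (0.1, 0.05):
    Sf = smooth(f, eps)
    assert l2(Sf) <= l2(f) + 1e-12            # discrete version of (1.19-2)
    errs.append(l2(Sf - f))
print(errs)                                   # smoothing error shrinks with eps
```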
\[remark-S\] [The same argument as in the proof of Lemma \[lemma-S-3\] also shows that $$\label{S-remark-1} \aligned \| g^\e \nabla S_\e (f)\|_{L^p(\R^{d+1})} & \le C \e^{-1} \| g\|_{L^p(Y)} \| f\|_{L^p(\R^{d+1})},\\ \| g^\e \partial_t S_\e (f)\|_{L^p(\R^{d+1})} & \le C \e^{-2} \| g\|_{L^p(Y)} \| f\|_{L^p(\R^{d+1})} \endaligned$$ for $1\le p<\infty$, where $C$ depends only on $d$ and $p$. ]{} Let $\delta \in (2\e, 20\e)$. Choose $\eta_1 \in C_0^\infty(\Omega)$ such that $0\le \eta_1\le 1$, $\eta_1 (x)=1$ if dist$(x, \partial\Omega)\ge 2\delta$, $\eta_1 (x)=0$ if dist$(x, \partial\Omega)\le \delta$, and $|\nabla_x \eta_1|\le C \delta^{-1}$. Similarly, we choose $\eta_2\in C_0^\infty(0, T)$ such that $0\le \eta_2\le 1$, $\eta_2 (t) =1$ if $2\delta^2 \le t\le T-2\delta^2 $, $\eta_2 (t)=0$ if $t\le \delta^2 $ or $t>T-\delta^2 $, and $| \eta_2^\prime (t)|\le C \delta^{-2}$. We define the operator $K_\e=K_{\e, \delta}: L^2(\Omega_T) \to C_0^\infty (\Omega_T)$ by $$\label{K} K_\e (f) (x, t) = S_\e ( \eta_1 \eta_2 f ) (x, t).$$ \[main-lemma-3.1\] Let $\Omega$ be a bounded Lipschitz domain in $\R^d$ and $0<T<\infty$. Let $u_\varepsilon, u_0\in L^2(0,T;H^1(\Omega))$ be weak solutions of (\[IDP\]) and (\[IDP-0\]), respectively, for some $F\in L^2(\Omega_T)$. We further assume that $u_0\in L^2(0, T; H^2(\Omega))$ and $\partial_t u_0 \in L^2(\Omega_T)$. Let $w_\e$ be defined by (\[w\]), where the operator $K_\e$ is given by (\[K\]). 
Then for any $\psi \in L^2(0, T; H_0^1(\Omega))$, $$\label{main-estimate-3.1} \aligned & \Big| \int_0^T \big\langle (\partial_t +\mathcal{L}_\e) w_\e, \psi \big\rangle_{H^{-1}(\Omega) \times H^1_0(\Omega)} \, dt \Big| \\ &\le C \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0 \|_{L^2(\Omega_T)} +\e^{-1/2} \| \nabla u_0\|_{L^2(\Omega_{T, 3\delta})} \Big\}\\ &\qquad \qquad \cdot \Big\{ \e \|\nabla \psi\|_{L^2(\Omega_T)} +\e^{1/2} \|\nabla \psi\|_{L^2(\Omega_{T, 3\delta})} \Big\}, \endaligned$$ where $$\label{o-e} \Omega_{T, \delta} =\left( \big\{ x\in \Omega: \, \text{\rm dist}(x, \partial\Omega)\le \delta \big\} \times (0, T) \right)\cup \left( \Omega \times (0, \delta^2)\right) \cup \left(\Omega\times (T-\delta^2, T)\right),$$ and $C>0$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. Using Theorem \[Theorem-2.1\], it is not hard to see that the l.h.s. of (\[main-estimate-3.1\]) is bounded by $$\label{3.4-1} \aligned &C \iint_{\Omega_T} |\nabla u_0 -K_\e (\nabla u_0)| |\nabla \psi|\\ &\qquad +C \e \iint_{\Omega_T} \Big\{ |\chi^\e| +|\phi^\e| +|(\nabla \phi)^\e|\Big\} |\nabla K_\e (\nabla u_0)||\nabla \psi|\\ &\qquad + C \e^2 \iint_{\Omega_T} |\phi^\e| \Big\{ |\partial_t K_\e (\nabla u_0)| +|\nabla^2 K_\e (\nabla u_0)|\Big\} |\nabla \psi|\\ &=I_1 +I_2 +I_3, \endaligned$$ where $C$ depends only on $d$, $m$ and $\mu$. 
To estimate $I_2$, we note that $$\label{3.4-2} \nabla K_\e (\nabla u_0) =\nabla S_\e (\eta_1\eta_2(\nabla u_0)) =S_\e (\nabla (\eta_1 \eta_2) (\nabla u_0)) +S_\e (\eta_1\eta_2(\nabla^2 u_0)).$$ It follows by the Cauchy inequality and Lemma \[lemma-S-3\] that $$\aligned I_2 \le &C \e \left(\iint_{\Omega_T} | \Big\{ |\chi^\e| +|\phi^\e| +|(\nabla \phi)^\e|\Big\} S_\e (\nabla (\eta_1\eta_2)(\nabla u_0))|^2 \right)^{1/2} \left(\iint_{\Omega_{T, 3\delta}} |\nabla \psi|^2 \right)^{1/2}\\ &\quad +C \e \left(\iint_{\Omega_T} | \Big\{ |\chi^\e| +|\phi^\e| +|(\nabla \phi)^\e|\Big\} S_\e ( \eta_1\eta_2(\nabla^2 u_0))|^2 \right)^{1/2} \left(\iint_{\Omega_T} |\nabla \psi|^2 \right)^{1/2}\\ &\le C \left(\iint_{\Omega_{T, 3\delta}} |\nabla u_0|^2\right)^{1/2} \left(\iint_{\Omega_{T, 3\delta}} |\nabla \psi|^2\right)^{1/2}\\ &\qquad+C \e \left(\iint_{\Omega_{T}} |\nabla^2 u_0|^2\right)^{1/2} \left(\iint_{\Omega_T} |\nabla \psi|^2\right)^{1/2}, \endaligned$$ where we also have used the observation that $S_\e (\nabla (\eta_1\eta_2)(\nabla u_0))$ is supported in $\Omega_{T, 3\delta}$. This shows that $I_2$ is bounded by the r.h.s. of (\[main-estimate-3.1\]). 
Next, to handle the term $I_3$, we note that $$\aligned \partial_t K_\e (\nabla u_0) &=\partial_t S_\e (\eta_1\eta_2(\nabla u_0)) =S_\e (\partial_t (\eta_1\eta_2) \nabla u_0) +S_\e (\eta_1\eta_2(\nabla \partial_t u_0))\\ & =S_\e (\partial_t (\eta_1\eta_2) \nabla u_0) +\nabla S_\e (\eta_1\eta_2 (\partial_t u_0)) -S_\e (\nabla (\eta_1\eta_2) (\partial_t u_0)), \endaligned$$ and $$\nabla^2 K_\e (\nabla u_0) =\nabla S_\e (\nabla (\eta_1\eta_2) (\nabla u_0)) +\nabla S_\e (\eta_1 \eta_2( \nabla^2 u_0)).$$ As in the case of $I_2$, by the Cauchy inequality and Remark \[remark-S\], this gives $$\aligned I_3 \le &C \left(\iint_{\Omega_{T, 3\delta}} |\nabla u_0|^2\right)^{1/2} \left(\iint_{\Omega_{T, 3\delta}} |\nabla \psi|^2 \right)^{1/2}\\ &+ C\e \left(\iint_{\Omega_{T}} |\partial_t u_0|^2\right)^{1/2} \left(\iint_{\Omega_{T}} |\nabla \psi|^2 \right)^{1/2}\\ &+C\e \left(\iint_{\Omega_{T}} |\nabla^2 u_0|^2\right)^{1/2} \left(\iint_{\Omega_{T}} |\nabla \psi|^2 \right)^{1/2}, \endaligned$$ which is bounded by the r.h.s. of (\[main-estimate-3.1\]). Finally, to estimate $I_1$, we observe that $$\label{3.4-3} \aligned I_1 \le & C\iint_{\Omega_{T, 2\delta}}\Big\{ |\nabla u_0| +S_\e (\eta_1\eta_2 |\nabla u_0|)\Big\} |\nabla \psi| +C\iint_{\Omega_T\setminus \Omega_{T, 2\delta}} | (\nabla u_0 -S_\e (\nabla u_0)) | |\nabla \psi|\\ &\le C \left(\iint_{\Omega_{T, 3\delta}} |\nabla u_0|^2\right)^{1/2} \left(\iint_{\Omega_{T, 3\delta}} |\nabla \psi|^2 \right)^{1/2}\\ &\qquad + C \left(\iint_{\Omega_T\setminus \Omega_{T, 2\delta}} |\nabla u_0 -S_\e (\nabla u_0)|^2\right)^{1/2} \left(\iint_{\Omega_T} |\nabla \psi|^2 \right)^{1/2}. \endaligned$$ To treat the second term in the r.h.s. 
of (\[3.4-3\]), we extend $u_0$ to a function $\widetilde{u}_0$ in $\R^{d+1}$ such that $$\left(\iint_{\R^{d+1}} |\nabla^2 \widetilde{u}_0|^2\right)^{1/2} +\left(\iint_{\R^{d+1}} |\partial_t \widetilde{u}_0|^2\right)^{1/2} \le C \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} \Big\},$$ using Calderón's extension theorem. It follows that $$\aligned \left(\iint_{\Omega_T\setminus \Omega_{T, 2\delta}} |\nabla u_0 -S_\e (\nabla u_0)|^2\right)^{1/2} & \le \left(\iint_{\R^{d+1}} |\nabla \widetilde{u}_0 -S_\e (\nabla \widetilde{u}_0)|^2\right)^{1/2}\\ &\le C\e \Big\{ \|\nabla^2 \widetilde{u}_0\|_{L^2(\R^{d+1})} +\|\partial_t \widetilde{u}_0\|_{L^2(\R^{d+1})}\Big\} \\ &\le C\e \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} \Big\}, \endaligned$$ where we have used Lemma \[lemma-S-2\] for the second inequality. As a result, we see that $I_1$ is also bounded by the r.h.s. of (\[main-estimate-3.1\]). This completes the proof. \[remark-u-0\] [ Let $\Omega^\delta=\big\{ x\in \Omega: \text{dist}(x, \partial\Omega)< \delta \big\}$. Then $$\label{boundary-estimate} \int_{\Omega^{\delta}} |\nabla u_0|^2 \le C \delta \| \nabla u_0\|^2_{H^1(\Omega)}$$ (see e.g. [@SZ-2015] for a proof). It follows that $$\aligned \|\nabla u_0\|_{L^2(\Omega_{T, 3\delta})} &\le \left(\int_0^T \int_{\Omega^{3\delta}} |\nabla u_0|^2 \right)^{1/2} +\left(\int_0^{c\e^2} \int_\Omega |\nabla u_0|^2 \right)^{1/2} +\left(\int_{T-c\e^2}^T \int_\Omega |\nabla u_0|^2\right)^{1/2}\\ &\le C \e^{1/2} \left\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} + \sup_{\e^2<t<T} \left(\frac{1}{\e} \int_{t-\e^2}^t \int_\Omega |\nabla u_0|^2 \right)^{1/2} \right\}. \endaligned$$ ]{} The next theorem provides an $O(\sqrt{\e})$ error estimate in $L^2(0, T; H^1_0(\Omega))$ for the initial-Dirichlet problem (\[IDP\]). \[IDP-H-1\] Let $w_\e$ be defined by (\[w\]). 
Under the same assumptions as in Lemma \[main-lemma-3.1\], we have $$\label{estimate-IDP-H-1} \aligned &\|\nabla w_\e\|_{L^2(\Omega_T)}\\ &\le C \sqrt{\e} \left\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\sup_{\e^2<t<T} \left(\frac{1}{\e} \int_{t-\e^2}^t \int_\Omega |\nabla u_0|^2 \right)^{1/2} \right\}, \endaligned$$ where $C$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. Note that $w_\e \in L^2(0, T; H^1_0(\Omega))$ and $w_\e =0$ on $\Omega \times \{ t=0\}$. It follows that $$\aligned &\mu \iint_{\Omega_T} |\nabla w_\e|^2 \le \int_0^T \big \langle (\partial_t +\mathcal{L}_\e) w_\e, w_\e\big\rangle_{H^{-1}(\Omega) \times H^1_0(\Omega) }\\ &\le C \sqrt{\e} \|\nabla w_\e\|_{L^2(\Omega_T)} \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0 \|_{L^2(\Omega_T)} +\e^{-1/2} \| \nabla u_0\|_{L^2(\Omega_{T, 60\e})} \Big\}, \endaligned$$ where we have used Lemma \[main-lemma-3.1\] for the last step. This, together with Remark \[remark-u-0\], gives (\[estimate-IDP-H-1\]). Next we consider the initial-Neumann problem (\[INP\]). \[main-lemma-3.2\] Let $\Omega$ be a bounded Lipschitz domain in $\R^d$ and $0<T<\infty$. Let $u_\varepsilon, u_0\in L^2(0,T;H^1(\Omega))$ be weak solutions of the initial-Neumann problems (\[INP\]) and (\[INP-0\]), respectively, for some $F\in L^2(\Omega_T)$. We further assume that $u_0\in L^2(0, T; H^2(\Omega))$ and that $\partial_t u_0 \in L^2(\Omega_T)$. Let $w_\e$ be defined by (\[w\]), where the operator $K_\e$ is given by (\[K\]). 
Then for any $\psi \in L^2(0, T; H^1(\Omega))$, $$\label{main-estimate-3.2} \aligned & \Big| \int_0^T \big\langle \partial_t w_\e, \psi\big \rangle +\iint_{\Omega_T} A^\e \nabla w_\e \cdot \nabla \psi \Big| \\ &\le C \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0 \|_{L^2(\Omega_T)} +\e^{-1/2} \| \nabla u_0\|_{L^2(\Omega_{T, 3\delta})} \Big\}\\ &\qquad \qquad \cdot \Big\{ \e \|\nabla \psi\|_{L^2(\Omega_T)} +\e^{1/2} \|\nabla \psi\|_{L^2(\Omega_{T, 3\delta})} \Big\}, \endaligned$$ where $\langle, \rangle$ denotes the pairing between $H^1(\Omega)$ and its dual. The constant $C>0$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. This follows from Theorem \[Theorem-2.2\] by the same argument as in the proof of Lemma \[main-lemma-3.1\]. \[INP-H-1\] Let $w_\e$ be defined by (\[w\]). Under the same assumptions as in Lemma \[main-lemma-3.2\], we have $$\label{estimate-INP-H-1} \aligned &\|\nabla w_\e\|_{L^2(\Omega_T)}\\ &\le C \sqrt{\e} \left\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\sup_{\e^2<t<T} \left(\frac{1}{\e} \int_{t-\e^2}^t \int_\Omega |\nabla u_0|^2 \right)^{1/2} \right\}, \endaligned$$ where $C$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. As in the proof of Theorem \[IDP-H-1\], this follows from Lemma \[main-lemma-3.2\] by letting $\psi =w_\e$. \[remark-3.10\] [In the case of $u_\e =u_0=0$ or $\frac{\partial u_\e}{\partial \nu_\e}=\frac{\partial u_0}{\partial \nu_0} =0$ on $\partial\Omega\times (0,T)$, we may bound the third term in the r.h.s. of (\[estimate-INP-H-1\]) as follows. 
Note that $$\label{3.10-1} \int_\Omega \widehat{A}\nabla u_0 \cdot \nabla u_0 =-\int_\Omega \partial_t u_0 \cdot u_0 +\int_\Omega F\cdot u_0.$$ It follows that $$\aligned \mu \int_{t-\e^2}^t \int_\Omega |\nabla u_0|^2 &\le \int_{t-\e^2}^t \int_\Omega |\partial_t u_0||u_0| + \int_{t-\e^2}^t \int_\Omega |F||u_0| \\ & \le \Big\{ \| \partial_t u_0\|_{L^2(\Omega_T)} +\| F\|_{L^2(\Omega_T)} \Big\} \left(\int_{t-\e^2}^t \int_\Omega |u_0|^2\right)^{1/2}\\ &\le \e \Big\{ \| \partial_t u_0\|_{L^2(\Omega_T)} +\| F\|_{L^2(\Omega_T)} \Big\} \sup_{0<t<T} \| u_0(\cdot, t)\|_{L^2(\Omega)}. \endaligned$$ This, together with the standard energy estimates, gives $$\label{3.10-2} \sup_{\e^2<t<T} \left(\frac{1}{\e} \int_{t-\e^2}^t \int_\Omega |\nabla u_0|^2 \right)^{1/2} \le C \Big\{ \|\partial_t u_0\|_{L^2(\Omega_T)} +\| F\|_{L^2(\Omega_T)} + \| h \|_{L^2(\Omega)} \Big\},$$ where $C$ depends only on $d$, $m$, $\mu$ and $\Omega$. As a result, for both the initial-Dirichlet problem (\[IDP\]) and the initial-Neumann problem (\[INP\]), if $g=0$ on $\partial\Omega \times (0, T)$, then $$\label{3.10-3} \| \nabla w_\e\|_{L^2(\Omega_T)} \le C \sqrt{\e} \Big\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\| F\|_{L^2(\Omega_T)} +\| h\|_{L^2(\Omega)} \Big\},$$ where we have used the fact $$\|\partial _t u_0\|_{L^2(\Omega_T)} \le C \left\{ \|\nabla^2 u_0\|_{L^2(\Omega_T)} +\| F\|_{L^2(\Omega_T)} \right\}.$$ In particular, if $\Omega$ is $C^{1,1}$, $g=0$ on $\partial\Omega\times (0, T)$ and $h=0$ on $\Omega$, then $$\label{3.10-4} \| \nabla w_\e\|_{L^2(\Omega_T)} \le C \sqrt{\e} \| F \|_{L^2(\Omega_T)}.$$ To see this, we use the well-known estimate $$\label{3.10-5} \| u_0\|_{L^2(0, T; H^2(\Omega))} \le C\, \| F\|_{L^2(\Omega_T)},$$ which may be proved by using the partial Fourier transform in the $t$ variable and reducing the problem to the $H^2$ estimate for the elliptic operator $\mathcal{L}_0$ in $C^{1,1}$ domains. 
We also note that in the case that $g=0$ on $\partial\Omega\times (0, T)$ and $h\in H^1(\Omega; \R^m)$, if $\mathcal{L}^*_0=\mathcal{L}_0$ and $\Omega$ is $C^{1,1}$, then $$\label{3.10-10} \| u_0\|_{L^2(0, T; H^2(\Omega))} \le C \Big\{ \| F\|_{L^2(\Omega_T)} +\| h\|_{H^1(\Omega)} \Big\}.$$ This may be proved by using integration by parts as well as $H^2$ estimates for $\mathcal{L}_0$ [@Lady]. ]{} Proof of Theorems \[main-theorem-1\] and \[main-theorem-2\] =========================================================== In this section we study the convergence rates in $L^2(\Omega_T)$ and give the proof of Theorems \[main-theorem-1\] and \[main-theorem-2\]. Throughout the section we will assume that $\Omega$ is a bounded $C^{1,1}$ domain in $\R^d$. We first consider the initial-Dirichlet problem. Let $A^*$ denote the adjoint of $A$; i.e., $A^*=( a^{*\alpha\beta}_{ij})$ with $a_{ij}^{*\alpha\beta} (y, s)=a_{ji}^{\beta\alpha}(y,s)$. For $G\in L^2(\Omega_T)$, let $v_\e$ be the weak solution to $$\label{IDP-dual} \left\{ \aligned (-\partial_t +\mathcal{L}^*_\e) v_\e & = G &\quad & \text{ in } \Omega \times (0, T),\\ v_\e &=0 &\quad &\text{ on } \partial\Omega \times (0, T),\\ v_\e & =0& \quad & \text{ on } \Omega \times \{ t=T\}, \endaligned \right.$$ where $\mathcal{L}^*_\e =-\text{\rm div} (A^{*\e} (x, t)\nabla )$ denotes the adjoint of $\mathcal{L}_\e$, and $v_0$ the weak solution to $$\label{IDP-0-dual} \left\{ \aligned (-\partial_t +\mathcal{L}^*_0) v_0 & = G &\quad & \text{ in } \Omega \times (0, T),\\ v_0 &=0 &\quad &\text{ on } \partial\Omega \times (0, T),\\ v_0 & =0& \quad & \text{ on } \Omega \times \{ t=T\}, \endaligned \right.$$ where $\mathcal{L}_0^*=-\text{\rm div} (\widehat{A}^* \nabla )$. Observe that $v_\e (x, T-t)$ and $v_0 (x, T-t)$ are solutions of the initial-Dirichlet problems of (\[IDP\]) and (\[IDP-0\]), respectively, with coefficient matrix $A(x/\e,t/\e^2)$ replaced by $A^*(x/\e, (T-t)/\e^2)$, and with $g=0$ and $h=0$. 
Also note that $A^* (y, T-s)$ satisfies the same ellipticity and periodicity conditions as $A(y,s)$. \[lemma-4.0\] Let $v_0$ be the weak solution to (\[IDP-0-dual\]). Then $$\label{4.0-0} \| \nabla v_0\|_{L^2(\Omega_T)} +\delta^{-1/2} \|\nabla v_0\|_{L^2(\Omega_{T, \delta})} \le C \| G \|_{L^2(\Omega_T)},$$ where $\delta \in (2\e, 20\e)$ and $C$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. The estimate for $\|\nabla v_0\|_{L^2(\Omega_T)}$ follows directly from the energy estimate, while the estimate for $\delta^{-1/2} \|\nabla v_0\|_{L^2(\Omega_{T, \delta})}$ is proved in Remarks \[remark-u-0\] and \[remark-3.10\]. Let $$\label{z} \aligned z_\e (x, t)= v_\e (x, T-t) -v_0 (x, T-t)- & \e \chi_{T, j}^{*\e} S_\e \left(\widetilde{\eta}(x,t) \frac{\partial v_0}{\partial x_j} (x, T-t)\right) \\ - &\e^2 \phi_{T, (d+1)ij}^{*\e} \frac{\partial}{\partial x_i} S_\e \left( \widetilde{\eta}(x,t) \frac{\partial v_0}{\partial x_j} (x, T-t)\right), \endaligned$$ where $\chi_T^*$ and $\phi_T^*$ denote the correctors and dual correctors, respectively, for the family of parabolic operators $\partial_t -\text{div} (A^*(x/\e, (T-t)/\e^2)\nabla )$, $\e>0$. The cut-off function $\widetilde{\eta}$ in (\[z\]) is chosen so that $\widetilde{\eta} (x,t)=0$ if $(x, t)\in \Omega_{T, 10\e}$, $\widetilde{\eta}(x, t)=1$ if $(x, t)\in \Omega_T \setminus \Omega_{T, 15\e}$, $|\nabla \widetilde{\eta} |\le C \e^{-1}$ and $|\partial_t \widetilde{\eta}|\le C \e^{-2}$. \[lemma-4.1\] Let $z_\e$ be defined by (\[z\]). Then $$\label{4.1-0} \| \nabla z_\e\|_{L^2(\Omega_T)} \le C \sqrt{\e} \| G\|_{L^2(\Omega_T)},$$ where $C$ depends at most on $d$, $m$, $\mu$, $T$ and $\Omega$. Since $A^* (y, T-s)$ satisfies the same ellipticity and periodicity conditions as $A(y,s)$ and $\Omega$ is $C^{1,1}$, this follows from the estimate (\[3.10-4\]). We are in a position to give the proof of Theorem \[main-theorem-1\]. 
Let $u_\e\in L^2(0, T; H^1(\Omega))$ and $u_0\in L^2(0, T; H^2(\Omega))$ be solutions of (\[IDP\]) and (\[IDP-0\]), respectively. Let $G\in L^2(\Omega_T)$. By duality it suffices to show that $$\label{4.3-1} \aligned & \Big| \iint_{\Omega_T} (u_\e -u_0) \cdot G \Big|\\ &\le C \e \| G\|_{L^2(\Omega_T)} \left\{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\sup_{\e^2<t<T} \left(\frac{1}{\e} \int_{t-\e^2}^t \int_\Omega |\nabla u_0|^2 \right)^{1/2} \right\}. \endaligned$$ Let $w_\e$ be defined by (\[w\]), with $\delta=2\e$. Since $$\| \chi^\e K_\e (\nabla u_0)\|_{L^2(\Omega_T)} +\e \| \phi^\e \nabla K_\e (\nabla u_0)\|_{L^2(\Omega_T)} \le C \|\nabla u_0\|_{L^2(\Omega_T)},$$ we only need to prove that $\displaystyle | \iint_{\Omega_T} w_\e \cdot G|$ is bounded by the r.h.s. of (\[4.3-1\]). To this end we write $$\label{4.3-2} \aligned &\iint_{\Omega_T} w_\e \cdot G = \int_0^T \big\langle \partial_t w_\e, v_\e \big\rangle +\iint_{\Omega_T} A^\e \nabla w_\e \cdot \nabla v_\e\\ &= \left\{ \int_0^T \big\langle \partial_t w_\e, z_\e (\cdot, T-t)\big\rangle +\iint_{\Omega_T} A^\e \nabla w_\e \cdot \nabla z_\e (x, T-t)\right\}\\ &\quad +\left\{ \int_0^T\big \langle \partial_t w_\e, v_0 \big\rangle +\iint_{\Omega_T} A^\e \nabla w_\e\cdot \nabla v_0\right\} \\ &\quad +\left\{ \int_0^T\big \langle \partial_t w_\e, v_\e -v_0 -z_\e (\cdot, T-t)\big \rangle + \iint_{\Omega_T} A^\e \nabla w_\e \cdot \nabla \big\{ v_\e -v_0 - z_\e (\cdot, T-t) \big\}\right\}\\ &=J_1 +J_2 +J_3, \endaligned$$ where $\langle, \rangle$ denotes the pairing between $H^1_0(\Omega)$ and its dual $H^{-1}(\Omega)$. We shall use Lemma \[main-lemma-3.1\] to bound $J_1$, $J_2$ and $J_3$. 
For the term $J_1$, it follows by Lemma \[main-lemma-3.1\] that $$\label{4.3-3} \aligned |J_1| & \le C \sqrt{\e} \Big \{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\e^{-1/2} \|\nabla u_0\|_{L^2(\Omega_{T, 6\e})} \Big\} \| \nabla z_\e\|_{L^2(\Omega_T)}\\ & \le C \e \Big \{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\e^{-1/2} \|\nabla u_0\|_{L^2(\Omega_{T, 6\e})} \Big\} \| G\|_{L^2(\Omega_T)}, \endaligned$$ where we have used Lemma \[lemma-4.1\] for the last step. Next, for $J_2$, we obtain $$\label{4.3-4} \aligned |J_2| &\le C \Big \{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\e^{-1/2} \|\nabla u_0\|_{L^2(\Omega_{T, 6\e})} \Big\}\\ &\qquad\qquad \qquad\qquad \cdot \Big\{ \e \| \nabla v_0\|_{L^2(\Omega_T)} +\e^{1/2} \|\nabla v_0\|_{L^2(\Omega_{T, 6\e})} \Big\}\\ &\le C \e \Big \{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\e^{-1/2} \|\nabla u_0\|_{L^2(\Omega_{T, 6\e})} \Big\}\| G\|_{L^2(\Omega_T)}, \endaligned$$ where we have used Lemma \[lemma-4.0\] for the last inequality. 
To estimate $J_3$, we note that $v_\e -v_0 -z_\e(x, T-t)$ is supported in $\Omega_T\setminus \Omega_{T, 10\e}$ and in view of (\[z\]) and Lemmas \[lemma-S-1\] and \[lemma-S-3\], $$\|\nabla (v_\e -v_0 -z_\e (x, T-t))\|_{L^2(\Omega_T)} \le C \| \nabla v_0\|_{L^2(\Omega_T)}\le C \| G\|_{L^2(\Omega_T)}.$$ It follows by Lemma \[main-lemma-3.1\] that $$|J_3| \le C \e \Big \{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\e^{-1/2} \|\nabla u_0\|_{L^2(\Omega_{T, 6\e})} \Big\} \| G\|_{L^2(\Omega_T)}.$$ This, together with (\[4.3-3\]) and (\[4.3-4\]), shows that $$\aligned &\Big| \iint_{\Omega_T} w_\e \cdot G\Big|\\ & \le C \e\Big \{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\e^{-1/2} \|\nabla u_0\|_{L^2(\Omega_{T, 6\e})} \Big\} \| G\|_{L^2(\Omega_T)}\\ & \le C \e\left \{ \| u_0\|_{L^2(0, T; H^2(\Omega))} +\|\partial_t u_0\|_{L^2(\Omega_T)} +\sup_{\e^2<t<T} \left(\frac{1}{\e} \int_{t-\e^2}^t \int_\Omega |\nabla u_0|^2 \right)^{1/2} \right\} \| G\|_{L^2(\Omega_T)}, \endaligned$$ which completes the proof. Finally, we give the proof of Theorem \[main-theorem-2\]. The proof of Theorem \[main-theorem-2\] is similar to that of Theorem \[main-theorem-1\]. Indeed, let $u_\e\in L^2(0, T; H^1(\Omega))$ and $u_0\in L^2(0,T; H^2(\Omega))$ be solutions of (\[INP\]) and (\[INP-0\]), respectively. Let $w_\e$ be defined as in (\[w\]), with $\delta=2\e$. To estimate $\| u_\e -u_0\|_{L^2(\Omega_T)}$, we consider $\displaystyle \iint_{\Omega_T} w_\e \cdot G$, where $G\in L^2(\Omega_T)$. 
Let $v_\e$ be the weak solution to $$\label{INP-dual} \left\{ \aligned (-\partial_t +\mathcal{L}^*_\e) v_\e & = G &\quad & \text{ in } \Omega \times (0, T),\\ \frac{\partial v_\e}{\partial \nu^*_\e} &=0 &\quad &\text{ on } \partial\Omega \times (0, T),\\ v_\e & =0& \quad & \text{ on } \Omega \times \{ t=T\}, \endaligned \right.$$ and $v_0$ the weak solution to $$\label{INP-dual-0} \left\{ \aligned (-\partial_t +\mathcal{L}^*_0) v_0 & = G &\quad & \text{ in } \Omega \times (0, T),\\ \frac{\partial v_0}{\partial \nu^*_0} &=0 &\quad &\text{ on } \partial\Omega \times (0, T),\\ v_0 & =0& \quad & \text{ on } \Omega \times \{ t=T\}, \endaligned \right.$$ where $\frac{\partial v_\e}{\partial\nu^*_\e}$ and $\frac{\partial v_0}{\partial\nu_0^*}$ denote the conormal derivatives associated with the operators $\mathcal{L}_\e^*$ and $\mathcal{L}_0^*$, respectively. Let $z_\e$ be defined as before. Note that estimates in Lemmas \[lemma-4.0\] and \[lemma-4.1\] continue to hold. Moreover, by (\[INP-dual\]), we have $$\iint_{\Omega_T} w_\e \cdot G =\int_0^T \big\langle \partial_t w_\e, v_\e\big\rangle +\iint_{\Omega_T} A^\e \nabla w_\e \cdot \nabla v_\e,$$ where $\langle, \rangle$ denotes the pairing between $H^1(\Omega)$ and its dual. With Lemma \[main-lemma-3.2\] at our disposal, the rest of the proof is exactly the same as that of Theorem \[main-theorem-1\]. We omit the details. Jun Geng, School of Mathematics and Statistics, Lanzhou University, Lanzhou, P.R. China. E-mail:[email protected] Zhongwei Shen, Department of Mathematics, University of Kentucky, Lexington, Kentucky 40506, USA. E-mail: [email protected] [^1]: Supported in part by the NNSF of China (11571152) and Fundamental Research Funds for the Central Universities (LZUJBKY-2015-72). [^2]: Supported in part by NSF grant DMS-1161154.
--- abstract: 'The galactic kinematics of Mira variables derived from radial velocities, Hipparcos proper motions and an infrared period-luminosity relation are reviewed. Local Miras in the 145-200day period range show a large asymmetric drift and a high net outward motion in the Galaxy. Interpretations of this phenomenon are considered and (following Feast and Whitelock 2000) it is suggested that they are outlying members of the bulge-bar population and indicate that this bar extends beyond the solar circle.' author: - Michael Feast --- stars: AGB: - Galaxy: kinematics and dynamics - Galaxy: structure. Introduction ============ The galactic kinematics of Mira variables have for a long while been of importance in helping us understand both the nature and evolution of Miras as well as the structure of our own Galaxy. This paper is primarily concerned with the second point - what do we learn from Miras about galactic structure? The paper also concentrates on local kinematics, leaving out a detailed discussion of the kinematics of the galactic bulge, much work on which has of course been done here in Japan. It has been possible to take a fresh look at the local kinematics of Miras using Hipparcos astrometry, extensive new infrared photometry and published radial velocities. This was done in a series of three papers (Whitelock, Marang and Feast (2000), paper I: Whitelock and Feast (2000), paper II: Feast and Whitelock (2000a), paper III). The present paper summarizes some of the relevant results from these papers and extends the discussion of the kinematics. The Period-Kinematic Sequence ============================= It has been known for many years that there is a relation between the kinematics and the periods of Mira variables in the general solar neighbourhood. In particular the asymmetric drift and the velocity dispersion increase as one goes from longer to shorter periods. There was however an anomaly. 
The shortest period group (periods less than $\sim 150$ days) differed strongly from the general trend (Feast 1963). Following an initial study by Hron (1991), it was shown in paper I (see also Whitelock 2002) that combining infrared and Hipparcos magnitudes one sees two sequences of Miras at shorter periods in the period - colour plane, the SP-red and SP-blue stars, and this must be taken into account when discussing the kinematics. ![(a) The mean $V_{\theta}$ in various period groups plotted against the mean period of the group. (b) The mean $V_{R}$ in various period groups plotted against the mean period of the group. The cross represents the SP-red variables in both figures.](mwf_fig1.eps){height="10cm"} Figure 1a shows a plot of mean velocity of Miras in the direction of galactic rotation, $V_{\theta}$, in period groups. This plot (data in paper III) is a distinct improvement on earlier work. It depends on space motions rather than just radial velocities and the SP-red stars have been grouped separately and are denoted by a cross. The figure shows that at the longest periods the stars of the main Mira sequence are moving at close to the circular velocity of galactic rotation, which is taken as 231 $\rm km.s^{-1}$ (Feast and Whitelock 1997). There is a clear dependence of mean rotational velocity on period for this main Mira sequence with the short period (145-200 day) Miras having a large asymmetric drift ($-97 \pm 20 \rm km.s^{-1}$).\ The main Mira sequence now seems reasonably well understood. The infrared colour-period relation for this sequence is the same as that for Miras in globular clusters (paper I and Whitelock 2002), strongly suggesting a similarity between the field and cluster Miras over the period range in common. Not only do the clusters show that their Miras lie at the tip of the AGB, they also show that there is a period-metallicity relation (e.g. Feast and Whitelock 2000b). 
This allows us to study the galactic kinematics of old populations as a function of metallicity (and possibly also age) over a range of ages and metallicities where there are few if any useful, precise, tracers. For instance globular clusters containing Miras are often classed as “disc” clusters and treated together in kinematic discussions. However the field Miras show that there is a considerable variation of kinematic properties in the period/metallicity range of relevance to these clusters.\ Where do the SP-red stars fit in? A full answer to this question may come when more members of the class are identified and when good individual parallaxes are available for a significant number. The best guess at present (see e.g. Whitelock 2002) is that, unlike the stars of the main Mira sequence, they are not at the end of their AGB lives but that they will evolve into longer period Miras of the main Mira sequence.\ Local Miras and a Galactic Bar ============================== A particularly interesting result in paper III is the evidence of a net radial, outward motion ($V_{R}$) for Miras in the solar neighbourhood. This is shown as a function of period in figure 1b. There is evidence of a small outward motion over most of the period range of the Miras. However it is very marked in the group with periods in the range 145 to 200 days where one finds $V_{R} = +75 \pm 18 \rm km.s^{-1}$ for a mean period of 173 days. Note that in view of the discussion in section 2, the SP-red Miras are omitted in the discussion of the present section.\ There has been evidence in the literature for many years that some groups of old stars in the solar neighbourhood show a predominant, but modest, outward motion in the Galaxy, though this has not always been recognized. The Hipparcos results, giving parallaxes and proper motions for many thousands of common stars, have enabled this to be studied in greater detail. 
It is now clear that there is a group of old stars with a small asymmetric drift and a modest outward velocity ($V_{\theta} \sim 190 \rm km.s^{-1}$, $V_{R} \sim +30 \rm km.s^{-1}$) (Dehnen 1998, 1999, 2000, Fux 2001). Fux has referred to these stars as forming the “Hercules Stream”. Dehnen, Fux and also Quillen (2002) each suggest that the stars of the Hercules Stream were originally in circular orbits which have been perturbed into their present state by the influence of a (pre-existing) galactic bar. However, the detailed dynamic processes envisaged differ from author to author.\ Fux has suggested that the short period Mira group just discussed belongs to the Hercules Stream. However whilst the Hercules Stream consists of old stars with a wide range of metallicities (Raboud et al. 1998), the large local outward motion in the case of Miras is confined to a restricted range in period, and therefore also in metallicity. The effect is also more extreme than in the Hercules group. This is shown in Figure 2 which plots $V_{\theta}$ against $V_{R}$ for individual Miras in the 145 to 200 day group. ![A plot of $V_{\theta}$ against $V_{R}$ for local Miras with periods in the range 145 to 200 days. The open circles denote stars for which the standard error of a velocity component is greater than $20 \rm km.s^{-1}$. The asterisk shows the position of S Car which is on a highly eccentric retrograde orbit. The curve and dotted oval are discussed in the text.](mwf_fig2.eps){height="7cm"} The dotted oval shows the extent of the Hercules Stream as delimited by Dehnen (2000, fig 9). Evidently these Miras do not concentrate in the region of the Hercules Stream.\ It is clear that some of the stars in figure 2 are on highly eccentric orbits. About half the sample have perigalactic distances less than about 2 kpc and some at least will pass through the galactic bulge. 
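The perigalactic distances quoted above follow from the observed $(V_{R},V_{\theta})$ pairs once a galactic potential is assumed. The sketch below is not from the paper: purely for illustration it adopts a flat rotation curve (logarithmic potential) with circular speed $231 \rm\,km.s^{-1}$, as used above, and a solar galactocentric radius of 8 kpc; the function name and both parameter defaults are choices of this note, not values fixed by the paper.

```python
import math

def perigalacticon(V_R, V_theta, R0=8.0, v_c=231.0):
    """Perigalactic radius (kpc) of a planar orbit observed at radius R0 (kpc)
    with velocity components V_R, V_theta (km/s), assuming the logarithmic
    potential Phi(R) = v_c**2 * ln(R) of a flat rotation curve."""
    E = 0.5 * (V_R**2 + V_theta**2) + v_c**2 * math.log(R0)  # energy per unit mass
    L = R0 * V_theta                                         # specific angular momentum
    # Radial turning points solve 0.5*L**2/R**2 + v_c**2*ln(R) = E.
    f = lambda R: 0.5 * L**2 / R**2 + v_c**2 * math.log(R) - E
    lo, hi = 1e-6, R0
    if f(hi) > 0.0:      # numerically already at a turning point (circular orbit)
        return R0
    for _ in range(80):  # bisection: f > 0 at the centrifugal barrier, f <= 0 at R0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# A star typical of the eccentric group in figure 2:
print(perigalacticon(75.0, 100.0))  # perigalactic distance in kpc (~2 for this star)
```

Under these assumptions a star with $V_{\theta}=100$ and $V_{R}=+75 \rm\,km.s^{-1}$ has a perigalactic distance just under 2 kpc, consistent with the statement that about half the sample comes within about 2 kpc of the centre.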
Despite the small number of stars in this diagram, there is also evidence of a relation between $V_{\theta}$ and $V_{R}$. In paper III we derived the curve shown in figure 2, which makes the first order assumption that the stars are moving in simple elliptical orbits of different eccentricities but with their major axes aligned. There is then only one free parameter: the position of this major axis with respect to the sun-centre line. The curve is drawn for the best fitting value of this position angle ($ 17^{+11}_{-4}$ degrees). It provides a rather good fit to the data and since this angle is, within the uncertainties, similar to that proposed for the bar in the galactic bulge, we suggested that these 145-200 day Miras in the general solar neighbourhood were outlying members of the bar population itself. If this is the case the bar population extends out to beyond the solar circle.\ This discussion depends on the Miras in the general solar neighbourhood and is thus limited in number. In a recent paper Kharchenko et al. (2002) have suggested that over a larger volume of space, the mean $V_{R}$ for 145-200 day Miras is near zero. More details of their work are needed for a proper discussion. However the following points are worth noting. (1) The distances adopted by Kharchenko et al. are obtained from visual magnitudes and the reddenings from a model. They must thus be rather uncertain. (2) A certain amount of trimming is carried out (velocities further from means than $3\sigma$ are rejected). (3) Complications in the analysis will arise when one goes to a large volume if the Galaxy is not axi-symmetrical. (4) No distinction is made in Kharchenko et al. 
between the SP-red and SP-blue stars discussed above and this is particularly important in the period range under discussion.\ Whilst therefore the large scale picture remains unclear, it seems rather remarkable that the nearby Miras in the period range 145-200 days with radial velocities, Hipparcos proper motions and infrared photometry show a marked asymmetry in $V_{R}$. As figure 2 shows, all the stars in this group with $V_{\theta} \lesssim \rm 160 km.s^{-1}$ have positive values of $V_{R}$. For an axi-symmetrical galaxy there should be a symmetrical distribution of $V_{R}$ about zero in this figure at any given $V_{\theta}$. The deviation from such a distribution is sufficiently striking that it seems difficult to attribute it entirely to chance. I would like to thank Professor Nakada (University of Tokyo) and the organizers of this meeting for making my attendance possible. This paper depends on work done in collaboration with Patricia Whitelock. Dehnen, W. (1998) [*AJ*]{}, [**115**]{}, 2384. Dehnen, W. (1999) [*ApJ*]{}, [**524**]{}, L35. Dehnen, W. (2000) [*AJ*]{}, [**119**]{}, 800. Feast, M.W. (1963) [*MNRAS*]{}, [**125**]{}, 367. Feast, M.W. and Whitelock, P.A. (1997) [*MNRAS*]{}, [**291**]{}, 683. Feast, M.W. and Whitelock, P.A. (2000a)[*MNRAS*]{}, [**317**]{}, 460 (paper III). Feast, M.W. and Whitelock, P.A. (2000b) in [*The Evolution of the Milky Way*]{}, ed. Matteucci, F. and Giovannelli, F., Kluwer, Dordrecht, p. 229. Fux, R. (2001) [*A&A*]{}, [**373**]{}, 511. Hron, J. (1991) [*A&A*]{}, [**252**]{}, 583. Kharchenko, N., Kilpio, E., Malkov, O. and Schilbach, E. (2002) [*A&A*]{}, [**384**]{}, 925. Quillen, A.C. (2002) [*astro-ph*]{} 0204040. Raboud, D., Grenon, M., Martinet, L., Fux, R. and Urdy, S. (1998) [*A&A*]{}, [**335**]{}, L61. Whitelock, P.A. (2002) [*this volume*]{}. Whitelock, P.A., Marang, F. and Feast, M.W. (2000) [*MNRAS*]{}, [**319**]{}, 728 (paper I). Whitelock, P.A. and Feast, M.W. (2000) [*MNRAS*]{}, [**319**]{}, 759 (paper II). 
Discussion ========== [**Habing**]{}\ In your calculations of the galactic orbit, did you assume the bar is stationary? It may rotate.\ [**Feast**]{}\ For a rotating bar one needs to assume the simple elliptical orbits precess. One is then concerned with the present orientations of the Mira orbits and the bar. So the conclusions are not affected\ [**van Langevelde**]{}\ 1. You find a best fit of $\phi$, the angle between the major axis of the Mira orbits and the bar, but there must be a whole range of eccentricities. What is the range of perigalactic distances?\ 2. What makes the 145-200 day group special? Would that indicate something about the population/age of the bar?\ [**Feast**]{}\ 1. The distribution in figure 2 is essentially a distribution in eccentricity. About half the stars in that plot go within 2 kpc of the centre. The exact perigalactic distance depends of course on the mass model.\ 2. That is not entirely clear. There are Miras with a range of periods in the bulge. At periods longer than about 200 days, the local population is dominated by variables on much more nearly circular orbits. Possibly this is because the galactic density gradient of Miras is a function of period.\ [**Nakada**]{}\ Do the bulge Miras dominate the short period (P 145-200 days) group in the solar neighbourhood?\ [**Feast**]{}\ About half the Miras in this group (SP-red stars omitted) have perigalactic distances sufficiently small that we can probably say they belong to a bulge population. However there is no evidence at present that the stars in this period group which are on more nearly circular orbits are different physically from those on highly eccentric orbits. In that sense they can perhaps all be regarded as a bulge type population.\
--- abstract: | We show that small quasicategories embed, both simplicially and 2-categorically, into prederivators defined on arbitrary small categories, so that in some senses prederivators can serve as a model for $(\infty,1)$-categories. The result for quasicategories that are not necessarily small, or analogously for small quasicategories when mapped to prederivators defined only on finite categories, is not as strong. We prove, instead, a Whitehead theorem that prederivators (defined on any domain) detect equivalences between arbitrarily large quasicategories. author: - Kevin Carlson bibliography: - 'bib.bib' title: 'On the $\infty$-categorical Whitehead theorem and the embedding of quasicategories in prederivators' --- Prederivators, and especially derivators, are structures defined independently by Grothendieck [@grothendieck], Heller [@heller] and Franke [@Franke], as a minimally complex notion of abstract homotopy theory. Every notion of an abstract homotopy theory $\mathcal C$, where for instance we might consider $\mathcal C$ as a quasicategory or a model category, shares as a common underlying structure the homotopy category ${\mathrm{Ho}}({\mathcal}C)$. Moreover for every category $J$ there exists (unless ${\mathcal}C={\mathrm{Ho}}({\mathcal}C)$ is a bare homotopy category) a homotopy theory ${\mathcal}C^J$ of $J$-shaped diagrams in ${\mathcal}C$, which thus has its own homotopy category ${\mathrm{Ho}}({\mathcal}C^J)$. Indeed, each homotopy theory ${\mathcal}C$ gives rise to a 2-functor ${\mathrm{Ho}}({\mathcal}C^{(-)})$ sending categories to categories. This is known as the “prederivator” of ${\mathcal}C$. (Derivators themselves, which arise when ${\mathcal}C$ admits homotopy Kan extensions, will be of limited relevance to this paper.) Prederivators are thus often referred to in the literature as a model of abstract homotopy theory, but this intuition has not always been given a precise mathematical formulation. 
Especially in light of the program, culminating in [@BSP], showing that all notions of $(\infty,1)$-category live in Quillen equivalent model categories, one might hope that such a claim should entail a Quillen equivalence between a model category of prederivators and some notion of abstract homotopy theory. It is more reasonable to ask for an embedding of homotopy theories in prederivators, as many prederivators visibly do not arise from any homotopy theory, and there is no suggestion of an axiomatization of the image. We view the latter problem as the natural generalization of the Brown representability problem from spaces to homotopy theories, and thus as the major remaining question in this area. However, it is not immediately clear that even an embedding is a reasonable thing to ask, due to the primitivity of the notion of equivalence of prederivators. The usual equivalences of prederivators are the pseudonatural equivalences of 2-functors. As was first remarked by Toën and Vezzosi in [@toen] and sharpened in [@muroold], these equivalences are too coarse to preserve the homotopy type of the mapping spaces between homotopy theories. Thus to get a true embedding of the homotopy theory of homotopy theories into some homotopy theory of prederivators, one must refine the notion of equivalence of prederivators, as was done in [@muro] to satisfy related requirements of algebraic K-theory. Alternatively, one can settle for answering the weaker question of whether the 2-category of homotopy theories embeds into the 2-category ${\underline{\mathbf{PDer}}}$ of prederivators. We investigate both approaches in this paper. The first approach leads to a simplicial category of prederivators ${\mathbf{PDer}_\bullet}$, so we have the two animating problems of this work: *The simplicial embedding problem:* Let ${\mathbf{HoTh}_\bullet}$ be a simplicial category of homotopy theories. 
Is there a simplicially fully faithful (at least up to homotopy equivalence) functor ${\mathrm{HO}}:{\mathbf{HoTh}_\bullet}\to {\mathbf{PDer}_\bullet}$ sending a homotopy theory ${\mathcal}C$ to its associated prederivator? *The 2-categorical embedding problem:* Now let ${\underline{\mathbf{HoTh}}}$ be a 2-category of homotopy theories. Again, can we construct ${\mathrm{HO}}:{\underline{\mathbf{HoTh}}}\to{\underline{\mathbf{PDer}}}$ which is 2-categorically fully faithful? If not, is it at least 2-categorically full, or, weaker still, conservative? That is, if ${\mathrm{HO}}(f)$ is an equivalence, must $f$ also be so? We are able to give a positive answer to the simplicial embedding problem, which might seem to be the end of the story. However, we regard the 2-categorical embedding problem as not just a truncation of the simplicial problem, but as significant in its own right, for the following reasons. The 2-category ${\underline{\mathbf{PDer}}}$ is much more elementary than the simplicial category ${\mathbf{PDer}_\bullet}$: it is a completely ordinary 2-category of 2-functors valued in categories, so nothing more than a 2-categorical version of a presheaf category, constructed with no input from homotopy theory. Given the success of the program of Riehl and Verity [@riehl], [@riehl2], [@riehl3] (among others) in reconstructing much of the theory of quasicategories by working in the 2-category ${\underline{\mathbf{QCAT}}}$ thereof, to the extent we can give a positive answer to the 2-categorical embedding problem, we will thus have reduced a large part of abstract homotopy theory to ordinary category theory. Not to be coy, we will find that the 2-categorical embedding problem does not always have a positive solution.
We draw the following analogies, extending that above between the problem of the image of ${\mathrm{HO}}$ and Brown representability: the question of the faithfulness of ${\mathrm{HO}}$ on 2-morphisms is essentially a question of phantom maps, or of the concreteness of ${\underline{\mathbf{HoTh}}}$. The question of fullness of ${\mathrm{HO}}$ is analogous to another Brown representability problem, namely, the homological Brown representability, which asks when not only objects but also morphisms are representable. For this to be nontrivial, we must consider a 2-category of prederivators defined only on homotopically finite categories. We expect the solutions to these problems to be negative, though they must await future work. Finally, the question of 2-categorical conservativity for ${\mathrm{HO}}$ is analogous to that resolved by Whitehead’s theorem. Summary of Results {#summary-of-results .unnumbered} ------------------ We take quasicategories as our model for homotopy theories. We will denote by ${\underline{\mathbf{QCAT}}}$ the 2-category of quasicategories, and by ${\underline{\mathbf{QCat}}}$ the 2-category of small quasicategories. A prederivator is a 2-functor ${\mathscr{D}}:{\underline{\mathbf{Dia}}}^{\mathrm{op}}\to{\underline{\mathbf{CAT}}}$, where ${\underline{\mathbf{CAT}}}$ is the 2-category of categories and ${\underline{\mathbf{Dia}}}$, for us, may be either ${\underline{\mathbf{HFin}}}$, the 2-category of homotopically finite categories, or ${\underline{\mathbf{Cat}}}$, the 2-category of small categories. Note that other authors axiomatize a more general class of possible 2-categories ${\underline{\mathbf{Dia}}}$. We must distinguish carefully between ${\underline{\mathbf{Cat}}}$ and ${\underline{\mathbf{CAT}}}$. Though size issues are often brushed aside, they are to a great extent the crux of this paper. 
A vague way to summarize our core results is this: all quasicategories can be probed by small categories, but a large quasicategory can only be *constructed* out of large categories. Denoting the 2-category of prederivators with domain ${\underline{\mathbf{Dia}}}$ by ${\underline{\mathbf{PDer}}}_{{\underline{\mathbf{Dia}}}}$, we can construct 2-functors ${\mathrm{HO}}:{\underline{\mathbf{QCAT}}}\to{\underline{\mathbf{PDer}}}_{{\underline{\mathbf{Dia}}}}$ for each ${\underline{\mathbf{Dia}}}$, as well as their restrictions to ${\underline{\mathbf{QCat}}}$. We can also restrict to the underlying 1-categories, where it is most natural to take as codomain the category ${\mathbf{PDer}}_{{\underline{\mathbf{Dia}}}}^{\mathrm{str}}$ of prederivators and *strict* morphisms. We shall use ${\underline{\mathbf{PDer}}}$ as a shorthand for ${\underline{\mathbf{PDer}}}_{{\underline{\mathbf{Dia}}}}$ in statements holding for either choice of ${\underline{\mathbf{Dia}}}$. In short, our results are as follows: every version of ${\mathrm{HO}}$ gives a positive solution to the simplicial embedding problem and to the conservativity clause in the 2-categorical embedding problem. But for a positive answer to the full 2-categorical embedding problem, we must take a form of ${\mathrm{HO}}$ in which ${\underline{\mathbf{Dia}}}$ contains categories as large as the quasicategories in the domain. *First result:* In Theorem \[Maintheorem\], we show that the ordinary category of quasicategories embeds fully faithfully in any category ${\mathbf{PDer}}^{\mathrm{str}}$ of prederivators with strictly 2-natural morphisms. This extends to an embedding ${\mathbf{QCAT}_\bullet}\to{\mathbf{PDer}_\bullet}$ of simplicial categories, where the domain has the usual simplicial enrichment. Thus, quasicategories and their mapping spaces, higher homotopy groups and all, can be recovered *up to isomorphism* from their prederivators and strict maps. 
The surprising on-the-nose quality of this statement reflects the use of strict transformations of prederivators. Thus much hinges on the presence, in the more natural 2-category, ${\underline{\mathbf{PDer}}}$, of *pseudo*natural transformations. *Second result:* In Theorem \[Mainthm2cat\], we answer the 2-categorical embedding problem positively for the case of small quasicategories and prederivators defined on small categories, that is, for ${\mathrm{HO}}:{\underline{\mathbf{QCat}}}\to{\underline{\mathbf{PDer}}}_{{\underline{\mathbf{Cat}}}}$. We give the analogous result, as a corollary, for 2-categories of quasicategories and prederivators admitting various limits and colimits. The main tool is the delocalization theorem, Theorem \[delocalization\], published by Stevenson, which realizes every quasicategory as a localization of a category. The main previous positive result on the 2-categorical embedding problem is due to Renaudin [@Ren]. He is able to embed a 2-categorical localization of the 2-category ${\underline{\mathbf{Mod}}}$ obtained from the 2-category of combinatorial model categories, left Quillen functors, and natural transformations into the 2-category ${\underline{\mathbf{Der}}}_!$ of derivators, cocontinuous pseudonatural transformations, and modifications. By “embed," we specifically mean that Renaudin gives a 2-functor ${\underline{\mathbf{Mod}}}\to{\underline{\mathbf{Der}}}_!$ which, after localization, induces equivalences on hom-categories, so that it is 2-categorically fully faithful. Thus we are giving, in Theorem \[Mainthm2cat\], a new proof of Renaudin’s result, insofar as combinatorial model categories are equivalent to locally presentable quasicategories, which are in turn equivalent to small quasicategories admitting colimits of some bounded size. (See [@lurie Section 5.5].)
*Third result:* Our final result is Theorem \[whiteheadforquasicats\], which shows that every version of ${\mathrm{HO}}$ satisfies the conservativity clause of the 2-categorical embedding problem. In other words, the prederivator is enough to distinguish equivalence classes of abstract homotopy theories, no matter which size choices we make. The proof is unrelated to that of Theorem \[Mainthm2cat\], and relies on the author’s Whitehead theorem for the 2-category of unpointed spaces [@whiteheadforspaces]. Conventions and Notation {#conventions-and-notation .unnumbered} ------------------------ If ${\mathcal}C$ is a category (or a 2-category, simplicially enriched category, etc) with objects $c_1$ and $c_2,$ we denote the set (or category, simplicial set, etc) of morphisms by ${\mathcal}C(c_1,c_2)$. We will frequently alternate between viewing the same collection of objects as a category, 2-category, or a simplicially enriched, or just “simplicial," category. *Convention:* We will denote the category, the 2-category, and the simplicial category of foos respectively by $${\mathbf{foo}},{\underline{\mathbf{foo}}},{\mathbf{foo}_\bullet}$$ Furthermore, when applicable, the above will designate the category of *small* foos while $${\mathbf{FOO}},{\underline{\mathbf{FOO}}},{\mathbf{FOO}_\bullet}$$ will refer to *large* ones. We operationalize the term *large* to mean “small with respect to the second-smallest Grothendieck universe." We denote isomorphisms by ${\cong}$ and equivalences (in any 2-category) by ${\simeq}$. We denote the category associated to the poset $0<1<\cdots<n$ by $[n]$, so that $[0]$ is the terminal category. The simplex category $\Delta$ is the full subcategory of ${\mathbf{Cat}}$ on the categories $[n]$. If $S$ is a simplicial set, that is, a functor $\Delta^{{\mathrm{op}}}\to{\mathbf{Set}}$, then we denote its set of $n$-simplices by $S([n])=S_n$. 
The face map $S_n\to S_{n-1}$ which forgets the $i^{\mathrm{th}}$ vertex will be denoted $d^n_i$ or just $d_i$. We denote by $\Delta^n$ the simplicial set represented by $[n]\in\Delta$. Equivalently, $\Delta^n=N([n])$, where we recall that the nerve $N(J)$ of a category $J$ is the simplicial set defined by the formula $N(J)_n={\mathbf{Cat}}([n],J)$. The natural extension of $N$ to a functor is a fully faithful embedding of categories in simplicial sets. See [@joyal Proposition B.0.13]. \[2Cat\] Below we recall the various 2-categorical definitions we will require. For us 2-categories are strict: they have strictly associative composition and strict units preserved on the nose by 2-functors. We denote the horizontal composition of 2-morphisms by $*$, so that if ${\alpha}:f\Rightarrow g:x\to y$ and ${\beta}:h\Rightarrow k:y\to z$, we have ${\beta}*{\alpha}:h\circ f\Rightarrow k\circ g$. Morphisms between 2-functors will be either 2-natural or pseudonatural transformations depending on context. Let us recall that, if ${\mathcal}K,{\mathcal}L$ are 2-categories and $F,G:{\mathcal}K\to{\mathcal}L$ are 2-functors, a pseudonatural transformation $\Lambda:F\Rightarrow G$ consists of - Morphisms $\Lambda_x:F(x)\to G(x)$ associated to every object $x\in{\mathcal}K$ - 2-morphisms $\Lambda_f: \Lambda_y\circ F(f)\Rightarrow G(f)\circ \Lambda_x$ for every morphism $f:x\to y$ in ${\mathcal}K$ satisfying the coherence conditions - (Pseudonaturality) $\Lambda_f$ is an isomorphism, for every $f$. - (Coherence) $\Lambda$ is a functor from the underlying 1-category of ${\mathcal}K$ to the category of pseudo-commutative squares in ${\mathcal}L$, that is, squares commuting up to a chosen isomorphism, where composition is by pasting. 
- (Respect for 2-morphisms) For every 2-morphism ${\alpha}:f\Rightarrow g:x\to y$ in ${\mathcal}K$, we have the equality of 2-morphisms $$\Lambda_g\circ(\Lambda_y*F(\alpha))=(G(\alpha)*\Lambda_x)\circ\Lambda_f:\Lambda_y\circ F(f)\Rightarrow G(g)\circ \Lambda_x.$$ In case all the $\Lambda_f$ are identities, we say that $\Lambda$ is strictly 2-natural, in which case the axiom of coherence is redundant, and that of respect for 2-morphisms becomes simply $\Lambda_y*F(\alpha)=G(\alpha)*\Lambda_x$. Finally, we have the morphisms between pseudonatural transformations, which are called *modifications.* A modification $\Xi:\Lambda\Rrightarrow \Gamma:F\Rightarrow G:{\mathcal}K\to {\mathcal}L$ consists of 2-morphisms $\Xi_x:\Lambda_x\to\Gamma_x$ for each object $x\in {\mathcal}K$, subject to the single condition (note the analogy with the definition of respect for 2-morphisms) which is simply $G(f)* \Xi_x=\Xi_y* F(f)$ when $F$ and $G$ are strict, and in general is $(G(f)*\Xi_x)\circ\Lambda_f=\Gamma_f\circ(\Xi_y*F(f)):\Lambda_y\circ F(f)\Rightarrow G(f)\circ \Gamma_x$, for any morphism $f:x\to y$ in ${\mathcal}K$. An *equivalence* between the objects $x,y\in{\mathcal}K$ consists of two morphisms $f:x\leftrightarrow y:g$ together with invertible 2-morphisms $\alpha:g\circ f{\cong}{\mathrm{id}}_x$ and $\beta:f\circ g{\cong}{\mathrm{id}}_y$. We now recall the definitions relevant to the theory of derivators. A *prederivator* is a 2-functor ${\mathscr{D}}:{\underline{\mathbf{Dia}}}^{{\mathrm{op}}}\to{\underline{\mathbf{CAT}}}$ into the 2-category ${\underline{\mathbf{CAT}}}$ of large categories. The 2-category ${\underline{\mathbf{Dia}}}$ will be, for us, either the 2-category of small categories ${\underline{\mathbf{Cat}}}$ or the 2-category ${\underline{\mathbf{HFin}}}$ of homotopy finite categories. 
We recall that a category is *homotopy finite*, often (and confusingly) called *finite direct*, if its nerve has finitely many nondegenerate simplices; equivalently, if it is finite, skeletal, and admits no nontrivial endomorphisms. For categories $J,K\in {\underline{\mathbf{Dia}}}$, we have a functor ${\mathrm{dia}}_J^K:{\mathscr{D}}(J\times K)\to {\mathscr{D}}(J)^K$ induced by the action of ${\mathscr{D}}$ on the functors and natural transformations from $[0]$ to $K$. We refer to ${\mathrm{dia}}_J^K$ as a “partial underlying diagram functor," and when $J=[0]$ simply as the “underlying diagram functor," denoted ${\mathrm{dia}}^K$. We will often denote ${\mathscr{D}}(u)$ by $u^*$, for $u:J\to K$ a functor in ${\underline{\mathbf{Dia}}}$, and similarly for a 2-morphism ${\alpha}$ in ${\underline{\mathbf{Dia}}}$. Below are those axioms of derivators that are relevant to this paper. We stick with the traditional numbering, but leave out the axioms we shall not consider. The 2-functor ${\mathscr{D}}$ is a *semiderivator* if it satisfies the first two of the following axioms, and *strong* if it satisfies (Der5). We introduce here a variant (Der5’) of the fifth axiom, prederivators satisfying which will be called *smothering*, à la [@riehl]. 1. \[Der1\] Let $(J_i)_{i\in I}$ be a family of objects of ${\underline{\mathbf{Dia}}}$ such that $\coprod_I J_i\in{\underline{\mathbf{Dia}}}$. Then the canonical map $${\mathscr{D}}\left(\coprod_I J_i\right)\to\prod_I{\mathscr{D}}(J_i)$$ is an equivalence. 2. \[Der2\] For every $J\in{\underline{\mathbf{Dia}}}$, the underlying diagram functor $${\mathrm{dia}}^J:{\mathscr{D}}(J)\to{\mathscr{D}}([0])^J$$ is conservative. 3. \[Der5\] For every $J\in{\underline{\mathbf{Dia}}}$, the partial underlying diagram functor ${\mathrm{dia}}_J^{[1]}:{\mathscr{D}}(J\times [1])\to{\mathscr{D}}(J)^{[1]}$ is full and essentially surjective on objects. 4. 
\[Der5’\] For every $J\in{\underline{\mathbf{Dia}}},$ the partial underlying diagram functor ${\mathrm{dia}}_J^{[1]}:{\mathscr{D}}(J\times[1])\to {\mathscr{D}}(J)^{[1]}$ is full and surjective on objects. A morphism of prederivators is a pseudonatural transformation, and a 2-morphism is a modification (see Definition \[2Cat\]). Altogether, we get the 2-category ${\underline{\mathbf{PDer}}}_{{\underline{\mathbf{Dia}}}}$ of prederivators defined on ${\underline{\mathbf{Dia}}}$. We shall make use of the shorthand ${\underline{\mathbf{PDer}}}$ to represent a 2-category of prederivators defined on an arbitrary ${\underline{\mathbf{Dia}}}$. When we insist on strictly 2-natural transformations, we get the sub-2-category ${\underline{\mathbf{PDer}}}^{\mathrm{str}}$, of which we will primarily use the underlying category, ${\mathbf{PDer}}^{\mathrm{str}}$. Let us remark that, in the presence of Axiom (Der2), Axiom (Der5’) requires exactly that ${\mathrm{dia}}_J^{[1]}$ be smothering in the sense of [@riehl], which explains the nomenclature. *Acknowledgements:* Thanks to Paul Balmer, for his advice and support; Denis-Charles Cisinski, for suggesting the application of the delocalization theorem; James Richardson, for pointing out an error in the original proof of Theorem \[whiteheadforquasicats\]; and Martin Gallauer, Mike Shulman, Ioannis Lagkas, Ian Coley, and John Zhang, for helpful comments and conversations. The basic construction ====================== In this section, we will describe the association of a prederivator to a quasicategory as a functor, a 2-functor, and a simplicial functor. The prederivator associated to a quasicategory {#the-prederivator-associated-to-a-quasicategory .unnumbered} ---------------------------------------------- We recall that a *quasicategory* [@joyal], called an $\infty$-category in [@lurie], is a simplicial set $Q$ in which every inner horn has a filler. 
That is, every map $\Lambda_i^n\to Q$ extends to an $n$-simplex $\Delta^n\to Q$ when $0<i<n$, where $\Lambda_i^n{\subseteq}\Delta^n$ is the simplicial subset generated by all faces $d_j\Delta^n$ with $j\neq i$. For instance, when $n=2$, the only inner horn is $\Lambda^2_1$, and then the filler condition simply says we may compose “arrows" (that is, 1-simplices) in $Q$, though not uniquely. Morphisms of quasicategories are simply morphisms of simplicial sets. The quasicategories in which every inner horn has a *unique* filler are, up to isomorphism, the nerves of categories; in particular the nerve functor $N:{\mathbf{CAT}}\to{\mathbf{SSET}}$ factors through the subcategory of quasicategories, ${\mathbf{QCAT}}$. Every quasicategory $Q$ has a homotopy category ${\mathrm{Ho}}(Q)$, the ordinary category defined as follows. The objects of ${\mathrm{Ho}}(Q)$ are simply the 0-simplices of $Q$. For two 0-simplices $q_1,q_2$, temporarily define $Q_{q_1,q_2}{\subseteq}Q_1$ to be the set of 1-simplices $f$ with initial vertex $q_1$ and final vertex $q_2$. Then the hom-set ${\mathrm{Ho}}(Q)(q_1,q_2)$ is the quotient of $Q_{q_1,q_2}$ which identifies *homotopic* 1-simplices. Here two 1-simplices $f_1,f_2\in Q_{q_1,q_2}$ are said to be homotopic if $f_1,f_2$ are two faces of some 2-simplex in which the third face is both outer and degenerate. We have a functor ${\mathrm{Ho}}:{\mathbf{QCAT}}\to{\mathbf{CAT}}$ from quasicategories to categories, left adjoint to the nerve $N:{\mathbf{CAT}}\to {\mathbf{QCAT}}$. This follows from the fact that a morphism $f:Q\to R$ of quasicategories preserves the homotopy relation between 1-simplices, so that it descends to a well defined functor ${\mathrm{Ho}}(f):{\mathrm{Ho}}(Q)\to {\mathrm{Ho}}(R)$. In fact, ${\mathrm{Ho}}:{\mathbf{QCat}}\to{\mathbf{Cat}}$ admits an extension, sometimes denoted $\tau_1$, to all of ${\mathbf{SSet}}$, which is still left adjoint to $N$. But it is not amenable to computation. 
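To make the homotopy relation concrete, here is one of the two configurations permitted by the definition (this unwinding is ours; the other configuration instead makes the outer face $d_2\sigma$ degenerate at $q_1$):

```latex
% f_1, f_2 \in Q_{q_1,q_2} are homotopic when some 2-simplex \sigma \in Q_2 has
d_0\sigma = s_0(q_2), \qquad d_1\sigma = f_2, \qquad d_2\sigma = f_1,
% i.e. the face opposite vertex 0 is the degenerate 1-simplex at q_2,
% and the two remaining faces are the homotopic pair.
```

In a quasicategory, inner horn fillers show that this relation is already an equivalence relation on $Q_{q_1,q_2}$, so no further closure is needed in forming the hom-sets of ${\mathrm{Ho}}(Q)$.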
The fact that quasicategories are the fibrant objects for a Cartesian model structure on ${\mathbf{SSET}}$ in which every object is cofibrant (see [@riehl 2.2.8]) implies that $Q^S$ is a quasicategory for every simplicial set $S$ and quasicategory $Q$. In particular, quasicategories are enriched over themselves via the usual simplicial exponential $(R^Q)_n={\mathbf{SSET}}(Q\times\Delta^n,R)$. It is immediately checked that the homotopy category functor ${\mathrm{Ho}}$ preserves finite products, so that by change of enrichment we get finally *the 2-category of quasicategories*, ${\underline{\mathbf{QCAT}}}.$ Its objects are quasicategories, and for quasicategories $Q,R,$ the hom-category ${\underline{\mathbf{QCAT}}}(Q,R)$ is simply the homotopy category ${\mathrm{Ho}}(R^Q)$ of the hom-quasicategory $R^Q$. This permits the following tautological definition of equivalence of quasicategories. \[equivalenceofquasicategories\] An equivalence of quasicategories is an equivalence in ${\underline{\mathbf{QCAT}}}$. \[2catequiv\] Thus an equivalence of quasicategories is a pair of maps $f:Q\leftrightarrows R:g$ together with two homotopy classes $a=[{\alpha}],b=[{\beta}]$ of morphisms ${\alpha}:Q\to Q^{\Delta^1}, {\beta}:R\to R^{\Delta^1}$, with endpoints $gf$ and ${\mathrm{id}}_Q$, respectively, $fg$ and ${\mathrm{id}}_R$, such that $a$ is an isomorphism in ${\mathrm{Ho}}(Q^Q)$, as is $b$ in ${\mathrm{Ho}}(R^R)$. We can make the definition yet more explicit by noting that, for each $q\in Q_0$, the map ${\alpha}$ sends $q$ to some ${\alpha}(q)\in Q_1$, and recalling that the invertibility of $a$ is equivalent to that of each homotopy class $[{\alpha}(q)]$, as explicated for instance in the statement below: \[pointwiseness\] The equivalence class $[f]$ of a map $f:Q\to R^{\Delta^1}$ is an isomorphism in the homotopy category ${\mathrm{Ho}}(R^Q)$ if and only if, for every vertex $q\in Q_0$ of $Q$, the equivalence class $[f(q)]$ is an isomorphism in ${\mathrm{Ho}}(R)$. 
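As a consistency check on this definition (our computation, using that the nerve is fully faithful and preserves exponentials, facts recalled in the construction of ${\mathrm{HO}}$ below), the hom-categories between nerves are just functor categories:

```latex
{\underline{\mathbf{QCAT}}}\bigl(N(J),N(K)\bigr)
  = \mathrm{Ho}\bigl(N(K)^{N(J)}\bigr)
  \cong \mathrm{Ho}\bigl(N(K^J)\bigr)
  \cong K^J.
```

In particular, an equivalence of quasicategories between $N(J)$ and $N(K)$ is precisely an ordinary equivalence of categories $J{\simeq}K$.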
We now construct the 2-functor ${\mathrm{HO}}:{\underline{\mathbf{QCAT}}}\to{\underline{\mathbf{PDer}}}$ (with respect to an arbitrary ${\underline{\mathbf{Dia}}}$.) Restricting to ${\underline{\mathbf{QCat}}}$ gives us all the forms of ${\mathrm{HO}}$ of interest to us. We first extend ${\mathrm{Ho}}$ to a 2-functor of the same name, ${\mathrm{Ho}}:{\underline{\mathbf{QCAT}}}\to{\underline{\mathbf{CAT}}}$. This still sends a quasicategory to its homotopy category; we must define the action on morphism categories. This will be for each $R$ and $Q$ a functor $${\mathrm{Ho}}_{Q,R}:{\underline{\mathbf{QCAT}}}(Q,R)={\mathrm{Ho}}(R^Q)\to {\mathrm{Ho}}(R)^{{\mathrm{Ho}}(Q)}={\underline{\mathbf{CAT}}}({\mathrm{Ho}}(Q),{\mathrm{Ho}}(R))$$ The functor ${\mathrm{Ho}}_{Q,R}$ is defined as the transpose of the following composition across the product-hom adjunction in the 1-category ${\mathbf{Cat}}$. $${\mathrm{Ho}}(R^Q)\times {\mathrm{Ho}}(Q){\cong}{\mathrm{Ho}}(R^Q\times Q){\stackrel{ {{\mathrm{Ho}}(\mathrm{ev})} }{ \longrightarrow }}{\mathrm{Ho}}(R)$$ For this isomorphism we have used again the preservation of finite products by ${\mathrm{Ho}}$. The morphism $\mathrm{ev}:R^Q\times Q\to R$ is evaluation, the counit of the adjunction $(-)\times Q{\dashv}(-)^Q$ between endofunctors of ${\mathbf{QCAT}}$. We also need a 2-functor $N:{\underline{\mathbf{CAT}}}\to{\underline{\mathbf{QCAT}}}$ sending a category $J\in{\underline{\mathbf{CAT}}}$ to $N(J)$. The map on hom-categories is the composition $J^K{\cong}{\mathrm{Ho}}(N(J^K)) {\cong}{\mathrm{Ho}}(N(J)^{N(K)})$. The first isomorphism is the inverse of the counit of the adjunction ${\mathrm{Ho}}{\dashv}N$, which is an isomorphism by full faithfulness of the nerve. The second uses the fact that $N$ preserves exponentials, see [@joyal Proposition B.0.16]. Now we define the associated prederivator. \[defofHOQ\] Let $Q$ be a quasicategory. 
Then the prederivator ${\mathrm{HO}}(Q):{\underline{\mathbf{Dia}}}^{{\mathrm{op}}}\to{\underline{\mathbf{CAT}}}$ is given as the composition $${\underline{\mathbf{Dia}}}^{{\mathrm{op}}}{\stackrel{ {N^{\mathrm{op}}} }{ \longrightarrow }} {\underline{\mathbf{QCAT}}}^{{\mathrm{op}}} {\stackrel{ {Q^{(-)}} }{ \longrightarrow }}{\underline{\mathbf{QCAT}}} {\stackrel{ {{\mathrm{Ho}}} }{ \longrightarrow }}{\underline{\mathbf{CAT}}}$$ In particular, ${\mathrm{HO}}(Q)$ maps a category $J$ to the homotopy category of $J$-shaped diagrams in $Q$, that is, to ${\mathrm{Ho}}(Q^{N(J)})$. Given a morphism of quasicategories $f:Q\to R$, we have a strictly 2-natural morphism of prederivators (see Definition \[2Cat\]) ${\mathrm{HO}}(f):{\mathrm{HO}}(Q)\to{\mathrm{HO}}(R)$ given as the analogous composition ${\mathrm{HO}}(f)={\mathrm{Ho}}\circ f^{(-)}\circ N$, so that for each category $J$ the functor ${\mathrm{HO}}(f)_J$ is given by post-composition with $f$, that is, by ${\mathrm{Ho}}(f^{N(J)}):{\mathrm{Ho}}(Q^{N(J)})\to{\mathrm{Ho}}(R^{N(J)})$. \[enriched\] We have left implicit some details above, for instance, that any quasicategory map induces a 2-natural transformation $f^{(-)}\!:\!Q^{(-)}\!\to\! R^{(-)}\!:\!{\underline{\mathbf{QCAT}}}^{\mathrm{op}}\to{\underline{\mathbf{QCAT}}}$. All such claims follow from the following fact: a monoidal functor $F:{\mathcal}V\to {\mathcal}W$ induces a 2-functor $(-)_F:{\mathcal}V-{\underline{\mathbf{Cat}}}\to{\mathcal}W-{\underline{\mathbf{Cat}}}$ between 2-categories of ${\mathcal}V$- and ${\mathcal}W$-enriched categories. The fully general version of this claim was apparently not published until recently; it comprises Chapter 4 of [@cruttwell]. 
In our case, the functor ${\mathrm{Ho}}$ is monoidal insofar as it preserves products and thus it induces the 2-functor $(-)_{{\mathrm{Ho}}}$ sending simplicially enriched categories, simplicial functors, and simplicial natural transformations to 2-categories, 2-functors, and 2-natural transformations. We record the axioms which are satisfied by the prederivator associated to any quasicategory. First, a lemma: \[liftingsquares\] Let $Q$ be a quasicategory, and $X:[1]\times[1]\to{\mathrm{Ho}}(Q)$ a commutative square in its homotopy category. Suppose we have chosen $f,g\in Q_1$ representing the vertical edges of $X$, so that $[f]=X|_{\{0\}\times[1]}$ and $[g]=X|_{\{1\}\times[1]}$. Then there exists $\widehat X:f\to g$ in ${\mathrm{Ho}}(Q^{\Delta^1})$ lifting $X$, in the sense that $0^*\widehat X=X|_{[1]\times\{0\}}$ and $1^*\widehat X=X|_{[1]\times\{1\}}$. We must show that any homotopy-commutative square $X:\square\to{\mathrm{Ho}}(Q)$ with lifts $f,g\in Q_1$ of its left and right edges underlies a morphism $\widehat X:f\to g$ in ${\mathrm{Ho}}(Q^{\Delta^1})$. For this we first lift the top and bottom edges of $X$ to some $h$ and $k$ in $Q_1$ and choose 2-simplices $a,b$ with $d_0a=g,d_2a=h,d_0b=k,$ and $d_2b=f$, so that $d_1a$ is a composition $g\circ h$ and similarly, $d_1b$ is a choice of $k\circ f$. Since $X$ was homotopy commutative, we know $[g]\circ[h]=[k]\circ[f]$ in ${\mathrm{Ho}}(Q)$, that is, $[d_1a]=[d_1b]$. So there exists a 2-simplex $c$ with $d_0c=d_1b,d_1c=d_1a,$ and $d_2c$ degenerate, giving a homotopy between $d_1a$ and $d_1b$. Now we have a map $H:\Lambda^3_1\to Q$ with $d_0H=b,d_2H=c$, and $d_3H$ degenerate on $f$. Filling this to a 3-simplex $\hat H$, we have $d_1\hat H$ a 2-simplex with faces $k,d_1a,$ and $f$, and so $d_1\hat H$ and $a$ fit together into a map $S:\Delta^1\to Q^{\Delta^1}$, which represents the desired $\hat X$. The prederivator ${\mathrm{HO}}(Q)$ satisfies the axioms (Der1), (Der2), and (Der5’). 
Axiom (Der1) follows from the fact that $Q\mapsto Q^J$ preserves coproducts in $J$, and that ${\mathrm{Ho}}$ preserves all products. Axiom (Der2) is precisely Lemma \[pointwiseness\]. For (Der5’), surjectivity of ${\mathrm{dia}}_J^{[1]}$ follows immediately from the definition of the homotopy category. Fullness is exactly the statement of Lemma \[liftingsquares\]. It may be worth noting that, while it is possible to define a 2-category ${\underline{\mathbf{SSet}}}$ of simplicial sets using $\tau_1$ and extend ${\mathrm{HO}}$ to ${\underline{\mathbf{SSet}}}$, the prederivator associated to an arbitrary simplicial set will not, in general, satisfy any of the three axioms. It is straightforward to see that ${\mathrm{HO}}(S)$ need not satisfy (Der2) or (Der5’), while the reason (Der1) may fail is that $\tau_1$, unlike ${\mathrm{Ho}}$, need not preserve infinite products. The simplicial enrichment of prederivators {#the-simplicial-enrichment-of-prederivators .unnumbered} ------------------------------------------ The 2-functor ${\mathrm{HO}}:{\underline{\mathbf{QCAT}}}\to {\underline{\mathbf{PDer}}}$ factors through the subcategory ${\underline{\mathbf{PDer}}}^{\mathrm{str}}$ in which the morphisms are required to be strictly 2-natural. Its underlying category ${\mathbf{PDer}}^{\mathrm{str}}$ admits a simplicial enrichment ${\mathbf{PDer}_\bullet}$, as we now recall. Muro and Raptis showed how to define the simplicially enriched category ${\mathbf{PDer}_\bullet}$ in [@muro]. First, note that for any prederivator ${\mathscr{D}}$ and each category $J\in {\underline{\mathbf{Dia}}}$ we have a shifted prederivator ${\mathscr{D}}^J={\mathscr{D}}\circ(J\times -)$. This shift is a special case of the cartesian closed structure on ${\underline{\mathbf{PDer}}}$ discussed in [@heller2 Section 4]. 
Explicitly, given two prederivators ${\mathscr{D}}_1,{\mathscr{D}}_2,$ and denoting by $\widehat J$ the prederivator represented by a small category $J$, the exponential is defined by ${\mathscr{D}}_2^{{\mathscr{D}}_1}(J)={\underline{\mathbf{PDer}}}(\widehat J \times {\mathscr{D}}_1,{\mathscr{D}}_2)$. Then the 2-categorical Yoneda lemma implies that the shifted prederivator ${\mathscr{D}}^J$ is canonically isomorphic to the prederivator exponential ${\mathscr{D}}^{\widehat J}$. This allows us to interpret expressions such as ${\mathscr{D}}^{\alpha}:{\mathscr{D}}^u\Rightarrow{\mathscr{D}}^v:{\mathscr{D}}^K\to{\mathscr{D}}^J$, when ${\alpha}:u\Rightarrow v:J\to K$ is a natural transformation, by using the internal hom 2-functor. \[simp\] For a natural transformation ${\alpha}:u\Rightarrow v:J\to K$ between functors in ${\mathbf{Cat}}$, the preceding definition of ${\mathscr{D}}^{\alpha}$ gives only a shadow of the full action of ${\alpha}$ on ${\mathscr{D}}$. The natural transformation ${\alpha}$ corresponds naturally to a functor $\bar{\alpha}: J\times [1]\to K$, associated to which we have a prederivator morphism ${\mathscr{D}}^{\bar{\alpha}}:{\mathscr{D}}^K\to {\mathscr{D}}^{J\times [1]}$, that is, a family of functors ${\mathscr{D}}(K\times I)\to{\mathscr{D}}(J\times I\times [1])$. This is strictly more information, as composing with the underlying diagram functor ${\mathrm{dia}}^{[1]}_{J\times I}:{\mathscr{D}}(J\times I\times [1])\to {\mathscr{D}}(J\times I)^{[1]}$ recovers our original ${\mathscr{D}}^{\alpha}$. What is happening here is that the entity ${\mathscr{D}}^{(-)}$ is more than a 2-functor ${\underline{\mathbf{Cat}}}^{{\mathrm{op}}}\to{\underline{\mathbf{PDer}}}$: it is a simplicial functor $({\mathbf{Cat}_\bullet}^{{\mathrm{op}}})_N\to{\mathbf{PDer}_\bullet}$ from the simplicial category of nerves of categories to the simplicial category of prederivators, which we must now define. 
For each category $J$ let ${\mathrm{diag}}_J:J\to J\times J$ be the diagonal functor. We define ${\mathbf{PDer}_\bullet}$ as a simplicially enriched category whose objects are the prederivators. The mapping simplicial sets have $n$-simplices as follows: ${\mathbf{PDer}}_n({\mathscr{D}}_1,{\mathscr{D}}_2)={\mathbf{PDer}}^{\mathrm{str}}({\mathscr{D}}_1,{\mathscr{D}}^{[n]}_2)$. For $(f,g)\in {\mathbf{PDer}}_n({\mathscr{D}}_2,{\mathscr{D}}_3)\times{\mathbf{PDer}}_n({\mathscr{D}}_1,{\mathscr{D}}_2)$, the composition $f * g:{\mathscr{D}}_1\to {\mathscr{D}}_3^{[n]}$ is given by the formula below, in which we repeatedly apply the internal hom 2-functor discussed above Remark \[simp\]. $$\label{MRcomposition}{\mathscr{D}}_1{\stackrel{ {g} }{ \longrightarrow }} {\mathscr{D}}_2^{[n]}{\stackrel{ {f^{[n]}} }{ \longrightarrow }} \left({\mathscr{D}}_3^{[n]}\right)^{[n]}{\cong}{\mathscr{D}}_3^{[n]\times [n]}{\stackrel{ {{\mathscr{D}}_3^{{\mathrm{diag}}_{[n]}}} }{ \longrightarrow }}{\mathscr{D}}_3^{[n]}$$ In [@muro] a restriction of this enrichment, which we now recall, was of primary interest. Each prederivator ${\mathscr{D}}$ has an “essentially constant" shift by a small category $J$ denoted ${\mathscr{D}}_{{\mathrm{eq}}}^J$. This is defined as follows: ${\mathscr{D}}_{\mathrm{eq}}^J(K){\subseteq}{\mathscr{D}}(J\times K)$ is the full subcategory on those objects $X\in {\mathscr{D}}(J\times K)$ such that in the partial underlying diagram ${\mathrm{dia}}_K^J (X)\in {\mathscr{D}}(K)^J$, the image of every morphism of $J$ is an isomorphism in ${\mathscr{D}}(K)$. We shall only need $J=[n]$, when an object of ${\mathscr{D}}_{\mathrm{eq}}^{[n]}(K)$ has as its partial underlying diagram a chain of $n$ isomorphisms in ${\mathscr{D}}(K)$. 
Then we get another simplicial enrichment: \[scateq\] The simplicial category ${\mathbf{PDer}_\bullet}^{\mathrm{eq}}$ is the sub-simplicial category of ${\mathbf{PDer}_\bullet}$ with ${\mathbf{PDer}}^{\mathrm{eq}}_n({\mathscr{D}}_1,{\mathscr{D}}_2)={\mathbf{PDer}}^{\mathrm{str}}({\mathscr{D}}_1,{\mathscr{D}}_{2,{\mathrm{eq}}}^{[n]})$. This leads to the notion of equivalence of prederivators under which Muro and Raptis showed Waldhausen K-theory is invariant. \[coherentequivalence\] A coherent equivalence of prederivators is a quadruple $(F,G,{\alpha},{\beta})$ of prederivator morphisms $F:{\mathscr{D}}_1\to {\mathscr{D}}_2,G:{\mathscr{D}}_2\to{\mathscr{D}}_1,{\alpha}:{\mathscr{D}}_1\to{\mathscr{D}}_{1,{\mathrm{eq}}}^{[1]},$ and ${\beta}:{\mathscr{D}}_2\to{\mathscr{D}}_{2,{\mathrm{eq}}}^{[1]}$, such that the vertices of ${\alpha}$ are $GF$ and ${\mathrm{id}}_{{\mathscr{D}}_1}$, and similarly for ${\beta}$. \[modificationsarebad\] A coherent equivalence of prederivators gives rise to an equivalence in ${\underline{\mathbf{PDer}}}$, but the converse does not hold. To illustrate this, recall (\[2Cat\]) that a 2-morphism in ${\underline{\mathbf{PDer}}}$ is a modification, which amounts to a family of natural transformations $\Xi_J:{\alpha}_J\to{\beta}_J$. The components of $\Xi_J$ are morphisms in ${\mathscr{D}}_2(J)$: heuristically, homotopy classes of morphisms in some background model. Then $\Xi_J$ may be thought of as a transformation between functors, only natural up to homotopy. In contrast, a 1-simplex $F$ in the mapping simplicial set from ${\mathscr{D}}_1$ to ${\mathscr{D}}_2$ is more rigid: $F$ sends each object $X\in{\mathscr{D}}_1(J)$ to an object of ${\mathscr{D}}_2(J\times[1])$, so that, roughly, in passing from ${\underline{\mathbf{PDer}}}$ to ${\mathbf{PDer}_\bullet}$ we have refined a natural transformation up-to-homotopy to a homotopy coherent natural transformation. 
The embedding ${\mathbf{QCAT}}\to{\mathbf{PDer}}^{\mathrm{str}}$ of ordinary categories {#section:simplicial} ======================================================================================= In this section, we prove that categories of arbitrarily large quasicategories embed fully faithfully in any category of prederivators and strict morphisms. We extend this result to a fully faithful embedding of simplicial categories, as well as of categories enriched in Kan complexes. \[Maintheorem\] The ordinary functor ${\mathrm{HO}}:{\mathbf{QCAT}}\to{\mathbf{PDer}}^{\mathrm{str}}$ is fully faithful. It follows that the simplicial functor ${\mathrm{HO}}:{\mathbf{QCAT}_\bullet}\to{\mathbf{PDer}_\bullet}$ is simplicially fully faithful. We first give a corollary. Define, for the moment, ${\mathbf{QPDer}_\bullet}{\subseteq}{\mathbf{PDer}_\bullet}$ to be the image of quasicategories in prederivators, so that the theorem gives an isomorphism of simplicial categories ${\mathbf{QCAT}_\bullet}{\cong}{\mathbf{QPDer}_\bullet}$. In particular, ${\mathbf{QPDer}_\bullet}$ is not merely a simplicial category, but actually a category enriched in quasicategories. Recall that the inclusion of Kan complexes into quasicategories has a right adjoint $\iota$, which we will call the Kan core. For a quasicategory $Q$, the core $\iota Q$ is the sub-simplicial set such that an $n$-simplex $x\in Q_n$ is in $(\iota Q)_n$ if and only if every 1-simplex of $x$ is an isomorphism in ${\mathrm{Ho}}(Q)$. See [@joyal2 Section 1]. As a right adjoint, $\iota$ preserves products, so that for any quasicategorically enriched category ${\mathcal}C$ we have an associated Kan complex-enriched category  ${\mathcal}C_\iota$, given by taking the core homwise. This change of enrichment is more difficult to achieve for general simplicially enriched categories, which explains our inelegant introduction of ${\mathbf{QPDer}_\bullet}$. 
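For a concrete instance of the Kan core, take $Q=N{\mathcal}C$ for an ordinary category ${\mathcal}C$, so that ${\mathrm{Ho}}(Q)\cong{\mathcal}C$ and an edge is an isomorphism in ${\mathrm{Ho}}(Q)$ exactly when it is invertible in ${\mathcal}C$. Then

```latex
$$\iota\, N{\mathcal}C\;\cong\;N({\mathcal}C^{\cong}),$$
```

where ${\mathcal}C^{\cong}{\subseteq}{\mathcal}C$ is the maximal subgroupoid: the Kan core generalizes the groupoid core of an ordinary category.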
\[bigdiagramcorollary\] The associated prederivator functor ${\mathrm{HO}}:{\mathbf{QCAT}_\bullet}\to{\mathbf{PDer}_\bullet}$ induces an isomorphism of Kan-enriched categories ${\mathrm{HO}}_\iota:\left(\mathbf{QCAT}_{\bullet}\right)_\iota\to \mathbf{QPDer}_{\bullet,\iota}$. The given Kan-enriched functor exists via the base change construction of enriched category theory; see Remark \[enriched\]. It is defined predictably, in the manner of Equation \[kanff\] below. We just have to show that ${\mathrm{HO}}_\iota$ induces isomorphisms on hom-objects, since ${\mathrm{HO}}_\iota$ is bijective on objects by definition. Given the isomorphism ${\mathrm{HO}}_{Q,R}:{\mathbf{QCAT}_\bullet}(Q,R){\cong}{\mathbf{PDer}_\bullet}({\mathrm{HO}}(Q),{\mathrm{HO}}(R))$ of Theorem \[Maintheorem\], we get isomorphisms $$\label{kanff}\iota({\mathrm{HO}}_{Q,R}):\iota({\mathbf{QCAT}_\bullet}(Q,R)){\cong}\iota({\mathbf{PDer}_\bullet}({\mathrm{HO}}(Q),{\mathrm{HO}}(R)))$$ as desired. The Kan-enriched category $\mathbf{QCAT}_{\bullet,\iota}$ is a model of the homotopy theory of homotopy theories, which thus embeds into prederivators. In particular, the homotopy category of homotopy theories embeds in the simplicial homotopy category of ${\mathbf{PDer}_\bullet}^{\mathrm{eq}}$. In Section \[section:2cat\], we improve this to show that *the homotopy 2-category* in the sense of [@riehl2] embeds in the 2-category ${\underline{\mathbf{PDer}}}_{{\underline{\mathbf{Cat}}}}$, a much more concrete object, under certain size assumptions. The word *the* is partially justified here by work of Low [@Zhen] indicating that the 2-category ${\underline{\mathbf{QCat}}}$ has a universal role analogous to that of “*the* homotopy category", namely, the homotopy category of spaces. We turn to the proof of Theorem \[Maintheorem\]. 
We must show that the ordinary functor ${\mathrm{HO}}$ gives an isomorphism between the sets ${\mathbf{QCAT}}(Q,R)$ and ${\mathbf{PDer}}^{\mathrm{str}}({\mathrm{HO}}(Q),{\mathrm{HO}}(R))$. This is Proposition \[main prop\], whose proof has the following outline: (1) \[proofstep1\] Eliminate most of the data of a prederivator map by showing strict maps ${\mathrm{HO}}(Q)\to {\mathrm{HO}}(R)$ are determined by their restriction to natural transformations between ordinary functors ${\mathbf{Cat}}^{{\mathrm{op}}}\to{\mathbf{Set}}$. This is Lemma \[catset lemma\]. (2) \[proofstep2\] Show that ${\mathrm{HO}}(Q)$ and ${\mathrm{HO}}(R)$ recover $Q$ and $R$ upon restricting the domain to $\Delta^{{\mathrm{op}}}$ and the codomain to ${\mathbf{Set}}$, and that natural transformations as in the previous step are in bijection with maps $Q\to R$. This is Lemma \[kan lemma\]. (3) \[proofstep3\] Show that ${\mathrm{HO}}(f)$ restricts back to $f$ for a map $f:Q\to R$, which implies that ${\mathrm{HO}}$ is faithful, and that a map $F:{\mathrm{HO}}(Q)\to{\mathrm{HO}}(R)$ is exactly ${\mathrm{HO}}$ applied to its restriction, which implies that ${\mathrm{HO}}$ is full. This constitutes the proof of Proposition \[main prop\] proper. Let us begin with step (1). \[catset def\] A ${\mathbf{Dia}}$-set is a large presheaf on ${\mathbf{Dia}}$, that is, an ordinary functor ${\mathbf{Dia}}^{{\mathrm{op}}}\to{\mathbf{SET}}$. Given a prederivator ${\mathscr{D}}$, let ${\mathscr{D}}^{{\mathrm{ob}}}:{\mathbf{Dia}}^{{\mathrm{op}}}\to{\mathbf{SET}}$ be its underlying ${\mathbf{Dia}}$-set, so that ${\mathscr{D}}^{{\mathrm{ob}}}$ sends a small category $J$ to the set of objects ${\mathrm{ob}}({\mathscr{D}}(J))$ and a functor $u:I\to J$ to the action of ${\mathscr{D}}(u)$ on objects. Recall that where (Der5) requires that ${\mathrm{dia}}:{\mathscr{D}}(J\times[1])\to{\mathscr{D}}(J)^{[1]}$ be (full and) essentially surjective, (Der5') insists on actual surjectivity on objects. 
The following lemma shows that under this assumption most of the apparent structure of a strict prederivator map is redundant. \[catset lemma\] A strict morphism $F:{\mathscr{D}}_1\to{\mathscr{D}}_2$ between prederivators satisfying (Der5') is determined by its restriction to the underlying ${\mathbf{Dia}}$-sets ${\mathscr{D}}_1^{{\mathrm{ob}}},{\mathscr{D}}_2^{{\mathrm{ob}}}$. That is, the restriction functor from prederivators satisfying (Der5') to ${\mathbf{Dia}}$-sets is faithful. The data of a strict morphism $F:{\mathscr{D}}_1\to{\mathscr{D}}_2$ is that of a functor $F_J:{\mathscr{D}}_1(J)\to{\mathscr{D}}_2(J)$ for every $J$. (Note the simplification here over pseudonatural transformations, which require also a natural transformation associated to every functor and do not induce maps of ${\mathbf{Dia}}$-sets. That is the fundamental difficulty leading to the dramatically different techniques of the next sections.) The induced map $F^{{\mathrm{ob}}}:{\mathscr{D}}_1^{{\mathrm{ob}}}\to{\mathscr{D}}_2^{{\mathrm{ob}}}$ is given by the action of $F$ on objects. So to show faithfulness it is enough to show that, given a family of functions $r_J:{\mathrm{ob}}({\mathscr{D}}_1(J))\to{\mathrm{ob}}({\mathscr{D}}_2(J))$, that is, the data required in a natural transformation between ${\mathbf{Dia}}$-sets, there is at most one 2-natural transformation with components $F_J:{\mathscr{D}}_1(J)\to{\mathscr{D}}_2(J)$ and object parts ${\mathrm{ob}}(F_J)=r_J$. Indeed, suppose $F$ is given with object parts $r_J={\mathrm{ob}}(F_J)$ and let $f:X\to Y$ be a morphism in ${\mathscr{D}}_1(J)$. Then by Axiom (Der5'), $f$ is the underlying diagram of some $\widehat f~\in~{\mathscr{D}}_1(J\times[1])$. 
By 2-naturality, the following square must commute: $$\begin{CD} {\mathscr{D}}_1(J\times[1])@>F_{J\times[1]}>>{\mathscr{D}}_2(J\times [1])\\ @V{\mathrm{dia}}_J^{[1]} VV @V{\mathrm{dia}}_J^{[1]} VV\\ {\mathscr{D}}_1(J)^{[1]}@>F_J^{[1]}>> {\mathscr{D}}_2(J)^{[1]} \end{CD}$$ Indeed, ${\mathrm{dia}}_J^{[1]}$ is the action of a prederivator on the unique natural transformation between the two functors $0,1:[0]\to[1]$ from the terminal category to the arrow category, as is described in full detail below [@groth Proposition 1.7]. Thus the square above is an instance of the axiom of respect for 2-morphisms. It follows that we must have $F_J(f)=F_J({\mathrm{dia}}_J^{[1]}\widehat f)={\mathrm{dia}}_J^{[1]}(r_{J \times [1]}(\widehat f))$. Thus if $F,G$ are two strict morphisms ${\mathscr{D}}_1\to{\mathscr{D}}_2$ with the same restrictions to the underlying ${\mathbf{Dia}}$-sets, they must coincide, as claimed. Note the above does not claim that the restriction functor is full: the structure of a strict prederivator map is determined by the action on objects of each ${\mathscr{D}}_1(J),{\mathscr{D}}_2(J),$ but it is not generally true that an arbitrary map of ${\mathbf{Dia}}$-sets will admit a well-defined extension to morphisms. We proceed to step (2) of the proof. Let us recall the theory of pointwise Kan extensions for ordinary categories. Let $F:{\mathcal}C\to{\mathcal}D$ and $G:{\mathcal}C\to {\mathcal}E$ be functors. At least if ${\mathcal}C$ and ${\mathcal}D$ are small and ${\mathcal}E$ is complete, then we always have a right Kan extension $F_*G:{\mathcal}D\to {\mathcal}E$ characterized by the adjunction formula ${\mathcal}E^{{\mathcal}D}(H,F_*G){\cong}{\mathcal}E^{{\mathcal}C}(H\circ F,G)$ and computed on objects by $$F_*G(d)=\lim_{d\downarrow F}G\circ q$$ Here $d\downarrow F$ is the comma category with objects $(c,f:d\to F(c))$ and morphisms the maps in ${\mathcal}C$ making the appropriate triangle commute, and $q:d\downarrow F\to {\mathcal}C$ is the projection. 
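As a toy instance of the pointwise formula, let $F:[0]\to[1]$ classify the object $0$ and let $G:[0]\to{\mathcal}E$ pick out an object $e$ of a complete category ${\mathcal}E$. The comma category $0\downarrow F$ has the single object $(\ast,{\mathrm{id}}_0)$, while $1\downarrow F$ is empty, since $[1]$ has no morphism $1\to 0$. The formula therefore gives

```latex
$$F_*G(0)=\lim_{0\downarrow F}G\circ q\cong e,
\qquad
F_*G(1)=\lim_{1\downarrow F}G\circ q\cong \ast,$$
```

the empty limit being a terminal object $\ast$ of ${\mathcal}E$: right Kan extension along this inclusion extends a diagram by terminal objects.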
\[kan lemma\] Let $j:\Delta^{{\mathrm{op}}}\to{\mathbf{Dia}}^{{\mathrm{op}}}$ be the inclusion. Then for any quasicategory $R$, the ${\mathbf{Dia}}$-set ${\mathrm{HO}}(R)^{{\mathrm{ob}}}$ underlying ${\mathrm{HO}}(R)$ is the right Kan extension of $R$ along $j$. For any small category $J$, the ${\mathbf{Dia}}$-set ${\mathrm{HO}}(R)^{{\mathrm{ob}}}$ takes $J$ to the set of simplicial set maps from $N(J)$ to $R$: $${\mathrm{HO}}(R)^{{\mathrm{ob}}}(J)={\mathrm{ob}}({\mathrm{Ho}}(R^{N(J)})) ={\mathbf{SSET}}(N(J),R)$$ We shall show that the latter is the value required of $j_*R$, which exists and is calculated via the limit formula recalled above, since ${\mathbf{SET}}$ is complete (in the sense of a universe in which its objects constitute the small sets). First, one of the basic properties of presheaf categories implies that $N(J)$ is a colimit over its category of simplices. That is, $N(J)\cong{\operatornamewithlimits{colim}}_{\Delta\downarrow NJ} y\circ q$, where $q:\Delta\downarrow NJ\to \Delta$ is the projection and $y:\Delta\to{\mathbf{SSet}}$ is the Yoneda embedding. Then we can rewrite the values of ${\mathrm{HO}}(R)^{{\mathrm{ob}}}$ as follows: $${\mathrm{HO}}(R)^{{\mathrm{ob}}}(J)={\mathbf{SSET}}(N(J),R)={\mathbf{SSET}}({\operatornamewithlimits{colim}}\limits_{\Delta\downarrow NJ} y\circ q,R){\cong}$$$$\lim\limits_{(\Delta\downarrow NJ)^{{\mathrm{op}}}}{\mathbf{SSET}}(y\circ q,R){\cong}\lim_{(\Delta\downarrow NJ)^{{\mathrm{op}}}}R\circ q^{{\mathrm{op}}}$$ The last isomorphism follows from the Yoneda lemma. The indexing category $(\Delta\downarrow N(J))^{{\mathrm{op}}}$ has as objects pairs $(n,f:\Delta^n\to N(J))$ and as morphisms $\bar a:(n,f)\to(m,g),$ the maps $a:\Delta^m\to \Delta^n$ such that $f\circ a=g$. That is, $(\Delta\downarrow N(J))^{{\mathrm{op}}}{\cong}N(J)\downarrow\Delta^{{\mathrm{op}}}$, where on the right-hand side $N(J)$ is viewed as an object of ${\mathbf{SSET}}^{{\mathrm{op}}}$. 
Using the full faithfulness of the nerve functor $N$, we see $(\Delta\downarrow N(J))^{{\mathrm{op}}}{\cong}J\downarrow\Delta^{{\mathrm{op}}}$, where again $J\in {\mathbf{Dia}}^{{\mathrm{op}}}$. Thus, if $q^{{\mathrm{op}}}$ serves also to name the projection $J\downarrow\Delta^{{\mathrm{op}}}\to \Delta^{{\mathrm{op}}}$, we may continue the computation above with $${\mathrm{HO}}(R)^{{\mathrm{ob}}}(J){\cong}\lim\limits_{J\downarrow\Delta^{{\mathrm{op}}}}R\circ q^{{\mathrm{op}}}$$ This is exactly the formula for $j_*R(J)$ recalled above. The isomorphism thus constructed is certainly natural with respect to the action on maps of the Kan extension, so the lemma is established. We arrive at step (3). \[main prop\] The homotopy category functor ${\mathrm{HO}}:{\mathbf{QCAT}}\to{\mathbf{PDer}}^{\mathrm{str}}$ is a fully faithful embedding of ordinary categories. Note that, by Lemma \[kan lemma\], the restriction of ${\mathrm{HO}}(Q)^{\mathrm{ob}}$ to a functor $\Delta^{{\mathrm{op}}}\to{\mathbf{SET}}$ is canonically isomorphic to $Q$, since Kan extensions along fully faithful functors are splittings of restriction. Thus a map $F:{\mathrm{HO}}(Q)\to {\mathrm{HO}}(R)$ restricts to a map $\rho(F):Q\to R$. In fact, we have a natural isomorphism $\rho\circ{\mathrm{HO}}{\cong}{\mathrm{id}}_{{\mathbf{QCAT}}}$, so that $\rho\circ{\mathrm{HO}}(f)$ is again $f$, up to this isomorphism. Indeed, given $f:Q\to R$, we already know how to compute ${\mathrm{HO}}(f)$ as ${\mathrm{Ho}}\circ \left(f^{N(-)}\right)$. Then the restriction $\rho({\mathrm{HO}}(f)):Q\to R$, which we are to show coincides with $f$, is given by $\rho({\mathrm{HO}}(f))_n={\mathrm{ob}}\circ {\mathrm{Ho}}\circ f^{\Delta^n}$. That is, $\rho({\mathrm{HO}}(f))$ acts by the action of $f$ on the objects of the homotopy categories of $Q^{\Delta^n}$ and $R^{\Delta^n}$. 
In other words, it acts by the action of $f$ on the sets ${\mathbf{SSET}}(\Delta^n,Q)$ and ${\mathbf{SSET}}(\Delta^n,R)$; via Yoneda, $\rho({\mathrm{HO}}(f))$ acts by $f$ itself. It remains to show that ${\mathrm{HO}}(\rho(F))=F$ for any $F:{\mathrm{HO}}(Q)\to{\mathrm{HO}}(R)$. By Lemma \[catset lemma\] it suffices to show that the restrictions of ${\mathrm{HO}}(\rho (F))$ and $F$ to the underlying ${\mathbf{Dia}}$-sets coincide. Using Lemma \[kan lemma\] and the adjunction characterizing the Kan extension, we have $${\mathbf{SET^{Dia^{{\mathrm{op}}}}}}({\mathrm{HO}}(Q)^{{\mathrm{ob}}}, {\mathrm{HO}}(R)^{{\mathrm{ob}}}) ={\mathbf{SET^{Dia^{{\mathrm{op}}}}}}(j_*Q,j_*R){\cong}{\mathbf{SSET}}(j^*j_*Q,R){\cong}{\mathbf{SSET}}(Q,R)$$ In particular, maps between ${\mathrm{HO}}(Q)^{{\mathrm{ob}}}$ and ${\mathrm{HO}}(R)^{{\mathrm{ob}}}$ agree when their restrictions to $Q$ and $R$ do. Thus we are left to show that $\rho({\mathrm{HO}}(\rho(F)))=\rho(F).$ But as we showed above, $\rho\circ{\mathrm{HO}}$ is the identity map on ${\mathbf{SSET}}(Q,R)$, so the proof is complete. We have just one loose end to tie up to finish the simplicial part of Theorem \[Maintheorem\]: we must extend ${\mathrm{HO}}$ to a simplicially enriched functor. This follows formally from the following interpretation of the simplicial enrichments on ${\mathbf{QCAT}}$ and ${\mathbf{PDer}}^{\mathrm{str}}$. Each category has a given cosimplicial object, respectively given by the representable simplicial sets $\Delta^\bullet$ and the representable prederivators $\widehat{[\bullet]}$, where we have a natural isomorphism $\widehat{[\bullet]}{\cong}{\mathrm{HO}}(\Delta^\bullet)$ following from the full faithfulness of the nerve. Similarly, we have the canonical bicosimplicial objects $\Delta^\bullet\times\Delta^\bullet$ and $\widehat{[\bullet]\times[\bullet]}\cong {\mathrm{HO}}(\Delta^\bullet\times\Delta^\bullet)$. 
This shows that for any quasicategory $R$, the simplicial prederivator ${\mathrm{HO}}(R)^{[\bullet]}$ is naturally isomorphic to ${\mathrm{HO}}(R^{\Delta^\bullet})$. Thus the isomorphisms of Proposition \[main prop\] are in fact isomorphisms of simplicial sets: $${\mathbf{PDer}_\bullet}({\mathrm{HO}}(Q),{\mathrm{HO}}(R))={\mathbf{PDer}}^{\mathrm{str}}({\mathrm{HO}}(Q),{\mathrm{HO}}(R)^{[\bullet]}){\cong}{\mathbf{QCAT}}(Q,R^{\Delta^\bullet})$$ As to respect for the simplicial compositions in ${\mathbf{QCAT}_\bullet}$ and ${\mathbf{PDer}_\bullet}$, we observe similarly that the operations $f\mapsto f^{[n]}$ and ${\mathrm{diag}}_{[n]}$ are preserved by ${\mathrm{HO}}$, both being induced by the action of functors between $\Delta$ and $\Delta\times\Delta$, namely, projection and diagonal, on the canonical cosimplicial objects in ${\mathbf{QCat}}$ and ${\mathbf{PDer}}$, as well as their bisimplicial analogues. The embedding ${\underline{\mathbf{QCat}}}\to{\underline{\mathbf{PDer}}}$ of 2-categories {#section:2cat} ========================================================================================= We shall now prove \[Mainthm2cat\] Let ${\underline{\mathbf{QCat}}}$ denote the 2-category of small quasicategories. Then the 2-functor ${\mathrm{HO}}:{\underline{\mathbf{QCat}}}\to{\underline{\mathbf{PDer}}}_{{\underline{\mathbf{Cat}}}}$ is bicategorically fully faithful; that is, it induces equivalences of hom-categories ${\underline{\mathbf{QCat}}}(Q,R)\simeq {\underline{\mathbf{PDer}}}_{{\underline{\mathbf{Cat}}}}({\mathrm{HO}}(Q),{\mathrm{HO}}(R))$. The core tool for the proof is Theorem \[delocalization\] below, which says that every quasicategory is a localization of a category. It is due to Joyal but first published by Stevenson in [@stevenson]. First we recall the notion of $\infty$-localization, often just “localization," for simplicial sets and quasicategories. Let $f:S\to T$ be a map of simplicial sets and ${\mathcal}W{\subseteq}S_1$ a set of edges. 
For any quasicategory $Q$, let $Q^S_{{\mathcal}W}$ be the full sub-quasicategory of $Q^S$ on those maps $g:S\to Q$ such that $g(w)$ is an equivalence in $Q$ for every edge $w\in {\mathcal}W$. Then we say $f$ exhibits $T$ as an $\infty$-localization of $S$ at ${\mathcal}W$ if, for every quasicategory $Q$, pullback along $f$ induces an equivalence $f^*:Q^T\to Q^S_{{\mathcal}W}$ of quasicategories. In particular, if $f:S\to T$ is a localization at ${\mathcal}W$ then for any quasicategory $Q$, the pullback $f^*:{\mathrm{Ho}}(Q^T)\to{\mathrm{Ho}}(Q^S)$ is fully faithful, as we will use repeatedly below. Specifically, $f^*$ is an equivalence onto the full subcategory ${\mathrm{Ho}}(Q^S_{{\mathcal}W}){\subseteq}{\mathrm{Ho}}(Q^S)$, since the 2-functor ${\mathrm{Ho}}$ preserves equivalences. Let $\Delta\downarrow S$ be the category of simplices of a simplicial set $S$, and let $p_S:N(\Delta~\downarrow~S)~\to~S$ be the natural extension of the projection $(f:\Delta^m\to S)\mapsto f(m)$. Finally, let ${\mathcal}L_S$ be the class of arrows $a:(f:\Delta^m\to S)\to (g:\Delta^n\to S)$ in ${\Delta\downarrow{S}}$ such that $a(m)=n$, that is, the last-vertex maps. Then we have the following theorem: \[delocalization\] For any quasicategory $Q$, the last-vertex projection $p_Q$ exhibits $Q$ as an $\infty$-localization of the nerve $N(\Delta\downarrow Q)$ at the class ${\mathcal}L_Q$. Thus every quasicategory $Q$ is canonically a localization of its category $\Delta\downarrow Q$ of simplices. Observe that $N(\Delta\downarrow(-))$ constitutes an endofunctor of simplicial sets and that $p:N(\Delta\downarrow(-))\to \mathrm{id}_{{\mathbf{SSet}}}$ is a natural transformation. We turn to the proof. First, we must show that if $F:{\mathrm{HO}}(Q)\to{\mathrm{HO}}(R)$ is a pseudonatural transformation, then there exists $h:Q\to R$ and an isomorphism $\Lambda:{\mathrm{HO}}(h)\cong F$. Observe that, since $Q$ is small, $\Delta\downarrow Q$ is in ${\underline{\mathbf{Cat}}}$. 
Now we claim that $F_{{\Delta\downarrow{Q}}}(p_Q):N({\Delta\downarrow{Q}})\to R$ sends the class ${\mathcal}L_Q$ of last-vertex maps into equivalences in $R$. Indeed, if $\ell:\Delta^1\to {\Delta\downarrow{Q}}$ is in ${\mathcal}L_Q$, then we have, using $F$’s respect for 2-morphisms and the structure isomorphism $F_\ell$, $$F_{[0]}({\mathrm{dia}}(\ell^*p_Q))={\mathrm{dia}}( F_{[1]}(\ell^*p_Q))\cong{\mathrm{dia}}(\ell^*F_{{\Delta\downarrow{Q}}}(p_Q))$$ Thus ${\mathrm{dia}}(\ell^*F_{{\Delta\downarrow{Q}}}(p_Q))$ is an isomorphism in ${\mathrm{Ho}}(R)$, since ${\mathrm{dia}}(\ell^*p_Q)$ is an isomorphism in ${\mathrm{Ho}}(Q)$. Then using the delocalization theorem, we can define $h:Q\to R$ as any map admitting an isomorphism ${\sigma}:h\circ p_Q\cong F_{\Delta\downarrow Q}(p_Q)$. We must prove that ${\mathrm{HO}}(h)$ is isomorphic to $F$. From $\sigma$, we get an invertible modification ${\mathrm{HO}}(\sigma):{\mathrm{HO}}(h\circ p_Q)\Rightarrow {\mathrm{HO}}(F_{{\Delta\downarrow{Q}}}(p_Q)):{\mathrm{HO}}({\Delta\downarrow{Q}})\to {\mathrm{HO}}(R)$. Now for each $X:J\to Q$, we can define $\Lambda_{J,X}:h\circ X\cong F_J(X)$ uniquely by requiring $\Lambda_{J,X}*p_J$ to be the composition $$h\circ X\circ p_J=h\circ p_Q\circ {\Delta\downarrow{X}} {\cong}F_{{\Delta\downarrow{Q}}}(p_Q)\circ {\Delta\downarrow{X}} \cong F_{\Delta\downarrow J}(p_Q\circ \Delta\downarrow X)=F_{\Delta\downarrow J}(X\circ p_J)\cong F_J(X)\circ p_J$$ The naturality of $\Lambda_{J,X}$ in $X$ follows from the pseudonaturality of $F$. Specifically, of the three isomorphisms which compose $\Lambda_{J,X}$, the first is a component of one of the natural transformations making up the modification ${\mathrm{HO}}(\sigma)$, while the latter two are instances of the natural isomorphisms given as part of the structure of $F$. That gives us natural isomorphisms $\Lambda_J:{\mathrm{HO}}(h)_J\Rightarrow F_J$ for each $J$. To verify that the $\Lambda_J$ assemble into a modification, consider any $u:K\to J$. 
Then we must show that, for any $X:J\to Q$, the diagram $$\begin{tikzcd} h X u\ar[r,"\Lambda_{J,X}*u"] \ar[dr,"\Lambda_{K,X u}"]&F_J(X) u\ar[d,"F_u"]\\ &F_K(X u)\end{tikzcd}$$ commutes. Using, as always, full faithfulness of the pullback along a localization, we may precompose with $p_K$. Then the modification axiom is verified by the commutativity of the following diagram: $$\begin{tikzcd} hXup_K\ar[d,equals]\ar[r,"\Lambda_{J,X}*up_K"] & F_J(X)up_K\ar[d,equals]\\ hXp_J{\Delta\downarrow{u}}\ar[d,equals]\ar[r,"\Lambda_{J,X}*p_J{\Delta\downarrow{u}}"]& F_J(X) p_J{\Delta\downarrow{u}} \ar[r,equals] & F_J(X)up_K\ar[from=d,"F_u*p_K"]\\ hp_Q{\Delta\downarrow{X}}u\ar[d,equals]& F_{{\Delta\downarrow{J}}}(Xp_J){\Delta\downarrow{u}}\ar[u,"F_{p_J}*{\Delta\downarrow{u}}"]\ar[d,equals]& F_K(Xu)p_K\ar[from=d,"F_{p_K}"]\\ F_{{\Delta\downarrow{Q}}}(p_Q){\Delta\downarrow{Xu}}\ar[r,"F^{-1}_{{\Delta\downarrow{X}}}*{\Delta\downarrow{u}}"]& F_{{\Delta\downarrow{J}}}(p_Q{\Delta\downarrow{X}}){\Delta\downarrow{u}}&F_{{\Delta\downarrow{K}}}(Xup_K)\\ &F_{{\Delta\downarrow{K}}}(p_Q{\Delta\downarrow{Xu}})\ar[from=ul,"F^{-1}_{{\Delta\downarrow{Xu}}}"]\ar[u,"F_{{\Delta\downarrow{u}}}"]\ar[ur,equals] \end{tikzcd}$$ The upper left square commutes since $up_K=p_J{\Delta\downarrow{u}}$. The left central hexagon commutes by definition of $\Lambda_{J,X}$, and the lower left triangle and right-hand heptagon commute by functoriality of the pseudonaturality isomorphisms of $F$. Meanwhile, the outer route around the diagram from $hXup_K$ to $F_J(X)up_K$ is $F_u\Lambda_{K,Xu}$, while the inner route is $\Lambda_{J,X}*up_K$. So $\Lambda$ is an invertible modification ${\mathrm{HO}}(h){\cong}F$, as desired. Now we assume given a modification $\Xi:{\mathrm{HO}}(f)\Rightarrow{\mathrm{HO}}(g):{\mathrm{HO}}(Q)\to{\mathrm{HO}}(R)$, and must show there exists a unique $\xi:f\Rightarrow g$ with ${\mathrm{HO}}(\xi)=\Xi$. 
First, we consider $\Xi_{p_Q}:f\circ p_Q\to g\circ p_Q$, which is a morphism in the homotopy category ${\mathrm{Ho}}(R^{\Delta\downarrow Q})$. According to (Der5’), we can lift this to a map $\widehat{\Xi}_{p_Q}:\Delta\downarrow Q\to R^{\Delta^1}$ with ${\mathrm{dia}}(\widehat{\Xi}_{p_Q})=\Xi_{p_Q}.$ Since the domain and codomain $f\circ p_Q$ and $g\circ p_Q$ of $\widehat{\Xi}_{p_Q}$ invert the last-vertex maps ${\mathcal}L_Q$, by (Der\[Der2\]) so does $\widehat{\Xi}_{p_Q}$ itself, so that we have $\widehat{\Xi}':Q\to R^{\Delta^1}$ with an isomorphism $a:\widehat\Xi'*p_Q{\cong}\widehat\Xi_{p_Q}$. Restricting $a$ to the domain and codomain gives isomorphisms $0^*a:(0^*\widehat\Xi')p_Q{\cong}fp_Q$ and $1^*a:(1^*\widehat\Xi')p_Q{\cong}gp_Q$, which give rise to unique isomorphisms $i:0^*\widehat\Xi'{\cong}f$ and $j:1^*\widehat \Xi'{\cong}g$. Now let $\widehat\Xi:Q\to R^{\Delta^1}$ satisfy ${\mathrm{dia}}(\widehat\Xi)=j\circ {\mathrm{dia}}(\widehat\Xi')\circ i^{-1}$ and let $b:\widehat\Xi{\cong}\widehat\Xi'$ be an isomorphism lifting $(i^{-1},j^{-1}):{\mathrm{dia}}(\widehat\Xi)\to{\mathrm{dia}}(\widehat\Xi')$. Then $a\circ(b*p_Q):\widehat\Xi\circ p_Q\to \widehat{\Xi}_{p_Q}$ is an isomorphism with endpoints fixed, insofar as $0^*(b*p_Q)=i^{-1}*p_Q=0^*a^{-1}$ and similarly $1^*(b*p_Q)=1^*a^{-1}$. Thus $[\widehat\Xi\circ p_Q]=[\widehat \Xi_{p_Q}]$ in ${\mathrm{Ho}}(R^{{\Delta\downarrow{Q}}})$. Notice that any other choice $\widehat\Xi_2$ for $\widehat\Xi$ is homotopic to ours with endpoints fixed, since $\widehat\Xi_2*p_Q$ and $\widehat\Xi*p_Q$ are homotopic via the composition of their homotopies with $\widehat{\Xi}_{p_Q}$ and pullback along $p_Q$ is faithful. So $\xi:={\mathrm{dia}}(\widehat\Xi)$ is unique; it remains to show that it maps to $\Xi$ under ${\mathrm{HO}}$. To that end, we claim that for every $X:J\to Q$, $\xi * X=\Xi_X$. 
As above, it suffices to precompose $X$ with $p_J$, and then we have $$\xi*X*p_J={\mathrm{dia}}(\widehat\Xi)*p_Q*{\Delta\downarrow{X}}= {\mathrm{dia}}(\widehat\Xi\circ p_Q)\circ \Delta\downarrow X$$ $$=\Xi_{p_Q}*\Delta\downarrow X=\Xi_{p_Q\circ {\Delta\downarrow{X}}}=\Xi_{X\circ p_J}=\Xi_X*p_J$$ as desired. In the equations above we have used the 2-functoriality of ${\mathrm{HO}}(R)$, naturality of $p$, and the modification property of $\Xi$. So ${\mathrm{HO}}(\xi)=\Xi$, as was to be shown. Whitehead’s theorem for quasicategories {#section:whitehead} ======================================= In this section, we prove that ${\mathrm{HO}}:{\underline{\mathbf{QCAT}}}\to{\underline{\mathbf{PDer}}}$ detects equivalences, regardless of the choice of ${\underline{\mathbf{Dia}}}$. One does not hope to prove all of Theorem \[Mainthm2cat\] for arbitrary quasicategories, as is most intuitive to see in the case of ${\mathrm{HO}}:{\underline{\mathbf{QCat}}}\to{\underline{\mathbf{PDer}}}_{{\underline{\mathbf{HFin}}}}$. Since the 2-category ${\underline{\mathbf{HFin}}}$ of homotopy finite categories is small, prederivators with that domain form a strictly concrete 2-category in the sense that we have a 2-functor $U:{\underline{\mathbf{PDer}}}_{{\underline{\mathbf{HFin}}}}\to {\underline{\mathbf{Cat}}}$, faithful on 1- and 2-morphisms, given by $$U({\mathscr{D}})=\prod_{J\in{\underline{\mathbf{HFin}}}}{\mathscr{D}}(J)\times\prod_{u:K\to J}{\mathscr{D}}(J)^{[1]}$$ For $F:{\mathscr{D}}_1\to{\mathscr{D}}_2$, we have $U(F)=((F_J),(F_u:{\mathscr{D}}_1(J)^{[1]}\to{\mathscr{D}}_2(K)^{[1]}))$, while for $\Xi:F_1\Rightarrow F_2$, we have $U(\Xi)=((\Xi_J),(\Xi_u:(F_1)_u\Rightarrow (F_2)_u))$. The functor $F_u$ sends $f:X\to Y$ to the arrow $u^*F(X)\to F(u^*Y)$ which can be defined in two equivalent ways using the pseudonaturality isomorphisms of $F$. 
Similarly, the components of $\Xi_u$ are $u^*\Xi_X$ and $\Xi_{u^*Y}$, and it is straightforward to check that these objects are, respectively, a functor and a natural transformation. The reason for the unfamiliar $u$ terms in the definition of $U$ is that a pseudonatural transformation is not determined by its action on objects. Since ${\mathrm{HO}}:{\underline{\mathbf{QCat}}}\to{\underline{\mathbf{PDer}}}_{{\underline{\mathbf{HFin}}}}$ is faithful on 1-morphisms, if it were also faithful on 2-morphisms then ${\underline{\mathbf{QCat}}}$ would be a concrete 2-category. In perhaps more familiar terms, there would be no “phantom homotopies" between maps of quasicategories. That this should be the case strains credulity, given the famous theorem of Freyd [@freyd] that the category of spaces ${\mathbf{Hot}}$ is not concrete. We will use the main theorem of [@whiteheadforspaces], which says that the 2-category ${\underline{\mathbf{KAN}}}{\subseteq}{\underline{\mathbf{QCAT}}}$ of Kan complexes is strongly generated by the tori $(S^1)^n$, in the sense that a morphism $f:X\to Y$ of Kan complexes is a homotopy equivalence if and only if, for each $n$, the functor ${\underline{\mathbf{KAN}}}((S^1)^n,f)$ is an equivalence of groupoids. We rephrase this in a form more convenient for our purposes: \[whiteheadforspaces\] The restriction of ${\mathrm{HO}}:{\underline{\mathbf{QCAT}}}\to{\underline{\mathbf{PDer}}}_{{\underline{\mathbf{HFin}}}}$ to the 2-category ${\underline{\mathbf{KAN}}}$ reflects equivalences. Given $f:X\to Y$ in ${\underline{\mathbf{KAN}}}$, the image ${\mathrm{HO}}(f)$ is an equivalence in ${\underline{\mathbf{PDer}}}_{{\underline{\mathbf{HFin}}}}$ if and only if, for every homotopy finite category $J$, the induced functor ${\mathrm{Ho}}(f^{N(J)}):{\mathrm{Ho}}(X^{N(J)})\to {\mathrm{Ho}}(Y^{N(J)})$ is an equivalence. 
Since the classical model structure on simplicial sets is also Cartesian, we have equivalences ${\mathrm{Ho}}(X^{N(J)}){\simeq}{\mathrm{Ho}}(X^{RN(J)})$, and similarly for $Y$, where $R$ is a Kan fibrant replacement functor. Now, by Thomason’s theorem [@thomason], as $J$ varies, $RN(J)$ runs through all finite homotopy types. In particular, if ${\mathrm{HO}}(f)$ is an equivalence of prederivators, then $f$ induces equivalences ${\mathrm{Ho}}(X^{(S^1)^n})\to{\mathrm{Ho}}(Y^{(S^1)^n})$ for every $n$, which is to say, ${\underline{\mathbf{KAN}}}((S^1)^n,f)$ is an equivalence. Thus $f$ must be an equivalence. To make use of the above result to prove results on the relationship between quasicategories and their prederivators, we first recall what Rezk has described as the fundamental theorem of quasicategory theory. First, recall that a quasicategory $Q$ has mapping spaces $Q(x,y)$ for each $x,y\in Q$, which can be given various models. We shall use the balanced model in which we have $Q(x,y)=\{(x,y)\}\times_{Q\times Q} Q^{\Delta^1}$, so that an $n$-simplex of $Q(x,y)$ is a prism $\Delta^n\times\Delta^1$ in $Q$ which is degenerate on $x$ and $y$ at its respective endpoints. We say that a map $f:Q\to R$ of quasicategories is *fully faithful* if it induces an equivalence of Kan complexes $Q(x,y)\to R(f(x),f(y))$ for every $x,y\in Q$. It is *essentially surjective* if, for every $z\in R$, there exists $x\in Q$ and an edge $a:f(x)\to z$ which becomes an isomorphism in ${\mathrm{Ho}}(R)$. Then we have: A map $f:Q\to R$ of quasicategories is an equivalence in the sense of Definition \[equivalenceofquasicategories\] if and only if it is fully faithful and essentially surjective. Now we can prove our Whitehead theorem for quasicategories. \[whiteheadforquasicats\] Let $f:Q\to R$ be a map of quasicategories, and suppose that ${\mathrm{HO}}(f)$ is an equivalence of prederivators. Then $f$ is an equivalence of quasicategories. 
Since ${\mathrm{HO}}(f)$ is an equivalence on the base, $f$ is essentially surjective. Thus we have only to show $f$ is fully faithful. By Theorem \[whiteheadforspaces\], it suffices to show that ${\mathrm{HO}}(f)$ induces an equivalence of prederivators ${\mathrm{HO}}(Q(x,y))\cong {\mathrm{HO}}(R(f(x),f(y)))$ for every $x$ and $y$ in $Q$. What is more, since for any $J$ we have $Q(x,y)^{NJ}{\cong}Q^{NJ}(p_J^*x,p_J^*y)$, it suffices at last to show that $f$ induces equivalences $f_{x,y}:{\mathrm{Ho}}(Q(x,y))\to {\mathrm{Ho}}(R(f(x),f(y)))$ on the homotopy categories of mapping spaces. Essential surjectivity is proved via an argument that also appeared in the construction of $\widehat\Xi$ in the proof of Theorem \[Mainthm2cat\]. Namely, from essential surjectivity of ${\mathrm{HO}}(f)$, given any $X\in {\mathrm{Ho}}(R(f(x),f(y)))$ and any $Y\in {\mathrm{HO}}(Q)([1])$ with an isomorphism $s:{\mathrm{HO}}(f)(Y)\cong X$ in ${\mathrm{HO}}(R)([1])$, we see by conservativity and fullness of ${\mathrm{HO}}(f)$ that we have isomorphisms $0^*Y{\cong}x$ in ${\mathrm{HO}}(Q)([0])$ and, similarly, $1^*Y{\cong}y$. Composing these isomorphisms and ${\mathrm{dia}}Y$ in ${\mathrm{Ho}}(Q)$ gives a morphism $x\to y$ in ${\mathrm{Ho}}(Q)$ isomorphic to ${\mathrm{dia}}Y$ in ${\mathrm{Ho}}(Q)^{[1]}$. By (Der5’) and (Der\[Der2\]) we can lift this to an isomorphism $r:Y'\cong Y$ in ${\mathrm{HO}}(Q)([1])$ such that $0^*(s\circ{\mathrm{HO}}(f)(r))={\mathrm{id}}_{f(x)}$ and $1^*(s\circ{\mathrm{HO}}(f)(r))={\mathrm{id}}_{f(y)}$. This implies that $s\circ {\mathrm{HO}}(f)(r)$ may be lifted to an isomorphism ${\mathrm{HO}}(f)(Y'){\cong}X$ in ${\mathrm{Ho}}(R(f(x),f(y)))$. Thus $f_{x,y}$ is essentially surjective. 
For fullness, we observe that if $a:Y_1\to Y_2$ in ${\mathrm{HO}}(Q)([1])$, where $Y_1,Y_2:x\to y$, satisfies $0^*{\mathrm{HO}}(f)(a)={\mathrm{id}}_{f(x)}$ and $1^*{\mathrm{HO}}(f)(a)={\mathrm{id}}_{f(y)}$, then we have also $0^*(a)={\mathrm{id}}_x$ and $1^*a={\mathrm{id}}_y$, since ${\mathrm{HO}}(f)$ is faithful. This implies that $a$ can be lifted to a morphism $a':Y_1\to Y_2$ in ${\mathrm{Ho}}(Q(x,y))$ with $f_{x,y}(a')={\mathrm{HO}}(f)(a)$. And since ${\mathrm{HO}}(f)$ is full, every morphism ${\mathrm{HO}}(f)(Y_1)\to {\mathrm{HO}}(f)(Y_2)$ in ${\mathrm{Ho}}(R(f(x),f(y)))$ is equal to ${\mathrm{HO}}(f)(a)$ in ${\mathrm{HO}}(R)([1])$, for some $a$. Finally, we turn to faithfulness. Suppose we have morphisms $a,b:Y_1\to Y_2$ in ${\mathrm{Ho}}(Q(x,y))$ with $f_{x,y}(a)=f_{x,y}(b)$ in ${\mathrm{Ho}}(R(f(x),f(y)))$. We wish to show $a=b$. First, we may represent $a$ and $b$ by $\hat a,\hat b\in {\mathrm{HO}}(Q)( [1]\times [1])$, each with boundary $$\begin{tikzcd} x\ar[r,"Y_1"]\ar[d,equals]&y\ar[d,equals]\\ x\ar[r,"Y_2"]&y \end{tikzcd}$$ Let ${\partial}[2]$ denote the category on objects $0,1,2$ freely generated by *three* arrows $0\to 1,1\to 2,0\to 2$, so that $N{\partial}[2]$ is Joyal equivalent to ${\partial}\Delta^2$. The lifts $\hat a$ and $\hat b$ fit together in a diagram $W\in{\mathrm{HO}}(Q)([1]\times {\partial}[2])$ with $(01)^* W=q^*Y_1$, $(02)^*W=\hat a$, and $(12)^*W=\hat b$, where $q:[1]\times[1]\to [1]$ projects out the last coordinate. The significance of $W$ is that we have $a=b$ if and only if $W$ admits an extension $Z$ to ${\mathrm{HO}}(Q)([1]\times [2])$ such that $Z|_{\{0\}\times [2]}=p_{[2]}^* x$ and $Z|_{\{1\}\times [2]}=p_{[2]}^* y$. 
It suffices to exhibit $W'\in {\mathrm{HO}}(Q)([1]\times {\partial}[2])$ with $W'|_{0\times {\partial}[2]}=p_{{\partial}[2]}^*x$ and $W'|_{1\times {\partial}[2]}=p_{{\partial}[2]}^*y$ admitting such an extension $Z'$, together with an isomorphism $t:W\to W'$ in ${\mathrm{HO}}(Q)([1]\times{\partial}[2])$ such that $t|_{{0}\times {\partial}[2]}={\mathrm{id}}_{p_{{\partial}[2]}^* x}$ and $t|_{{1}\times {\partial}[2]}={\mathrm{id}}_{p_{{\partial}[2]}^* y}$. Indeed, in this situation $W$ and $W'$ both represent maps from $S^1$ to the Kan complex $Q(x,y)$, $Z$ and $Z'$ represent putative extensions to $\Delta^2$, and $t$ represents a homotopy between them. In particular, since by assumption ${\mathrm{HO}}(f)(a)={\mathrm{HO}}(f)(b)$ in ${\mathrm{Ho}}(R(f(x),f(y)))$, there exists an extension $T$ of ${\mathrm{HO}}(f)(W)$ to ${\mathrm{HO}}(R)([1]\times [2])$ with trivial endpoints, as above. Now take $\hat T\in {\mathrm{HO}}(Q)( [1]\times[2])$ with an isomorphism $s:{\mathrm{HO}}(f)(\hat T){\cong}T$. In particular, this gives isomorphisms ${\mathrm{HO}}(f)(\hat T)|_{\{0\}\times [2]}{\cong}p_{[2]}^*f(x)$ and ${\mathrm{HO}}(f)(\hat T)|_{\{1\}\times [2]}{\cong}p_{[2]}^*f(y)$ in ${\mathrm{HO}}(R)([2])$, which lift uniquely to isomorphisms $\hat T|_{\{0\}\times [2]}{\cong}p_{[2]}^*x$ and $\hat T|_{\{1\}\times [2]}{\cong}p_{[2]}^*y$ in ${\mathrm{HO}}(Q)([2])$. Composing these isomorphisms with ${\mathrm{dia}}\hat T$ and lifting into ${\mathrm{HO}}(Q)([1]\times [2])$ gives $Z'\in {\mathrm{HO}}(Q)([1]\times [2])$ with $Z'|_{\{0\}\times [2]}=p_{[2]}^*x$ and $Z'|_{\{1\}\times [2]}=p_{[2]}^*y$, together with an isomorphism $t':{\mathrm{HO}}(f)( Z'){\cong}T$ in ${\mathrm{HO}}(R)([1]\times[2])$ inducing the identity on $p_{[2]}^*f(x)$ and $p_{[2]}^*f(y)$, respectively. 
Restricting $t'$ to $[1]\times {\partial}[2]$ and lifting to ${\mathrm{HO}}(Q)([1]\times{\partial}[2])$ specifies an isomorphism $t:Z'|_{[1]\times{\partial}[2]}{\cong}W$ such that $t|_{{0}\times {\partial}[2]}={\mathrm{id}}_{p_{{\partial}[2]}^* x}$ and $t|_{{1}\times {\partial}[2]}={\mathrm{id}}_{p_{{\partial}[2]}^* y}$. As we saw above, this suffices to guarantee that $W$ admits an extension $Z$ as desired.
--- abstract: | This work is focused on the dissipative system $$\begin{cases} \ptt u+\partial_{xxxx}u +\partial_{xx}\theta-\big(\beta+\|\partial_x u\|_{L^2(0,1)}^2\big)\partial_{xx}u=f\\ \noalign{\vskip.7mm} \pt \theta -\partial_{xx}\theta -\partial_{xxt} u= g \end{cases}$$ describing the dynamics of an extensible thermoelastic beam, where the dissipation is entirely contributed by the second equation ruling the evolution of $\theta$. Under natural boundary conditions, we prove the existence of bounded absorbing sets. When the external sources $f$ and $g$ are time-independent, the related semigroup of solutions is shown to possess the global attractor of optimal regularity for all parameters $\beta\in\R$. The same result holds true when the first equation is replaced by $$\ptt u-\gamma\partial_{xxtt} u+\partial_{xxxx}u +\partial_{xx}\theta-\big(\beta+\|\partial_x u\|_{L^2(0,1)}^2\big)\partial_{xx}u=f$$ with $\gamma>0$. In both cases, the solutions on the attractor are strong solutions. address: - 'Università di Brescia - Dipartimento di Matematica Via Valotti 9, 25133 Brescia, Italy' - 'Politecnico di Milano - Dipartimento di Matematica “F. Brioschi” Via Bonardi 9, 20133 Milano, Italy' - 'Kharkov National University - Department of Mathematics and Mechanics 4 Svobody sq, 61077 Kharkov, Ukraine' author: - 'C. Giorgi, M.G. Naso, V. Pata, M. Potomkin' title: | Global Attractors for the Extensible\ Thermoelastic Beam System --- Introduction ============ For $t>0$, we consider the evolution system $$\label{PROB} \begin{cases} \displaystyle \ptt u+\partial_{xxxx}u +\partial_{xx}\theta-\Big(\beta+\int_0^1 |\partial_x u(x,\cdot)|^2\d x\Big)\partial_{xx}u=f,\\ \noalign{\vskip.7mm} \pt \theta -\partial_{xx}\theta -\partial_{xxt} u= g, \end{cases}$$ in the unknown variables $u=u(x,t):[0,1]\times\R^+\to\R$ and $\theta=\theta(x,t):[0,1]\times\R^+\to\R$, having put $\R^+=[0,\infty)$. 
The two equations are supplemented with the initial conditions $$\label{IC} \begin{cases} u(x,0)=u_0(x),\\ \pt u(x,0)=u_1(x),\\ \theta(x,0)=\theta_0(x), \end{cases}$$ for every $x\in[0,1]$, where $u_0$, $u_1$ and $\theta_0$ are assigned data. System \eqref{PROB} describes the vibrations of an extensible thermoelastic beam of unit natural length, and is obtained by combining the pioneering ideas of Woinowsky-Krieger [@W] with the theory of linear thermoelasticity [@CAR]. Although a rigorous variational derivation of the model will be addressed in a forthcoming paper, it is worth noting that \eqref{PROB} is a mild quasilinear version of the nonlinear motion equations devised in [@LLS §3]. With regard to the physical meaning of the variables in play, $u$ represents the vertical deflection of the beam from its configuration at rest, while the ``temperature variation'' $\theta$ actually arises from an approximation of the temperature variation with respect to a reference value, and it has the dimension of a temperature gradient (see [@LLS]). The real function $f=f(x,t)$ is the lateral load distribution and $g=g(x,t)$ is the external heat supply, having the role of a control function. Finally, the parameter $\beta\in\R$ accounts for the axial force acting in the reference configuration: $\beta>0$ when the beam is stretched, $\beta<0$ when compressed. Concerning the boundary conditions, for all $t\geq 0$ we assume $$\label{BC} \begin{cases} u(0,t)=u(1,t)=\partial_{xx}u(0,t)=\partial_{xx}u(1,t)=0,\\ \theta(0,t)=\theta(1,t)=0. \end{cases}$$ Namely, we take Dirichlet boundary conditions for the temperature variation $\theta$ and hinged boundary conditions for the vertical deflection $u$. The focus of this paper is the study of the longterm properties of the dynamical system generated by problem \eqref{PROB}-\eqref{BC} in the natural weak energy phase space. In particular, for the autonomous case, we prove the existence of the global attractor of optimal regularity, for all values of the real parameter $\beta$. 
The main difficulty arising in the asymptotic analysis comes from the very weak dissipation exhibited by the model, entirely contributed by the thermal component, whereas the mechanical component, by itself, does not cause any decrease of energy. Hence, the loss of mechanical energy is due only to the coupling, which propagates the thermal dissipation to the mechanical component with the effect of producing mechanical dissipation. From the mathematical side, in order to obtain stabilization properties it is necessary to introduce sharp energy functionals, which allow one to exploit the thermal dissipation in its full strength. A similar situation has been faced in [@AL1; @AL2; @AL3; @CL; @CHLA02; @CHLA08; @GIOP; @LAS; @POT], dealing with linear and semilinear thermoelastic problems without mechanical dissipation. Along the same line, we also mention [@LMS], which considers a quasilinear thermoelastic plate system. After this work was finished, we learned of a paper by Bucci and Chueshov [@BUC], which treats (actually, in a more general version) the same problem discussed here. In [@BUC], borrowing some techniques from the recent article [@CHLA08], the authors prove the existence of the global attractor of optimal regularity and finite fractal dimension for the semigroup generated by the autonomous version of \eqref{PROB}-\eqref{BC}. The existence of the (regular) attractor is also shown in the presence of a rotational term in the first equation, whose dynamics then becomes of hyperbolic type. The proofs of [@BUC] heavily rely on two basic facts: a key estimate, nowadays known in the literature as [*stabilizability inequality*]{} (cf. [@CL; @CHLA08]), and the gradient system structure featured by the model. The regularity of the attractor is demonstrated only at a later stage, exploiting the peculiar form of the attractor itself: a section of all bounded complete orbits of the semigroup. 
Nevertheless, we still believe that our paper might be of some interest, at least for the following reasons: - We do not appeal to the gradient system structure, except for the characterization of the attractor as the unstable set of stationary solutions. Accordingly, the existence of absorbing sets is established via explicit energy estimates, providing a precise (uniform) control on the entering times of the trajectories. The method applies to the nonautonomous case as well, where the gradient system structure is lost. - Our proof of asymptotic compactness is rather direct and simpler than in [@BUC]. Indeed, it merely boils down to the construction of a suitable decomposition of the semigroup. Incidentally, the required regularity is gained in a single step, without making use of bootstrapping arguments. - As a matter of fact, we prove a stronger result: we find exponentially attracting sets of optimal regularity, obtaining at the same time the attractor and its regularity. Having such exponentially attracting sets, it is possible to show with little effort the existence of regular exponential attractors in the sense of [@EMZ], having finite fractal dimension. - In the rotational case, where the first equation contains the extra term $-\gamma\partial_{xxtt}u$ with $\gamma>0$, we improve the regularity of the attractor obtained in [@BUC]. Plan of the paper {#plan-of-the-paper .unnumbered} ----------------- In Section 2, we consider an abstract generalization of \eqref{PROB}-\eqref{BC}, whose solutions are generated by a family of solution operators $S(t)$. We also discuss the changes needed in the abstract framework to take into account other types of boundary conditions for $u$. Section 3 is devoted to the existence of an absorbing set for $S(t)$. In Section 4, we dwell on the autonomous case, where $S(t)$ is a semigroup, establishing the existence and the regularity of the global attractor. The proofs are carried out in the subsequent Section 5. 
In the last Section 6, we extend the results to a more general model, where an additional rotational inertia term is present. The Abstract Problem ==================== Notation -------- Let $(H,\l\cdot,\cdot\r,\|\cdot\|)$ be a separable real Hilbert space, and let $A:H\to H$ be a strictly positive selfadjoint operator with domain $\D(A)\Subset H$. For $r\in\R$, we introduce the scale of Hilbert spaces generated by the powers of $A$ $$H^r=\D(A^{r/4}),\quad \l u,v\r_r=\l A^{r/4}u,A^{r/4}v\r,\quad \|u\|_r=\|A^{r/4}u\|.$$ We will always omit the index $r$ when $r=0$. The symbol $\l\cdot,\cdot\r$ will also be used to denote the duality product between $H^r$ and its dual space $H^{-r}$. In particular, we have the compact embeddings $H^{r+1}\Subset H^r$, along with the generalized Poincaré inequalities $$\lambda_1\|u\|_r^4\leq \|u\|_{r+1}^4,\quad\forall u\in H^{r+1},$$ where $\lambda_1>0$ is the first eigenvalue of $A$. Finally, we define the product Hilbert spaces $$\H^r=H^{r+2}\times H^r\times H^r.$$ Formulation of the problem -------------------------- For $\beta\in\R$, we consider the abstract Cauchy problem on $\H$ in the unknown variables $u=u(t)$ and $\theta=\theta(t)$ $$\label{BASE} \begin{cases} \ptt u+Au-A^{1/2}\theta+ \big(\beta+\|u\|^2_1\big)A^{1/2}u= f(t),\quad t>0,\\ \noalign{\vskip.7mm} \pt \theta+A^{1/2}\theta+A^{1/2}\pt u=g(t),\quad t>0,\\ u(0)=u_0,\quad \pt u(0)=u_1, \quad\theta(0)=\theta_0. \end{cases}$$ The following well-posedness result holds. 
\[EU\] Assume that $$f\in L^1_{\rm loc}(\R^+,H),\quad g\in L^1_{\rm loc}(\R^+,H)+L^2_{\rm loc}(\R^+,H^{-1}).$$ Then, for all initial data $(u_0,u_1,\theta_0)\in\H$, problem \eqref{BASE} admits a unique (weak) solution $$(u(t),\pt u(t),\theta(t))\in\C(\R^+,\H)$$ with $$(u(0),\pt u(0),\theta(0))=(u_0,u_1,\theta_0).$$ Moreover, calling $\bar z(t)$ the difference of any two solutions corresponding to initial data having norm less than or equal to $R\geq 0$, there exists $C=C(R)\geq 0$ such that $$\|\bar z(t)\|_\H \leq C\e^{C t}\|\bar z(0)\|_\H,\quad\forall t\geq 0.$$ We omit the proof, based on a standard Galerkin approximation procedure together with a slight generalization of the usual Gronwall lemma (cf. [@PPV]). Proposition \[EU\] translates into the existence of the [*solution operators*]{} $$S(t):\H\to\H,\quad t\geq 0,$$ acting as $$z=(u_0,u_1,\theta_0)\mapsto S(t)z=(u(t),\pt u(t),\theta(t)),$$ and satisfying the joint continuity property $$(t,z)\mapsto S(t)z\in\C(\R^+\times\H,\H).$$ In the autonomous case, namely, when both $f$ and $g$ are time-independent, the family $S(t)$ fulfills the semigroup property $$S(t+\tau)=S(t)S(\tau),\quad\forall t,\tau\geq 0.$$ Thus, $S(t)$ is a strongly continuous semigroup of operators on $\H$. 
We define the [*energy*]{} at time $t\geq 0$ corresponding to the initial data $z=(u_0,u_1,\theta_0)\in\H$ as $$\E(t)=\frac12\|S(t)z\|^2_\H+\frac14\big(\beta+\|u(t)\|_1^2\big)^2.$$ Multiplying the first equation of \eqref{BASE} by $\pt u$ and the second one by $\theta$, we find the [*energy identity*]{} $$\label{E} \ddt \E+\|\theta\|_1^2=\l \pt u,f\r+\l\theta,g\r.$$ Indeed, $$\begin{aligned} \frac14\ddt\big(\beta+\|u\|_1^2\big)^2 &=\frac12\big(\beta+\|u\|_1^2\big) \ddt\big(\beta+\|u\|_1^2\big)\\ &=\frac12\big(\beta+\|u\|_1^2\big)\ddt \|u\|_1^2 =\big(\beta+\|u(t)\|_1^2\big)\l A^{1/2}u,\pt u\r.\end{aligned}$$ As a consequence, for every $T>0$, there exists a positive increasing function $\Q_T$ such that $$\label{TFIN} \E(t)\leq \Q_T(\E(0)),\quad\forall t\in [0,T].$$ The concrete problem: other boundary conditions ----------------------------------------------- The abstract system \eqref{BASE} serves as a model to describe quite general situations, including thermoelastic plates. In particular, problem \eqref{PROB}-\eqref{BC} is a concrete realization of \eqref{BASE}, obtained by putting $H=L^2(0,1)$ and $A=\partial_{xxxx}$ with domain $$\D(A)=\big\{u\in H^4(0,1):u(0)=u(1)=\partial_{xx} u(0)=\partial_{xx} u(1)=0\big\}.$$ In this case, $$A^{1/2}=-\partial_{xx},\quad\D(A^{1/2}) =H^2(0,1)\cap H_0^1(0,1).$$ However, although we supposed in \eqref{BC} that both ends of the beam are hinged, different boundary conditions for $u$ are physically significant as well, such as $$\label{CLAMP} u(0,t)=u(1,t)=\partial_{x}u(0,t)=\partial_{x}u(1,t)=0,$$ when both ends of the beam are clamped, or $$\label{MIX} u(0,t)=u(1,t)=\partial_{x}u(0,t)=\partial_{xx}u(1,t)=0,$$ when one end is clamped and the other one is hinged. On the contrary, the so-called cantilever boundary condition (one end clamped and the other one free) does not comply with the extensibility assumption, since no geometric constraints compel the beam length to change. 
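For illustration (an addition of ours, not part of the original argument; the parameter values are arbitrary), the energy identity can be tested numerically on a one-mode Galerkin truncation of the abstract system with $f=g=0$: writing $u=a(t)e$ and $\theta=c(t)e$ for a unit eigenvector $Ae=\mu e$, so that $A^{1/2}e=\sqrt{\mu}\,e$ and $\|\theta\|_1^2=\sqrt{\mu}\,c^2$, the identity reduces to $\frac{\d}{\d t}\E+\sqrt{\mu}\,c^2=0$.

```python
import math

MU, BETA = 4.0, -1.0      # mu = eigenvalue of A on the retained mode (illustrative values)
S = math.sqrt(MU)         # A^{1/2} acts as sqrt(mu) on this mode

def rhs(y):
    # One-mode truncation of the abstract system with f = g = 0:
    #   a'' + mu*a - sqrt(mu)*c + (beta + sqrt(mu)*a^2)*sqrt(mu)*a = 0
    #   c' + sqrt(mu)*c + sqrt(mu)*a' = 0
    a, v, c = y
    return (v,
            -MU * a + S * c - (BETA + S * a * a) * S * a,
            -S * c - S * v)

def energy(y):
    # E = (1/2)(||u||_2^2 + ||u_t||^2 + ||theta||^2) + (1/4)(beta + ||u||_1^2)^2
    a, v, c = y
    return 0.5 * (MU * a * a + v * v + c * c) + 0.25 * (BETA + S * a * a) ** 2

def simulate(y0=(1.0, 0.0, 0.5), dt=1e-3, T=2.0):
    """RK4 integration; returns E(0), E(T) and the dissipation integral."""
    y, diss = y0, 0.0
    e0 = energy(y)
    for _ in range(int(T / dt)):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * dt * k1[i] for i in range(3)))
        k3 = rhs(tuple(y[i] + 0.5 * dt * k2[i] for i in range(3)))
        k4 = rhs(tuple(y[i] + dt * k3[i] for i in range(3)))
        ynew = tuple(y[i] + dt * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6
                     for i in range(3))
        # trapezoidal rule for the dissipation term ||theta||_1^2 = sqrt(mu)*c^2
        diss += 0.5 * dt * (S * y[2] ** 2 + S * ynew[2] ** 2)
        y = ynew
    return e0, energy(y), diss
```

Up to quadrature error, $\E(T)+\int_0^T\|\theta\|_1^2\,\d t=\E(0)$, and the energy decreases, reflecting the purely thermal dissipation.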
In order to write an abstract formulation accounting also for the boundary conditions \eqref{CLAMP} and \eqref{MIX}, let $A_\star:\D(A_\star)\Subset H\to H$ be another selfadjoint strictly positive operator. Accordingly, for $r\in\R$, we have the further scale of Hilbert spaces $$H^r_\star=\D(A_\star^{r/4}),\quad \l u,v\r_{r,\star}=\l A^{r/4}_\star u,A^{r/4}_\star v\r,\quad \|u\|_{r,\star}=\|A^{r/4}_\star u\|.$$ Defining $$\H^r_\star=H^{r+2}_\star\times H^r\times H^r,$$ we consider the evolution system $$\label{BASEOTHER} \begin{cases} \ptt u+A_\star u-A^{1/2}\theta+ \big(\beta+\|u\|^2_1\big)A^{1/2}u= f,\\ \noalign{\vskip.7mm} \pt \theta+A^{1/2}\theta+A^{1/2}\pt u=g,\\ \end{cases}$$ with initial data $(u_0,u_1,\theta_0)\in\H_\star$, and we look for solutions $$(u(t),\pt u(t),\theta(t))\in\C(\R^+,\H_\star).$$ If there exists a dense subspace $D$ of $H$, contained in $\D(A_\star)\cap\D(A)$ and such that $$\label{relabell} A_\star u=A u\in D,\quad\forall u\in D,$$ we can work within a suitable Galerkin approximation scheme, using an orthonormal basis of $H$ made of elements in $D$. Then, exploiting the equality $$\|u\|^2_{2,\star}=\l A_\star u,u\r=\l A u,u\r=\|u\|^2_{2},$$ and, in turn, the interpolation inequalities $$\|u\|^2_{1} \leq \eps\|u\|_{2,\star}^2+\frac1{4\eps}\|u\|^2, \quad \|u\|^2_{3} \leq \eps\|u\|_{4,\star}^2+\frac1{4\eps}\|u\|_{2,\star}^2,$$ valid for every $u\in D$, we can adapt the proofs of the subsequent sections, in order to establish the existence of the global attractor. This abstract scheme applies to the concrete case of the extensible thermoelastic beam with boundary conditions for $u$ of the form \eqref{CLAMP} or \eqref{MIX}, upon choosing $H=L^2(0,1)$, $A_\star=\partial_{xxxx}$ with domain $$\D(A_\star) =\begin{cases} H^4(0,1)\cap H^2_0(0,1) & \text{b.c.\ \eqref{CLAMP}},\\ \big\{u\in H^4(0,1)\cap H_0^1(0,1):\partial_{x}u(0)=\partial_{xx}u(1)=0\big\} & \text{b.c.\ \eqref{MIX}}, \end{cases}$$ and setting, for instance, $D=\C^\infty_{{\rm cpt}}(0,1)$. 
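As a numerical sanity check on the hinged realization (our illustration; the grid size and tolerances are our choices), one can verify on a finite-difference grid that the discrete biharmonic operator is the square of the Dirichlet Laplacian, that its first eigenvalue approximates $\lambda_1=\pi^4$, and that the first interpolation inequality holds with the non-starred norms:

```python
import math
import numpy as np

n = 200                                   # interior grid points on (0, 1)
h = 1.0 / (n + 1)
# Hinged beam: A^{1/2} = -d^2/dx^2 with Dirichlet conditions; discretize it
# by the standard second-difference matrix, then square it to get A = d^4/dx^4.
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
A = L @ L

# First eigenvalue of A should approximate lambda_1 = pi^4 ~ 97.409.
lam1 = np.linalg.eigvalsh(A).min()

# Discrete check of ||u||_1^2 <= eps*||u||_2^2 + ||u||^2/(4*eps) on random
# vectors, with ||u||^2 = <u,u>, ||u||_1^2 = <L u, u>, ||u||_2^2 = ||L u||^2.
rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    u = rng.standard_normal(n)
    n0, n1, n2 = u @ u, u @ (L @ u), (L @ u) @ (L @ u)
    for eps in (0.01, 0.1, 1.0):
        ok &= n1 <= eps * n2 + n0 / (4.0 * eps)
```

The inequality is just Cauchy-Schwarz followed by Young's inequality, so it holds with a wide margin on every sample.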
The Absorbing Set ================= In this section, we prove the existence of an absorbing set for the family $S(t)$. This is a bounded set $\BB\subset \H$ with the following property: for every $R\geq 0$, there is an [*entering time*]{} $t_R\geq 0$ such that $$\bigcup_{t\geq t_R}S(t)z\subset \BB,$$ whenever $\|z\|_\H\leq R$. In fact, we establish a more general result. \[thmABS\] Let $f\in L^\infty(\R^+,H)$, and let $\pt f$ and $g$ be translation bounded functions in $L^2_{\rm loc}(\R^+,H^{-1})$, that is, $$\label{tb} \sup_{t\geq 0}\int_t^{t+1}\big\{\|\pt f(\tau)\|^2_{-1} +\|g(\tau)\|^2_{-1}\big\}\d\tau<\infty.$$ Then, there exists $R_0>0$ with the following property: for every $R\geq 0$, there is $t_0=t_0(R)\geq 0$ such that $$\E(t)\leq R_0,\quad \forall t\geq t_0,$$ whenever $\E(0)\leq R$. Both $R_0$ and $t_0$ can be explicitly computed. The absorbing set, besides giving a first rough estimate of the dissipativity of the system, is the preliminary step to prove the existence of much more interesting objects describing the asymptotic dynamics, such as global or exponential attractors (see, for instance, [@BV; @CV; @CVbook; @Chbook; @HAL; @MZ; @TEM]). Unfortunately, in certain situations where the dissipation is very weak, a direct proof of the existence of the absorbing set via [*explicit*]{} energy estimates might be very hard to find. On the other hand, for a quite general class of autonomous problems (the so-called gradient systems), it is possible to use an alternative approach and overcome this obstacle, appealing to the existence of a Lyapunov functional (see [@CP; @HAL; @LAD]). In this case, if the semigroup possesses suitable smoothing properties, one obtains right away the global attractor, and the absorbing set is then recovered as a byproduct. However, the procedure provides no quantitative information on the entering time $t_R$, which is somewhat unsatisfactory, especially in view of numerical simulations. 
This technique has been successfully adopted in the recent paper [@GPV], concerned with the longterm analysis of an integrodifferential equation with low dissipation, modelling the transversal motion of an extensible viscoelastic beam. As mentioned in the introduction, the problem considered in the present work is also weakly dissipative. But if we assume $f$ and $g$ independent of time, there is a way to define a Lyapunov functional (actually, for an equivalent problem), which would allow one to exploit the method described above. In any case, in order to exhibit an actual bound on $t_R$, and also to deal with time-dependent external forces, a direct proof of Theorem \[thmABS\] would be much more desirable. However, due to the presence of the coupling term, when performing the standard (and unavoidable) estimates, some ``pure'' energy terms having a power strictly greater than one pop up with the wrong sign. Such terms cannot be handled by means of standard Gronwall-type lemmas. Nonetheless, we are still able to establish the result, leaning on the following novel Gronwall-type lemma with parameter devised in [@GPZ]. \[superl\] Let $\Lambda:\R^+\to\R^+$ be an absolutely continuous function satisfying, for some $K\geq 0$, $Q\geq 0$, $\eps_0>0$ and every $\eps\in(0,\eps_0]$, the differential inequality $$\ddt\Lambda(t)+\eps \Lambda(t)\leq K\eps^2 [\Lambda(t)]^{3/2}+\eps^{-2/3}\varphi(t),$$ where $\varphi:\R^+\to\R^+$ is any locally summable function such that $$\sup_{t\geq 0}\int_t^{t+1}\varphi(\tau)\d \tau\leq Q.$$ Then, there exist $R_1>0$ and $\kappa>0$ such that, for every $R\geq 0$, it follows that $$\Lambda(t)\leq R_1,\quad\forall t\geq R^{1/\kappa}(1+\kappa Q)^{-1},$$ whenever $\Lambda(0)\leq R$. Both $R_1$ and $\kappa$ can be explicitly computed in terms of the constants $K,Q$ and $\eps_0$ (cf. [@GPZ]). We are now ready to proceed to the proof of the theorem. 
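Before the proof, a trivial numerical spot-check (added by us; the function name and sampled ranges are ours) of the algebraic identity used there, namely $\frac12(\beta+x)^2-\beta(\beta+x)=\frac12 x^2-\frac12\beta^2$ with $x=\|u\|_1^2$:

```python
import random

def identity_gap(beta, x):
    # LHS - RHS of the identity used in the proof below (with x = ||u||_1^2 >= 0):
    #   (1/2)(beta + x)^2 - beta*(beta + x) = (1/2)*x^2 - (1/2)*beta^2
    lhs = 0.5 * (beta + x) ** 2 - beta * (beta + x)
    rhs = 0.5 * x ** 2 - 0.5 * beta ** 2
    return lhs - rhs

random.seed(0)
gaps = [abs(identity_gap(random.uniform(-10.0, 10.0), random.uniform(0.0, 10.0)))
        for _ in range(1000)]
```

The gap vanishes identically, up to floating-point rounding.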
Here and in the sequel, we will tacitly use the Young and Hölder inequalities several times, besides the usual Sobolev embeddings. The generic positive constant $C$ appearing in this proof may depend on $\beta$ and $\|f\|_{L^\infty(\R^+,H)}$. On account of \eqref{E}, the functional $$\L(t)=\E(t)-\l u(t),f(t)\r$$ satisfies the differential equality $$\ddt \L+\|\theta\|_1^2=-\l u,\pt f \r+\l\theta, g\r.$$ Observing that $$\label{immediate} \|u\|_1\leq C|\beta+\|u\|_1^2|^{1/2}+C|\beta|^{1/2}\leq C\E^{1/4}+C,$$ we have the control $$-\l u,\pt f\r\leq C\eps^{2/3} \E^{1/2}+\eps^{-2/3}\|\pt f\|^2_{-1} \leq C\eps^2 \E^{3/2}+\eps^{-2/3}\|\pt f\|^2_{-1}+C,$$ for all $\eps\in(0,1]$. Moreover, $$\l \theta,g\r\leq \frac12\|\theta\|_1^2+C\|g\|^2_{-1}.$$ Thus, we obtain the differential inequality $$\label{L} \ddt \L+\frac12\|\theta\|_1^2 \leq C\eps^2 \E^{3/2}+\eps^{-2/3}\|\pt f\|^2_{-1}+C\|g\|^2_{-1}+C.$$ Next, we consider the auxiliary functionals $$\Phi(t)=\l\pt u(t),u(t)\r,\quad \Psi(t)=\l \pt u(t),\theta(t)\r_{-1}.$$ Concerning $\Phi$, we have $$\ddt\Phi+\|u\|_2^2+\big(\beta+\|u\|_1^2\big)^2 -\beta\big(\beta+\|u\|_1^2\big) =\|\pt u\|^2+\l u,\theta\r_1+\l u,f\r.$$ Noting that $$\frac12\big(\beta+\|u\|_1^2\big)^2 -\beta\big(\beta+\|u\|_1^2\big)=\frac12\|u\|_1^4-\frac12 \beta^2\geq -C,$$ and $$\l u,\theta\r_1+\l u,f\r\leq \frac14\|u\|_2^2+C\|\theta\|_1^2+C,$$ we are led to $$\label{PSI1} \ddt\Phi+\frac34\|u\|_2^2+\frac12\big(\beta+\|u\|_1^2\big)^2 \leq \|\pt u\|^2+C\|\theta\|_1^2+C.$$ Turning to $\Psi$, we have the differential equality $$\ddt\Psi+\|\pt u\|^2=\|\theta\|^2 -\l\pt u,\theta\r-\l u,\theta\r_1+\l\pt u,g\r_{-1}+\l\theta,f\r_{-1}+\J,$$ having put $$\J=-\big(\beta+\|u\|_1^2\big)\l u,\theta\r.$$ We easily see that $$\begin{aligned} &\|\theta\|^2 -\l\pt u,\theta\r-\l u,\theta\r_1+\l g,\pt u\r_{-1}+\l f,\theta\r_{-1}\\ &\leq\frac{1}{8}\|u\|_2^2 + \frac14 \|\pt u \|^2 +C\|\theta\|_1^2+C\|g\|^2_{-1}+C,\end{aligned}$$ whereas, in light of \eqref{immediate}, the remaining term $\J$ is controlled 
as $$\J\leq C\|\theta\|\big(\|u\|_1^3+1\big)\leq C\|\theta\|\E^{3/4} +C\|\theta\| \leq C\|\theta\|\E^{3/4}+C\|\theta\|_1^2+C.$$ In conclusion, $$\label{PSI2} \ddt\Psi+\frac34\|\pt u\|^2 \leq \frac{1}{8}\|u\|_2^2+C\|\theta\|_1^2+C\|\theta\|\E^{3/4}+C\|g\|^2_{-1}+C.$$ Collecting \eqref{PSI1}-\eqref{PSI2}, we end up with $$\label{UNODUE} \ddt\big\{\Phi+2\Psi\big\} +\E\leq C\|\theta\|_1^2+C\|\theta\|\E^{3/4}+C\|g\|^2_{-1}+C.$$ Finally, for $\eps\in(0,1]$, we set $$\Lambda(t)=\L(t)+2\eps\big\{\Phi(t)+2\Psi(t)\big\}+C,$$ where the above $C$ is large enough and $\eps$ is small enough such that $$\label{CTRL} \frac12 \E\leq\Lambda\leq 2\E+C.$$ Then, calling $$\varphi(t)=C+C\|\pt f(t)\|^2_{-1}+C\|g(t)\|^2_{-1},$$ the inequalities \eqref{L}, \eqref{UNODUE} and \eqref{CTRL} entail $$\begin{aligned} \ddt\Lambda +\eps\Lambda +\frac12(1-C\eps)\|\theta\|_1^2 &\leq C\eps^2 \Lambda^{3/2}+C\eps\|\theta\|\Lambda^{3/4}+\eps^{-2/3}\varphi\\ &\leq C\eps^2 \Lambda^{3/2}+\eps^{-2/3}\varphi +\frac14\|\theta\|_1^2.\end{aligned}$$ It is then apparent that there exists $\eps_0>0$ small enough such that, for every $\eps\in(0,\eps_0]$, $$\ddt\Lambda +\eps\Lambda \leq C\eps^2 \Lambda^{3/2}+\eps^{-2/3}\varphi.$$ By virtue of \eqref{tb}, we are in a position to apply Lemma \[superl\]. Using once more \eqref{CTRL}, the proof is finished. The Global Attractor ==================== In the sequel, we will assume the external forces $f$ and $g$ to be independent of time. In this case, $S(t)$ is a strongly continuous semigroup on $\H$. We define $$\theta_g=A^{-1/2}g,\quad z_g=(0,0,\theta_g).$$ The main result, which will be proved in the next section, reads as follows. \[MAIN\] Let $f,g\in H$. Then, the semigroup $S(t)$ acting on $\H$ possesses the (connected) global attractor $\A$. Moreover, $$\A=z_g+\A_0,$$ where $\A_0$ is a bounded subset of the space $\H^2\Subset\H$. 
We recall that the global attractor $\A$ of $S(t)$ acting on $\H$ is the unique compact subset of $\H$ which is at the same time fully invariant, i.e., $$S(t)\A=\A,\quad\forall t\geq 0,$$ and attracting, i.e., $$\lim_{t\to\infty}\boldsymbol{\delta}_\H(S(t)B,\A)= 0,$$ for every bounded set $B\subset\H$, where $\boldsymbol{\delta}_\H$ denotes the standard Hausdorff semidistance in $\H$ (see [@BV; @HAL; @TEM]). Within our hypotheses, the regularity of $\A$ is optimal. On the other hand, one can prove that $\A$ is as regular as $f$ and $g$ permit. For instance, if $f,g\in H^n$ for every $n\in\N$, then each component of $\A$ belongs to $H^n$ for every $n\in\N$. The proof of the theorem will be carried out by showing a suitable (exponential) asymptotic compactness property of the semigroup, which will be obtained exploiting a particular decomposition of $S(t)$ devised in [@GPV]. Besides, due to such a decomposition, it is not hard to demonstrate (e.g., following [@EMZ]) the existence of regular exponential attractors for $S(t)$ having finite fractal dimension in $\H$. As a straightforward consequence, recalling that the global attractor is the [*minimal*]{} closed attracting set, we have \[FRAC\] The fractal dimension of $\A$ in $\H$ is finite. In fact, having proved the existence of the absorbing set $\BB$, we could also consider the nonautonomous case (when $f$ and $g$ depend on time), establishing a more general result on the existence of the global attractor for a process of operators, provided that $f$ and $g$ fulfill suitable translation compactness properties (see [@CV; @CVbook] for more details). However, in that case, the decomposition from [@GPV] fails to work, and other techniques should be employed in order to establish asymptotic compactness, such as the $\alpha$-contraction method [@HAL] (see also [@EM], where the method is applied to a similar, albeit autonomous, problem). We now dwell on the structure of the global attractor. 
To this aim, we introduce the set $${\mathcal S}=\big\{z\in\H:S(t)z=z,\,\forall t\geq 0\big\}$$ of stationary points of $S(t)$, which clearly consists of all vectors of the form $(u,0,\theta_g)$, where $u\in H^4$ is a solution to the elliptic problem $$Au+\big(\beta+\|u\|^2_1\big)A^{1/2}u= f+g.$$ The set ${\mathcal S}$ turns out to be nonempty and $-z_g+{\mathcal S}$ is bounded in $\H^2$. Then, the following characterization of $\A$ holds. \[propSTAZ\] The global attractor $\A$ coincides with the unstable set of ${\mathcal S}$; namely, $$\A= \big\{z(0): z(t) \text{ is a complete trajectory of $S(t)$ and } \lim_{t\to \infty}\|z(-t)-{\mathcal S}\|_{\H}=0\big\}.$$ Recall that $z(t)$ is called a complete trajectory of $S(t)$ if $$z(t+\tau)=S(t)z(\tau),\quad\forall t\geq 0,\,\forall \tau\in\R.$$ \[corSTAZ\] If ${\mathcal S}$ is finite, then $$\A= \big\{z(0): \lim_{t\to \infty}\|z(-t)-z_1\|_{\H} =\lim_{t\to \infty}\|z(t)-z_2\|_{\H}=0\big\},$$ for some $z_1,z_2\in{\mathcal S}$. If ${\mathcal S}$ consists of a single element $z_{\rm s}\in\H^2$, then $\A=\{z_{\rm s}\}$. As shown in [@CZGP], the set ${\mathcal S}$ is always finite when all the eigenvalues $\lambda_n$ of $A$ (recall that $\lambda_n\uparrow\infty$) satisfying the relation $$\beta<-\sqrt{\lambda_n}\,$$ are simple (this is the case in the concrete problem -), while it possesses a single element $z_{\rm s}$ if $\beta\geq -\sqrt{\lambda_1}\,$. In particular, we have \[corSTAZ2\] If $\beta>-\sqrt{\lambda_1}\,$ and $f+g=0$, then $\A=\{z_g\}$ and $$\boldsymbol{\delta}_\H(S(t)B,\A) =\sup_{z\in B}\|S(t)z-z_g\|_\H\leq\Q(\|B\|_\H)\e^{-\varkappa t},$$ for some $\varkappa>0$ and some positive increasing function $\Q$. Both $\varkappa$ and $\Q$ can be explicitly computed. We conclude the section discussing the injectivity of $S(t)$ on $\A$. \[BACK\] The map $S(t)_{|\A}:\A\to\A$ fulfills the backward uniqueness property; namely, the equality $S(t)z_1=S(t)z_2$, for some $t>0$ and $z_1,z_2\in\A$, implies that $z_1=z_2$. 
As a consequence, the map $S(t)_{|\A}$ is a bijection on $\A$, and so it can be extended to negative times by the formula $$S(-t)_{|\A}=[S(t)_{|\A}]^{-1}.$$ In this way, $S(t)_{|\A}$, $t\in\R$, is a strongly continuous (in the topology of $\H$) group of operators on $\A$. Proofs of the Results ===================== An equivalent problem --------------------- Denoting as usual $S(t)z=(u(t),\pt u(t),\theta(t))$, for some given $z=(u_0,u_1,\theta_0)\in\H$, we introduce the function $$\omega(t)=\theta(t)-\theta_g.$$ It is then apparent that $(u(t),\pt u(t),\omega(t))$ solves $$\label{BASENEW} \begin{cases} \ptt u+Au-A^{1/2}\omega+ \big(\beta+\|u\|^2_1\big)A^{1/2}u= h,\\ \noalign{\vskip.7mm} \pt \omega+A^{1/2}\omega+A^{1/2}\pt u=0, \end{cases}$$ where $$h=f+g\in H,$$ with the initial conditions $$(u(0),\pt u(0),\omega(0))=z-z_g.$$ According to Proposition \[EU\], system \eqref{BASENEW} generates a strongly continuous semigroup $S_0(t)$ on $\H$, which clearly fulfills the relation $$\label{SIM} S(t)(\zeta+z_g)=z_g+S_0(t)\zeta,\quad\forall \zeta\in\H.$$ Thus, from Theorem \[thmABS\], we learn that $S_0(t)$ possesses the absorbing set $$\BB_0=-z_g+\BB.$$ Using also \eqref{TFIN}, we have the uniform bound $$\label{BOUND} \sup_{t\geq 0}\sup_{\zeta\in\BB_0}\|S_0(t)\zeta\|_\H\leq C.$$ Here and till the end of the section, the generic constant $C$ depends only on $\beta$, $\|h\|$ and the size of the absorbing set $\BB_0$. In light of \eqref{SIM}, Theorem \[MAIN\] is an immediate consequence of the following result. \[MAINNEW\] The semigroup $S_0(t)$ acting on $\H$ possesses the connected global attractor $\A_0$ bounded in $\H^2$. We postpone the proof of Theorem \[MAINNEW\], which requires several steps. 
In the sequel, for $\zeta=(u_0,u_1,\omega_0)\in\H$, we denote $$S_0(t)\zeta=(u(t),\pt u(t),\omega(t)),$$ whose corresponding energy is given by $$\E_0(t)=\frac12\|S_0(t)\zeta\|^2_\H+\frac14\big(\beta+\|u(t)\|_1^2\big)^2.$$ Therefore, the functional $$\L_0(t)=\E_0(t)-\l h, u(t)\r$$ satisfies the differential equality $$\label{L0} \ddt \L_0+\|\omega\|_1^2=0.$$ It is then an easy matter to show that $\L_0$ is a Lyapunov functional for $S_0(t)$, and by means of standard arguments (see, e.g., [@BV; @HAL; @TEM]) we conclude that $$\A_0= \big\{\zeta(0): \zeta(t) \text{ is a complete trajectory of $S_0(t)$ and } \lim_{t\to \infty}\|\zeta(-t)-{\mathcal S}_0\|_{\H}=0\big\},$$ where $${\mathcal S}_0=-z_g+{\mathcal S}$$ is the (nonempty) set of stationary points of $S_0(t)$. Besides, if ${\mathcal S}_0$ is finite, $$\A_0= \big\{\zeta(0): \lim_{t\to \infty}\|\zeta(-t)-\zeta_1\|_{\H} =\lim_{t\to \infty}\|\zeta(t)-\zeta_2\|_{\H}=0\big\},$$ for some $\zeta_1,\zeta_2\in{\mathcal S}_0$. In particular, when ${\mathcal S}_0$ consists of a single element, recalling that the Lyapunov functional is decreasing along the trajectories, there exists only one (constant) complete trajectory of $S_0(t)$, which implies that $\A_0$ is a singleton. On account of \eqref{SIM}, this provides the proofs of Proposition \[propSTAZ\] and Corollary \[corSTAZ\]. In the same fashion, Proposition \[BACK\] follows from the analogous statement for $S_0(t)$, detailed in the next proposition. \[BACK0\] The map $S_0(t)_{|\A_0}:\A_0\to\A_0$ fulfills the backward uniqueness property. We follow a classical method devised by Ghidaglia [@GHID] (see also [@TEM], §III.6), along with an argument from [@CHLA02]. For $\zeta_1,\zeta_2\in\A_0$, let us denote $$S_0(t)\zeta_\imath=(u_\imath(t),v_\imath(t),\omega_\imath(t)),\quad \imath=1,2,$$ and $$\zeta(t)=(u(t),v(t),\omega(t))=S_0(t)\zeta_1-S_0(t)\zeta_2.$$ Assume that $\zeta(T)=0$ for some $T>0$. 
The conclusion follows once we show that $$\label{ZETA} \zeta(0)=\zeta_1-\zeta_2=0.$$ On account of \eqref{BASENEW}, the column vector $$\xi(t)=(A^{1/2}u(t),v(t),\omega(t))^\top$$ satisfies the differential equation $$\partial_t \xi+A^{1/2}\!\!\cdot{\mathbb B}\, \xi =\G,$$ where we put $${\mathbb B}= \left( \begin{array}{rrr} 0 & -1 &0\\ 1 & 0 & -1\\ 0 & 1 & 1 \end{array} \right),\quad \G= \left( \begin{matrix} 0 \\ \big(\|u_2\|_1^2-\|u_1\|_1^2\big)A^{1/2}u_2-\big(\beta+\|u_1\|_1^2\big)A^{1/2}u\\ 0 \end{matrix} \right).$$ The matrix ${\mathbb B}$ possesses three distinct eigenvalues with strictly positive real parts, precisely: $a\sim 0.57$ and $b\pm {\rm i} c$, with $b\sim 0.22$ and $c\sim 1.31$. Hence, there exists a (complex) invertible $(3\times 3)$-matrix ${\mathbb U}$ such that $${\mathbb U}^{-1}{\mathbb B}{\mathbb U}={\mathbb D},$$ where ${\mathbb D}$ is the diagonal matrix whose entries are the eigenvalues of ${\mathbb B}$. Accordingly, setting $\G_\star={\mathbb U}^{-1}\G$, the (complex) function $\xi_\star(t)={\mathbb U}^{-1}\xi(t)$ fulfills $$\partial_t \xi_\star+A^{1/2}\!\!\cdot{\mathbb D}\, \xi_\star =\G_\star.$$ Besides, $\xi_\star(T)=0$. At this point, for $r=0,1$, we consider the complex Hilbert spaces $$\W^r=H_{\mathbb C}^r\times H_{\mathbb C}^r\times H_{\mathbb C}^r,$$ $H_{\mathbb C}^r$ being the complexification of $H^r$. It is convenient to endow $\W^1$ with the equivalent norm $$\|\xi_\star\|_{\W^1}^2=a\|w_\star\|_{1}^2 +b\|v_\star\|_{1}^2+b\|\omega_\star\|_{1}^2,\quad \xi_\star=(w_\star,v_\star,\omega_\star).$$ With this choice, $$\l A^{1/2}\!\!\cdot{\mathbb D}\, \xi_\star,\xi_\star\r_\W=\|\xi_\star\|^2_{\W^1}.$$ It is also apparent that $$\|\G\|_\W\leq k\|A^{1/2}u\|\leq k\|\xi\|_\W,$$ for some $k>0$ independent of $\zeta_1,\zeta_2\in\A_0$. 
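The stated spectral data of $\mathbb B$ can be confirmed numerically (a check we added; tolerances are ours). The characteristic polynomial of $\mathbb B$ is $z^3-z^2+2z-1$:

```python
import numpy as np

# The matrix from the backward-uniqueness proof above.
B = np.array([[0.0, -1.0,  0.0],
              [1.0,  0.0, -1.0],
              [0.0,  1.0,  1.0]])
ev = np.linalg.eigvals(B)                # roots of z^3 - z^2 + 2z - 1

real_ev = [z for z in ev if abs(z.imag) < 1e-9]
cplx_ev = [z for z in ev if z.imag > 1e-9]
a = real_ev[0].real                      # real eigenvalue, ~ 0.5698
b, c = cplx_ev[0].real, cplx_ev[0].imag  # complex pair b + i*c, ~ 0.2151 + 1.3071i
```

All three eigenvalues indeed have strictly positive real parts, which is what makes the equivalent norm on $\W^1$ available.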
Thus, up to redefining $k>0$, $$\|\G_\star\|_{\W}\leq k\|\xi_\star\|_{\W}.$$ Next, we define the function $$\Gamma(t)=\frac{\|\xi_\star (t)\|_{\W^1}^2}{\|\xi_\star(t)\|_\W^2}.$$ Taking the time-derivative of $\Gamma$, and exploiting the equality $$\l A^{1/2}\!\!\cdot{\mathbb D}\, \xi_\star-\Gamma\xi_\star,\Gamma \xi_\star\r_\W=0,$$ we obtain $$\begin{aligned} \ddt\Gamma&=\frac{-2\| A^{1/2}\!\!\cdot{\mathbb D}\, \xi_\star -\Gamma \xi_\star\|^2_\W}{\|\xi_\star\|_\W^2} +\frac{2\Re\langle A^{1/2}\!\!\cdot{\mathbb D}\, \xi_\star-\Gamma \xi_\star, \G_\star\rangle_\W}{\|\xi_\star\|_\W^2}\\ &\leq \frac{-2\| A^{1/2}\!\!\cdot{\mathbb D}\, \xi_\star -\Gamma \xi_\star\|^2_\W}{\|\xi_\star\|_\W^2} +\frac{\| A^{1/2}\!\!\cdot{\mathbb D}\, \xi_\star -\Gamma \xi_\star\|^2_\W}{\|\xi_\star\|_\W^2} +\frac{\|\G_\star\|^2_\W}{\|\xi_\star\|_\W^2}\leq k^2,\end{aligned}$$ and an integration in time provides the estimate $$\Gamma(t)\leq \Gamma(0)+k^2t.$$ If is false, by continuity, $\zeta(t)\neq 0$ in a neighborhood of zero. In turn, $\xi_\star(t)\neq 0$ in the same neighborhood. Recalling that $\xi_\star(T)=0$, there exists $T_0\in(0,T]$ such that $\xi_\star(t)\neq 0$ on $[0,T_0)$ and $\xi_\star(T_0)=0$. Taking the time-derivative of $\log\|\xi_\star\|_\W^{-1}$, we find $$\ddt\log \|\xi_\star\|_\W^{-1} =-\frac12\ddt\log \|\xi_\star\|_\W^{2} =\Gamma -\frac{\Re\langle \xi_\star,\G_\star\rangle_\W}{\|\xi_\star\|_\W^2} \leq \Gamma +\frac{\|\G_\star\|_\W}{\|\xi_\star\|_\W}\leq\Gamma+k.$$ Integrating on $(0,t)$, with $t<T_0$, we conclude that $$\log \|\xi_\star(t)\|_\W^{-1} \leq \log \|\xi_\star(0)\|_\W^{-1}+T_0\Gamma(0) +k T_0+\frac12 k^2T_0^2.$$ This produces a uniform bound on $\log \|\xi_\star(t)\|_\W^{-1}$ over $[0,T_0)$, in contradiction with the fact that $\|\xi_\star(T_0)\|_{\W}=0$. Proof of Theorem \[MAINNEW\] ---------------------------- We need first to prove a suitable dissipation integral for the norm of $\pt u$. 
\[INTut\] For every $\nu>0$ small, there is $C_\nu>0$ such that $$\int_s^t \|\pt u(\tau)\|^2\d \tau\leq \nu(t-s)+C_\nu,\quad\forall t>s\geq 0,$$ whenever $\zeta\in\BB_0$. Let $\zeta\in\BB_0$. An integration of , together with , yields the bound $$\label{omegaINTB} \int_0^\infty\|\omega(t)\|_1^2\d t\leq C,$$ uniformly with respect to $\zeta\in\BB_0$. We now consider the auxiliary functionals, analogous to those in the proof of Theorem \[thmABS\], $$\Phi_0(t)=\l\pt u(t),u(t)\r,\quad \Psi_0(t)=\l \pt u(t),\omega(t)\r_{-1},$$ which satisfy the equalities $$\ddt\Phi_0+\|u\|_2^2+\|u\|_1^4+\beta\|u\|_1^2 =\|\pt u\|^2+\l u,\omega\r_1+\langle u, h \rangle,$$ and $$\ddt\Psi_0+\|\pt u\|^2=\|\omega\|^2 -\l\pt u,\omega\r-\l u,\omega\r_1+\l\omega,h\r_{-1} -\big(\beta+\|u\|_1^2\big)\l u,\omega\r.$$ Then, we easily get $$\ddt\Phi_0+\frac12 \|u\|_2^2\leq \|\pt u\|^2+C+C\|\omega\|_1^2,$$ and, for all positive $\eps\leq 1$, $$\ddt\Psi_0+\frac12 \|\pt u\|^2\leq \frac{\eps}{2}\|u\|_2^2 + C \eps +\frac{C}{\eps}\|\omega\|_1^2.$$ Therefore, for every $\eps\leq 1/4$, $$\ddt\big\{\eps\Phi_0+\Psi_0\big\} +\frac14 \|\pt u\|^2\leq C \eps+\frac{C}{\eps}\|\omega\|_1^2.$$ Integrating the last inequality on $(s,t)$, and using and , we conclude that $$\int_s^t\|\pt u(\tau)\|^2\d \tau\leq C\eps(t-s)+\frac{C}{\eps}.$$ Setting $\nu=C\eps$ and $C_\nu=C/\eps$, the proof is complete. We shall also make use of the following Gronwall-type lemma (see, e.g., [@GGMP1]). \[GRNWOLD\] Let $\Lambda:\R^+\to\R^+$ be an absolutely continuous function satisfying, for some $\nu>0$, the differential inequality $$\ddt\Lambda(t)+2\nu \Lambda(t)\leq \psi(t)\Lambda(t),$$ where $\psi:\R^+\to\R^+$ is any locally summable function such that $$\int_s^t \psi(\tau)\d \tau\leq \nu (t-s)+K,\quad \forall t>s\geq 0,$$ with $K\geq 0$. 
Then, $$\Lambda(t)\leq \e^K\Lambda(0)\e^{-\nu t}.$$ At this point, exploiting the interpolation inequality $$\|u\|_1^2\leq \|u\|\|u\|_2,$$ we choose $\alpha>0$ large enough such that $$\label{ESTIMA} \frac14\|u\|_2^2\leq \frac12\|u\|_2^2+\beta\|u\|_1^2+\alpha\|u\|^2\leq M\|u\|_2^2,$$ for some $M=M(\alpha,\beta)\geq 1$. Then, following [@GPV], we decompose the solution $S_0(t)\zeta$, with $\zeta\in\BB_0$, into the sum $$S_0(t)\zeta=L(t)\zeta+K(t)\zeta,$$ where $$L(t)\zeta=(v(t),\pt v(t),\eta(t))\quad\text{and}\quad K(t)\zeta=(w(t),\pt w(t),\rho(t))$$ are the (unique) solutions to the Cauchy problems $$\label{DECAY} \begin{cases} \ptt v+Av-A^{1/2}\eta+ (\beta+\|u\|^2_1)A^{1/2}v+\alpha v= 0,\\ \noalign{\vskip.5mm} \pt \eta+A^{1/2}\eta+A^{1/2}\pt v=0,\\ \noalign{\vskip.7mm} (v(0),\pt v(0),\eta(0))=\zeta, \end{cases}$$ and $$\label{CPT} \begin{cases} \ptt w+Aw-A^{1/2}\rho+ (\beta+\|u\|^2_1)A^{1/2}w-\alpha v= h,\\ \noalign{\vskip.5mm} \pt \rho+A^{1/2}\rho+A^{1/2}\pt w=0,\\ \noalign{\vskip.7mm} (w(0),\pt w(0),\rho(0))=0. \end{cases}$$ We begin to prove the exponential decay of $L(t)\zeta$. 
\[lemmaDECAY\] There is $\varkappa>0$ such that $$\sup_{\zeta\in\BB_0}\|L(t)\zeta\|_{\H}\leq C\e^{-\varkappa t}.$$ Denoting for simplicity $$E_0(t)=\|L(t)\zeta\|_\H^2,$$ we define, for $\eps>0$, the functional $$\Lambda_0(t)=\Theta_0(t)+\eps\Upsilon_0(t),$$ where $$\begin{aligned} \Theta_0(t) &=E_0(t)+\beta\|v(t)\|_1^2 +\alpha\|v(t)\|^2+\|u(t)\|_1^2\|v(t)\|_1^2,\\ \Upsilon_0(t)&=\l\pt v(t),v(t)\r+2\l \pt v(t),\eta(t)\r_{-1}.\end{aligned}$$ It is clear from and that, for all $\eps$ small enough, $$\label{CTRLE0} \frac12 E_0\leq \Lambda_0\leq C E_0.$$ Due to , and , we have $$\ddt\Theta_0+2\|\eta\|^2_1=2\l A^{1/2}u,\pt u\r\|v\|_1^2\leq C\|\pt u\|\|v\|_1^2 \leq C\|\pt u\|\Lambda_0,$$ and $$\begin{aligned} &\ddt\Upsilon_0+\big(\|v\|_2^2+\beta\|v\|_1^2+\alpha\|v\|^2\big) +\|u\|_1^2\|v\|_1^2+\|\pt v\|^2\\ \noalign{\vskip1mm} &=2\|\eta\|^2-2\l\pt v,\eta\r-\l v,\eta\r_1 -2\big(\beta+\|u\|_1^2\big)\l v,\eta\r -2\alpha\l v,\eta\r_{-1}\\ &\leq\frac12\|\pt v\|^2+\frac12\|v\|^2_2+C\|\eta\|^2_1.\end{aligned}$$ Thus, using , we obtain $$\ddt\Upsilon_0+\frac14\|v\|_2^2 +\frac12\|\pt v\|^2\leq C\|\eta\|^2_1.$$ We conclude that $$\ddt\Lambda_0+\frac{\eps}4\|v\|_2^2 +\frac{\eps}2\|\pt v\|^2+(2-C\eps)\|\eta\|_1^2 \leq C\|\pt u\|\Lambda_0 \leq \frac{\eps}{16}\Lambda_0+C\|\pt u\|^2\Lambda_0.$$ Appealing again to , it is apparent that, for all $\eps>0$ small enough, we have the inequality $$\ddt\Lambda_0+\frac{\eps}{16}\Lambda_0 \leq C\|\pt u\|^2\Lambda_0.$$ The conclusion follows from Lemma \[INTut\] and Lemma \[GRNWOLD\], using again . \[corDIRDEC\] If $\beta>-\sqrt{\lambda_1}\,$ and $h=0$, then $$\|S_0(t)\zeta\|_\H\leq\Q(\|\zeta\|_\H)\e^{-\varkappa t},$$ for some $\varkappa>0$ and some positive increasing function $\Q$. Both $\varkappa$ and $\Q$ can be explicitly computed. We preliminarily observe that it suffices to prove the result for $\zeta\in\BB_0$. 
Moreover, choosing a constant $\sigma$ such that $$-\beta\lambda_1^{-1/2}<\sigma<1,$$ we find the controls $$\label{ESTIMDEC} m\|u\|_2^2 \leq\sigma \|u\|_2^2+\beta\|u\|_1^2 \leq \big(1+|\beta|\lambda_1^{-1/2}\big)\|u\|_2^2,$$ with $$m= \begin{cases} \sigma&\text {if }\beta\geq 0,\\ \sigma+\beta\lambda_1^{-1/2}&\text {if }\beta<0. \end{cases}$$ Then, we just recast the proof of Lemma \[lemmaDECAY\], with in place of . Exploiting , from Corollary \[corDIRDEC\] we readily get the proof of Corollary \[corSTAZ2\]. The next lemma shows the uniform boundedness of $K(t)\BB_0$ in the more regular space $\H^2$, compactly embedded into $\H$. \[lemmaCPT\] The estimate $$\sup_{\zeta\in\BB_0}\|K(t)\zeta\|_{\H^2}\leq C$$ holds for every $t\geq 0$. We first observe that, from and Lemma \[lemmaDECAY\], we have $$\label{ienki} \|w\|_3^2\leq \|w\|_2\|w\|_4 \leq (\|u\|_2+\|v\|_2)\|w\|_4\leq C\|w\|_4.$$ Setting $$E_1(t)=\|K(t)\zeta\|_{\H^2}^2,$$ we define, for $\eps>0$, the functional $$\Lambda_1(t)=\Theta_1(t)+\eps\Upsilon_1(t),$$ where $$\begin{aligned} \Theta_1(t) &=E_1(t)+\big(\beta+\|u(t)\|_1^2\big)\|w(t)\|_3^2-2\l Aw(t),h\r,\\ \Upsilon_1(t)&=\l\pt w(t),w(t)\r_2+2\l \pt w(t),\rho(t)\r_{1}.\end{aligned}$$ Note that, from and , $$\label{CTRLE1} \frac12 E_1-C\leq \Lambda_1\leq C E_1+C,$$ for all $\eps$ small enough. 
Exploiting Lemma \[lemmaDECAY\], and , we have $$\ddt\Theta_1+2\|\rho\|^2_3=2\alpha\l v,\pt w\r_2+2\l A^{1/2}u,\pt u\r\|w\|_3^2 \leq\frac{\eps}{4}\|w\|_4^2+ \frac{\eps}{4}\|\pt w\|_2^2+\frac{C}{\eps},$$ and $$\begin{aligned} &\ddt\Upsilon_1+\|w\|_4^2+\|\pt w\|_2^2 +\|u\|_1^2\|w\|_3^2 \\ \noalign{\vskip1mm} &=-\beta\|w\|_3^2+2\|\rho\|_2^2 -\l w,\rho\r_3 -2\big(\beta+\|u\|_1^2\big)\l w,\rho\r_2-2 \langle \pt w,\rho \rangle_2\\ &\quad +\alpha\l v,w\r_2+2\alpha\l v,\rho\r_1+\l Aw+2 A^{1/2}\rho,h\r\\ &\leq \frac14\|w\|_4^2+\frac14\|\pt w\|_2^2+C\|\rho\|_3^2+C,\end{aligned}$$ which entails $$\ddt\Upsilon_1+\frac34 \|w\|_4^2+\frac34\|\pt w\|_2^2 \leq C\|\rho\|_3^2+C.$$ Collecting the above estimates, we are led to $$\ddt\Lambda_1+\frac{\eps}2 \|w\|_4^2+\frac{\eps}{2}\|\pt w\|_2^2 +(2-C\eps)\|\rho\|_3^2\leq \frac{C}{\eps}.$$ On account of , we can now fix $\eps$ small enough such that the inequality $$\ddt\Lambda_1+\nu\Lambda_1\leq C$$ holds for some $\nu>0$. Applying the Gronwall lemma, and using again , we are done. In conclusion, Lemma \[lemmaDECAY\] and Lemma \[lemmaCPT\] tell us that $S_0(t)\BB_0$ is (exponentially) attracted by a bounded subset of $\H^2$, thus, precompact in $\H$. As is well known from the theory of dynamical systems (see, e.g., [@BV; @HAL; @TEM]), this yields the existence of the global attractor $\A_0$, bounded in $\H^2$, for the semigroup $S_0(t)$ acting on $\H$. The proof of Theorem \[MAINNEW\] is finished. A More General Model ==================== In this final section, we discuss a more general abstract problem, obtained by adding the term $\gamma A^{1/2}\ptt u$ to the first equation of , where $\gamma\geq 0$ is the so-called rotational parameter. Given $\gamma\geq 0$, we define the strictly positive selfadjoint operator on $H$ $$M_\gamma=1+\gamma A^{1/2},$$ with domain $\D(M_\gamma)=H^2$ (when $\gamma>0$). 
Since the operator $M_\gamma$ commutes with $A$ and all its powers, we introduce the spaces $$H^r_\gamma=\D(A^{(r-1)/4}M_\gamma^{1/2}),\quad r\in\R,$$ with inner products and norms $$\l u,v\r_{r,\gamma}=\l A^{(r-1)/4}M_\gamma^{1/2}u,A^{(r-1)/4}M_\gamma^{1/2}v\r,\quad \|u\|_{r,\gamma}=\|A^{(r-1)/4}M_\gamma^{1/2}u\|.$$ Finally, we set $$\V^r_\gamma=H^{r+2}\times H^{r+1}_\gamma\times H^r.$$ Again, we agree to omit the index $r$ when $r=0$. Note that $$\label{DUENORME} \|u\|^2_{r,\gamma}=\|u\|^2_{r-1}+\gamma\|u\|^2_{r} \leq \Big(\frac1{\sqrt{\lambda_1}}\,+\gamma\Big)\|u\|^2_{r}.$$ Hence, when $\gamma>0$, the space $H^{r}_\gamma$ is just $H^r$ endowed with an equivalent norm, whereas $H^r_0=H^{r-1}$ and $\V^r_0=\H^r$. We consider the Cauchy problem on $\V_\gamma$ $$\label{ROTSYS} \begin{cases} M_\gamma\ptt u+Au-A^{1/2}\theta+ \big(\beta+\|u\|^2_1\big)A^{1/2}u= f(t),\quad t>0,\\ \noalign{\vskip.7mm} \pt \theta+A^{1/2}\theta+A^{1/2}\pt u=g(t),\quad t>0,\\ u(0)=u_0,\quad \pt u(0)=u_1, \quad\theta(0)=\theta_0, \end{cases}$$ of which is just the particular instance corresponding to $\gamma=0$. In concrete models, the additional term $\gamma A^{1/2} \ptt u$ accounts for the presence of rotational inertia. With $f$ and $g$ as in Proposition \[EU\], this system generates a family of solution operators $S^\gamma(t)$ on $\V_\gamma$, satisfying the joint continuity property $$(t,z)\mapsto S^\gamma(t)z\in\C(\R^+\times\V_\gamma,\V_\gamma).$$ The energy at time $t$ corresponding to the initial data $z\in\V_\gamma$ now reads $$\E^\gamma(t)=\frac12\|S^\gamma(t)z\|^2_{\V_\gamma}+\frac14\big(\beta+\|u(t)\|_1^2\big)^2,$$ and the energy identity is still true replacing $\E$ with $\E^\gamma$. As a matter of fact, all the results stated in the previous sections extend to the present case. \[ALL\] Theorems \[thmABS\], \[MAIN\], Corollaries \[FRAC\], \[corSTAZ\], \[corSTAZ2\] and Proposition \[propSTAZ\] continue to hold with $S^\gamma(t)$ and $\V^r_\gamma$ in place of $S(t)$ and $\H^r$. 
Repeat exactly the same proofs, simply replacing (clearly, besides $S(t)$ with $S^\gamma(t)$ and $\H^r$ with $\V^r_\gamma$) each occurrence of $\pt u$ \[or $\pt v$, $\pt w$\] with $M_\gamma\pt u$ \[or $M_\gamma\pt v$, $M_\gamma\pt w$\] in the definitions of the auxiliary functionals $\Phi$, $\Psi$, $\Phi_0$, $\Psi_0$, $\Upsilon_0$, $\Upsilon_1$. The integral estimate of Lemma \[INTut\] improves to $$\int_s^t \|\pt u(\tau)\|^2_{1,\gamma}\d \tau\leq \nu(t-s)+C_\nu,\quad\forall t>s\geq 0,$$ although, as we will see shortly, the original estimate would suffice. For example, let us examine more closely the modifications needed in the new proofs of Lemma \[lemmaDECAY\] and Lemma \[lemmaCPT\]. We keep the same notation, just recalling that now the terms $\ptt v$ and $\ptt w$ in - are replaced by $M_\gamma\ptt v$ and $M_\gamma\ptt w$, respectively. The estimate on $\frac{\d}{\d t}\Theta_0$ remains unchanged. Concerning $\frac{\d}{\d t}\Upsilon_0$, the term $\|\pt v\|^2$ in the left-hand side turns into $\|\pt v\|_{1,\gamma}^2$, and in the right-hand side we have $-2\l M_\gamma \pt v,\eta\r$ instead of $-2\l \pt v,\eta\r$. But thanks to , $$-2\l M_\gamma \pt v,\eta\r \leq 2\|\pt v\|_{1,\gamma}\|\eta\|_{1,\gamma} \leq \frac12\|\pt v\|_{1,\gamma}^2+C\|\eta\|_1^2,$$ for some $C>0$ independent of $\gamma$, provided that, say, $\gamma\leq 1$. So, we end up with the same differential inequality for $\Lambda_0$, which yields the desired claim in light of the dissipation integral for $\|\pt u\|$. Coming to Lemma \[lemmaCPT\], we readily have (cf. ) $$\ddt\Theta_1+2\|\rho\|^2_3 \leq\frac{\eps}{4}\|w\|_4^2+ \frac{\eps}{4}\|\pt w\|_2^2+\frac{C}{\eps} \leq\frac{\eps}{4}\|w\|_4^2+ \frac{\eps}{4}\|\pt w\|_{3,\gamma}^2+\frac{C}{\eps},$$ whereas in the estimate of $\frac{\d}{\d t}\Upsilon_1$ the term $\|\pt w\|^2_2$ in the left-hand side becomes $\|\pt w\|_{3,\gamma}^2$, and in the right-hand side $-2\l M_\gamma \pt w,\rho\r_2$ replaces $-2\l \pt w,\rho\r_2$. 
A further use of entails the control $$-2\l M_\gamma \pt w,\rho\r_2 \leq 2\|\pt w\|_{3,\gamma}\|\rho\|_{3,\gamma} \leq \frac14\|\pt w\|_{3,\gamma}^2+C\|\rho\|_3^2.$$ Once again, we are led to the same differential inequality for $\Lambda_1$. In particular, when $\gamma>0$, the global attractor $\A^\gamma$ of the semigroup $S^\gamma(t)$ is a bounded subset of $\V_\gamma^2=H^4\times H^3\times H^2$. This improves the conclusions of [@BUC] where, for $\gamma>0$, the boundedness of $\A^\gamma$ is obtained only in the [*intermediate*]{} space $H^3\times H^2\times H^2$. A straightforward albeit relevant consequence of the $\V^2_\gamma$-regularity is emphasized in the next corollary. For every $\gamma\geq 0$, the solutions to with initial data on the attractor are strong solutions, i.e., the equations hold almost everywhere. It is worth noting that all the estimates obtained in the proofs are [*uniform*]{} with respect to $\gamma$ (assuming $\gamma$ bounded from above). Indeed, the dependence on the rotational parameter enters only through the definition of the norm. Then, recasting a standard argument from [@HAL], the family $\{\A^\gamma\}$ is easily shown to be upper semicontinuous at $\gamma=0$, namely, $$\lim_{\gamma\to 0}\boldsymbol{\delta}_\H(\A^\gamma,\A)= 0.$$ On the contrary, the analogue of Proposition \[BACK\] does not seem to follow by a straightforward adaptation of the preceding argument. However, the backward uniqueness property on the attractor holds for $\gamma>0$ as well, and it can be proved as in [@CHLA08]. Acknowledgments {#acknowledgments .unnumbered} --------------- We are grateful to the Referee for several valuable suggestions and comments. [99]{} , [Exponential stability of a thermoelastic system without mechanical dissipation]{}, [Rend. Istit. Mat. Univ. Trieste]{} , [Exponential stability of a thermoelastic system with free boundary conditions without mechanical dissipation]{}, [SIAM J. Math. 
Anal.]{} , [Uniform decays in nonlinear thermoelastic systems]{}, in “Optimal Control" , [Attractors of evolution equations]{}, , [Long-time dynamics of a coupled system of nonlinear wave and thermoelastic plate]{}, [Discrete Cont. Dyn. Systems]{} , [Linear thermoelasticity]{}, in “Handbuch der Physik” (C. Truesdell, Ed.), , [Attractors of nonautonomous dynamical systems and their dimension]{}, [J. Math. Pures Appl.]{} , [Attractors for equations of mathematical physics]{}, , , [Long-time behaviour of second order evolution equations with nonlinear damping]{}, [Mem. Amer. Math. Soc.]{} , [Inertial manifolds for von Karman plate equations]{}, [Appl. Math. Optim.]{} , [Attractors and long time behavior of von Karman thermoelastic plates]{}, [Appl. Math. Optim.]{} , [Weakly dissipative semilinear equations of viscoelasticity]{}, [Commun. Pure Appl. Anal.]{} , [Steady states of the hinged extensible beam with external load]{}, submitted. , [Exponential attractors for extensible beam equations]{}, [Nonlinearity]{} , [Exponential attractors for a nonlinear reaction-diffusion system in $\R^3$]{}, [C.R. Acad. Sci. Paris Sér. I Math.]{} , [On the hyperbolic relaxation of the one-dimensional Cahn-Hilliard equation]{}, [J. Math. Anal. Appl.]{} , [A Gronwall-type lemma with parameter and dissipative estimates for PDEs]{}, [Nonlinear Anal.]{}, in press. , [Some backward uniqueness results]{}, [Nonlinear Anal.]{}, , [Stability of abstract linear thermoelastic systems with memory]{}, [Math. Models Methods Appl. Sci.]{} , [On the extensible viscoelastic beam,]{} [Nonlinearity]{} , [Asymptotic behavior of dissipative systems]{}, , [Finding minimal global attractors for the Navier-Stokes equations and other partial differential equations]{}, [Russian Math. Surveys]{} , [Modelling of dynamic networks of thin thermoelastic beams]{}, [Math. Methods Appl. 
Sci.]{}, , [Uniform decay rates for full von Karman system of dynamic thermoelasticity with free boundary conditions and partial boundary dissipation]{}, [Commun. Partial Differ. Equ.]{} , [Existence and exponential decay of solutions to a quasilinear thermoelastic plate system]{}, [NoDEA Nonlinear Differential Equations Appl.]{}, in press. , [Attractors for dissipative partial differential equations in bounded and unbounded domains]{}, in “Handbook of Differential Equations: Evolutionary Equations, 4" (C.M. Dafermos and M. Pokorny, Eds.), , [Traveling waves of dissipative non-autonomous hyperbolic equations in a strip]{}, [Adv. Differential Equations]{} (in Russian), [Reports of National Academy of Sciences of Ukraine]{}, in press. , *Infinite-dimensional dynamical systems in mechanics and physics*, , [The effect of an axial force on the vibration of hinged bars]{}, [J. Appl. Mech.]{}
--- abstract: 'In the past decade, gamma-ray observations and radio observations of our Milky Way and the Milky Way dwarf spheroidal satellite galaxies put very strong constraints on annihilation cross sections of dark matter. In this article, we suggest a new target object (NGC 2976) that can be used for constraining annihilating dark matter. The radio and x-ray data of NGC 2976 can put very tight constraints on the leptophilic channels of dark matter annihilation. The lower limits of dark matter mass annihilating via $e^+e^-$, $\mu^+\mu^-$ and $\tau^+\tau^-$ channels are 200 GeV, 130 GeV and 110 GeV respectively with the canonical thermal relic cross section. We suggest that this kind of large nearby dwarf galaxies with relatively high magnetic field can be good candidates for constraining annihilating dark matter in future analysis.' author: - Man Ho Chan title: A new target object for constraining annihilating dark matter --- Introduction ============ In the past decade, gamma-ray observations and radio observations gave some stringent constraints for annihilating dark matter. For example, Fermi-LAT observations of the Milky Way center and the Milky Way dwarf spheroidal satellite (MW dSphs) galaxies give tight constraints on annihilation cross sections $<\sigma v>$ and dark matter mass $m$ for some annihilation channels [@Abazajian; @Calore; @Daylan; @Abazajian2; @Ackermann; @Sameth; @Li; @Albert]. Also, radio observations of the Milky Way center put strong constraints on the annihilation cross sections of dark matter [@Bertone; @Cholis; @Cirelli2]. Generally speaking, our galaxy and the MW dSphs galaxies are the most important objects for constraining annihilating dark matter. It is because these objects are local or nearby objects so that the uncertainties of observations are generally smaller. Also, most of their properties including dark matter content are well-constrained. 
In particular, the MW dSphs galaxies are promising targets for detection due to their large dark matter content, low diffuse gamma-ray foregrounds and lack of conventional astrophysical gamma-ray production mechanisms [@Ackermann2]. Therefore, the constraints obtained are usually more stringent so that these objects are commonly believed to be the best targets for constraining annihilating dark matter. Besides these objects, some recent studies use the data of M31 galaxy, M81 galaxy and some large nearby galaxy clusters (e.g. Coma, Fornax) to constrain annihilating dark matter [@Colafrancesco2; @Egorov; @Chan; @Beck; @Storm2]. These objects generally give similar or less stringent constraints compared with the Fermi-LAT observations of the Milky Way center and the MW dSphs galaxies. In this article, we explore a new target object (NGC 2976) and use its radio and x-ray data to constrain annihilating dark matter. We show that this object can give very strong constraints for annihilation cross sections, especially for three channels: $e^+e^-$, $\mu^+\mu^-$ and $\tau^+\tau^-$. The x-ray constraints ===================== Generally speaking, an electron can increase a photon’s energy from $E_0$ to $\sim \gamma^2E_0$ via inverse Compton scattering (ICS), where $\gamma$ is the Lorenz factor of the electron. If dark matter annihilates to give a large amount of high-energy positrons and electrons ($\sim$ GeV), these positrons and electrons would boost the energy of the cosmic microwave background (CMB) photons from $6 \times 10^{-4}$ eV to about 1 keV. Therefore, these photons can be detected by x-ray observations. However, this method is difficult to be used for normal galaxies and galaxy clusters because these objects usually emit strong x-ray radiation (due to hot gas). Unless we can accurately determine the thermal x-ray emission, the resulting constraints would be quite loose. 
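As a back-of-the-envelope check of the energies quoted above (a sketch; the numbers are order-of-magnitude only), the Lorentz factor needed to upscatter a CMB photon to $\sim 1$ keV can be computed directly:

```python
import math

E0 = 6e-4        # typical CMB photon energy [eV]
E_target = 1e3   # upscattered photon energy [eV] (~1 keV)
m_e = 0.511e6    # electron rest energy [eV]

# ICS boosts the photon energy by roughly gamma^2
gamma = math.sqrt(E_target / E0)
E_electron = gamma * m_e  # required electron energy [eV]

print(gamma, E_electron / 1e9)  # ~1.3e3, ~0.66 GeV
```

Hence electrons of roughly a GeV, as produced in the annihilation channels considered here, are indeed the ones that feed the keV band.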
For dwarf galaxies, this method can give much better constraints as the x-ray emission from dwarf galaxies is usually small (except for those hosting an AGN). Nevertheless, the size of a typical dwarf galaxy is small ($R \le 5$ kpc) so that the cooling rate of the high-energy electrons produced from dark matter annihilation is lower than their diffusion rate. Consequently, most of the high-energy electrons escape from the dwarf galaxy without losing most of their energy and the resulting x-ray signal is suppressed. For example, @Colafrancesco [@Jeltema] study the x-ray constraints for the local dwarf galaxies and find that the upper bounds of the annihilation cross sections are quite loose. Fortunately, we identify a new target object, NGC 2976, which is a good candidate for applying this method. It is a relatively large nearby dwarf galaxy (linear size = 6 kpc, distance $d=3.5$ Mpc). Also, the total x-ray luminosity observed (0.3-8 keV) is $\sim 10^{36}$ erg s$^{-1}$, which is much lower than that of other similar objects ($\ge 10^{38}$ erg s$^{-1}$) [@Grier]. This relatively low x-ray luminosity can give tighter constraints for annihilating dark matter. Furthermore, the magnetic field strength of NGC 2976 is $B=6.6 \pm 1.8$ $\mu$G, which is relatively higher than that of the Local Group dwarf galaxies ($B=4.2 \pm 1.8$ $\mu$G) [@Drzazga] so that the cooling rate of high-energy electrons is higher than the diffusion rate. The cooling timescale for a 1 GeV electron in NGC 2976 is $t_c=1/b=7 \times 10^{15}$ s while the diffusion timescale is $t_d=R^2/D_0 \sim 10^{17}$ s, where $b \approx 1.4 \times 10^{-16}$ s$^{-1}$ is the total cooling rate, $R=2.7$ kpc is the isophotal radius of NGC 2976 [@Kennicutt], and we have used a conservative diffusion coefficient $D_0=10^{27}$ cm$^2$ s$^{-1}$ [@Jeltema]. This ensures that most of the high-energy positrons and electrons produced would lose most of their energy before escaping the galaxy. 
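The timescale comparison above is easy to reproduce; the sketch below (assuming the cooling-rate formula quoted later in the text, with $B=6.6$ $\mu$G and a kpc-to-cm conversion) recovers $t_c\approx 7\times 10^{15}$ s and $t_d\sim 10^{17}$ s:

```python
# Cooling vs. diffusion timescale for a 1 GeV electron in NGC 2976
E = 1.0   # electron energy [GeV]
B = 6.6   # magnetic field [micro-Gauss]

# total cooling rate (ICS off the CMB plus synchrotron losses), GeV/s
b = (0.25 * E**2 + 0.0254 * B**2 * E**2) * 1e-16
t_c = E / b   # cooling timescale [s]

kpc = 3.086e21        # cm per kpc
R = 2.7 * kpc         # isophotal radius [cm]
D0 = 1e27             # conservative diffusion coefficient [cm^2/s]
t_d = R**2 / D0       # diffusion timescale [s]

print(t_c, t_d)  # ~7e15 s versus ~7e16 s: cooling dominates
```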
Since the diffusion process is not very important, the electron number density energy distribution function can be simply given by [@Storm] $$\frac{dn_e}{dE}(\tilde{E})=\frac{<\sigma v> \rho^2}{2m^2b(\tilde{E})} \int_{\tilde{E}}^{m} \frac{dN'}{dE'}dE',$$ where $\rho$ is the dark matter density profile, $dN'/dE'$ is the energy spectrum of the electrons produced from dark matter annihilation [@Cirelli] and $b(\tilde{E})$ is the total cooling rate, which is given by [@Colafrancesco2] $$b(\tilde{E})=\left[0.25\tilde{E}^2+0.0254 \left( \frac{B}{\rm 1~\mu G} \right)^2 \tilde{E}^2 \right] \times 10^{-16}~{\rm GeV/s},$$ with $\tilde{E}$ in GeV. Here, we neglect the Bremsstrahlung and Coulomb cooling as the thermal electron number density is very low in NGC 2976. The number of CMB photons scattered per second from original frequency $\nu_0$ to new frequency $\nu$ via ICS is given by $$I(\nu)=\frac{3 \sigma_Tc}{16 \gamma^4} \frac{n(\nu_0)\nu}{\nu_0^2} \left[2 \nu \ln \left(\frac{\nu}{4\gamma^2\nu_0} \right)+\nu+4\gamma^2 \nu_0- \frac{\nu^2}{2\gamma^2 \nu_0} \right],$$ where $\sigma_T$ is the Thomson cross section and $n(\nu_0)=170x^2/(e^x-1)$ cm$^{-3}$ is the number density of the CMB photons with frequency $\nu_0$, where $x=h\nu_0/kT_{\rm CMB}$. The total x-ray energy flux in the energy band $E_1$ to $E_2$ is given by $$\Phi=2\times \frac{<\sigma v>J}{8\pi m^2} \int_{E_1}^{E_2}d(h\nu) \int_{m_e}^{m}\frac{Y(\tilde{E})}{b(\tilde{E})}d\tilde{E} \int_0^{\infty}I(\nu)dx,$$ where $$J=\int_{\Delta \Omega}d\Omega \int_{\rm los} \rho^2 ds$$ is called the J-factor and $$Y(\tilde{E})=\int_{\tilde{E}}^m \frac{dN'}{dE'}dE'.$$ The dark matter density profile $\rho$ for NGC 2976 can be modeled by $$\rho=\rho_0\left[1+\left(\frac{r}{r_c} \right)^2 \right]^{-1},$$ where $\rho_0=0.198M_{\odot}$ pc$^{-3}$ and $r_c=1$ kpc [@Adams]. In addition, the substructures in NGC 2976 can greatly enhance the annihilation rate. 
By using a conservative model of substructure contributions [@Moline], the substructure boost factor is about $B_f=4.44$. By considering the dark matter contribution within the isophotal radius $R=2.7$ kpc, we get $\log (J/\rm GeV^2~cm^{-5})=17.6$. There is another dark matter profile $\rho=\rho_0(r/\rm 1~pc)^{-0.235}$ with $\rho_0=0.260M_{\odot}$ pc$^{-3}$ which produces a good fit to the kinematic data of NGC 2976 [@Adams]. The corresponding J-factor for this dark matter profile is $\log (J/\rm GeV^2~cm^{-5})=17.5$. Therefore, the systematic uncertainty of the J-factor is about 30%. In the following, since the uncertainty is not very large, we will use the dark matter profile in Eq. (7) to perform the analysis. The effect of this uncertainty will be discussed later. The total x-ray flux observed (0.3-8 keV) for NGC 2976 is $\Phi=(0.42 \pm 0.17) \times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ [@Grier]. By assuming that the observed x-ray flux originates from ICS due to dark matter annihilation only, we can obtain the upper limits of the annihilation cross sections for different channels (see Fig. 1). Here, we can see that the observed x-ray band is not very effective for constraining annihilating dark matter. As we can see from Fig. 1, the x-ray constraints are close to the upper bounds obtained by Fermi-LAT observations for $e^+e^-$ and $\mu^+\mu^-$ channels only [@Ackermann]. For the thermal relic cross section $<\sigma v>=2.2 \times 10^{-26}$ cm$^3$ s$^{-1}$ [@Steigman], the minimum allowed $m$ for the $e^+e^-$ channel is 8 GeV, which is slightly tighter than the Fermi-LAT limit for the Milky Way’s dwarf galaxies [@Sameth]. With better x-ray data, or data in the hard x-ray band, the corresponding constraints would be much tighter. Note that we have only included the CMB photons in our calculations. In fact, there are other radiation fields in the infra-red and visible light bands which can also contribute to the x-ray flux via ICS. 
Nevertheless, the contribution of other radiation fields in NGC 2976 is small and most of the resulting photons via ICS are in MeV or above bands. Therefore, our results would not be significantly affected by other radiation fields. ![The upper limits of the annihilation cross sections for four annihilation channels. The red, green and blue solid lines represent the upper limits for the radio constraints (red: $\nu=1.43$ GHz; green: $\nu=4.85$ GHz; blue: $\nu=8.35$ GHz). The orange dashed lines represent the upper limits for the x-ray constraints. The black solid lines represent the gamma-ray observations of MW dSphs galaxies with Fermi-LAT (with J-factor uncertainties) [@Ackermann]. The black dashed lines represent the gamma-ray observations of recently discovered Milky Way satellites with Fermi-LAT (only for $\tau^+\tau^-$ and $b\bar{b}$ channels) [@Albert]. The dotted lines represent the canonical thermal relic cross section for annihilating dark matter [@Steigman].](sigma2.eps){width="140mm"} The radio constraints ===================== If we assume that all the radio radiation originates from synchrotron radiation of the electron and positron pairs produced by dark matter annihilation, the observed upper limit of the total radio flux can be used to constrain the cross sections of dark matter annihilation. As mentioned above, since the diffusion term can be neglected, the injected spectrum of the electron and positron pairs is proportional to the source spectrum [@Storm]. 
By using the monochromatic approximation (the radio emissivity is mainly determined by the peak radio frequency), the total synchrotron radiation energy flux of the electron and positron pairs produced by dark matter annihilation is given by [@Bertone; @Profumo]: $$S \approx \frac{1}{4 \pi d^2} \left[ \frac{9 \sqrt{3}<\sigma v>}{2m^2\tilde{b}} \int_0^R 4 \pi r^2 \rho^2EY(E)dr \right],$$ where $E=0.43(\nu/{\rm GHz})^{1/2}(B/{\rm mG})^{-1/2}$ GeV and $\tilde{b} \approx 1.18$ is a correction factor accounting for ICS cooling. The latest radio observations with three different frequencies $\nu=1.43$ GHz, $\nu=4.85$ GHz and $\nu=8.35$ GHz obtain $2-\sigma$ upper bounds of radio fluxes $S \le 1.02 \times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$, $S \le 1.59 \times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ and $S \le 1.76 \times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$ respectively [@Drzazga]. By using Eq. (8), we obtain the corresponding upper limits of annihilation cross sections for four popular channels: $e^+e^-$, $\mu^+\mu^-$, $\tau^+\tau^-$ and $b\bar{b}$ (see Fig. 1). For the canonical thermal relic cross section, the minimum allowed $m$ for the $e^+e^-$, $\mu^+\mu^-$ and $\tau^+\tau^-$ channels are 200 GeV, 130 GeV and 110 GeV respectively. For the $b\bar{b}$ channel, the radio constraints just marginally disfavor the range $20$ GeV $\le m \le 50$ GeV. If we take the systematic uncertainty of the J-factor into account, the minimum allowed $m$ would decrease by about 15%. Generally speaking, the radio constraints of NGC 2976 are tighter than the Fermi-LAT constraints [@Ackermann; @Albert], except for the $\tau^+\tau^-$ channel with $m \ge 100$ GeV and the $b\bar{b}$ channel with $m \le 200$ GeV. For the $e^+e^-$ and $\mu^+\mu^-$ channels, the upper limits of annihilation cross sections are at least an order of magnitude tighter than the Fermi-LAT constraints [@Ackermann]. Therefore, we can see that NGC 2976 is a very good candidate for constraining annihilating dark matter. 
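To make the monochromatic approximation above concrete, one can evaluate the characteristic electron energy $E=0.43(\nu/{\rm GHz})^{1/2}(B/{\rm mG})^{-1/2}$ GeV probed by each observing frequency (a sketch with $B=6.6$ $\mu$G taken from the text):

```python
def peak_electron_energy(nu_GHz, B_mG):
    """Electron energy [GeV] whose synchrotron spectrum peaks at nu."""
    return 0.43 * nu_GHz**0.5 * B_mG**-0.5

B_mG = 6.6e-3  # 6.6 micro-Gauss expressed in milli-Gauss
for nu in (1.43, 4.85, 8.35):
    print(nu, round(peak_electron_energy(nu, B_mG), 1))
# roughly 6.3, 11.7 and 15.3 GeV respectively
```

So the radio limits probe electrons of a few to $\sim 15$ GeV, well below the dark matter masses constrained in Fig. 1.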
Discussion ========== In this article, we discuss a new target object, NGC 2976, for constraining annihilating dark matter. Generally speaking, nearby dwarf galaxies are good objects for constraining annihilating dark matter because they are rich in dark matter content and the effect of baryons is relatively small. However, since most dwarf galaxies are small and the magnetic fields are weak ($B<5$ $\mu$G), the diffusion of high-energy electrons and positrons would be quite efficient. Most of the electrons and positrons would escape from the dwarf galaxies without losing all of their energy. This suppresses the signals (both radio and x-ray) detected so that the constraints obtained would not be very tight. Nevertheless, this problem would be alleviated if a dwarf galaxy contains a relatively high magnetic field ($B \ge 5$ $\mu$G) and its size is large. The high magnetic field would greatly enhance the cooling rate so that most electrons and positrons would lose their energy within their stopping distance. As we mentioned, NGC 2976 has a high magnetic field $B=6.6 \pm 1.8$ $\mu$G and it is a relatively large dwarf galaxy (linear size = 6 kpc) so that the cooling timescale is much shorter than the diffusion timescale. This can maximize the x-ray and radio fluxes due to dark matter annihilation. Another advantage of using NGC 2976 is that it has tight observed upper bounds on both the x-ray and radio fluxes. These features suggest that NGC 2976 is a very good candidate for constraining annihilating dark matter, especially for the leptophilic channels. Further observations of NGC 2976 in radio wavelengths and x-ray bands could push the upper bounds of cross sections to a much tighter level. In our analyses, we can see that the x-ray constraints of NGC 2976 are not competitive with the Fermi-LAT gamma-ray constraints. 
The limitations of the x-ray constraints remain substantial unless the contribution of non-thermal x-ray emission can be fully identified. Nevertheless, the radio data give much tighter constraints, which are complementary to the gamma-ray constraints. These constraints can rule out some existing dark matter interpretations (via the $e^+e^-$, $\mu^+\mu^-$ or $\tau^+\tau^-$ channels) of the gamma-ray and positron excesses in our Galaxy [@Calore; @Boudaud].

This work is supported by a grant from The Education University of Hong Kong (Project No.: RG4/2016-2017R).

Abazajian K. N., Canac N., Horiuchi S., Kaplinghat M., 2014, Phys. Rev. D 90, 023526.
Abazajian K. N., Keeley R. E., 2016, Phys. Rev. D 93, 083514.
Ackermann M. [*et al.*]{} \[Fermi-LAT Collaboration\], 2014, Phys. Rev. D 89, 042001.
Ackermann M. [*et al.*]{} \[Fermi-LAT Collaboration\], 2015, Phys. Rev. Lett. 115, 231301.
Adams J. J., Gebhardt K., Blanc G. A., Fabricius M. H., Hill G. J., Murphy J. D., van den Bosch R. C. E., van de Ven G., 2012, Astrophys. J. 745, 92.
Albert A. [*et al.*]{} \[Fermi-LAT, DES Collaborations\], arXiv:1611.03184.
Beck G., Colafrancesco S., 2016, J. Cosmol. Astropart. Phys. 05, 013.
Bertone G., Cirelli M., Strumia A., Taoso M., 2009, J. Cosmol. Astropart. Phys. 03, 009.
Boudaud M. [*et al.*]{}, 2015, Astron. Astrophys. 575, A67.
Calore F., Cholis I., McCabe C., Weniger C., 2015, Phys. Rev. D 91, 063003.
Chan M. H., 2016, Phys. Rev. D 94, 023507.
Cholis I., Hooper D., Linden T., 2015, Phys. Rev. D 91, 083507.
Cirelli M. [*et al.*]{}, 2012, J. Cosmol. Astropart. Phys. 10, E01.
Cirelli M., Taoso M., 2016, J. Cosmol. Astropart. Phys. 07, 041.
Colafrancesco S., Profumo S., Ullio P., 2007, Phys. Rev. D 75, 023513.
Colafrancesco S., Profumo S., Ullio P., 2006, Astron. Astrophys. 455, 21.
Daylan T., Finkbeiner D. P., Hooper D., Linden T., Portillo S. K. N., Rodd N. L., Slatyer T. R., 2016, Physics of the Dark Universe 12, 1.
Drzazga R. T., Chyzy K. T., Heald G. H., Elstner D., Gallagher III J. S., 2016, Astron. Astrophys. 589, A12.
Egorov A. E., Pierpaoli E., 2013, Phys. Rev. D 88, 023504.
Geringer-Sameth A., Koushiappas S. M., Walker M. G., 2015, Phys. Rev. D 91, 083535.
Grier C. J., Mathur S., Ghosh H., Ferrarese L., 2011, Astrophys. J. 731, 60.
Jeltema T. E., Profumo S., 2008, Astrophys. J. 686, 1045.
Kennicutt R. C. [*et al.*]{}, 2003, Publ. Astron. Soc. Pac. 115, 928.
Li S. [*et al.*]{}, 2016, Phys. Rev. D 93, 043518.
Moliné Á., Sánchez-Conde M. A., Palomares-Ruiz S., Prada F., Mon. Not. R. Astron. Soc., in press (arXiv:1603.04057).
Profumo S., Ullio P., 2010, [*Particle Dark Matter: Observations, Models and Searches*]{}, ed. G. Bertone, Cambridge: Cambridge University Press, chapter 27.
Steigman G., Dasgupta B., Beacom J. F., 2012, Phys. Rev. D 86, 023506.
Storm E., Jeltema T. E., Profumo S., Rudnick L., 2013, Astrophys. J. 768, 106.
Storm E., Jeltema T. E., Splettstoesser M., Profumo S., arXiv:1607.01049.
--- abstract: 'We report the far-infrared spectra of the molecular nanomagnet Mn$_{12}$-acetate (Mn$_{12}$) as a function of temperature (5–300 K) and magnetic field (0–17 T). The large number of observed vibrational modes is related to the low symmetry of the molecule, and they are grouped together in clusters. Analysis of the mode character based on molecular dynamics simulations and model compound studies shows that all vibrations are complex; motion from a majority of atoms in the molecule contributes to most modes. Three features involving intramolecular vibrations of the Mn$_{12}$ molecule centered at 284, 306 and 409 cm$^{-1}$ show changes with applied magnetic field. The structure near 284 cm$^{-1}$ displays the largest deviation with field and is mainly intensity related. A comparison between the temperature dependent absorption difference spectra, the gradual low-temperature cluster framework distortion as assessed by neutron diffraction data, and field dependent absorption difference spectra suggests that this mode may involve Mn motion in the crown.' address: - | Department of Chemistry, State University of New York at Binghamton\ Binghamton, New York 13902–6016 - | National High Magnetic Field Laboratory, Florida State University\ Tallahassee, Florida 32306 - | Department of Chemistry, Florida State University\ Tallahassee, Florida 32310 author: - 'A. B. Sushkov$^*$, B. R. Jones and J. L. Musfeldt$^*$' - 'Y. J. Wang' - 'R. M. Achey and N. S. Dalal' title: 'Magnetic Field Effects on the Far-Infrared Absorption in Mn$_{12}$-acetate' --- Introduction ============ Molecular magnet materials have attracted a great deal of interest in recent years, exhibiting fascinating properties such as cooperative phenomena, magnetic memory, quantum tunneling, and unusual relaxation behavior that are most commonly associated with mesoscopic solids. 
One prototype single molecule magnet is \[Mn$_{12}$O$_{12}$(CH$_3$COO)$_{16}$(H$_2$O)$_4$\]$\cdot$2CH$_3$COOH$\cdot$ 4H$_2$O, denoted Mn$_{12}$. It consists of eight Mn$^{3+}$ ($S=2$) and four Mn$^{4+}$ ($S=3/2$) ions, held together by oxygen atoms, acetate ligands, and waters of crystallization; the ferrimagnetic spin arrangement yields $S$=10 [@Christou]. Mn$_{12}$ crystallizes in a tetragonal lattice, with weak exchange coupling and no long range magnetic ordering [@Lis; @Robinson1; @Robinson2]. Though a number of efforts have been made to understand the energetics of the Mn$_{12}$ system, a quantitative theory is still lacking. Recent prospective Hamiltonians include anisotropy, Zeeman splitting, spin-phonon interaction, and transverse terms, as well as spin operators up to fourth order [@Teemu; @Loss; @Politi; @fort]. Mn$_{12}$ initially attracted attention due to the striking steps and hysteresis loop in the magnetization, indicative of quantum tunneling. At present, these steps are thought to result from the double well potential that separates spin states; magnetization reorientation transitions have optimal probability when the “spin up” and “spin down” levels of the different magnetic quantum numbers align with applied magnetic field [@Friedman; @Barbara; @Novak; @Thomas2; @Fried1; @Fried2; @Barco]. High field EPR[@Hill; @Barra], neutron scattering[@Mire], and sub-mm [@Mukhin] techniques have been used to measure the excitation energies between levels in these magnetic clusters. The highest energy excitation (m$_s$=10 $\rightarrow$ m$_s$=9) occurs in the very far infrared, near 10 cm$^{-1}$ (300 GHz). That the magnetic dipole transition energies in the Mn$_{12}$ system are irregular (especially near the top of the anisotropy barrier) clearly shows the presence of higher than second order terms in the spin Hamiltonian[@Hill; @Mire]. 
Early heat capacity measurements revealed the irreversible/reversible effects between the two wells below/above the blocking temperature[@Fomi]. The exact value of the blocking temperature, $T_b$ ($\approx$ 3 K), depends on the probe, due to the dynamic nature of the blocking process. Above 3 K, the magnetization relaxation is exponential in time and reversible; this is the thermally activated regime. Notable deviations from exponential relaxation are observed below 2 K [@Evang]. Despite an explosion of interest in the low-energy quantum behavior, little is known about the vibrational characteristics of Mn$_{12}$ and many other prototypical molecular magnet materials. Spin-phonon coupled modes have been investigated in other transition metal oxides such as EuO, LaMnO$_3$, $\alpha$$^\prime$-NaV$_2$O$_5$, and CuO; the frequencies at which they appear correspond to vibrational modulation of the superexchange integral in the materials [@Balten; @Granado; @Danilo; @Sherman; @Kuzmenko; @Kuzmenko2], rather than the aforementioned low-energy, phonon-assisted relaxation processes. In order to provide further information on the electrodynamic response of single molecule molecular magnets, we have measured the far-infrared spectra of Mn$_{12}$ as a function of both temperature and applied magnetic field in the thermally activated regime. We use this data to assess spin-phonon coupling in this material. Experimental ============ High quality single crystals of Mn$_{12}$ were synthesized following the original procedure described by Lis[@Lis]. The single crystals of Mn$_{12}$ were ground with paraffin at 77 K to prepare pellets of various concentrations suitable for transmittance measurements in the far infrared [@Footy]. A sample with $\approx$3.5% of Mn$_{12}$ by mass proved to be optimal for most of the frequency range under investigation. A more concentrated pellet ($\approx$85%) was used for measurements between 30 and 110 cm$^{-1}$. 
Infrared transmission measurements were performed in our laboratory and at the National High Magnetic Field Laboratory (NHMFL) in Tallahassee, Florida, using a Bruker 113V Fourier transform infrared spectrometer. Spectra were taken using the 3.5, 12, 23, and 50 $\mu$m mylar beam splitters, covering a frequency range of 25–650 cm$^{-1}$. Both absolute and relative transmittance spectra were measured as a function of temperature using a bolometer detector and a continuous flow helium cryostat. Small differences in the transmittance spectra were assessed at low temperature using absorption differences, calculated from the relative transmittance as $\alpha(T)-\alpha(T=5~K) = -\ln[{\cal T}(T)/{\cal T}(T=5~K)]$. The magnetic field dependence of the transmittance at 5 K was measured at the NHMFL using a 20 T superconducting magnet and a transmission probe equipped with a bolometer detector[@Wang; @Field]. Again, transmittance ratio spectra were used to investigate small deviations from unity due to the applied field. By taking the natural log of the transmittance ratio $(-\ln[{\cal T}(H)/{\cal T}(H=0)])$, we obtain the absorption difference at each field: $\alpha(H)-\alpha(H=0)$. Upon examination of the absorption difference curves over the entire aforementioned frequency range, we identified three features that change with applied magnetic field. In order to distinguish signal from noise in a quantitative way, we calculated the standard deviation from the mean for each feature. In addition, the same analysis was performed on representative nearby frequency ranges containing no field-dependent features. We use these standard deviations from the mean to quantify the effects of the field and to characterize the intrinsic noise level in regimes away from the magneto-optic signatures [@Stand]. Field sweeps on an empty hole were also carried out for reference; as expected, no field dependence was observed. 
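The absorption-difference construction used throughout (both for temperature and field sweeps) is simply the negative logarithm of a transmittance ratio. A minimal numerical sketch with toy spectra (not measured data):

```python
import numpy as np

def absorption_difference(trans, trans_ref):
    """alpha(X) - alpha(ref) = -ln[T(X)/T(ref)], computed pointwise
    from two relative transmittance spectra."""
    return -np.log(np.asarray(trans, dtype=float) /
                   np.asarray(trans_ref, dtype=float))

# toy spectra: a transmittance dip at one frequency point under perturbation
t_ref = np.array([0.50, 0.50, 0.50])
t_perturbed = np.array([0.50, 0.45, 0.50])
print(absorption_difference(t_perturbed, t_ref))  # positive where absorption grows
```

The ratio form is what makes small field- or temperature-induced changes visible on top of the strong absolute absorption of the pellet.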
Results and Discussion ====================== Temperature Dependence of the Far Infrared Vibrational Spectra -------------------------------------------------------------- Figure 1 displays the far infrared transmittance of Mn$_{12}$ as a function of temperature. As expected for a molecular solid, intramolecular vibrational modes of the Mn$_{12}$ molecule appear above 150 cm$^{-1}$ [@Dresselhaus]. In agreement with previous authors [@Mukhin; @Hennion], we also observe a weak low-energy structure centered near 35 cm$^{-1}$ (Fig. 1, panel a). This magnetic-dipole allowed excitation has been cited as evidence for other excited state multiplets with $S$$\ne$10 existing in Mn$_{12}$ [@Mukhin; @Hennion]. Within our sensitivity, we have found that the 35 cm$^{-1}$ feature displays limited temperature dependence and no magnetic field dependence. The far infrared spectra of Mn$_{12}$ display a large number of intramolecular modes due to the relatively low symmetry of the molecule. They are grouped together in clusters and superimposed upon one another. These structures sharpen and harden with decreasing temperature (Fig. 1). Recent 20 K neutron diffraction studies by Langan $et~al.$ [@Robinson1] assessed the low-temperature molecular distortion; compared to the 300 K structure, major characteristics include a gradual displacement of Mn(3) and carboxylate groups, and more extensive solvent interactions including a 4-center hydrogen bond. (Note that Mn(3) is part of the crown, according to the numbering scheme of Ref. [@Robinson1], with connections/interactions to the bridges, ligands, and solvent.) These results motivated us to more carefully investigate the far infrared response in the low-temperature regime. Figure \[temp\]a displays the absorption difference spectra of Mn$_{12}$ at low temperature [@noteT]. The gradual distortion of the cluster framework was previously assessed by neutron diffraction studies [@Robinson1]. 
From a magnetic properties point of view, the behavior of the Mn ions is most important; experiment indicates that Mn(3) on the crown displays the most significant low-temperature displacement [@Robinson1]. A number of vibrational features are modified in this temperature range and therefore likely contain a substantial Mn(3), carboxylate, and acetate ligand contribution. As will be discussed in the next Section, three modes (284, 306, and 409 cm$^{-1}$) are of particular interest because of their dependence on magnetic field. The 284 cm$^{-1}$ mode shows an intensity variation at low temperature and a small frequency shift at higher temperatures (Fig. \[temp\]b). The standard deviation from the mean in the vicinity of the 284 cm$^{-1}$ feature (Fig. \[temp\]c) quantifies these changes, with an inflection point around 20 K. The maximum in the derivative of the fourth-order polynomial fit to this data better illustrates the position of this inflection point. The 306 cm$^{-1}$ structure (not shown) is sensitive to this temperature regime as well. Although the feature centered at 409 cm$^{-1}$ also exhibits changes in this temperature range, no peculiarities were observed around 20 K. Preliminary molecular dynamics simulations [@Simulation] suggest that a majority of atoms in the Mn$_{12}$ molecule are, in some way, involved in each vibration in this frequency range. The modes are many, both because of the low molecular symmetry and the large number of atoms. Using mode visualization and the spectra of several model compounds as a guide, particularly that of Mn(II)-Ac$_2$ (Fig. \[model\]), we propose the following general assignments. The low-energy motions below 300 cm$^{-1}$ include a great deal of acetate (ligand) motion. Complex low energy motions of the core and crown begin around 180 cm$^{-1}$. These motions include bending, rocking, shearing, twisting, and wagging. 
Asymmetric and symmetric stretching of the core and crown seem to begin slightly below 500 cm$^{-1}$, thus providing a likely assignment for the complicated vibrational cluster observed in the spectrum centered near 540 cm$^{-1}$. The simulations show that modes in this energy range contain sizable (core and crown) oxygen contributions. Comparison of the spectral data from the model compound Mn(II)-Ac$_2$ and Mn$_{12}$ confirms that the mode clusters centered near 360, 400, and 540 cm$^{-1}$ in Mn$_{12}$ are mainly motions of the magnetic center rather than the ligands. Related model compounds such as MnO, MnO$_2$, and Mn$_2$O$_3$ also have characteristic vibrational features in the far infrared (below 600 cm$^{-1}$). Although the structures differ and the Mn valences are unmixed, in contrast to the title compound, the reference spectra of these model Mn-based solids [@Stadtler] indicate that the far infrared response is rich and highly relevant to the Mn–O motion. Measurements on the Mn(II)-Ac$_2$ model compound (Fig. 3) suggest that the structures centered at 590 and 635 cm$^{-1}$ in the spectrum of Mn$_{12}$ are acetate-related. The similar temperature dependence of these modes in the two samples supports this assignment. In order to understand and model physical properties that contain important phonon contributions (for instance $\rho$$_{DC}$ or heat capacity) in the thermally activated regime, it is helpful to know realistic values of the most important low-energy infrared active phonons of Mn$_{12}$. Our data suggest that use of a relatively small series of characteristic frequencies may be adequate. For instance, the three most intense structures are centered at 375, 410, and 540 cm$^{-1}$. More complete models might take additional modes, the detailed mode clustering, and relative intensities into account. 
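To illustrate how such a reduced set of characteristic frequencies could feed a thermodynamic model, here is an Einstein-oscillator toy calculation of the heat-capacity contribution of the three quoted mode clusters. This is not the authors' analysis; it is a sketch showing that optical phonons at 375–540 cm$^{-1}$ are essentially frozen out at liquid-helium temperatures and only contribute near room temperature:

```python
import math

KB_CM = 0.6950348  # Boltzmann constant in cm^-1/K, so x = nu / (KB_CM * T)

def einstein_c(nu_cm, T):
    """Heat-capacity contribution (in units of R) of a single Einstein
    oscillator of wavenumber nu_cm (cm^-1) at temperature T (K)."""
    x = nu_cm / (KB_CM * T)
    return x * x * math.exp(x) / (math.exp(x) - 1.0) ** 2

# the three most intense far-infrared clusters quoted in the text
for T in (5.0, 50.0, 300.0):
    c = sum(einstein_c(nu, T) for nu in (375.0, 410.0, 540.0))
    print(f"T = {T:5.1f} K: C/R = {c:.4f}")
```

At 5 K these modes contribute essentially nothing, consistent with acoustic phonons dominating the low-temperature relaxation physics discussed below.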
Magnetic Field Dependence of the Far Infrared Vibrational Spectra ----------------------------------------------------------------- Figure 4 displays the magnetic field dependence of three features in the far infrared spectrum of Mn$_{12}$. The center positions of these structures ($\approx$284, 306, and 409 cm$^{-1}$) are indicated with arrows on the absolute transmittance spectra in Fig. 1. It is interesting that the 284 and 306 cm$^{-1}$ features do not correspond to major modes in the absolute transmittance spectrum. As shown in the left-hand panels (Figs. 4a, 4c, and 4e), the field dependence as measured by the absorption difference spectra is well-defined for the mode near 284 cm$^{-1}$, whereas it is more modest for the structures centered at 306 and 409 cm$^{-1}$. The filled symbols and solid lines in the right-hand panels (Figs. 4b, 4d, and 4f) provide a quantitative view of the field dependent trends. The standard deviation from the mean for the three modes of interest shows a clear upward trend, in contrast to that of nearby spectral regions which shows no field dependence and provides an estimate of the intrinsic noise level [@Deviation]. The feature at 284 cm$^{-1}$ is the most prominent of the three field dependent structures. The absorption decreases with applied field and shows a tendency toward saturation at 17 T, as indicated in Fig. \[ratio\]b. The standard deviation from the mean in nearby spectral ranges (for instance, similarly sized regimes centered at $\approx$ 277 and 299 cm$^{-1}$) shows no field dependence. The response of the 284 cm$^{-1}$ feature is therefore well above the noise and characteristic of Mn$_{12}$. 
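The "standard deviation from the mean" metric used to separate the field-dependent features from the noise floor can be sketched as follows. The window boundaries and line-shape parameters here are invented for illustration, not fits to the actual spectra:

```python
import numpy as np

def feature_strength(freq, alpha_diff, lo, hi):
    """Standard deviation from the mean of the absorption-difference curve
    restricted to the window [lo, hi] cm^-1: flat (noise-only) windows give
    small values, a field-dependent feature gives a larger one."""
    window = alpha_diff[(freq >= lo) & (freq <= hi)]
    return float(np.std(window))

# synthetic absorption-difference curve: noise floor plus one Gaussian feature
freq = np.linspace(270.0, 300.0, 301)
noise = 0.001 * np.sin(7.0 * freq)
feature = 0.02 * np.exp(-0.5 * ((freq - 284.0) / 1.5) ** 2)
curve = noise + feature

print(feature_strength(freq, curve, 281.0, 287.0))  # window on the feature
print(feature_strength(freq, curve, 274.0, 280.0))  # nearby quiet window
```

The same statistic applied to a nearby quiet window estimates the intrinsic noise level, which is how the 277 and 299 cm$^{-1}$ control regions are used in the text.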
Similar trends and a tendency towards saturation of the standard deviation from the mean are observed for the more complicated absorption difference structures centered near 306 and 409 cm$^{-1}$, although the overall coupling of these modes to the applied field is weaker and the line shapes are different from that of the 284 cm$^{-1}$ feature. The changes observed near 306 cm$^{-1}$ are mainly caused by a 1 cm$^{-1}$ softening, whereas the 409 cm$^{-1}$ structure is due to both softening and broadening. The field dependence of these three features in the absorption difference spectra is confirmed by repeated measurements using different experimental parameters [@Parameters]. Further, as seen in Fig. 4, the trend with applied field in both the difference spectra and the standard deviation from the mean near 306 and 409 cm$^{-1}$ can be easily distinguished from that of the non-field-dependent intervals. It is of interest to understand the microscopic character of the 284, 306, and 409 cm$^{-1}$ modes and why they are affected by the magnetic field. Although there are a number of mechanisms that might explain this behavior, such as the effect of magnetic field on hybridization of ligand-metal bonds [@Brunel], we believe the most promising is that of spin-phonon coupling. Here, variations in the applied field modulate the magnetic system, thereby affecting the phonons coupled to it. Such an interaction has been thought to be important for describing the slow magnetization relaxation behavior of Mn$_{12}$, making it an important term in the total Hamiltonian [@Politi; @fort]. That only three far-infrared features are observed to change with applied field up to 17 T suggests that these modes are related to motions that maximally change the low temperature spin interactions. One possible candidate for such motion might contain bending between the core and crown structures affecting the Mn-O-Mn angle, therefore modulating the superexchange between Mn ions. 
An alternate explanation involves the especially curious sensitivity of the 284 cm$^{-1}$ mode to temperature variation (in the range 5-35 K) as well as to the applied magnetic field. This coincidence suggests that the magneto-elastic response at 284 cm$^{-1}$ may be intimately related to the Mn(3) motion on the crown. Interestingly, the 306 cm$^{-1}$ structure also displays some sensitivity to the displacement of the molecular framework. Further work is clearly needed to untangle these complex interactions. The observed magnetic field dependencies at 284, 306, and 409 cm$^{-1}$ suggest that ${\mathcal H}_{s-ph}$ can be augmented by including contributions from these three vibrational modes. Spin-lattice interactions in the far-infrared energy range have been observed in a number of other transition metal oxides in the past [@Balten; @Granado; @Danilo; @Sherman; @Kuzmenko; @Kuzmenko2]. However, the energy scale of the 284, 306, and 409 cm$^{-1}$ features is larger than might be expected for relevant spin-phonon processes in Mn$_{12}$, even in the thermally activated regime. Indeed, previous studies on the title compound have focused on acoustic phonons as the major contributors to ${\mathcal H}_{s-ph}$ [@Teemu]. Thus, if these far-infrared phonons are connected with the Mn$_{12}$ relaxation processes in any way, virtual states would likely be involved. The energy scale of the 284, 306, and 409 cm$^{-1}$ phonons may be more relevant to high field transport or specific heat measurements. Conclusion ========== We have measured the far-infrared response of Mn$_{12}$ as a function of temperature and magnetic field. The features in this region involve vibrations from all of the groups of atoms (core, crown, and ligands) in the molecule. Three structures related to intramolecular vibrations near 284, 306, and 409 cm$^{-1}$ are observed to change in applied magnetic field, suggesting that they are coupled to the spin system. 
Of these three, the feature centered at 284 cm$^{-1}$ displays the strongest coupling. Based on the similarity between the temperature and field dependent absorption difference data, we speculate that this mode involves Mn(3) motion on the crown. The data reported here may be helpful in understanding the role of vibrations in the theoretical models of magnetization relaxation in Mn$_{12}$ and related systems. $^*$ Current address: Department of Chemistry, University of Tennessee, Knoxville, TN 37996. Acknowledgements ================ Funding from the National Science Foundation (DMR-9623221) to support work at SUNY-Binghamton is gratefully acknowledged. The work at Florida State University was also partially funded by the NSF. A portion of the measurements were performed at the NHMFL, which is supported by NSF Cooperative Agreement No. DMR-9527035 and by the State of Florida. We thank Z.T. Zhu for technical assistance. G. Christou, D. Gatteschi, D.N. Hendrickson, and R. Sessoli, [*MRS Bulletin*]{} [**25**]{}, 66 (2000). T. Lis, [*Acta Crystallogr.*]{} [**36**]{}, 2042 (1980). P. Langan, R.A. Robinson, P.J. Brown, D.N. Argyriou, D.N. Hendrickson, and S.M.J. Aubin, [*Acta. Cryst.*]{}, submitted. R.A. Robinson, P.J. Brown, D.N. Argyriou, D.N. Hendrickson, and S.M.J. Aubin, [*J. Phys.: Condens. Matter*]{} [**12**]{}, 2805 (2000). T. Pohjola and H. Schoeller, [*Phys. Rev. B*]{} [**62**]{}, 15026 (2000). M.N. Leuenberger and D. Loss, [*Phys. Rev. B.*]{} [**61**]{}, 1286 (2000). P. Politi, A. Rettori, F. Hartmann-Boutron, and J. Villain, [*Phys. Rev. Lett.*]{} [**75**]{}, 537 (1995). A. Fort, A. Rettori, J. Villain, D. Gatteschi, and R. Sessoli, Phys. Rev. Lett., [**80**]{}, 612 (1998). J.R. Friedman, M.P. Sarachick, J. Tejada, and R. Ziolo, [*Phys. Rev. Lett.*]{} [**76**]{}, 3830 (1996). B. Barbara, W. Wernsdorfer, L.C. Sampaio, J.G. Park, C. Paulsen, M.A. Novak, R. Ferré, D. Mailly, R. Sessoli, A. Caneschi, K. Hasselbach, A. Benoit, and L. Thomas, [*J. Magn. Magn. 
Mater.*]{} [**140-144**]{}, 1825 (1994). R. Sessoli, D. Gatteschi, A. Caneschi, and M.A. Novak, [*Nature*]{} [**365**]{}, 141 (1993). L. Thomas, F. Lionti, R. Ballou, D. Gatteschi, R. Sessoli, and B. Barbara, [*Nature*]{} [**383**]{}, 145 (1996). J.R. Friedman, M.P. Sarachik, J. Tejada, J. Maciejewski, and R. Ziolo, [*J. Appl. Phys.*]{} [**79**]{}, 6031 (1996). J.R. Friedman, M.P. Sarachik, J.M. Hernandez, X.X. Zhang, J. Tejada, E. Molins, and R. Ziolo, [*J. Appl. Phys.*]{} [**81**]{}, 3978 (1997). E. del Barco, J.M. Hernandez, M. Sales, J. Tejada, H. Rakoto, J.M. Broto, and E.M. Chudnovsky, [*Phys. Rev. B*]{} [**60**]{}, 11898 (1999). S. Hill, J.A.A.J. Perenboom, N.S. Dalal, T. Hathaway, T. Stalcup, and J.S. Brooks, [*Phys. Rev. Lett.*]{} [**80**]{}, 2453 (1998). A.L. Barra, D. Gatteschi, and R. Sessoli, [*Phys. Rev. B.*]{} [**56**]{}, 8192 (1997). I. Mirebeau, M. Hennion, H. Casalta, H. Andres, H.U. Güdel, A.V. Irodova, and A. Caneschi, [*Phys. Rev. Lett.*]{} [**83**]{}, 628 (1999). A.A. Mukhin, V.D. Travkin, A.K. Zvezdin, S.P. Lebedev, A. Caneschi, and D. Gatteschi, [*Europhysics Letters*]{} [**44**]{}, 778 (1998). F. Fominaya, J. Villian, P. Gandit, J. Chaussy, and A. Caneschi, [*Phys. Rev. Lett.*]{} [**79**]{}, 1126 (1997). M. Evangelisti, J. Bartolomé, and F. Luis, [*Solid State Communications*]{} [**112**]{}, 687 (1999). W. Baltensperger and J.S. Helman, [*Helvetica Physica Acta*]{} [**41**]{}, 668 (1968). E. Granado, A. García, J.A. Sanjurjo, C. Rettori, I. Torriani, F. Prado, R.D. Sánchez, A. Caneiro, and S.B. Oseroff, [*Phys. Rev. B.*]{} [**60**]{}, 11879 (1999). V.B. Podobedov, A. Weber, D.B. Romero, J.P. Rice, and H.D. Drew, [*Phys. Rev. B.*]{} [**58**]{}, 43 (1998). E.Y. Sherman, M. Fischer, P. Lemmens, P.H.M. van Loosdrecht, G. Güntherodt, [*Europhysics Letters*]{} [**48**]{}, 648 (1999). A.B. Kuz’menko, D. van der Marel, P.J.M. van Bentrum, E.A. Tischenko, C. Presura, and A.A. Bush, [*Physica B*]{} [**284-288**]{}, 1396 (2000). A.B. Kuz’menko, D. 
van der Marel, P.J.M. van Bentum, E.A. Tishchenko, C. Presura, and A.A. Bush, Phys. Rev. B., [**63**]{}, 94303 (2001). Note that this process results in an isotropic sample. H.K. Ng and Y.J. Wang, “Physical Phenomena at High Magnetic Fields II", Z. Fisk, L. Gor’kov, D. Meltzer, and R. Schrieffer, eds., World Scientific, Singapore, 729 (1995). The field dependence was measured in the following order: 0, 9, 13, 15, 17, 16, 14, 11, 8.5, 6.5, 3.5, 0, 3, 6, 7.5, and 0 T. No hysteresis was observed. Standard deviation from the mean was used to quantify the features in the absorbance difference spectra rather than integrated intensity due to the complicated line shapes of some of the features. This method was also used to estimate the intrinsic noise level by analysis of nearby spectral regions. Thus, for consistency, standard deviation from the mean was used for all treatments of the difference spectra. M.S. Dresselhaus, G. Dresselhaus, and P.C. Eklund, “Science of Fullerenes and Carbon Nanotubes", (Academic Press, New York, 1996). M. Hennion, L. Pardi, I. Mirebeau, E. Suard, R. Sessoli, and A. Caneschi, [*Phys. Rev. B*]{} [**56**]{}, 8819 (1997). This temperature range is relevant to the energy scale of the magnetic measurements. Molecular dynamics simulations were performed using the MMFF method of the Titan program by Schrodinger, Inc.. “The Stadtler Standard Spectra, Inorganics and Related Compounds, IR Grating Spectra”, (Stadtler Research Laboratories, Philadelphia, 1972). The method for determining the standard deviation from the mean is described in the Experimental section. The field dependences of these three features were confirmed by repeating the measurements using different (yet valid) scan velocities and beamsplitters. L.C. Brunel, G. Landwehr, A. Bussmann-Holder, H. Bilz, M. Balkanski, M. Massot, and M.K. Ziolkiewicz, [*Journal De Physique*]{} [**42**]{}, 412 (1981).
--- abstract: 'New streams of data enable us to associate physical objects with rich multi-dimensional data on the urban environment. This study demonstrates how open data integration can contribute to deeper insights into urban ecology. We analyze street trees in New York City (NYC) with cross-domain data integration methods by combining crowd-sourced tree census data - which includes geolocation, species, size, and condition of each street tree - with pollen activity and allergen severity, neighborhood demographics, and spatial-temporal data on tree condition from NYC 311 complaints. We further integrate historical data on neighborhood asthma hospitalization rates by Zip Code and in-situ air quality monitoring data (PM 2.5) to investigate how street trees impact local air quality and the prevalence of respiratory illnesses. The results indicate that although the number of trees contributes to better air quality, species with severe allergens may increase local asthma hospitalization rates in vulnerable populations.' author: - | Yuan Lai\ \ \ Constantine E. Kontokosta, PhD\ \ \ bibliography: - 'main.bib' nocite: '[@*]' subtitle: 'A Data-Driven Approach to Environmental Justice' title: Measuring the Impact of Urban Street Trees on Air Quality and Respiratory Illness --- Introduction ============ Environmental justice is the fair treatment and meaningful participation of all people, regardless of their ethnic or socio-economic background, in the formulation of public policy and environmental regulations [@epa]. As increasing population density, high living costs, and climate change have created significant urban challenges, underprivileged communities often face more severe quality-of-life conditions with fewer resources and limited access to public services. 
In most global cities, including New York City (NYC), there are increasing concerns about air quality and respiratory illness among low-income neighborhoods, immigrants, and ethnic-minority groups due to ambient air pollution, housing conditions, or a lack of awareness of, or access to, basic health care [@Columbia]. The multidimensional environmental, demographic, social, and operational factors involved make environmental justice a complex issue requiring collaborative efforts from multiple stakeholders. A large number of urban infrastructure objects have been digitized for agency operations and civic engagement at the community, district, and city level. Publicly-available urban data platforms are an important part of citizen science, information democratization, and analytics-supported operational decisions. However, it is challenging to translate individual datasets into meaningful information, due to the absence of context, real-time situational factors, or social sentiment. Administrative data are typically generated in silos within a respective agency, and structured for specific domains (transportation, environment, land use, etc.) and uses without considering future data integration opportunities. The absence of readily-scalable approaches for integrating and localizing urban open data further exacerbates the digital divide that is particularly pronounced in low-income communities. Thus, methods to transform urban data into local insights for community decision-making are an urgent need in the growing field of civic analytics [@kontokosta2016quantified]. Urban infrastructure and public facilities, such as street trees, light poles, parking meters, or bicycle racks, are physical objects located at fixed locations. Digitally, such objects are often represented as points with a unique ID, status, and geo-location, collected by city agencies or volunteers. 
Street trees are critical ecological infrastructure for cities, given their role in mitigating climate change, positive impacts on promoting active living, and aesthetic contributions to property values. In NYC, there are 652,169 documented street trees contributing an estimated total annual benefit of \$122 million [@treefacts]. Growing concerns around urban climate and quality-of-life necessitate multi-disciplinary research on how we plant and manage street trees as part of urban green infrastructure [@locke2010prioritizing]. It is still largely unknown exactly how urban forestry impacts local air quality and public health as a function of tree canopy density and species. Although street trees have been shown to contribute to a lower prevalence of early childhood asthma, certain species are considered a potential source of allergens that exacerbate atopic asthma [@lovasi2008children]. A previous study of multiple cities in Canada finds tree pollen to be a significant cause of clinical visits for asthma and allergic sensitization [@dales2008tree]. Another study in NYC demonstrates a significant correlation between the tree pollen peak season and allergy medication sales by borough [@sheffield2011association]. Spatial patterns of street trees further reveal issues of environmental justice and health disparities by neighborhood in NYC. A previous study of asthma hazards in Greenpoint/Williamsburg, a neighborhood in Brooklyn, NY, shows that low-income communities and minority groups face higher exposure to air pollution, partly resulting from lower tree canopy coverage [@corburn2002combining]. These studies reinforce the importance of studying street trees in urban environments, but the findings are constrained by data and methodological limitations, as most rely on a few monitoring sites or district-level statistics. In this study, we present a data-driven approach to measure the localized environmental health impact of street trees in NYC. 
We begin by integrating multiple datasets and conducting a spatial join to the point locations of over 600,000 street trees. We then analyze the correlation between the number, size, and species of trees with air quality (particulate matter) data from the NYC Department of Environmental Protection and asthma hospitalization rates at the neighborhood level from the NYS Department of Health. We conclude with a discussion of our findings and their applications, limitations, and future work.

Methodology
===========

The increasing availability of municipal open data creates a rich resource for data-supported urban operations, but requires proper data cleaning and computing techniques for actionable results. We contextualize street trees through a cross-domain data integration approach. Each tree has time-invariant features from the tree census, including its location and species. By combining domain knowledge in plant taxonomy, we can enrich tree census data with more meaningful information, such as pollen offenders by species, blooming period by season, and toxic species. Beyond fixed attributes, we further infer trees' condition based on surrounding events. We collect extensive data on street trees, local 311 complaints, population, land use, air quality, and respiratory illness (Table \[table:data\]). We associate each tree with local 311 complaints to understand public observation and reporting of tree health. We develop a scalable model providing insights both at the community level and for a specific location. Finally, we integrate air quality and asthma hospitalization rate data to measure the environmental and public health impacts of urban street trees at the neighborhood level.

  ------------------------- ---------------------------- -------------- --------------
  Data                      Source                       Period         Spatial Unit
  Tree Census               Dept. of Parks and Rec.      2015           Geo-point
  Population Census         U.S. Census                  2010           NTA\*
  Complaints on Trees       NYC 311                      2010-present   Geo-point
  Asthma Hospitalization    N.Y. Dept. of Health         2012-2014      Zip Code
  Community Air Survey      NYC Dept. of Health          2015           UHF\*
  Air Survey Monitors       NYC Dept. of Health          2008-2013      Geo-point
  Land Use (PLUTO\*)        NYC Dept. of City Planning   2009-2016      Tax Lot
  ------------------------- ---------------------------- -------------- --------------

\
\*NTA: Neighborhood Tabulation Area;\
\*UHF: United Hospital Funds Neighborhoods;\
\*PLUTO: Primary Land Use Tax Lot Output.\

Data Collection
---------------

In 2015, the Department of Parks and Recreation initiated the third street tree census, which became the largest participatory urban forestry project in U.S. history. From 2015 to 2016, the project engaged more than 2,240 volunteers who mapped 666,134 street trees citywide [@nyctrees]. The final data were published on the NYC open data platform for research and public interest. The NYC tree census dataset (cleaned *n*=652,169) provides each street tree's location (latitude and longitude), species, diameter at breast height (dbh), surrounding sidewalk condition (during survey), Neighborhood Tabulation Area (NTA), and Zip Code. In addition to tree census data, we extract data from NYC 311, a non-emergency municipal service request system, which receives more than 60,000 complaints annually (2010 to present) on dead or damaged trees, overgrown branches, and requests for new trees from local citizens. Each complaint is reported as a geo-located incident with a time-stamp, providing a unique source of near-real-time information on citizens' interaction with trees. To capture local reporting about street trees, we query NYC 311 data by complaint category from 2010 to present (*n*=463,376), including local complaints on dead or damaged trees, and service requests for new trees.
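The 311 extraction step can be sketched as a simple category filter over geo-located records; the field and category names below are hypothetical stand-ins for the actual NYC 311 schema:

```python
# Minimal sketch of filtering 311 records to tree-related, geo-located
# complaints. Field and category names are hypothetical stand-ins for
# the actual NYC 311 schema.
TREE_CATEGORIES = {
    "Dead/Dying Tree", "Damaged Tree",
    "Overgrown Tree/Branches", "New Tree Request",
}

def tree_complaints(records):
    """Keep only geo-located complaints in a tree-related category."""
    return [r for r in records
            if r.get("complaint_type") in TREE_CATEGORIES
            and r.get("latitude") is not None
            and r.get("longitude") is not None]

sample = [
    {"complaint_type": "Damaged Tree", "latitude": 40.68, "longitude": -73.94},
    {"complaint_type": "Noise", "latitude": 40.70, "longitude": -73.99},
    {"complaint_type": "New Tree Request", "latitude": None, "longitude": None},
]
subset = tree_complaints(sample)  # keeps only the first record
```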
We collect asthma hospital discharges at the Zip Code level (*n*=168) and NYC Community Air Survey data by United Hospital Funds (UHF) districts (*n*=43) to investigate how street trees relate to local air quality and respiratory illness rates. In order to gain insights at high spatial resolution, we also utilize air quality monitoring data from 2008-2013 reporting local PM 2.5 concentrations with specific sensor geolocations (*n*=162) and observation times (four seasons per year). We use population data from the U.S. Census reported for each NTA (*n*=195), including total population and population by age group. Since land use and building density may have an impact on air quality, we integrate tax lot data from the Department of City Planning's Primary Land Use Tax Lot Output (PLUTO) database to quantify building density, land use, and building space usage surrounding each air quality sensor location.

Contextualizing with Domain Knowledge
-------------------------------------

Domain knowledge is a key enabler to contextualize urban data for meaningful insights. We associate pollen activity and severity based on tree species to enrich the informational value of tree census data. We focus on pollen due to increasing concerns about seasonal allergies and asthma caused by pollen allergens. We construct a dataset on tree pollen attributes by species through a literature review and online research (Table \[table:pollen\]). We create an index score to measure pollen impact by combining the ratio of vulnerable population (age < 14 or > 60) and the severe allergen ratio as (Eq.
\[eq:score\]): $$\label{eq:score} \begin{split} Pollen\,Impact_{i} = (\frac{\sum Trees\,with\,Severe\,Allergen}{\sum Trees})_{i} \\ \times (\frac{\sum Vulnerable\,Population}{\sum Population})_{i} \end{split}$$

  --------------- ----------------- --------------------------------- ---------------
  Tree Species    Allergic Pollen   Allergen Severity                 Active Season
  String          Binary (1/0)      Categorical (high/moderate/low)   Categorical
  --------------- ----------------- --------------------------------- ---------------

By combining tree census data with domain knowledge in plant taxonomy, the resultant dataset can serve local residents through a mapping dashboard and location-based mobile applications, so the general public can be informed about the prevalence of street trees with active allergens during each season.

Integrating Real-time Situational Information
---------------------------------------------

To further enrich our analysis, we integrate real-time situational information from NYC 311 complaints on trees. Each year there are about 60,000 complaints related to trees, providing dynamic information on how the local population observes and engages with urban ecology during different seasons or extreme weather events. We first query a subset of NYC 311 complaint data (2010-present) by complaint categories related to trees. Each complaint is documented as a geo-point with the caller's location (latitude, longitude), a time-stamp for reporting time, complaint type, and zip code. We then spatially join each complaint with its neighborhood boundary defined by NTA, and associate neighborhood demographic attributes to each complaint. Although most complaints report on specific trees (damaged, dead, or overgrown), they capture the geo-location of the caller instead of a specific tree. To associate such information back to trees, we create a spatial query algorithm to extract surrounding complaints for each tree.
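This buffer-based query can be sketched in pure Python with a haversine distance test, a simplification of the actual pipeline (which generates projected 100-meter buffers and would need a spatial index at the scale of 600,000+ trees):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6_371_000  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def complaints_near_tree(tree, complaints, radius_m=100.0):
    """Complaints within radius_m of a tree's geo-location (brute force)."""
    return [c for c in complaints
            if haversine_m(tree["lat"], tree["lon"], c["lat"], c["lon"]) <= radius_m]

tree = {"lat": 40.7300, "lon": -73.9950}
complaints = [
    {"lat": 40.7301, "lon": -73.9951},  # roughly 14 m away: inside the buffer
    {"lat": 40.7400, "lon": -73.9950},  # roughly 1.1 km away: outside
]
nearby = complaints_near_tree(tree, complaints)
```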
In the Python language environment, we generate a 100-meter radius buffer around each tree's geo-location, and extract 311 complaints on trees that occurred within the buffer from 2010 to present. In this way, we estimate each tree's condition by inferring from local complaints by spatial proximity. We use asthma hospital discharge data by Zip Code to investigate potential correlations between street trees and asthma rates. We use air quality survey data by UHF district to measure how tree density and species may impact local air quality (based on PM 2.5 levels). For in-situ air monitoring data, we use a similar spatial query approach to extract the total number of trees, total number of species, and tree counts by species within a 100-meter buffer of each air quality sensing location. Finally, we use regression models to investigate street trees' impact on local air quality and asthma hospitalization rates.

Findings
========

A comparison of spatial patterns of asthma ED visits by discharge Zip Code, local air quality survey data of PM 2.5 levels, and pollen impact reveals the complex relationship between ambient air quality, respiratory illness, local population, and neighborhood environment (Figure \[fig:pollen\] & \[fig:central\_park\]). The spatial disparity between asthma rates (Figure \[fig:pollen\_pop2\]a) and PM 2.5 levels (Figure \[fig:pollen\_pop2\]b) indicates potential confounding factors in the prevalence of respiratory illness besides air quality, such as local trees and population, as one would expect (Figure \[fig:pollen\_pop2\]c). We run OLS linear regression models to investigate how local street trees may impact air quality and asthma hospitalizations. The results indicate an overall benefit of street trees on local air quality (Figure \[fig:regression\]a).
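For intuition, the OLS fit can be sketched in its simplest one-predictor closed form; the models we actually report are multivariate, with log-transformed counts:

```python
def ols_simple(x, y):
    """Closed-form OLS slope and intercept for the model y = a + b*x.

    A one-predictor sketch; the multivariate fits in the study would
    solve the normal equations over several log-transformed predictors.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Toy data generated exactly by y = 1 + 2*x.
a, b = ols_simple([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])  # a = 1.0, b = 2.0
```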
![Street tree pollen activity mapped by season.[]{data-label="fig:pollen"}](IMG/PollenSeason.png){width=".45\textwidth"}

![Pollen allergen activity by tree species during spring (Central Park).[]{data-label="fig:central_park"}](IMG/central_park.png){width=".45\textwidth"}

![image](IMG/maps.png){width=".8\textwidth"}

![Regression modeling results on street trees, air quality, and asthma rates.[]{data-label="fig:regression"}](IMG/air-asthma.png){width=".4\textwidth"}

  ---------------------------- -------------- -------------
  Model Variable               Coeff.         (Std. Err.)
  $ln$(Total Trees Count)      -0.38\*\*      0.18
  $ln$(American Linden)        -0.46\*\*\*    0.13
  $ln$(Callery Pear)           -0.38\*\*\*    0.08
  $ln$(American Elm)           0.19\*\*       0.08
  $ln$(Japanese Zelkova)       0.31\*\*\*     0.09
  $ln$(Little Leaf Linden)     0.33\*\*\*     0.09
  $ln$(Honey Locust)           0.84\*\*\*     0.14
  Sample size ($N$)            174
  Adjusted $R^2$               .564
  $F$-test                     25.85\*\*\*
  ---------------------------- -------------- -------------

NOTE: Coeff. = coefficient and Std. Err. = standard error. \*\*\* = significant at 99% ($p\leq 0.01$); \*\* = significant at 95% ($p\leq 0.05$); \* = significant at 90% ($p\leq 0.10$). Model variables are defined in Table 3.

Although the initial model indicates no significant correlation ($r^2=0.05$) between the tree allergen ratio and the asthma hospitalization rate (Figure \[fig:regression\]b), we further aggregate street trees by species at the Zip Code level to explore how specific species may impact localized asthma rates. We select each species with a median total number larger than 20 trees per zip code, and run a multivariate linear regression model to measure their impact on local asthma rates. The results reveal a pronounced effect of tree species. Our findings show that the overall density of street trees contributes to a lower number of asthma ER visits, but the impact varies by tree species (Table \[table:regression\]).
Certain species, such as Honey Locust and Little Leaf Linden, have significant positive correlations with local asthma hospitalization rates. We also find similar results when aggregating data by tree genus. We run an OLS linear panel regression model on in-situ air quality sensing data (162 locations, 4 seasons in 5 years) with surrounding trees and building density, while holding seasonality as a fixed effect. The result has a relatively low R-squared value ($r^2$=0.22), indicating unobserved or confounding factors that further explain the relationship.

![NYC 311 complaints on dead trees and requests for new trees by borough.[]{data-label="fig:complaint"}](IMG/complaints_boro.png){width=".45\textwidth"}

Integrating tree data with local complaints reflects how residents interact with street trees. A comparison by complaint category and borough reveals differences in public awareness and community engagement across the City. For instance, although residents from Queens made more complaints about dead trees than those in Brooklyn, they also made fewer requests for new trees (Figure \[fig:complaint\]). This discrepancy may indicate a lack of knowledge about available city services (e.g., the ability to request a new tree) within certain neighborhoods. Local complaint data indicate a regular seasonal pattern of local engagement with trees, possibly due to tree growth, weather events, and outdoor activity intensity. Using our spatial query algorithm, we associate each tree with its surrounding complaints to further investigate spatial, temporal, and typological (tree species) patterns for predictive modeling.

Discussion
==========

Neighborhood health disparities and environmental justice are complex issues involving environmental factors, demographics, housing, transportation, and public services [@locke2010prioritizing]. This study contributes a comprehensive data integration approach for analyzing local environmental health conditions.
By contextualizing tree census data with domain knowledge and local situational information, we build a robust model for evaluating the potential public health benefits of urban street trees. For instance, the New York City Community Health Profiles, a comprehensive neighborhood health report published by the Department of Health and Mental Hygiene in 2015, indicates that Bronx Community District 1 (Mott Haven and Melrose) has the highest child asthma hospitalization rate [@health_profile]. Our approach can further infer local conditions on tree species and surrounding complaints, providing additional context for health data. Of course, urban forestry is just one of many factors that influence local air quality and environmental health. Again, taking Bronx Community District 1 as an example, besides its poor air quality (PM 2.5 levels of 10.0 micrograms per cubic meter, compared with 9.1 in the Bronx and 8.6 citywide), the neighborhood also has one of the highest rates of housing maintenance defects (79%, the 2nd worst condition citywide), which is another cause of respiratory illness [@health_profile]. Thus, further investigations in environmental health require more extensive data integration, including land use, transportation, energy usage, and housing conditions [@jain2014big]. Our analysis is limited by the absence of a comprehensive plant taxonomy database on pollen activity and allergen severity. Also, since the NYC tree census only counts street trees, a large number of trees in parks and open spaces are not captured in the public data [@zandbergen2009methodological]. Thus, one further expansion of this work is to integrate trees in parks and open spaces in the City for a more complete evaluation of the impact of urban trees.

Conclusion
==========

In this exploratory research, we provide a data-driven approach to measure the impact of urban street trees on air quality and respiratory illness.
We illustrate how cross-domain data integration can address complex environmental justice issues by quantifying local environmental, demographic, and socio-economic characteristics. Results indicate that although street trees contribute to better air quality, certain species may be a local source of allergens that can trigger or exacerbate underlying asthma conditions. Spatial disparities between air quality, asthma rates, and tree pollen impact indicate unobserved factors in neighborhood environmental health. Despite the limitations, this study provides a model for creating more meaningful insights from urban data relating to ecology and public health. Localized urban data provide community-based knowledge for residents and encourage public engagement in participatory urban sensing or citizen science projects that can raise awareness of public health, environmental justice, and access to municipal services [@kontokosta2016quantified; @kontokosta2016]. This process requires the collective efforts of city agencies, domain experts, data scientists, and local communities.
---
abstract: 'In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, that allows a relaxation approach commonly used in binary classification to be generalized to multiple classes. In this framework, a relaxation error analysis can be developed avoiding constraints on the considered hypotheses class. Moreover, we show that in this setting it is possible to derive the first provably consistent regularized method with training/tuning complexity which is [*independent*]{} of the number of classes. Tools from convex analysis are introduced that can be used beyond the scope of this paper.'
author:
- | Youssef Mroueh$^{\sharp,\ddagger}$, Tomaso Poggio$^{\sharp}$, Lorenzo Rosasco$^{\sharp,\ddagger}$ Jean-Jacques E. Slotine${\dagger}$\
  *$\sharp$ - CBCL, McGovern Institute, MIT;*$\dagger$ - IIT; *$\dagger$ - ME, BCS, MIT\
  ymroueh, lrosasco,[email protected] [email protected]***
bibliography:
- 'simplex.bib'
title: 'Multiclass Learning with Simplex Coding'
---

Introduction
============

As bigger and more complex datasets become available, multiclass learning is becoming increasingly important in machine learning. While theory and algorithms for solving binary classification problems are well established, the problem of multicategory classification is much less understood. Practical multiclass algorithms often reduce the problem to a collection of binary classification problems. Binary classification algorithms are often based on a [*relaxation approach*]{}: classification is posed as a non-convex minimization problem and hence relaxed to a convex one, defined by suitable convex loss functions.
In this context, results in statistical learning theory quantify the error incurred by relaxation and in particular derive [*comparison inequalities*]{} explicitly relating the excess misclassification risk to the excess expected loss, see for example [@bajomc06; @yaroca07; @RW10; @zhangT] and [@stch08], Chapter 3, for an exhaustive presentation as well as generalizations.\
Generalizing the above approach and results to more than two classes is not straightforward. Over the years, several computational solutions have been proposed (among others, see [@wahba; @dietterich95solving; @Singer; @Weston; @ASS00; @tsochantaridis2005largemargin]). Indeed, most of the above methods can be interpreted as a kind of relaxation. Most proposed methods have complexity which is more than linear in the number of classes, and the simple one-vs-all scheme in practice offers a good alternative both in terms of performance and speed [@Rif]. Much fewer works have focused on deriving theoretical guarantees. Results in this sense have been pioneered by [@zhang04stat; @tewari05consistency], see also [@fisher; @g07; @VWR11]. In these works the error due to relaxation is studied asymptotically and under constraints on the function class to be considered. More quantitative results in terms of comparison inequalities are given in [@chinese] under similar restrictions (see also [@vandeGeer]). Notably, the above results show that seemingly intuitive extensions of binary classification algorithms might lead to methods which are not consistent.
Further, it is interesting to note that these restrictions on the function class, needed to prove the theoretical guarantees, make the computations in the corresponding algorithms more involved and are in fact often ignored in practice.\
In this paper we discuss a novel framework for multiclass learning, defined by a suitable coding/decoding strategy, namely the simplex coding, in which a relaxation error analysis can be developed avoiding constraints on the considered hypotheses class. Moreover, we show that in this framework it is possible to derive the first provably consistent regularized method with training/tuning complexity which is [*independent*]{} of the number of classes. Interestingly, using the simplex coding, we can naturally generalize results, proof techniques and methods from the binary case, which is recovered as a special case of our theory. Due to space restrictions, in this paper we focus on extensions of the least squares and SVM loss functions, but our analysis can be generalized to a large class of simplex loss functions, including extensions of the logistic and exponential loss functions (used in boosting). Tools from convex analysis are developed in the longer version of the paper and can be useful beyond the scope of this paper, in particular in structured prediction. The rest of the paper is organized as follows. In Section \[back\] we discuss the problem statement and background. In Section \[sec:simp\] we discuss the simplex coding framework that we analyze in Section \[sec:theory\]. Algorithmic aspects and numerical experiments are discussed in Section \[sec:algo\] and Section \[sec:exp\], respectively. Proofs and supplementary technical results are given in the longer version of the paper.

Problem Statement and Previous Work {#back}
===================================

Let $(X,Y)$ be two random variables with values in two measurable spaces $\mathcal{X}$ and $\mathcal{Y}=\{1 \dots T \}$, $T\geq 2$.
Denote by $\rho_{\XX}$ the law of $X$ on $\XX$, and by $\rho_j(x)$ the conditional probabilities for $j\in \mathcal{Y}$. The data is a sample $S=(x_i,y_i)_{i=1}^n$, from $n$ identical and independent copies of $(X,Y)$. We can think of $\XX$ as a set of possible inputs and of $\mathcal{Y}$ as a set of labels describing a set of semantic categories/classes the input can belong to. A classification rule is a map $b: \XX \to \mathcal {Y}$, and its error is measured by the misclassification risk $R(b)=\mathbb{P}(b(X) \neq Y)=\mathbb{E}(\ind_{[b(x)\neq y]}(X,Y)).$ The optimal classification rule that minimizes $R$ is the Bayes rule, $b_{\rho}(x)=\argmax_{y \in \YY} \rho_y(x), x \in \XX.$ Computing the Bayes rule by directly minimizing the risk $R$ is not possible since the probability distribution is unknown. In fact, one could think of minimizing the empirical risk (ERM), $R_S(b)=\frac 1 n \sum_{i=1}^n\ind_{[b(x)\neq y]}(x_i,y_i)$, which is an unbiased estimator of $R$, but the corresponding optimization problem is in general not feasible. In binary classification, one of the most common ways to obtain computationally efficient methods is based on a relaxation approach. We recall this approach in the next section and describe its extension to multiclass in the rest of the paper.\
[**Relaxation Approach to Binary Classification.**]{} If $T=2$, we can set $\YY=\pm 1$. Most modern machine learning algorithms for binary classification consider a convex relaxation of the ERM functional $R_S$. More precisely: 1) the indicator function in $R_S$ is replaced by a non-negative loss $V: \mathcal{Y}\times \mathbb{R}\to \mathbb{R^{+}}$ which is convex in the second argument and is sometimes called a [*surrogate*]{} loss; 2) the classification rule $b$ is replaced by a real valued measurable function $f: \XX \to \mathbb{R}$. A classification rule is then obtained by considering the sign of $f$.
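For intuition, the Bayes rule and the empirical misclassification risk introduced above can be sketched on a toy discrete distribution (the conditional probabilities below are illustrative):

```python
def bayes_rule(cond_probs):
    """Bayes rule: pick the class with the largest conditional probability.

    cond_probs: dict mapping class label y -> rho_y(x) at a fixed input x.
    """
    return max(cond_probs, key=cond_probs.get)

def empirical_risk(rule, sample):
    """Empirical misclassification risk R_S(b) = (1/n) * #{i : b(x_i) != y_i}."""
    return sum(1 for x, y in sample if rule(x) != y) / len(sample)

# Toy example: two inputs, T = 3 classes, illustrative conditional probabilities.
rho = {"x1": {1: 0.6, 2: 0.3, 3: 0.1}, "x2": {1: 0.2, 2: 0.2, 3: 0.6}}
b = lambda x: bayes_rule(rho[x])
risk = empirical_risk(b, [("x1", 1), ("x2", 3), ("x2", 1)])  # misclassifies only the last point
```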
It often suffices to consider a special class of loss functions, namely large margin loss functions $V: \mathbb{R}\to \mathbb{R}^{+}$ of the form $V(-yf(x))$. This last expression is suggested by the observation that the misclassification risk, using the labels $\pm1$, can be written as $R(f)=\mathbb{E}(\Theta(-Yf(X))),$ where $\Theta$ is the Heaviside step function. The quantity $m=-yf(x)$, sometimes called the [*margin*]{}, is a natural point-wise measure of the classification error. Among other examples of large margin loss functions (such as the logistic and exponential loss), we recall the hinge loss $V(m)=\hi{1+m}=\max\{1+m,0\}$ used in support vector machines, and the square loss $V(m)=(1+m)^2$ used in regularized least squares (note that $(1-yf(x))^2=(y-f(x))^2$). Using surrogate large margin loss functions it is possible to design effective learning algorithms replacing the empirical risk with regularized empirical risk minimization $$\label{ERMV} \EE^\la_S(f)=\frac{1}{n}\sum_{i=1}^nV(y_i,f(x_i))+\la {\cal R}(f),$$ where $\cal R$ is a suitable regularization functional and $\la$ is the regularization parameter, see Section \[sec:algo\].

Relaxation Error Analysis
-------------------------

[\[sec:relax\]]{} As we replace the misclassification loss with a convex [*surrogate*]{} loss, we are effectively changing the problem: the misclassification risk is replaced by the expected loss, $\mathcal{E}(f)=\mathbb{E} (V(-Yf(X)))$. The expected loss can be seen as a functional on a large space of functions ${\cal F}={\cal F}_{V,\rho}$, which depends on $V$ and $\rho$.
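For intuition, the large margin losses recalled above, viewed as functions of the margin $m=-yf(x)$, can be sketched and checked to upper bound the misclassification loss pointwise (here the convention is that $m \geq 0$ counts as a misclassification):

```python
def zero_one(m):
    """Misclassification loss as a function of the margin m = -y*f(x)."""
    return 1.0 if m >= 0 else 0.0

def hinge(m):
    """Hinge loss |1 + m|_+ used in support vector machines."""
    return max(1.0 + m, 0.0)

def square(m):
    """Square loss (1 + m)^2 used in regularized least squares."""
    return (1.0 + m) ** 2

# Each convex surrogate upper bounds the 0-1 loss pointwise.
margins = [-2.0, -0.5, 0.0, 0.5, 2.0]
ok = all(zero_one(m) <= hinge(m) and zero_one(m) <= square(m) for m in margins)
```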
Its minimizer, denoted by $f_\rho$, replaces the Bayes rule as the target of our algorithm.\
The question arises of the price we pay by considering a relaxation approach: “What is the relationship between $f_\rho$ and $b_\rho$?” More generally, “What approximation do we incur by estimating the expected risk rather than the misclassification risk?” The [*relaxation error*]{} for a given loss function can be quantified by the following two requirements:\
[1) *Fisher Consistency*]{}. A loss function is Fisher consistent if $\text{sign}(f_{\rho}(x))=b_{\rho}(x)$ almost surely (this property is related to the notion of classification calibration [@bajomc06]).\
[2) *Comparison inequalities*]{}. The excess misclassification risk and the excess expected loss are related by a comparison inequality $$R(\text{sign}(f))-R(b_{\rho}) \leq \psi(\mathcal{E}(f)-\mathcal{E}(f_\rho)),$$ for any function $f\in \cal F$, where $\psi=\psi_{V,\rho}$ is a suitable function that depends on $V$, and possibly on the data distribution. In particular, $\psi$ should be such that $\psi(s)\to 0$ as $s\to0$, so that if $f_n$ is a (possibly random) sequence of functions such that $\mathcal{E}(f_n)\to \mathcal{E}(f_{\rho})$ (possibly in probability), then the corresponding sequence of classification rules $c_n=\text{sign}(f_n)$ is Bayes consistent, i.e. $R(c_n)\to R(b_{\rho})$ (possibly in probability). If $\psi$ is explicitly known, then bounds on the excess expected loss yield bounds on the excess misclassification risk.\
The relaxation error in the binary case has been thoroughly studied in [@bajomc06; @RW10]. In particular, Theorem 2 in [@bajomc06] shows that if a large margin surrogate loss is convex, differentiable and decreasing in a neighborhood of $0$, then the loss is Fisher consistent. Moreover, in this case it is possible to give an explicit expression of the function $\psi$. In particular, for the hinge loss the target function is exactly the Bayes rule and $\psi(t)=|t|$.
For least squares, $f_{\rho}(x)=2\rho_1(x)-1$, and $\psi(t)=\sqrt{t}$. The comparison inequality for the square loss can be improved for a suitable class of probability distributions satisfying the so-called Tsybakov noise condition [@T04], $\rho_\XX(\{x \in \XX, |f_{\rho}(x)|\leq s\})\leq B_q s^q, s \in [0,1], q> 0.$ Under this condition the probability of points such that $\rho_y(x)\sim \frac{1}{2}$ decreases polynomially. In this case the comparison inequality for the square loss is given by $\psi(t)=c_{q} t^{\frac{q+1}{q+2}}$, see [@bajomc06; @yaroca07].\
[**Previous Works in Multiclass Classification.**]{} From a practical perspective, over the years, several computational solutions to multiclass learning have been proposed. Among others, we mention for example [@wahba; @dietterich95solving; @Singer; @Weston; @ASS00; @tsochantaridis2005largemargin]. Indeed, most of the above methods can be interpreted as a kind of relaxation of the original multiclass problem. Interestingly, the study in [@Rif] suggests that the simple one-vs-all scheme should be a practical benchmark for multiclass algorithms, as it seems to experimentally achieve performances that are similar to or better than those of more sophisticated methods.\
As we previously mentioned, from a theoretical perspective a general account of a large class of multiclass methods has been given in [@tewari05consistency], building on results in [@bajomc06] and [@zhang04stat]. Notably, these results show that seemingly intuitive extensions of binary classification algorithms might lead to [*inconsistent*]{} methods. These results, see also [@fisher; @VWR11], are developed in a setting where a classification rule is found by applying a suitable prediction/decoding map to a function $f:\XX\to \R^T$, where $f$ is found considering a loss function $V:\YY \times \R^T\to \R^+.$ The considered functions have to satisfy the constraint $\sum_{y\in \YY} f^y(x)=0$, for all $x\in \XX$.
The latter requirement is problematic since it makes the computations in the corresponding algorithms more involved and is in fact often ignored, so that practical algorithms often come with no consistency guarantees. In all the above papers relaxation is studied in terms of Fisher and Bayes consistency, and the explicit form of the function $\psi$ is not given. More quantitative results in terms of an explicit comparison inequality are given in [@chinese] (see also [@vandeGeer]), but also need to impose the “sum to zero” constraint on the considered function class.

A Relaxation Approach to Multicategory Classification {#sec:simp}
=====================================================

In this section we propose a natural extension of the relaxation approach that avoids constraining the class of functions to be considered, and allows us to derive explicit comparison inequalities. See Remark \[presimplex\] for related approaches.

#### Simplex Coding. {#theory}

(Figure \[fig:simplex\]: the simplex coding for $T=3$, drawn as three maximally separated unit code vectors on the circle.)

We start by considering a suitable coding/decoding strategy. A [*coding*]{} map turns a label $y\in \YY$ into a code vector. The corresponding [*decoding*]{} map, given a vector, returns a label in $\cal Y$. Note that this is what we implicitly did while treating binary classification, [*encoding*]{} the label space $\YY=\{1, 2\}$ using the coding $\pm 1$, so that the natural decoding strategy is simply $\text{sign}(f(x))$. The coding/decoding strategy we study is described by the following definition.
\[scode\] The simplex coding is a map $C:\YY \to \R^{T-1}$, $\quad C(y)=\a_y$, where the code vectors ${\AA}=\{c_y~|~y\in {\cal Y}\}\subset \R^{T-1}$ satisfy: 1) $\norT{\a_y}^2=1$, $\forall y\in \YY$; 2) $\scalT{\a_y}{\a_{y'}}=-\frac{1}{T-1}$, for $y\neq y'$ with $y,y' \in \YY$; and 3) $\sum_{y\in \YY} \a_y =0$. The corresponding decoding is the map $D: \R^{T-1}\to \{1, \dots, T\}, \quad\quad D(\alpha)=\argmax_{y\in \YY}\scalT{\alpha}{c_y},$ $\forall \alpha \in \R^{T-1}.$ The simplex coding corresponds to the $T$ most separated vectors on the hypersphere $\mathbb{S}^{T-2}$ in $\mathbb{R}^{T-1}$, that is, the vertices of the simplex (see Figure \[fig:simplex\]). For binary classification it reduces to the $\pm 1$ coding, and the decoding map is equivalent to taking the sign of $f$. The decoding map has a natural geometric interpretation: an input point is mapped to a vector $f(x)$ by a function $f:\XX\to \R^{T-1}$, and hence assigned to the class with the closest code vector (for $y,y'\in \YY$ and $\alpha \in \R^{T-1}$, we have $\nor{\a_y-\alpha}^2\ge \nor{\a_{y'}-\alpha }^2\Leftrightarrow \scal{\a_{y}}{\alpha}\le \scal{\a_{y'}}{\alpha}$).\
[**Relaxation for Multiclass Learning.**]{} We use the simplex coding to propose an extension of the binary classification approach. Following the binary case, the relaxation can be described in two steps: 1. using the simplex coding, the indicator function is upper bounded by a non-negative loss function $V: \mathcal{Y}\times \mathbb{R}^{T-1}\to \mathbb{R}^+$, such that $\ind_{[b(x)\neq y]}(x,y)\le V(y, C(b(x)))$, for all $b:\XX\to \YY$, and $x\in \XX, y\in \YY$; 2. rather than $C\circ b$ we consider functions $f:\XX\to \R^{T-1}$ with values in $\R^{T-1}$, so that $V(y, C(b(x)))\le V(y, f(x))$, for all $b:\XX\to \YY$, $f:\XX\to \R^{T-1}$ and $x\in \XX, y\in \YY$.
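As an aside, one explicit realization of the code vectors of Definition \[scode\] embeds them in the zero-sum hyperplane of $\R^T$ (which is isomorphic to $\R^{T-1}$); a minimal sketch, together with the decoding map $D$:

```python
from math import sqrt

def simplex_code(T):
    """Simplex code vectors c_y = sqrt(T/(T-1)) * (e_y - (1/T)*1), y = 0..T-1.

    Written in R^T but lying in the zero-sum hyperplane (isomorphic to
    R^(T-1)); they have unit norm, pairwise inner product -1/(T-1),
    and sum to zero, as required by the definition.
    """
    s = sqrt(T / (T - 1))
    return [[s * ((1.0 if i == y else 0.0) - 1.0 / T) for i in range(T)]
            for y in range(T)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decode(alpha, codes):
    """Decoding map D(alpha) = argmax_y <alpha, c_y>."""
    return max(range(len(codes)), key=lambda y: dot(alpha, codes[y]))

C = simplex_code(4)
# |c_y|^2 = 1 and <c_y, c_y'> = -1/(T-1) = -1/3 for y != y'.
```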
In the next section we discuss several loss functions satisfying the above conditions and we study in particular the extensions of the least squares and SVM loss functions.\
[**Multiclass Simplex Loss Functions.**]{} Several loss functions for binary classification can be naturally extended to multiple classes using the simplex coding. Due to space restrictions, in this paper we focus on extensions of the least squares and SVM loss functions, but our analysis can be generalized to a large class of simplex loss functions, including extensions of the logistic and exponential (boosting) loss functions. The Simplex Least Squares loss (**S-LS**) is given by $V(y,f(x))=\nor{c_y-f(x)}^2$, and reduces to the usual least squares approach to binary classification for $T=2.$ One natural extension of the SVM’s hinge loss in this setting is the Simplex Half space SVM loss (**SH-SVM**) $V(y,f(x))=\hi{1-\scalT{\a_y}{f(x)}}$. We will see in the following that, while this loss function induces efficient algorithms, in general it is not Fisher consistent unless further constraints are assumed. In turn, this latter constraint would considerably slow down the computations. We then consider a second loss function, the Simplex Cone SVM (**SC-SVM**), related to the hinge loss, defined as $V(y,f(x))=\sum_{y'\neq y}\hi{\frac{1}{T-1}+\scalT{\a_{y'}}{f(x)}}.$ The latter loss function is related to the one considered in the multiclass SVM proposed in [@wahba]. We will see that it is possible to quantify the relaxation error of this loss function without requiring further constraints. Both of the above SVM loss functions reduce to the binary SVM hinge loss if $T=2$. \[presimplex\] The simplex coding has been considered in [@hd07], [@wl10] and [@boosting].
In particular, a kind of SVM loss is considered in [@hd07], where $V(y, f(x))=\sum_{y'\neq y}\hi{\eps-\scal{f(x)}{v_{y'}(y)}}$ and $v_{y'}(y)=\frac{c_y-c_{y'}}{\nor{c_y-c_{y'}}},$ with $\eps=\scal{c_y}{v_{y'}(y)}=\frac{1}{\sqrt{2}}\sqrt{\frac{T}{T-1}}$. More recently, [@wl10] considered the loss function $ V(y, f(x))=\hi{\eps-\nor{c_y-f(x)}}$, and a simplex multi-class boosting loss was introduced in [@boosting], in our notation $V(y,f(x))=\sum_{y'\neq y}e^{-\scalT{\a_{y}-\a_{y'}}{f(x)}}.$ While all these losses introduce a certain notion of margin that makes use of the geometry of the simplex coding, it is not clear how to derive explicit comparison theorems; moreover, the computational complexity of the resulting algorithms scales linearly with the number of classes for the losses considered in [@boosting; @wl10], and as $O((nT)^\gamma),\gamma \in\{2,3\},$ for the losses considered in [@hd07].

![Level sets of the different losses considered for $T= 3$. A classification is correct if an input $(x,y)$ is mapped to a point $f(x)$ that lies in the neighborhood of the vertex $c_y$. The shape of the neighborhood is defined by the loss: it takes the form of a cone supported on a vertex in the case of the SC-SVM, a half space delimited by the hyperplane orthogonal to the vertex in the case of the SH-SVM, and a sphere centered on the vertex in the case of S-LS.](./losses){height="35.00000%"}

Relaxation Error Analysis {#sec:theory}
=========================

If we consider the simplex coding, a function $f$ taking values in $\R^{T-1}$, and the decoding operator $D$, the misclassification risk can also be written as $R(D(f))=\int_{\XX}(1-\rho_{D(f(x))})d\rho_{\XX}(x)$. Then, following a relaxation approach, we replace the misclassification loss by the expected risk induced by one of the loss functions $V$ defined in the previous section. As in the binary case we consider the expected loss $ \mathcal{E}(f)=\int V(y,f(x))d\rho(x,y).
$ Let $L^p(\XX, \rho_\XX)=\{f:\XX \to \mathbb{R}^{T-1}~|~ \nor{f}_\rho^p=\int \nor{f(x)}^p d\rho_\XX(x)<\infty \}$, $p\geq1.$ The following theorem studies the relaxation error for the SH-SVM, SC-SVM, and S-LS loss functions.

For the SH-SVM, SC-SVM, and S-LS loss functions, there exists a $p$ such that $\EE: \LLp \to \R^+$ is convex and continuous. Moreover,

1. The minimizer $f_\rho$ of $\EE$ over ${\cal F}= \{f \in \LLp~|~ f(x) \in K ~a.s.\}$ exists and $D(f_\rho)=b_\rho$.

2. For any $f\in {\cal F}$, $ R(D(f))-R(D(f_{\rho}))\leq C_T( \mathcal{E}(f)-\mathcal{E}(f_{\rho}))^{\alpha}, $ where the expressions of $p,K,f_{\rho},C_T,$ and $\alpha$ are given in Table \[table\].

  Loss     $p$   $K$                  $ f_{\rho}$                     $C_T$                       $\alpha$
  -------- ----- -------------------- ------------------------------- --------------------------- ---------------
  SH-SVM   $1$   $conv({\cal C})$     $\a_{b_{\rho}}$                 $T-1$                       $1$
  SC-SVM   $1$   $\mathbb{R}^{T-1}$   $\a_{b_{\rho}}$                 $T-1$                       $1$
  S-LS     $2$   $\mathbb{R}^{T-1}$   $\sum_{y\in \YY} \rho_y \a_y$   $\sqrt{\frac{2(T-1)}{T}}$   $\frac{1}{2}$

  : $conv(\mathcal{C})$ is the convex hull of the set $\mathcal{C}$ defined in . \[table\]

\[theo:summary\] The proof of this theorem is given in the longer version of the paper.\
The above theorem can be improved for least squares under certain classes of distributions. Toward this end we introduce the following notion of misclassification noise, which generalizes Tsybakov’s noise condition. Fix $q>0$; we say that the distribution $\rho$ satisfies the multiclass noise condition with parameter $B_q$ if $$\label{GenTsy1} \rho_{\mathcal{X}}\left(\left\{x\in \mathcal{X}~|~ 0 \leq \min_{j\neq D(f_{\rho}(x))}\frac{T-1}{T} ( \scalT{\a_{D(f_{\rho}(x))}-\a_j}{f_{\rho}(x)})\leq s\right \}\right)\leq B_q s^q,$$ where $s\in[0,1]$. If a distribution $\rho$ is characterized by a very large $q$, then, for each $x\in \XX$, $f_\rho(x)$ is arbitrarily close to one of the coding vectors. For $T=2$, the above condition reduces to the binary Tsybakov noise condition.
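Part 1 of the theorem can be checked numerically for S-LS: the minimizer $f_\rho=\sum_{y}\rho_y\a_y$ from Table \[table\] decodes to the Bayes rule $b_\rho$. A small sketch (assuming numpy; the code vectors for $T=3$ are hard-coded, and `risk` is our own helper name):

```python
import numpy as np

# Simplex code vectors for T = 3 (unit norm, pairwise inner product -1/2).
C = np.array([[1.0, 0.0],
              [-0.5,  np.sqrt(3.0) / 2.0],
              [-0.5, -np.sqrt(3.0) / 2.0]])
rho = np.array([0.2, 0.5, 0.3])          # conditional class probabilities at x

# Conditional S-LS risk  sum_y rho_y ||c_y - f||^2  and its claimed minimizer.
risk = lambda f: float(rho @ np.sum((C - f) ** 2, axis=1))
f_rho = rho @ C                          # = sum_y rho_y a_y (Table, S-LS row)
```

Here $\scalT{\a_j}{f_\rho}=(3\rho_j-1)/2$ is increasing in $\rho_j$, so decoding $f_\rho$ recovers $\argmax_y \rho_y$, as the theorem asserts.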
Indeed, let $\a_1=1$ and $\a_2=-1$: if $f_{\rho}(x)>0$, then $ \frac{1}{2} (\a_1-\a_2) f_{\rho}(x)= f_{\rho}(x)$, and if $f_{\rho}(x)<0$, then $\frac{1}{2}(\a_2-\a_1)f_{\rho}(x)=-f_{\rho}(x)$. The following result improves the exponent for simplex least squares to $\frac{q+1}{q+2}>\frac{1}{2}$:

For each $f\in L^2(\XX,\rho_{\mathcal{X}})$, if $(\ref{GenTsy1})$ holds, then for S-LS we have the following inequality, $$R(D(f))-R(D(f_{\rho}))\leq K \left(\frac{2(T-1)}{T}(\mathcal{E}(f)-\mathcal{E}(f_{\rho}))\right)^{\frac{q+1}{q+2}}, \label{eq:betterrate}$$ for a constant $K =\left(2 \sqrt{B_q+1}\right)^{\frac{2q+2}{q+2}}$. \[pro:tsyb1\]

Note that the comparison inequalities show a tradeoff between the exponent $\alpha$ and the constant $C(T)$ for the S-LS and SVM losses. While the constant is of order $T$ for the SVM losses, it is of order $1$ for S-LS; on the other hand, the exponent is $1$ for the SVM losses and $\frac{1}{2}$ for S-LS. The latter can be improved towards $1$ for close-to-separable classification problems by virtue of the Tsybakov noise condition. The comparison inequalities given in Theorems \[theo:summary\] and \[pro:tsyb1\] can be used to derive generalization bounds on the excess misclassification risk. For least squares, min-max sharp bounds for vector valued regression are easy to derive. Standard techniques for deriving sample complexity bounds in binary classification, extended to multiclass SVM losses in [@g07], could be adapted to our setting. The obtained bounds are not known to be tight; better bounds, akin to those in [@stch08], will be the subject of future work.

Computational Aspects and Regularization Algorithms {#sec:algo}
===================================================

In this section we discuss some computational implications of the framework we presented.
[**Regularized Kernel Methods.**]{} We consider regularized methods of the form , induced by simplex loss functions, where the hypothesis space is a vector valued reproducing kernel Hilbert space (VV-RKHS) and the regularizer is the corresponding norm. See Appendix D.$2$ for a brief introduction to VV-RKHSs.\
In the following, we consider a class of kernels such that the corresponding RKHS $\hh$ is given by the completion of the span $\{f(x)=\sum_{j=1}^N \Gamma(x_j,x)\c_j,~\c_j\in \R^{T-1},~x_j \in \XX,~\forall j=1, \dots, N\}$, where we note that the coefficients are vectors in $\R^{T-1}$. While other choices are possible, this is the kernel most directly related to a one-vs-all approach. We will discuss in particular the case where the kernel is induced by a finite dimensional feature map, $k(x,x')=\scal{\Phi(x)}{\Phi(x')}, \quad \text{where}\quad \Phi:\XX\to \R^p$, and $\scal{\cdot}{\cdot}$ is the inner product in $\R^p$. In this case we can write each function in $\hh$ as $f(x)=W\Phi(x)$, where $W\in \R^{(T-1)\times p}$.\
It is known [@micpon05; @capdev05] that the representer theorem [@wahba70] can be easily extended to the vector valued setting, so that the minimizer of a simplex version of Tikhonov regularization is given by $f_S^\la(x)=\sum_{j=1}^n k(x,x_j)\c_j$, $ \c_j \in \mathbb{R}^{T-1},$ for all $x\in \XX$, where the explicit expression of the coefficients depends on the considered loss function. We use the following notation: $K\in \mathbb{R}^{n \times n},K_{ij}= k(x_i,x_j), \forall i,j \in \{1 \dots n\}$, and $\A \in \mathbb{R}^{n\times (T-1)}, \A=(\c_1,...,\c_n)^T.$\
[**Simplex Regularized Least Squares (S-RLS).**]{} S-RLS is obtained by considering the simplex least squares loss in the Tikhonov functional.
It is easy to see [@Rif] that in this case the coefficients must satisfy $(K+\lambda n I)\A=\Y$, or $(\Xn^T\Xn+\la n I) W^{\top} =\Xn^T\Y$ in the linear case, where $\Xn \in \mathbb{R}^{n\times p} , \Xn=(\Phi(x_1),...,\Phi(x_n))^{\top}$ and $\Y \in \mathbb{R}^{n\times (T-1)}, \Y=(\a_{y_1},...,\a_{y_n})^{\top}$.\
Interestingly, the classical results from [@wahba90] can be extended to show that the value $f_{S_i}(x_i)$, obtained by computing the solution $f_{S_i}$ with the $i$-th point removed from the training set (the leave-one-out solution), can be computed in closed form. Let $f_{loo}^{\lambda}\in \mathbb{R}^{n\times (T-1)}, f_{loo}^{\lambda}=(f_{S_1}^{\lambda}(x_1),\dots,f_{S_n}^{\lambda}(x_n))$. Let $\mathcal{K}(\lambda)=(K+\lambda n I)^{-1}$ and $C(\lambda)=\mathcal{K}(\lambda)\Y$. Define $M(\lambda)\in \mathbb{R}^{n \times (T-1)}$ such that $M(\lambda)_{ij}=1/\mathcal{K}(\lambda)_{ii}$, $\forall~j=1,\dots, T-1.$ One can show, similarly to [@Rif], that $f_{loo}^{\lambda}=\Y-C(\lambda)\odot M(\lambda)$, where $\odot$ is the Hadamard product.
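The closed form above can be sketched and sanity-checked in a few lines (assuming numpy; `s_rls_loo` is our own name, and an explicit matrix inverse is used for clarity rather than efficiency):

```python
import numpy as np

def s_rls_loo(K, Y, lam):
    """Leave-one-out predictions f_loo = Y - C(lam) (Hadamard) M(lam) for S-RLS.
    K: (n, n) kernel matrix, Y: (n, T-1) matrix of simplex-coded labels."""
    n = K.shape[0]
    Kcal = np.linalg.inv(K + lam * n * np.eye(n))   # \mathcal{K}(lambda)
    Cmat = Kcal @ Y                                 # C(lambda)
    M = 1.0 / np.diag(Kcal)[:, None]                # M(lambda), same per column
    return Y - Cmat * M                             # Hadamard product
```

Row $i$ of the result agrees with refitting S-RLS on the remaining $n-1$ points (keeping the $\lambda n$ multiplier fixed) and predicting at $x_i$, which is what makes the precomputed eigendecomposition shortcut possible.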
Then, the leave-one-out error $\frac 1 n \sum_{i=1}^n \ind_{[D(f_{S_i}(x_i))\neq y_i]},$ can be minimized at essentially no extra cost by precomputing the eigendecomposition of $K$ (or $\Xn^T\Xn$).\
[**Simplex Cone Support Vector Machine (SC-SVM).**]{} Using standard reasoning it is easy to show (see Appendix C.$2$) that, for the SC-SVM, the coefficients in the representer theorem are given by $\c_i=-\sum_{y\neq y_i}\alpha^{y}_i \a_y, \quad i=1, \dots, n,$ where $\alpha_i=(\alpha^y_i)_{y\in \YY} \in \R^T, i=1, \dots, n,$ solve the quadratic programming (QP) problem $$\begin{aligned} \label{SCSVMQP} &&\max_{\alpha_1, \dots, \alpha_n \in \R^T } \left\{ -\frac{1}{2}\sum_{y,y',i,j}\alpha^y_i K_{ij} G_{yy'} \alpha ^{y'}_j+\frac{1}{T-1} \sum_{i=1}^n \sum_{y=1}^T\alpha^y_i\right\}\\ && \text{subject to}\quad 0\leq \alpha_i^y \leq C_0\delta_{y, y_i}, ~\forall~i=1, \dots, n, y\in \YY\nonumber\end{aligned}$$ where $G_{y,y'}=\scalT{\a_y}{\a_{y'}},~\forall y ,y' \in \mathcal{Y}$, $C_0=\frac{1}{2n\la}$, and $\delta_{i,j}$ is the Kronecker delta.\
[**Simplex Halfspaces Support Vector Machine (SH-SVM).**]{} A similar, yet more complicated, procedure can be derived for the SH-SVM. Here, we omit this derivation and observe instead that if we neglect the convex hull constraint from Theorem \[theo:summary\], requiring $f(x)\in \text{co}(\AA)$ for almost all $x\in \XX$, then the SH-SVM has an especially simple formulation, at the price of losing consistency guarantees.
In fact, in this case the coefficients are given by $\c_i= \alpha_i \a_{y_i}, \quad i=1, \dots, n,$ where the $\alpha_i\in \mathbb{R}$, $ i=1, \dots, n$, solve the quadratic programming (QP) problem $$\begin{aligned} &&\max_{\alpha_1, \dots, \alpha_n \in \R} -\frac{1}{2}\sum_{i,j} \alpha_i K_{ij}G_{y_iy_j}\alpha_j +\sum_{i=1}^n \alpha_i\\ && \text{subject to}\quad0\leq \alpha_i \leq C_0, ~\forall~ i =1 \dots n,\end{aligned}$$ where $C_0=\frac{1}{2n\la}$. The latter formulation can be trained at the same complexity as the binary SVM (worst case $O(n^3)$) but lacks consistency.\
[**Online/Incremental Optimization.**]{} The regularized estimators induced by the simplex loss functions can be computed by means of online/incremental first order (sub)gradient methods. Indeed, when considering finite dimensional feature maps, these strategies offer computationally feasible solutions to train estimators on large datasets where neither a $p$ by $p$ nor an $n$ by $n$ matrix fits in memory. Following [@pegasos] we can alternate a step of stochastic descent on a data point, $W_{\text{tmp}}= (1-\eta_i \lambda)W_i-\eta_i \partial(V(y_i, f_{W_i}(x_i)))$, and a projection on the Frobenius ball, $W_i=\min(1, \frac{1}{\sqrt{\lambda}||W_{\text{tmp}}||_{F}})W_{\text{tmp}}$ (see Algorithm C.$5$ for details). The algorithm depends on the loss function used through the computation of the (point-wise) subgradient $\partial(V)$. The latter can be easily computed for all the loss functions previously discussed. For the S-LS loss we have $ \partial(V(y_i, f_{W}(x_i)))=2 (Wx_{i}-\a_{y_i}) x_{i}^{\top},$ while for the SC-SVM loss we have $ \partial(V(y_i, f_{W}(x_i)))= (\sum_{k\in I_i} c_k)x_i^{\top}, $ where $I_i=\{y\neq y_i ~|~ \scalT{c_y}{Wx_i}>-\frac{1}{T-1}\}$. For the SH-SVM loss we have $ \partial(V(y_i, f_{W}(x_i)))= - c_{y_i}x_i^{\top}$ if $\scalT{c_{y_i}}{Wx_i}<1$, and $0$ otherwise.

Comparison of Computational Complexity
--------------------------------------

The cost of solving S-RLS for fixed $\lambda$ is in the worst case $O(n^3)$ (for example via Cholesky decomposition). If we are interested in computing the regularization path for $N$ regularization parameter values, then, as noted in [@Rif], it might be convenient to perform an eigendecomposition of the kernel matrix rather than solving the systems $N$ times. For explicit feature maps the cost is $O(np^2)$, so that the cost of computing the regularization path for the simplex RLS algorithm is $O(\min(n^3,np^2))$ and hence [*independent*]{} of $T$. One can contrast this complexity with that of a naïve one-versus-all (OVA) approach, which would lead to an $O(Nn^3T)$ complexity. Simplex SVMs can be solved using solvers available for binary SVMs, which are considered to have complexity $O(n^\gamma)$ with $\gamma \in \{2,3\}$ (in practice the complexity scales with the number of support vectors). For the SC-SVM, though, we have $nT$ rather than $n$ unknowns and the complexity is $O((nT)^{\gamma})$. The SH-SVM in which we omit the constraint can be trained at the same complexity as the binary SVM (worst case $O(n^3)$) but lacks consistency. Note that, unlike for S-RLS, there is no straightforward way to compute the regularization path and the leave-one-out error for any of the above SVMs. The online algorithms induced by the different simplex loss functions are essentially the same; in particular, each iteration depends linearly on the number of classes.

Numerical Results
=================

[\[sec:exp\]]{} We conduct several experiments to evaluate the performance of our batch and online algorithms on 5 UCI datasets, listed in Table \[datas\], as well as on Caltech101 and Pubfig83. We compare the performance of our algorithms to one-versus-all SVM (libsvm), as well as to the simplex-based boosting of [@boosting].
For the UCI datasets we use the raw features; on Caltech101 we use hierarchical features[^1], and on Pubfig83 we use the feature maps from [@PintoEtAl2011]. In all cases the parameter selection is based either on a hold-out (ho) error $(80 \% \text{ training }- 20\% \text{ validation})$ or on the leave-one-out error (loo). For the model selection of $\lambda$ in S-LS, $100$ values are chosen in the range $[\lambda_{min},\lambda_{max}]$ (where $\lambda_{min}$ and $\lambda_{max}$ correspond to the smallest and largest eigenvalues of $K$). In the case of a Gaussian kernel (rbf) we use a heuristic that sets the width of the Gaussian $\sigma$ to the 25-th percentile of pairwise distances between distinct points in the training set. In Table \[datas\] we collect the resulting classification accuracies:

\[datas\]

                                 Landsat           Optdigit         Pendigit         Letter           Isolet           Ctech            Pubfig83
  ------------------------------ ----------------- ---------------- ---------------- ---------------- ---------------- ---------------- ----------------
  SC-SVM                         $65.15 \%$        $89.57 \%$       $81.62\%$        $52.82\%$        $88.58\%$        $63.33\%$        $84.70\%$
  SH-SVM                         $75.43 \%$        $85.58\%$        $72.54\%$        $38.40\%$        $77.65\%$        $45\%$           $49.76\%$
  S-LS                           $63.62 \%$        $91.68\%$        $81.39\%$        $54.29\%$        $92.62\%$        $58.39\%$        $83.61\%$
  S-LS                           $65.88 \%$        $91.90\%$        $80.69\%$        $54.96\%$        $92.55\%$        $66.35\%$        $86.63\%$
  S-LS rbf                       $\bf{90.15 \%}$   $\bf{97.09\%}$   $\bf{98.17}\%$   $\bf{96.48\%}$   $\bf{97.05\%}$   $\bf{69.38\%}$   $\bf{86.75\%}$
  SVM                            $72.81 \%$        $92.13\%$        $86.93\%$        $62.78\%$        $90.59\%$        $70.13\%$        $85.97\%$
  SVM rbf                        $95.33 \%$        $98.07\%$        $98.88\%$        $97.12\%$        $96.99\%$        $51.77\%$        $85.60\%$
  Simplex boosting [@boosting]   $ 86.65\%$        $92.82\%$        $92.94\%$        $59.65\%$        $91.02\%$        $-$              $-$

  : Accuracies of our algorithms on several datasets.

As suggested by the theory, the consistent methods SC-SVM and S-LS have a clear advantage over the SH-SVM (where we omitted the convex hull constraint). Batch methods are overall superior to online methods, with online SC-SVM achieving the best results.
More generally, we see that S-LS rbf has the best performance among the simplex methods, including the simplex boosting of [@boosting]. When compared to one-versus-all SVM rbf, we see that S-LS rbf achieves essentially the same performance.

[^1]: The data set will be made available upon acceptance.
**Streamlined Variational Inference for** **Higher Level Group-Specific Curve Models**

By M. Menictas$\null^1$, T.H. Nolan$\null^1$, D.G. Simpson$\null^2$ and M.P. Wand$\null^1$

*University of Technology Sydney$\null^1$ and University of Illinois$\null^2$*

11th March, 2019

**Abstract** A two-level group-specific curve model is such that the mean response of each member of a group is a separate smooth function of a predictor of interest. The three-level extension is such that one grouping variable is nested within another one, and higher level extensions are analogous. Streamlined variational inference for higher level group-specific curve models is a challenging problem. We confront it by systematically working through two-level and then three-level cases and making use of the higher level sparse matrix infrastructure laid down in Nolan and Wand (2018). A motivation is analysis of data from ultrasound technology for which three-level group-specific curve models are appropriate. Whilst extension to the number of levels exceeding three is not covered explicitly, the pattern established by our systematic approach sheds light on what is required for even higher level group-specific curve models.

*Keywords:* longitudinal data analysis, multilevel models, panel data, mean field variational Bayes.

Introduction {#sec:intro}
============

We provide explicit algorithms for fitting and approximate Bayesian inference for multilevel models involving, potentially, thousands of noisy curves. The algorithms include covariance parameter estimation and allow for pointwise credible intervals around the fitted curves. Contrast function fitting and inference is also supported by our approach. Both two-level and three-level situations are covered, and a template for even higher level situations is laid down. Models and methodology for statistical analyses of grouped data for which the basic unit is a noisy curve continues to be an important area of research.
A driving force is rapid technological change which is resulting in the generation of curve-type data at fine resolution levels. Examples of such technology include accelerometers (e.g. Goldsmith [*et al.*]{}, 2015), personal digital assistants (e.g. Trail [*et al.*]{}, 2014) and quantitative ultrasound (e.g. Wirtzfeld [*et al.*]{}, 2015). In some applications curve-type data have higher levels of grouping, with groups at one level nested inside other groups. Our focus here is streamlined variational inference for such circumstances. Some motivating data is shown in Figure \[fig:MNSWintro\] from an experiment involving quantitative ultrasound technology. Each curve corresponds to a logarithmically transformed backscatter coefficient over a fine grid of frequency values for tumors in laboratory mice, with exactly one tumor per mouse. The backscatter/frequency curves are grouped according to one of 5 slices of the same tumor, corresponding to probe locations. The slices are grouped according to being from one of 10 tumors. We refer to such data as three-level data with frequency measurements at level 1, slices being the level 2 groups and tumors constituting the level 3 groups. The gist of this article is efficient and flexible variational fitting and inference for such data that scales well to much larger multilevel data sets. Indeed, our algorithms are linear in the number of groups at both level 2 and level 3. Simulation study results given later in this article show that curve-type data with thousands of groups can be analyzed quickly using our new methodology. Depending on sample sizes and implementation language, fitting times range from a few seconds to a few minutes. In contrast, naïve implementations become infeasible when the number of groups are in the several hundreds due to storage and computational demands. We work with a variant of group-specific curve models that go back at least to Donnelly, Laird and Ware (1995).
Other contributions of this type include Brumback and Rice (1998), Verbyla *et al.* (1999), Wang (1998) and Zhang *et al.* (1998). The specific formulation that we use is that given by Durban *et al.* (2005) which involves an embedding within the class of linear mixed models (e.g. Robinson, 1991) with low-rank smoothing splines used for flexible function modelling and fitting. Even though approximate Bayesian variational inference is our overarching goal, we also provide an important parallelism involving classical frequentist inference. Contemporary mixed model software such as `nlme()` (Pinheiro *et al.*, 2018) and `lme4()` (Bates *et al.*, 2015) in the R language provide streamlined algorithms for obtaining the best linear unbiased predictions of fixed and random effects in multilevel mixed models, with details given in, for example, Pinheiro and Bates (2000). However, the sub-blocks of the covariance matrices required for construction of pointwise confidence interval bands around the estimated curves are *not* provided by such software. In the variational Bayesian analog, these sub-blocks are required for covariance parameter fitting and inference which, in turn, are needed for curve estimation. A significant contribution of this article is streamlined computation of both the best linear unbiased predictors and the corresponding covariance sub-blocks. Similar mathematical results lead to the mean field variational Bayesian inference equivalent. We present explicit ready-to-code algorithms for both two-level and three-level group-specific curve models. Extensions to higher level models could be derived using the blueprint that we establish here. Nevertheless, the algebraic overhead is increasingly burdensome with each increment in the number of levels. It is prudent to treat each multilevel case separately and here we already require several pages to cover two-level and three-level group-specific curve models.
To our knowledge, this is the first article to provide streamlined algorithms for fitting three-level group-specific curve models. Another important aspect of our group-specific curve fitting algorithms is the fact that they make use of the  and  algorithms developed for ordinary linear mixed models in Nolan *et al.* (2018). This realization means that the algorithms listed in Sections \[sec:twoLevMods\] and \[sec:threeLevMods\] are more concise and code-efficient: there is no need to repeat the implementation of these two fundamental algorithms for stable QR-based solving of higher level sparse linear systems. Sections \[sec:Solve2Lev\]–\[sec:Solve3Lev\] of the web-supplement provide details on the  and  algorithms. Section \[sec:twoLevMods\] deals with the two-level case and the three-level case is covered in Section \[sec:threeLevMods\]. In Section \[sec:accAndSpeed\] we provide some assessments concerning the accuracy and speed of the new variational inference algorithms.

Two-Level Models {#sec:twoLevMods}
================

The simplest version of group-specific curve models involves the pairs $(x_{ij},y_{ij})$ where $x_{ij}$ is the $j$th value of the predictor variable within the $i$th group and $y_{ij}$ is the corresponding value of the response variable. We let $m$ denote the number of groups and $n_i$ denote the number of predictor/response pairs within the $i$th group. The Gaussian response two-level group-specific curve model is $$y_{ij}=f(x_{ij})+g_i(x_{ij})+\varepsilon_{ij},\quad\varepsilon_{ij}\simind N(0,\sigeps^2),\quad 1\le i\le m,\ 1\le j\le n_i, \label{eq:twoLevelfg}$$ where the smooth function $f$ is the global regression mean function and the smooth functions $g_i$, $1\le i\le m$, allow for flexible group-specific deviations from $f$. As in Durban *et al.* (2005), we use mixed model-based penalized basis functions to model $f$ and the $g_i$.
Specifically, [ $$\begin{aligned} f(x)&=&\beta_0+\beta_1\,x+\sum_{k=1}^{\Kgbl}\,\uGblk\,\zgblk(x),\quad \uGblk\simind N(0,\sigmaGbl^2),\ \mbox{and}\\[1ex] g_i(x)&=&\uLiniz+\uLinio\,x+\sum_{k=1}^{\Kgrp}\,\uGrpik\,\zgrpk(x), \ \ \left[\begin{array}{c} \uLiniz\\[1ex] \uLinio \end{array} \right]\simind N(\bzero,\bSigma),\ \ \uGrpik\simind N(0,\sigmaGrp^2), \end{aligned}$$ ]{} where $\{\zgblk(\cdot):1\le k\le \Kgbl\}$ and $\{\zgrpk(\cdot):1\le k\le \Kgrp\}$ are suitable sets of basis functions. Splines and wavelet families are the most common choices for the $\zgblk(\cdot)$ and $\zgrpk(\cdot)$. In our illustrations and simulation studies we use the canonical cubic O’Sullivan spline basis as described in Section 4 of Wand and Ormerod (2008), which corresponds to a low-rank version of classical smoothing splines (e.g. Wahba, 1990). The variance parameters $\sigmaGbl^2$ and $\sigmaGrp^2$ control the effective degrees of freedom used for the global mean and group-specific deviation functions respectively. Lastly, $\bSigma$ is a $2\times 2$ unstructured covariance matrix for the coefficients of the group-specific linear deviations. We also use the notation: $$\bx_i\equiv\left[\begin{array}{c} x_{i1}\\ \vdots\\ x_{in_i} \end{array} \right]\quad\mbox{and}\quad \by_i\equiv\left[\begin{array}{c} y_{i1}\\ \vdots\\ y_{in_i} \end{array} \right]$$ for the vectors of predictors and responses corresponding to the $i$th group. Notation such as $\zgblo(\bx_i)$ denotes the $n_i\times 1$ vector containing $\zgblo(x_{ij})$ values, $1\le j\le n_i$.
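To fix ideas, data from model (\[eq:twoLevelfg\]) can be simulated directly (a sketch assuming numpy; truncated-line bases stand in for the O'Sullivan splines used in the article, and all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, K_gbl, K_grp = 4, 5, 3                       # groups; spline basis sizes
sig_eps, sig_gbl, sig_grp = 0.1, 1.0, 0.5       # sigma_eps, sigma_gbl, sigma_grp
Sigma = np.array([[0.3, 0.05], [0.05, 0.2]])    # 2x2 linear-deviation covariance

# Truncated-line bases as a simple stand-in for O'Sullivan splines.
z_gbl = lambda x: np.maximum(x[:, None] - np.linspace(0.1, 0.9, K_gbl), 0.0)
z_grp = lambda x: np.maximum(x[:, None] - np.linspace(0.2, 0.8, K_grp), 0.0)

beta = np.array([1.0, -0.5])                    # fixed effects (beta_0, beta_1)
u_gbl = sig_gbl * rng.standard_normal(K_gbl)    # global spline coefficients

x, y = [], []
for i in range(m):
    n_i = int(rng.integers(20, 30))
    x_i = np.sort(rng.uniform(0.0, 1.0, n_i))
    u_lin = rng.multivariate_normal(np.zeros(2), Sigma)
    u_grp = sig_grp * rng.standard_normal(K_grp)
    f_i = beta[0] + beta[1] * x_i + z_gbl(x_i) @ u_gbl      # global mean f
    g_i = u_lin[0] + u_lin[1] * x_i + z_grp(x_i) @ u_grp    # deviation g_i
    x.append(x_i)
    y.append(f_i + g_i + sig_eps * rng.standard_normal(n_i))
```

The lists `x` and `y` then hold the per-group vectors $\bx_i$ and $\by_i$ referred to above.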
Best Linear Unbiased Prediction ------------------------------- Model (\[eq:twoLevelfg\]) is expressible as a Gaussian response linear mixed model as follows: $$\by|\bu\sim N(\bX\bbeta+\bZ\,\bu,\sigeps^2\,\bI),\quad \bu\sim N(\bzero,\bG), \label{eq:twoLevFreq}$$ where $$\bX\equiv \left[ \begin{array}{c} \bX_1\\ \vdots \\ \bX_m \\ \end{array} \right] \quad\mbox{with}\quad\bX_i\equiv[\bone\ \bx_i] \quad\mbox{and}\quad \bbeta\equiv\left[\begin{array}{c} \beta_0\\[1ex] \beta_1 \end{array} \right] $$ are the fixed effects design matrix and coefficients, corresponding to the linear component of $f$. The random effects design matrix $\bZ$ and corresponding random effects vector $\bu$ are partitioned according to $$\bZ=\Big[\bZgbl\ \ \blockdiag{1\le i\le m}([\bX_i\ \bZgrpi])\Big] \quad\mbox{and}\quad \bu=\left[\begin{array}{l} \ \ \ \ \ \buGbl\\[1ex] \left[ \begin{array}{c} \buLini\\ \buGrpi \end{array} \right]_{1\le i\le m} \end{array} \right] \label{eq:ZanduDefn}$$ where $\buGbl= [ \uGblone \ \hdots \ \uGblKGbl ]^T$ are the coefficients corresponding to the non-linear component of $f$, $\buLini=[\uLiniz \ \uLinio ]^T$ are the coefficients corresponding to the linear component of $g_i$ and $\buGrpi=[\uGrpione \ \hdots \ \uGrpiKGrp]^T$ are the coefficients corresponding to the non-linear component of $g_i$, $1\le i\le m$. In (\[eq:ZanduDefn\]), $\bZgbl\equiv\stack{1\le i\le m}(\bZgbli)$ and the matrices $\bZgbli$ and $\bZgrpi$, $1\le i\le m$, contain, respectively, spline basis functions for the global mean function $f$ and the $i$th group deviation functions $g_i$. Specifically, $$\bZgbli\equiv[\begin{array}{ccc} \zgblo(\bx_i) & \cdots & \zgblKgbl(\bx_i) \end{array}]\quad\mbox{and}\quad \bZgrpi=[ \begin{array}{ccccc} \zgrpo(\bx_i) &\cdots &\zgrpKgrp(\bx_i) \end{array} ]$$ for $1\le i\le m$. 
The corresponding random effects distributions are $$\buGbl\sim N(\bzero,\sigmaGbl^2\bI_{\Kgbl}) \quad \mbox{and} \quad \left[ \begin{array}{c} \buLini\\[1ex] \buGrpi \end{array} \right] \simind N\left(\left[\begin{array}{c}\bzero\\[1ex]\bzero\end{array}\right], \left[ \begin{array}{cc} \bSigma & \bO \\[1ex] \bO & \sigmaGrp^2\bI_{\Kgrp} \end{array} \right]\right),\quad 1\le i\le m.$$ Hence, the full random effects covariance matrix is $$\bG=\Cov(\bu)=\left[ \begin{array}{cc} \sigmaGbl^2\bI_{\Kgbl}&\bO \\[1ex] \bO & \bI_m\otimes\left[ \begin{array}{cc} \bSigma & \bO \\[1ex] \bO & \sigmaGrp^2\bI_{\Kgrp} \end{array} \right] \end{array} \right]. \label{eq:Gdefn}$$ Next define the matrices $$\begin{array}{c} \bC\equiv[\bX\ \bZ],\quad\DBLUP\equiv\left[ \begin{array}{cc} \bO & \bO \\[1ex] \bO & \bG^{-1} \end{array} \right]\quad\mbox{and}\quad\RBLUP\equiv\sigeps^2\bI. \end{array} \label{eq:CDRmatBLUPdefs}$$ The best linear unbiased predictor of $[\bbeta\ \bu]^T$ and the corresponding covariance matrix are $${\setlength\arraycolsep{1pt} \begin{array}{rcl} \left[\begin{array}{c} \bbetahat\\ \buhat \end{array} \right]&=&(\bC^T\RBLUP^{-1}\bC+\DBLUP)^{-1}\bC^T\RBLUP^{-1}\by\\[3ex] \mbox{and}\quad \mbox{Cov}\left(\left[\begin{array}{c} \bbetahat\\ \buhat-\bu \end{array} \right]\right)&=&(\bC^T\RBLUP^{-1}\bC+\DBLUP)^{-1}. \end{array} } \label{eq:BLUPandCov}$$ This covariance matrix grows quadratically in $m$, so its storage becomes infeasible for large numbers of groups.
However, only the following sub-blocks are required for adding pointwise confidence intervals to curve estimates: $${\setlength\arraycolsep{1pt} \begin{array}{rcl} \Cov\left( \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right) &=&\mbox{top left-hand $(2+\Kgbl)\times(2+\Kgbl)$ }\\[0ex] &&\mbox{sub-block of $(\bC^T\RBLUP^{-1}\bC+\DBLUP)^{-1}$},\\[4ex] \Cov\left(\left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right] \right)&=&\mbox{subsequent $(2+\Kgrp)\times(2+\Kgrp)$ diagonal}\\[0ex] &&\mbox{sub-blocks of $(\bC^T\RBLUP^{-1}\bC+\DBLUP)^{-1}$}\\[1ex] &&\mbox{below $\Cov\left( \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right)$,\ $1\le i\le m$, and}\\[2ex] E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right]^T \right\}&=&\mbox{subsequent $(2+\Kgbl)\times(2+\Kgrp)$ sub-blocks}\\[0ex] &&\mbox{of $(\bC^T\RBLUP^{-1}\bC+\DBLUP)^{-1}$ to the right of}\\[1ex] &&\mbox{$\Cov\left( \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right)$,\ $1\le i\le m$.} \end{array} } \label{eq:CovMain}$$ As in Nolan, Menictas and Wand (2019), we define the generic two-level sparse matrix least squares problem to be the determination of the vector $\bx$ which minimizes the least squares criterion $$\Vert\bb-\bB\bx\Vert^2 \quad\mbox{where $\Vert\bv\Vert^2\equiv\bv^T\bv$ for any column vector $\bv$,} \label{eqn:sparseLeastSquares}$$ with $\bB$ having the two-level sparse form $$\bB\equiv \left[ \arraycolsep=2.2pt\def\arraystretch{1.6} \begin{array}{c|c|c|c|c} \setstretch{4.5} \Bmato &\Bmatdoto &\bO &\cdots&\bO\\ \hline \Bmatt &\bO &\Bmatdott&\cdots&\bO\\ \hline \vdots &\vdots &\vdots &\ddots&\vdots\\ \hline \Bmatm &\bO &\bO &\cdots &\Bmatdotm \end{array} \right] \quad\mbox{and $\bb$ partitioned according to}\quad \bb\equiv\left[ \arraycolsep=2.2pt\def\arraystretch{1.6} \begin{array}{c}
\setstretch{4.5} \bveco \\ \hline \bvect \\ \hline \vdots \\ \hline \bvecm \\ \end{array} \right]. \label{eq:BandbForms}$$ In (\[eq:BandbForms\]), for any $1\le i\le m$, the matrices $\bB_i$, $\Bmatdoti$ and $\bb_i$ each have the same number of rows. The numbers of columns in $\bB_i$ and $\Bmatdoti$ are arbitrary whereas the $\bb_i$ are column vectors. In addition to solving for $\bx$, the sub-blocks of $(\bB^T\bB)^{-1}$ corresponding to the non-sparse regions of $\bB^T\bB$ are included in our definition of a two-level sparse matrix least squares problem. Algorithm 2 of Nolan *et al.* (2018) provides a stable and efficient solution to this problem and labels it the $\SolveTwoLevelSparseLeastSquares$ algorithm. Section \[sec:Solve2Lev\] of the web-supplement contains details regarding this algorithm. In Nolan *et al.* (2018) we used $\SolveTwoLevelSparseLeastSquares$ for fitting two-level linear mixed models. However, precisely the same algorithm can be used for fitting two-level group-specific curve models because of the following result: Computation of $[\bbetahat^T\ \ \buhat^T]^T$ and of each of the sub-blocks of $\mbox{\rm Cov}([\bbetahat^T\ \ (\buhat-\bu)^T]^T)$ listed in (\[eq:CovMain\]) is expressible via the two-level sparse matrix least squares problem: $$\left\Vert\bb-\bB\left[ \begin{array}{c} \bbeta\\ \bu \end{array} \right] \right\Vert^2$$ where the non-zero sub-blocks of $\bB$ and $\bb$, according to the notation in (\[eq:BandbForms\]), are, for $1\le i\le m$: $$\bveci\equiv \left[ \begin{array}{c} \sigeps^{-1}\by_i\\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right], \quad \Bmati\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_i & \sigeps^{-1}\bZgbli\\[1ex] \bO & m^{-1/2}\sigmaGbl^{-1}\bI_{\Kgbl}\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right] \quad\mbox{and}\quad \Bmatdoti\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_i & \sigeps^{-1}\bZgrpi \\[1ex] \bO & \bO \\[1ex] \bSigma^{-1/2} & \bO \\[1ex] \bO & \sigmaGrp^{-1}\bI_{\Kgrp} \end{array} \right] $$ with each of these matrices having 
$\nadj_i=n_i+\Kgbl+2+\Kgrp$ rows and with $\Bmati$ having $p=2+\Kgbl$ columns and $\Bmatdoti$ having $q=2+\Kgrp$ columns. The solutions are $$\left[ \begin{array}{c} \bbetahat\\ \buHatGbl \end{array} \right]=\xveco,\quad \Cov\left(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right)=\AUoo$$ and $$\left[ \begin{array}{c} \buHatLini\\[1ex] \buHatGrpi \end{array} \right]=\xvectCi,\quad E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right]^T \right\}=\AUotCi, $$ $$\Cov\left(\left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right] \right)=\AUttCi,\ \ 1\le i\le m.$$ \[res:twoLevelBLUP\] A derivation of Result \[res:twoLevelBLUP\] is given in Section \[sec:drvResultOne\] of the web-supplement. Algorithm \[alg:twoLevBLUP\] encapsulates streamlined best linear unbiased prediction computation together with coefficient covariance matrix sub-blocks of interest. 
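Result \[res:twoLevelBLUP\] can be checked numerically. The following NumPy sketch (synthetic data with toy dimensions, all hypothetical; a dense verification rather than the streamlined algorithm itself) assembles $\bb$ and $\bB$ from the displayed per-group blocks and confirms that the ordinary least squares solution coincides with the BLUP from (\[eq:BLUPandCov\]):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy (hypothetical) dimensions and variance parameters
m, n_i, K_gbl, K_grp = 3, 4, 2, 2
sig_eps, sig_gbl, sig_grp = 0.7, 1.3, 0.9
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])

# Symmetric matrix square root of Sigma^{-1}
w, V = np.linalg.eigh(np.linalg.inv(Sigma))
Sigma_inv_half = V @ np.diag(np.sqrt(w)) @ V.T

Xs = [np.column_stack([np.ones(n_i), rng.standard_normal(n_i)]) for _ in range(m)]
Zgbls = [rng.standard_normal((n_i, K_gbl)) for _ in range(m)]
Zgrps = [rng.standard_normal((n_i, K_grp)) for _ in range(m)]
ys = [rng.standard_normal(n_i) for _ in range(m)]

p, q = 2 + K_gbl, 2 + K_grp
r = n_i + K_gbl + 2 + K_grp            # rows per group block

# Assemble b and the two-level sparse B from the per-group blocks in Result 1
b = np.concatenate([np.concatenate([ys[i] / sig_eps, np.zeros(r - n_i)])
                    for i in range(m)])
B = np.zeros((m * r, p + m * q))
for i in range(m):
    Bi = np.zeros((r, p))
    Bi[:n_i, :2], Bi[:n_i, 2:] = Xs[i] / sig_eps, Zgbls[i] / sig_eps
    Bi[n_i:n_i + K_gbl, 2:] = np.eye(K_gbl) / (np.sqrt(m) * sig_gbl)
    Bdoti = np.zeros((r, q))
    Bdoti[:n_i, :2], Bdoti[:n_i, 2:] = Xs[i] / sig_eps, Zgrps[i] / sig_eps
    Bdoti[n_i + K_gbl:n_i + K_gbl + 2, :2] = Sigma_inv_half
    Bdoti[n_i + K_gbl + 2:, 2:] = np.eye(K_grp) / sig_grp
    B[i * r:(i + 1) * r, :p] = Bi
    B[i * r:(i + 1) * r, p + i * q:p + (i + 1) * q] = Bdoti

x = np.linalg.lstsq(B, b, rcond=None)[0]       # sparse least squares solution

# Direct BLUP via (eq:BLUPandCov): C = [X Z], D = blockdiag(O, G^{-1})
Zgrp = np.zeros((m * n_i, m * q))
for i in range(m):
    Zgrp[i * n_i:(i + 1) * n_i, i * q:(i + 1) * q] = np.hstack([Xs[i], Zgrps[i]])
C = np.hstack([np.vstack(Xs), np.vstack(Zgbls), Zgrp])
Gi = np.block([[Sigma, np.zeros((2, K_grp))],
               [np.zeros((K_grp, 2)), sig_grp ** 2 * np.eye(K_grp)]])
D = np.zeros((p + m * q, p + m * q))
D[2:2 + K_gbl, 2:2 + K_gbl] = np.eye(K_gbl) / sig_gbl ** 2
D[p:, p:] = np.kron(np.eye(m), np.linalg.inv(Gi))
blup = np.linalg.solve(C.T @ C / sig_eps ** 2 + D,
                       C.T @ np.concatenate(ys) / sig_eps ** 2)
```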
- Inputs: $\by_i(n_i\times1),\ \bX_i(n_i\times 2),\ \bZgbli(n_i\times \Kgbl),\ \bZgrpi(n_i\times \Kgrp),\ 1\le i\le m$;  $\sigeps^2,\sigmaGbl^2,\sigmaGrp^2>0$,\  $\bSigma(2\times 2), \mbox{symmetric and positive definite.}$ - For $i=1,\ldots,m$: - $\begin{array}{l} \bveci\thickarrow \left[ \begin{array}{c} \sigeps^{-1}\by_i\\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right], \ \Bmati\thickarrow \left[ \begin{array}{cc} \sigeps^{-1}\bX_i & \sigeps^{-1}\bZgbli\\[1ex] \bO & m^{-1/2}\sigmaGbl^{-1}\bI_{\Kgbl} \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right], \end{array}$ - $\begin{array}{l} \Bmatdoti\thickarrow \left[ \begin{array}{cc} \sigeps^{-1}\bX_i & \sigeps^{-1}\bZgrpi \\[1ex] \bO & \bO \\[1ex] \bSigma^{-1/2} & \bO \\[1ex] \bO & \sigmaGrp^{-1}\bI_{\Kgrp} \end{array} \right] \end{array}$ - $\Ssc_1\thickarrow\SolveTwoLevelSparseLeastSquares \Big(\big\{(\bveci,\Bmati,\Bmatdoti):1\le i\le m\big\}\Big)$ - $\left[ \begin{array}{c} \bbetahat\\ \buHatGbl \end{array} \right]\thickarrow\mbox{$\xveco$ component of $\Ssc_1$}$   ;   $\Cov\left(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right)\thickarrow\mbox{$\AUoo$ component of $\Ssc_1$}$ - For $i=1,\ldots,m$: - $\left[ \begin{array}{c} \buHatLini\\[1ex] \buHatGrpi \end{array} \right]\thickarrow\mbox{$\xvectCi$ component of $\Ssc_1$}$ - $\Cov\left(\left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right] \right)\thickarrow\mbox{$\AUttCi$ component of $\Ssc_1$}$ - $E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right]^T \right\}\thickarrow\mbox{$\AUotCi$ component of $\Ssc_1$}$ - Output: $$\begin{array}{c}\Bigg(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl \end{array} \right],\ \Cov\left(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right), \Bigg\{\Bigg( \left[ 
\begin{array}{c} \buHatLini\\[1ex] \buHatGrpi \end{array} \right],\, \Cov\left(\left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right]\right),\,\\[2ex] E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right]^T \right\} \Bigg):\ 1\le i\le m\Bigg\}\Bigg) \end{array}$$ Mean Field Variational Bayes ---------------------------- We now consider the following Bayesian extension of (\[eq:twoLevFreq\]) and (\[eq:Gdefn\]): $$\begin{array}{c} \by|\bbeta, \bu, \sigsqeps \sim N(\bX\bbeta+\bZ\,\bu,\sigeps^2\,\bI),\quad \bu|\sigmaGbl^2,\sigmaGrp^2,\bSigma\sim N(\bzero,\bG), \quad\mbox{$\bG$ as defined in (\ref{eq:Gdefn}),} \\[1ex] \bbeta\sim N(\bmu_{\bbeta},\bSigma_{\bbeta}),\quad\sigeps^2|\aeps\sim\mbox{Inverse-$\chi^2$}(\nuEps,1/\aeps), \quad\aeps\sim\mbox{Inverse-$\chi^2$}(1,1/(\nuEps\sEps^2)),\\[2ex] \quad\sigmaGbl^2|\aGbl\sim\mbox{Inverse-$\chi^2$}(\nuGbl,1/\aGbl), \quad\aGbl\sim\mbox{Inverse-$\chi^2$}(1,1/(\nuGbl\sGbl^2)),\\[2ex] \quad\sigmaGrp^2|\aGrp\sim\mbox{Inverse-$\chi^2$}(\nuGrp,1/\aGrp), \quad\aGrp\sim\mbox{Inverse-$\chi^2$}(1,1/(\nuGrp\sGrp^2)),\\[2ex] \bSigma|\ASigma\sim\mbox{Inverse-G-Wishart}\big(\Gfull,\nuSigma+2,\ASigma^{-1}\big),\\[2ex] \ASigma\sim\mbox{Inverse-G-Wishart}(\Gdiag,1,\bLambda_{\ASigma}),\quad \bLambda_{\ASigma}\equiv\{\nuSigma\diag(\sSigmaOne^2,\sSigmaTwo^2)\}^{-1}. \end{array} \label{eq:twoLevBayes}$$ Here the $2\times1$ vector $\bmu_{\bbeta}$ and $2\times2$ symmetric positive definite matrix $\bSigma_{\bbeta}$ are hyperparameters corresponding to the prior distribution on $\bbeta$ and $$\nuEps,\sEps,\nuGbl,\sGbl,\nuGrp,\sGrp,\nuSigma,\sSigmaOne,\sSigmaTwo>0$$ are hyperparameters for the variance and covariance matrix parameters. Details on the Inverse G-Wishart distribution, and the Inverse-$\chi^2$ special case, are given in Section \[sec:IGWandICS\] of the web-supplement. 
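The auxiliary-variable construction used for the standard deviation parameters in (\[eq:twoLevBayes\]) can be checked numerically. The following Monte Carlo sketch (assuming the Inverse-$\chi^2(\kappa,\lambda)$ parameterization in which $1/x\sim\mbox{Gamma}(\kappa/2,\mbox{rate}=\lambda/2)$, consistent with the updates used later) compares $\sigma$ drawn from the hierarchy $\sigma^2|a\sim\mbox{Inverse-}\chi^2(\nu,1/a)$, $a\sim\mbox{Inverse-}\chi^2(1,1/(\nu s^2))$ against direct Half-$t(\nu,s)$ draws:

```python
import math
import numpy as np

rng = np.random.default_rng(2024)
nu, s, N = 3.0, 2.0, 10**6    # illustrative degrees of freedom and scale

# a ~ Inverse-chi^2(1, 1/(nu s^2))  <=>  1/a ~ Gamma(1/2, rate 1/(2 nu s^2))
inv_a = rng.gamma(shape=0.5, scale=2.0 * nu * s**2, size=N)
# sigma^2 | a ~ Inverse-chi^2(nu, 1/a)  <=>  1/sigma^2 ~ Gamma(nu/2, rate 1/(2a))
inv_sig2 = rng.gamma(shape=nu / 2, scale=2.0 / inv_a)
sigma = 1.0 / np.sqrt(inv_sig2)

# Direct Half-t(nu, s) draws: s * |t_nu|
sigma_direct = s * np.abs(rng.standard_t(nu, size=N))

# Closed-form Half-t(nu, s) mean (exists for nu > 1)
half_t_mean = (2.0 * s * math.sqrt(nu / math.pi)
               * math.gamma((nu + 1) / 2) / ((nu - 1) * math.gamma(nu / 2)))
```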
The auxiliary variable $\aeps$ is defined so that $\sigeps$ has a Half-$t$ distribution with degrees of freedom parameter $\nuEps$ and scale parameter $\sEps$, with larger values of $\sEps$ corresponding to greater noninformativity. Analogous comments apply to the other standard deviation parameters. Setting $\nuSigma=2$ leads to the correlation parameter in $\bSigma$ having a Uniform distribution on $(-1,1)$ (Huang and Wand, 2013). Throughout this article we use $\pDens$ generically to denote a density function corresponding to random quantities in Bayesian models such as (\[eq:twoLevBayes\]). For example, $\pDens(\bbeta)$ denotes the prior density function of $\bbeta$ and $\pDens(\bu|\sigmaGbl^2,\sigmaGrp^2,\bSigma)$ denotes the density function of $\bu$ conditional on $(\sigmaGbl^2,\sigmaGrp^2,\bSigma)$. Now consider the following mean field restriction on the joint posterior density function of all parameters in (\[eq:twoLevBayes\]): $$\pDens(\bbeta,\bu,\aeps,\aGbl,\aGrp,\ASigma,\sigeps^2,\sigmaGbl^2,\sigmaGrp^2,\bSigma|\by) \approx \qDens(\bbeta,\bu,\aeps,\aGbl,\aGrp,\ASigma)\,\qDens(\sigeps^2,\sigmaGbl^2,\sigmaGrp^2,\bSigma). \label{eq:producRestrict}$$ Here, generically, each $\qDens$ denotes an approximate posterior density function of the random vector indicated by its argument according to the mean field restriction (\[eq:producRestrict\]). Then application of the minimum Kullback-Leibler divergence equations (e.g. 
equation (10.9) of Bishop, 2006) leads to the optimal $\qDens$-density functions for the parameters of interest being as follows: $$\begin{array}{c} \begin{array}{ll} &\qDens^*(\bbeta,\bu)\ \mbox{has a $N\big(\bmu_{\qDens(\bbeta,\bu)},\bSigma_{\qDens(\bbeta,\bu)}\big)$ distribution,} \\[1.5ex] &\qDens^*(\sigeps^2)\ \mbox{has an $\mbox{Inverse-$\chi^2$} \big(\xi_{\qDens(\sigeps^2)},\lambda_{\qDens(\sigeps^2)}\big)$ distribution,} \\[1.5ex] &\qDens^*(\sigmaGbl^2)\ \mbox{has an $\mbox{Inverse-$\chi^2$} \big(\xi_{\qDens(\sigmaGbl^2)},\lambda_{\qDens(\sigmaGbl^2)}\big)$ distribution,} \\[1.5ex] &\qDens^*(\sigmaGrp^2)\ \mbox{has an $\mbox{Inverse-$\chi^2$} \big(\xi_{\qDens(\sigmaGrp^2)},\lambda_{\qDens(\sigmaGrp^2)}\big)$ distribution} \\[1.5ex] \mbox{and} & \qDens^*(\bSigma)\ \mbox{has an $\mbox{Inverse-G-Wishart}(\Gfull,\xi_{\qDens(\bSigma)},\bLambda_{\qDens(\bSigma)})$ distribution.}\\ \end{array} \end{array}$$ The optimal $\qDens$-density parameters are determined via an iterative coordinate ascent algorithm, with details given in Section \[sec:drvAlgTwo\] of this article’s web-supplement. The stopping criterion is based on the variational lower bound on the marginal likelihood (e.g. Bishop, 2006; Section 10.2.2) and denoted $\underline{\pDens}(\by;\qDens)$. Its logarithmic form and derivation are given in Section \[sec:lowerBound\] of the web-supplement. 
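The coordinate ascent updates rely on reciprocal moments of the Inverse-$\chi^2$ $\qDens$-densities, such as $\mu_{\qDens(1/\sigeps^2)}=\xi_{\qDens(\sigeps^2)}/\lambda_{\qDens(\sigeps^2)}$. A quick Monte Carlo sketch (assuming the parameterization in which $x\sim\mbox{Inverse-}\chi^2(\xi,\lambda)$ means $1/x\sim\mbox{Gamma}(\xi/2,\mbox{rate}=\lambda/2)$) confirms $E(1/x)=\xi/\lambda$:

```python
import numpy as np

rng = np.random.default_rng(7)
xi, lam = 7.0, 3.5          # illustrative Inverse-chi^2 parameters

# If x ~ Inverse-chi^2(xi, lambda), then 1/x ~ Gamma(xi/2, rate lambda/2),
# hence E(1/x) = (xi/2)/(lambda/2) = xi/lambda: this is the form of the
# mu_{q(1/sigma^2)} <- xi/lambda updates in the coordinate ascent scheme.
recip_draws = rng.gamma(shape=xi / 2, scale=2.0 / lam, size=10**6)
mu_recip = recip_draws.mean()     # approximately xi/lam = 2.0
```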
Note that updates for $\bmu_{\qDens(\bbeta,\bu)}$ and $\bSigma_{\qDens(\bbeta,\bu)}$ may be written $$\bmu_{\qDens(\bbeta,\bu)}\leftarrow(\bC^T\RMFVB^{-1}\bC+\DMFVB)^{-1}(\bC^T\RMFVB^{-1}\by + \oMFVB) \quad \mbox{and}\quad \bSigma_{\qDens(\bbeta,\bu)}\leftarrow(\bC^T\RMFVB^{-1}\bC+\DMFVB)^{-1} \label{eq:muSigmaMFVBupd}$$ where $$\begin{array}{l} \RMFVB\equiv\mu_{\qDens(1/\sigeps^2)}^{-1}\bI, \quad \DMFVB\equiv \left[ \begin{array}{ccc} \bSigma_{\bbeta}^{-1} & \bO &\bO \\[1ex] \bO &\mu_{\qDens(1/\sigmaGbl^2)}\bI & \bO \\[1ex] \bO & \bO & \displaystyle{\blockdiag{1\le i\le m}} \left[ \begin{array}{cc} \bM_{\qDens(\bSigma^{-1})} & \bO \\ \bO &\mu_{\qDens(1/\sigmaGrp^2)}\bI \\ \end{array} \right] \end{array} \right]\\[3ex] \quad\mbox{and}\quad \oMFVB\equiv\left[ \begin{array}{c} \bSigma_{\bbeta}^{-1}\bmu_{\bbeta}\\[1ex] \bzero \end{array} \right]. \end{array} \label{eq:MFVBmatDefns}$$ For increasingly large numbers of groups the matrix $\bSigma_{\qDens(\bbeta,\bu)}$ approaches a size that is untenable for random access memory storage on standard 2020s workplace computers. 
However, only the following relatively small sub-blocks of $\bSigma_{\qDens(\bbeta,\bu)}$ are required for variational inference concerning the variance and covariance matrix parameters: $${\setlength\arraycolsep{1pt} \begin{array}{rcl} &&\bSigma_{\qDens(\bbeta,\buGbl)}=\mbox{top left-hand $(2+\Kgbl)\times(2+\Kgbl)$ sub-block of $(\bC^T\RMFVB^{-1}\bC+\DMFVB)^{-1}$},\\[2ex] &&\bSigma_{\qDens(\buLini,\buGrpi)}=\mbox{subsequent $(2+\Kgrp)\times(2+\Kgrp)$ diagonal sub-blocks of} \\ &&\qquad\qquad\qquad\quad\mbox{$(\bC^T\RMFVB^{-1}\bC+\DMFVB)^{-1}$ below $\bSigma_{\qDens(\bbeta,\buGbl)}$,\ $1\le i\le m$, and}\\[2ex] && E_\qDens \left\{\left(\left[\begin{array}{c}\bbeta\\ \buGbl\end{array}\right]-\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c}\buLini\\ \buGrpi\end{array}\right] -\bmuq{\buLini,\buGrpi)}\right)^T\right\} = \mbox{subsequent}\\[1ex] &&\qquad\qquad\qquad\qquad\mbox{$(2+\Kgbl)\times(2+\Kgrp)$ sub-blocks of $(\bC^T\RMFVB^{-1}\bC+\DMFVB)^{-1}$}\\[1ex] &&\qquad\qquad\qquad\qquad\mbox{to the right of $\bSigma_{\qDens(\bbeta,\buGbl)}$,\ $1\le i\le m$.} \end{array} } \label{eq:CovMFVB}$$ For a streamlined mean field variational Bayes algorithm, we appeal to: The mean field variational Bayes updates of $\bmu_{\qDens(\bbeta,\bu)}$ and each of the sub-blocks of $\bSigma_{\qDens(\bbeta,\bu)}$ in (\[eq:CovMFVB\]) are expressible as a two-level sparse matrix least squares problem of the form: $$\left\Vert\bb-\bB\bmu_{\qDens(\bbeta,\bu)} \right\Vert^2$$ where the non-zero sub-blocks $\bB$ and $\bb$, according to the notation in (\[eq:BandbForms\]), are, for $1\le i\le m$, $$\bveci\equiv\left[\begin{array}{c} \mu_{\qDens(1/\sigeps^2)}^{1/2}\by_i\\[2ex] m^{-1/2}\bSigma_{\bbeta}^{-1/2}\bmu_{\bbeta}\\[2ex] \bzero\\[1ex] \bzero\\[1ex] \bzero \end{array} \right], \quad\Bmati\equiv\left[\begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgbli\\[2ex] m^{-1/2}\bSigma_{\bbeta}^{-1/2}& \bO \\[2ex] \bO & 
m^{-1/2}\mu_{\qDens(1/\sigmaGbl^2)}^{1/2}\bI_{\Kgbl} \\[2ex] \bO & \bO \\[2ex] \bO & \bO \\[2ex] \end{array} \right]$$ and $$\Bmatdoti\equiv \left[\begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgrpi \\[2ex] \bO & \bO \\[2ex] \bO & \bO \\[2ex] \bM_{\qDens(\bSigma^{-1})}^{1/2} & \bO \\[2ex] \bO & \mu_{\qDens(1/\sigmaGrp^2)}^{1/2}\bI_{\Kgrp} \end{array} \right]$$ with each of these matrices having $\nadj_i=n_i+2+\Kgbl+2+\Kgrp$ rows and with $\Bmati$ having $p=2+\Kgbl$ columns and $\Bmatdoti$ having $q=2+\Kgrp$ columns. The solutions are $$\bmu_{\qDens(\bbeta,\buGbl)} =\xveco,\quad \bSigma_{\qDens(\bbeta,\buGbl)} =\AUoo,$$ $$\bmu_{\qDens(\buLini,\buGrpi)} =\xvectCi,\quad \bSigma_{\qDens(\buLini,\buGrpi)} =\AUttCi,$$ and $$E_q\left\{ \left[ \begin{array}{c} \bbeta-\bmu_{\qDens(\bbeta)}\\ \buGbl-\bmu_{\qDens(\buGbl)} \end{array} \right] \left[ \begin{array}{c} \buLini-\bmu_{\qDens(\buLini)}\\[1ex] \buGrpi-\bmu_{\qDens(\buGrpi)} \end{array} \right]^T \right\}=\AUotCi, 1\le i\le m. $$ \[res:twoLevelMFVB\] - Data Inputs: $\by_i(n_i\times1),\ \bX_i(n_i\times 2),\ \bZgbli(n_i\times \Kgbl),\ \bZgrpi(n_i\times \Kgrp),\ 1\le i\le m$; - Hyperparameter Inputs: $\bmu_{\bbeta}(2\times1)$, $\bSigma_{\bbeta}(2\times 2)\ \mbox{symmetric and positive definite}$, - $s_{\varepsilon},\nu_{\varepsilon},s_{\mbox{\rm\tiny gbl}}, \nu_{\mbox{\rm\tiny gbl}},\sSigmaOne,\sSigmaTwo,\nu_{\bSigma}, s_{\mbox{\rm\tiny grp}},\nu_{\mbox{\rm\tiny grp}}>0$. - For $i=1,\ldots,m$: - $\bCgbli\thickarrow[\bX_i\ \bZgbli]$   ;   $\bCgrpi\thickarrow[\bX_i\ \bZgrpi]$ - Initialize: $\muq{1/\sigsqeps}$, $\muq{1/\sigma_{\mbox{\rm\tiny gbl}}^{2}}$, $\muq{1/\sigma_{\mbox{\rm\tiny grp}}^{2}}$, $\muq{1/\aeps}$, $\muq{1/a_{\mbox{\rm\tiny gbl}}}$, $\muq{1/a_{\mbox{\rm\tiny grp}}} > 0$, - $\MqSigma (2 \times 2), \MqASigma (2 \times 2)$ both symmetric and positive definite. 
- $\xi_{\qDens(\sigeps^2)}\thickarrow \nu_{\varepsilon} + \sumim n_i$   ;   $\xi_{\qDens(\sigmaGbl^2)}\thickarrow\nu_{\mbox{\rm\tiny gbl}}+\Kgbl$   ;   $\xi_{\qDens(\bSigma)}\thickarrow\nu_{\bSigma}+2+m$ - $\xi_{\qDens(\sigmaGrp^2)}\thickarrow \nu_{\mbox{\rm\tiny grp}} + m\Kgrp$    ;    $\xi_{\qDens(a_{\varepsilon})}\thickarrow \nuEps + 1$   ;    $\xi_{\qDens(a_{\mbox{\rm\tiny gbl}})}\thickarrow \nuGbl + 1$   ;    $\xi_{\qDens(a_{\mbox{\rm\tiny grp}})}\thickarrow \nuGrp + 1$ - $\xi_{\qDens(\bA_{\bSigma})}\thickarrow \nuSigma + 2$ - Cycle: - For $i = 1,\ldots, m$: - $\bveci\thickarrow\left[ \begin{array}{c} \mu_{\qDens(1/\sigeps^2)}^{1/2}\by_i\\[1.5ex] m^{-1/2}\bSigma_{\bbeta}^{-1/2}\bmu_{\bbeta} \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right],\ \Bmati\thickarrow \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgbli\\[1.5ex] m^{-1/2}\bSigma_{\bbeta}^{-1/2} & \bO \\[1ex] \bO & m^{-1/2}\mu_{\qDens(1/\sigmaGbl^2)}^{1/2}\bI_{\Kgbl} \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right],$ - $\Bmatdoti\thickarrow \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgrpi \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bM_{\qDens(\bSigma^{-1})}^{1/2} & \bO \\[1ex] \bO & \mu_{\qDens(1/\sigmaGrp^2)}^{1/2}\bI_{\Kgrp} \end{array} \right]$ - $\Ssc_2\thickarrow\SolveTwoLevelSparseLeastSquares\Big(\big\{( \bveci,\Bmati,\Bmatdoti):1\le i\le m\big\}\Big)$ - $\bmu_{\qDens(\bbeta,\buGbl)}\thickarrow\mbox{$\xveco$ component of $\Ssc_2$}$    ;   $\bSigma_{\qDens(\bbeta,\buGbl)}\thickarrow\mbox{$\AUoo$ component of $\Ssc_2$}$ - $\bmu_{\qDens(\buGbl)}\thickarrow\mbox{last $\Kgbl$ rows of $\bmu_{\qDens(\bbeta,\buGbl)}$}$ - $\bSigma_{\qDens(\buGbl)}\thickarrow\mbox{bottom-right $\Kgbl\times\Kgbl$ sub-block of $\bSigma_{\qDens(\bbeta,\buGbl)}$}$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow\muq{1/\aeps}$  ;  $\Lambda_{\qDens(\bSigma)}\thickarrow \MqASigma$  ;  
$\lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny grp}})}\thickarrow\mu_{\qDens(1/a_{\mbox{\rm\tiny grp}})}$ - For $i = 1,\ldots, m$: - $\bmu_{\qDens(\buLini,\buGrpi)}\thickarrow\mbox{$\xvectCi$ component of $\Ssc_2$}$ - $\bSigma_{\qDens(\buLini,\buGrpi)}\thickarrow\mbox{$\AUttCi$ component of $\Ssc_2$}$ - $\bmu_{\qDens(\buLini)}\thickarrow\mbox{first $2$ rows of $\bmu_{\qDens(\buLini,\buGrpi)}$}$ - $\bSigma_{\qDens(\buLini)}\thickarrow\mbox{top left $2\times 2$ sub-block of $\bSigma_{\qDens(\buLini,\buGrpi)}$}$ - $\bmu_{\qDens(\buGrpi)}\thickarrow\mbox{last $\Kgrp$ rows of $\bmu_{\qDens(\buLini,\buGrpi)}$}$ - $\bSigma_{\qDens(\buGrpi)}\thickarrow\mbox{bottom right $\Kgrp \times \Kgrp$ sub-block of $\bSigma_{\qDens(\buLini,\buGrpi)}$}$ - $E_{\qDens}\left\{\left(\left[\begin{array}{c}\bbeta\\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c}\buLini\\ \buGrpi\end{array}\right] -\bmuq{\buLini,\buGrpi)}\right)^T\right\}\thickarrow\mbox{$\AUotCi$ component of $\Ssc_2$}$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +\big\Vert\by_i-\bCgbli\bmu_{\qDens(\bbeta,\buGbl)} -\bCgrpi\bmu_{\qDens(\buLini,\buGrpi)}\big\Vert^2$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +\mbox{tr}(\bCgbli^T\bCgbli\bSigma_{\qDens(\bbeta,\buGbl)}) +\mbox{tr}(\bCgrpi^T\bCgrpi\bSigma_{\qDens(\buLini,\buGrpi)})$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +2\,\mbox{tr}\left[\bCgrpi^T\bCgbli\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right)\left(\left[\begin{array}{c}\buLini\\ \buGrpi\end{array}\right] -\bmuq{\buLini,\buGrpi)}\right)^T\right\}\right]$ - $\bLambda_{\qDens(\bSigma)}\thickarrow\bLambda_{\qDens(\bSigma)}+ \bmu_{\qDens(\buLini)}\bmu_{\qDens(\buLini)}^T+ \bSigma_{\qDens(\buLini)}$ - $\lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny grp}})} \thickarrow 
\lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny grp}})} + \Vert\bmu_{\qDens(\bu_{\mbox{\rm\tiny grp,i}})}\Vert^2 + \tr\big(\bSigma_{\qDens(\bu_{\mbox{\rm\tiny grp,i}})}\big)$ - $\lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny gbl}})} \thickarrow \mu_{\qDens(1/a_{\mbox{\rm\tiny gbl}})} + \Vert\bmu_{\qDens(\bu_{\mbox{\rm\tiny gbl}})}\Vert^2 + \tr\big(\bSigma_{\qDens(\bu_{\mbox{\rm\tiny gbl}})}\big)$ - $\muq{1/\sigsqeps} \thickarrow \xi_{\qDens(\sigeps^2)}/\lambda_{\qDens(\sigsqeps)}$    ;    $\muq{1/\sigma^{2}_{\mbox{\rm\tiny gbl}}} \thickarrow \xi_{\qDens(\sigma^{2}_{\mbox{\rm\tiny gbl}})}/ \lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny gbl}})}$ - $\MqSigma \thickarrow(\xi_{\qDens(\bSigma)}-1)\bLambda^{-1}_{\qDens(\bSigma)}$    ;    $\muq{1/\sigma^{2}_{\mbox{\rm\tiny grp}}} \thickarrow \xi_{\qDens(\sigma^{2}_{\mbox{\rm\tiny grp}})}/ \lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny grp}})}$ - $\lambda_{\qDens(a_{\varepsilon})}\thickarrow\muq{1/\sigma_{\varepsilon}^{2}} +1/(\nu_{\varepsilon} s_{\varepsilon}^2)$   ;   $\muq{1/a_{\varepsilon}} \thickarrow \xi_{\qDens(a_{\varepsilon})}/ \lambda_{\qDens(a_{\varepsilon})}$ - $\bLambda_{\qDens(\ASigma)}\thickarrow \diag\big\{\mbox{diagonal}\big(\bM_{\qDens(\bSigma^{-1})}\big)\big\}+\{\nuSigma\diag(\sSigmaOne^2,\sSigmaTwo^2)\}^{-1}$ - $\bM_{\qDens(\ASigma^{-1})}\thickarrow \xi_{\qDens(\ASigma)}\bLambda_{\qDens(\ASigma)}^{-1}$ - $\lambda_{\qDens(a_{\mbox{\rm\tiny gbl}})}\thickarrow\muq{1/\sigma_{\mbox{\rm\tiny gbl}}^{2}} +1/(\nu_{\mbox{\rm\tiny gbl}} s_{\mbox{\rm\tiny gbl}}^2)$   ;   $\muq{1/a_{\mbox{\rm\tiny gbl}}} \thickarrow \xi_{\qDens(a_{\mbox{\rm\tiny gbl}})}/ \lambda_{\qDens(a_{\mbox{\rm\tiny gbl}})}$ - $\lambda_{\qDens(a_{\mbox{\rm\tiny grp}})}\thickarrow\muq{1/\sigma_{\mbox{\rm\tiny grp}}^{2}} +1/(\nu_{\mbox{\rm\tiny grp}} s_{\mbox{\rm\tiny grp}}^2)$   ;   $\muq{1/a_{\mbox{\rm\tiny grp}}} \thickarrow \xi_{\qDens(a_{\mbox{\rm\tiny grp}})}/ \lambda_{\qDens(a_{\mbox{\rm\tiny grp}})}$ - until the increase in $\underline{\pDens}(\by;\qDens)$ is 
negligible. - Outputs: $\bmu_{\qDens(\bbeta,\buGbl)}$, $\bSigma_{\qDens(\bbeta,\buGbl)}$, $\Big\{\bmu_{\qDens(\buLini,\buGrpi)}, \bSigma_{\qDens(\buLini,\buGrpi)},$ - $E_{\qDens}\left\{\left(\left[\begin{array}{c}\bbeta\\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c}\buLini\\ \buGrpi\end{array}\right] -\bmuq{\buLini,\buGrpi)}\right)^T\right\}:1\le i\le m\Big\},$ - $\xi_{\qDens(\sigeps^2)},\lambda_{\qDens(\sigsqeps)},\xi_{\qDens(\sigmaGbl^2)}, \lambda_{\qDens(\sigmaGbl^2)},\xi_{\qDens(\bSigma)},\bLambda^{-1}_{\qDens(\bSigma)}, \xi_{\qDens(\sigmaGrp^2)},\lambda_{\qDens(\sigmaGrp^2)}.$ Algorithm \[alg:twoLevMFVB\] utilizes Result \[res:twoLevelMFVB\] to facilitate streamlined computation of the variational parameters. Lastly, we note that Algorithm \[alg:twoLevMFVB\] is loosely related to Algorithm 2 of Lee and Wand (2016). One difference is that we are treating the Gaussian, rather than Bernoulli, response situation here. In addition, we are using the recent sparse multilevel matrix results of Nolan and Wand (2018) which are amenable to higher level extensions, such as the three-level group-specific curve model treated in Section \[sec:threeLevMods\]. Contrast Function Extension --------------------------- In many curve-type data applications the data can be categorized as being from two or more types. Of particular interest in such circumstances are contrast function estimates and accompanying standard errors. The streamlined approaches used in Algorithms \[alg:twoLevBLUP\] and \[alg:twoLevMFVB\] still apply for the contrast function extension regardless of the number of categories. The two-category situation, where there is a single contrast function, is described here. The extension to higher numbers of categories is straightforward. 
Suppose that the $(x_{ij},y_{ij})$ pairs are from one of two categories, labeled $A$ and $B$, and introduce the indicator variable data: $$\iota^A_{ij}\equiv\left\{ \begin{array}{ll} 1 & \mbox{if $(x_{ij},y_{ij})$ is from category $A$},\\ 0 & \mbox{if $(x_{ij},y_{ij})$ is from category $B$}.\\ \end{array} \right.$$ Then penalized spline models for the global mean and deviation functions for each category are $$\left. \begin{array}{rcl} f^A(x)&=&\beta_0^{\mbox{\rm\tiny A}} +\beta_1^{\mbox{\rm\tiny A}}\,x+\displaystyle{\sum_{k=1}^{\Kgbl}}\,\uGblk^A\zgblk(x)\\[1ex] g^A_i(x)&=&\uLiniz^A+\uLinio^A\,x +\displaystyle{\sum_{k=1}^{\Kgrp}}\,\uGrpik^A\zgrpk(x) \end{array} \right\}\ \mbox{for category A} $$ and $$\left. \begin{array}{rcl} f^B(x)&=&\beta_0^{\mbox{\rm\tiny A}}+\beta_0^{\mbox{\rm\tiny BvsA}}+ (\beta_1^{\mbox{\rm\tiny A}}+\beta_1^{\mbox{\rm\tiny BvsA}})\,x +\displaystyle{\sum_{k=1}^{\Kgbl}}\,\uGblk^B\zgblk(x)\\[1ex] g^B_i(x)&=&\uLiniz^B+\uLinio^B\,x +\displaystyle{\sum_{k=1}^{\Kgrp}}\,\uGrpik^B\zgrpk(x) \end{array} \right\}\ \mbox{for category B.} $$ This allows us to estimate the global contrast function $$c(x)\equiv f^B(x) - f^A(x) =\beta_0^{\mbox{\rm\tiny BvsA}}+\beta_1^{\mbox{\rm\tiny BvsA}}\,x +\sum_{k=1}^{\Kgbl}(\uGblk^B-\uGblk^A)\zgblk(x). \label{eq:contrastCurve}$$ The distributions on the random coefficients are $$[\uLiniz^A\ \uLinio^A\ \uLiniz^B\ \uLinio^B]^T\simind N(\bzero,\bSigma)$$ and $$\uGblk^A\simind N\big(0,(\sigmaGbl^A)^2\big),\ \ \uGblk^B\simind N\big(0,(\sigmaGbl^B)^2\big), \ \ \uGrpik^A\simind N\big(0,\sigmaGrp^2\big)\ \ \mbox{and}\ \ \uGrpik^B\simind N\big(0,\sigmaGrp^2\big)$$ independently of each other. In this two-category extension, the matrix $\bSigma$ is an unstructured $4\times4$ covariance matrix. Algorithms \[alg:twoLevBLUP\] and \[alg:twoLevMFVB\] can be used to achieve streamlined fitting and inference for the contrast curve extension, but with key matrices having new definitions. 
Firstly, the $\bX_i$, $\bZgbli$ and $\bZgrpi$ matrices need to become: $$\bX_i=[\begin{array}{ccccc} \bone & \bx_i & \bone-\biota^A_i & (\bone-\biota^A_i)\odot\bx_i \end{array}],$$ $$\bZgbli=[{\setlength\arraycolsep{2pt} \begin{array}{cccccc} \biota^A_i\odot\zgblo(\bx_i) & \cdots & \biota^A_i\odot\zgblKgbl(\bx_i)& (\bone-\biota^A_i)\odot\zgblo(\bx_i) & \cdots & (\bone-\biota^A_i)\odot\zgblKgbl(\bx_i) \end{array}}]$$ and $$\bZgrpi=[ {\setlength\arraycolsep{2pt} \begin{array}{cccccc} \biota^A_i\odot\zgrpo(\bx_i)& \cdots& \biota^A_i\odot\zgrpKgrp(\bx_i)& (\bone-\biota^A_i)\odot\zgrpo(\bx_i) & \cdots & (\bone-\biota^A_i)\odot\zgrpKgrp(\bx_i) \end{array}}]$$ where $\biota^A_i$ is the $n_i\times1$ vector of $\iota^A_{ij}$ values. In the case of best linear unbiased prediction the updates for the $\bB_i$ and $\bBdot_i$ matrices in Algorithm \[alg:twoLevBLUP\] need to be replaced by: $$\Bmati\thickarrow \left[ \begin{array}{cc} \sigeps^{-1}\bX_i & \sigeps^{-1}\bZgbli\\[1ex] \bO & m^{-1/2}\left[\begin{array}{cc} (\sigmaGbl^A)^{-1}\bI_{\Kgbl}& \bzero \\ \bzero & (\sigmaGbl^B)^{-1}\bI_{\Kgbl} \end{array} \right]\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right]\ \mbox{and}\ \Bmatdoti\thickarrow \left[ \begin{array}{cc} \sigeps^{-1}\bX_i & \sigeps^{-1}\bZgrpi \\[1ex] \bO & \bO \\[1ex] \bSigma^{-1/2} & \bO \\[1ex] \bO & \sigmaGrp^{-1}\bI_{2\Kgrp} \end{array} \right]$$ and the output coefficient vectors change to $$\left[ \begin{array}{c} \bbetahat\\[1ex] \buHatGbl^A\\[1ex] \buHatGbl^B \end{array} \right] \quad\mbox{and}\quad \left[ \begin{array}{c} \buHatLini^A\\[1ex] \buHatLini^B\\[1ex] \buHatGrpi^A\\[1ex] \buHatGrpi^B \end{array} \right].$$ In the case of mean field variational Bayes the updates of the $\bB_i$ and $\bBdot_i$ matrices in Algorithm \[alg:twoLevMFVB\] need to be replaced by: $$\Bmati\thickarrow \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgbli\\[1.5ex] m^{-1/2}\bSigma_{\bbeta}^{-1/2} & \bO \\[1ex] 
\bO & m^{-1/2}\left[ \begin{array}{cc} \mu_{\qDens(1/(\sigmaGbl^A)^2)}^{1/2}\bI_{\Kgbl} & \bzero \\ \bzero & \mu_{\qDens(1/(\sigmaGbl^B)^2)}^{1/2}\bI_{\Kgbl} \end{array} \right]\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right], $$ and $$\Bmatdoti\thickarrow \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgrpi \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bM_{\qDens(\bSigma^{-1})}^{1/2} & \bO \\[1ex] \bO & \mu_{\qDens(1/\sigmaGrp^2)}^{1/2}\bI_{2\Kgrp} \end{array} \right].$$ A contrast curve adjustment to the mean field variational Bayes updates is also required for some of the covariance matrix parameters. However, these calculations are comparatively simple and analogous to those given in Section \[sec:drvAlgTwo\]. We demonstrate the use of Algorithm \[alg:twoLevMFVB\] in this setting for data from a longitudinal study on adolescent somatic growth. More detail on these data can be found in Pratt [*et al.*]{} (1989). The variables of interest are $$\begin{array}{lcl} y_{ij} & = & \mbox{$j$th height measurement (centimetres) of subject $i$, and} \\ x_{ij} & = & \mbox{age (years) of subject $i$ when $y_{ij}$ is recorded,} \end{array}$$ for $1 \le i \le m$ and $1 \le j \le n_{i}$. The subjects are categorized into black ethnicity and white ethnicity and comparison of mean height between the two populations is of interest. The fits from Algorithm \[alg:twoLevMFVB\] are seen to have good agreement with the data in each sub-panel of the top two plots in Figure \[fig:growthIndianaFits\]. The bottom panels of Figure \[fig:growthIndianaFits\] show the estimated height gap between black and white adolescents as a function of age. For the females, there is a significant height difference only at 16-17 years of age. Between 5 and 15 years, there is no obvious height difference. For the males, the estimated gap is largest and (marginally) statistically significant up to about 14 years of age, peaking at 13 years of age. 
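The indicator-based design matrices for the contrast extension are straightforward to assemble. The following sketch (hypothetical toy data, with truncated-line functions standing in for the $\zgblk$ basis) builds $\bX_i$ and $\bZgbli$ via elementwise products with $\biota^A_i$ and evaluates the contrast function (\[eq:contrastCurve\]) from hypothetical fitted coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy group (hypothetical data): predictor values and category-A indicator
n_i, K_gbl = 6, 3
x = rng.uniform(0.0, 1.0, n_i)
iota_A = (rng.uniform(size=n_i) < 0.5).astype(float)

# Hypothetical spline-basis stand-in for z_{gbl,k}: truncated lines
knots = np.linspace(0.2, 0.8, K_gbl)
Z = np.maximum(x[:, None] - knots[None, :], 0.0)

# Fixed effects design with contrast coding: [1, x, 1-iota, (1-iota)*x]
X_i = np.column_stack([np.ones(n_i), x, 1 - iota_A, (1 - iota_A) * x])

# Z_gbl,i: category-A columns then category-B columns (elementwise products)
Zgbl_i = np.hstack([iota_A[:, None] * Z, (1 - iota_A)[:, None] * Z])

# Contrast function estimate (eq:contrastCurve) from hypothetical coefficients
beta_BvsA = np.array([0.4, -0.2])          # beta_0^{BvsA}, beta_1^{BvsA}
u_A = rng.standard_normal(K_gbl)
u_B = rng.standard_normal(K_gbl)
def c_hat(x_new):
    Znew = np.maximum(np.asarray(x_new)[:, None] - knots[None, :], 0.0)
    return beta_BvsA[0] + beta_BvsA[1] * np.asarray(x_new) + Znew @ (u_B - u_A)
```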
Between 17 and 20 years old there is no discernible height difference between the two populations. 

![ *Top panels: fitted group-specific curves for 100 female subjects (left) and 116 male subjects (right) from the data on adolescent somatic growth (Pratt [*et al.*]{} 1989). The shading corresponds to approximate pointwise 99% credible intervals. Bottom panels: similar to the top panels but for the estimated contrast curve. The shaded regions correspond to approximate pointwise 95% credible intervals.*[]{data-label="fig:growthIndianaFits"}](growthIndianaFemFits.pdf "fig:"){width="72mm"} ![](growthIndianaMalFits.pdf "fig:"){width="72mm"} ![](growthIndianaFemContr.pdf "fig:"){width="72mm"} ![](growthIndianaMalContr.pdf "fig:"){width="72mm"} 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Three-Level Models {#sec:threeLevMods} ================== The three-level version of group-specific curve models corresponds to curve-type data having two nested groupings. For example, the data in each panel of Figure \[fig:MNSWintro\] are first grouped according to slice, which is the level 2 group, and the slices are grouped according to tumor which is the level 3 group. We denote predictor/response pairs as $(x_{ijk},y_{ijk})$ where $x_{ijk}$ is the $k$th value of the predictor variable in the $i$th level 3 group and $(i,j)$th level 2 group and $y_{ijk}$ is the corresponding value of the response variable. We let $m$ denote the number of level 3 groups, $n_i$ denote that number of level 2 groups in the $i$th level 3 group and $o_{ij}$ denote the the number of units within the $(i,j)$th level 2 group. 
The Figure \[fig:MNSWintro\] data, which happen to be balanced, are such that [ $$\begin{aligned} m&=&\mbox{number of tumors}=10,\\ n_i&=&\mbox{number of slices for the $i$th tumor}=5\\ \mbox{and}\ \ o_{ij}&=&\mbox{number of predictor/response pairs for the $i$th tumor and $j$th slice}=128.\end{aligned}$$ ]{} The Gaussian response three-level group-specific curve model for such data is $$\begin{array}{l} y_{ijk}=f(x_{ijk})+g_i(x_{ijk})+h_{ij}(x_{ijk})+\varepsilon_{ijk}, \quad\varepsilon_{ijk}\simind N(0,\sigeps^2),\\[2ex] \quad 1\le i\le m,\ 1\le j\le n_i,\ 1\le k\le o_{ij}, \end{array} \label{eq:threeLevelfg}$$ where the smooth function $f$ is the global mean function, the $g_i$ functions, $1\le i\le m$, allow for group-specific deviations according to membership of the $i$th level 3 group and the $h_{ij}$, $1\le i\le m$ and $1\le j\le n_i$, allow for an additional level of group-specific deviations according to membership of the $j$th level 2 group within the $i$th level 3 group. The mixed model-based penalized spline models for these functions are [ $$\begin{aligned} f(x)&=&\beta_0+\beta_1\,x+\sum_{k=1}^{\Kgbl}\,\uGblk\,\zgblk(x),\quad \uGblk\simind N(0,\sigmaGbl^2),\\[1ex] g_i(x)&=&\uLiniz^g+\uLinio^g\,x+\sum_{k=1}^{\Kgrp^g}\,\uGrpik^g\,\zgrpk^g(x), \ \left[\begin{array}{c} \uLiniz^g\\[1ex] \uLinio^g \end{array} \right]\simind N(\bzero,\bSigmag),\ \uGrpik^g\simind N\big(0,\sigmaGrpg^2\big)\\[1ex] \mbox{and}&&\\[1ex] h_{ij}(x)&=&\uLinijz^h+\uLinijo^h\,x+\sum_{k=1}^{\Kgrp^h}\,\uGrpijk^h\,\zgrpk^h(x), \ \left[\begin{array}{c} \uLinijz^h\\[1ex] \uLinijo^h \end{array} \right]\simind N(\bzero,\bSigmah),\ \uGrpijk^h\simind N\big(0,\sigmaGrph^2\big),\end{aligned}$$ ]{} with all random effect distributions independent of each other.
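To make the two nested groupings concrete, the following sketch simulates balanced data from a model of the form (\[eq:threeLevelfg\]). The global mean function and the constant group deviations are illustrative stand-ins only, not the penalized spline forms used in the paper, and the sizes are made up (the Figure data have $m=10$, $n_i=5$, $o_{ij}=128$).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes for a balanced design.
m, n_i, o_ij = 4, 3, 60
sigma_eps = 0.1

def f(x):
    return np.sin(2 * np.pi * x)  # hypothetical global mean function

data = {}
for i in range(m):
    g_i = rng.normal(0.0, 0.3)       # crude constant stand-in for g_i(x)
    for j in range(n_i):
        h_ij = rng.normal(0.0, 0.1)  # crude constant stand-in for h_ij(x)
        x_ij = np.sort(rng.uniform(0.0, 1.0, o_ij))
        y_ij = f(x_ij) + g_i + h_ij + rng.normal(0.0, sigma_eps, o_ij)
        data[(i, j)] = (x_ij, y_ij)  # (i, j) indexes level 3 and level 2 groups

# Total number of predictor/response pairs across all groups.
N = sum(x.size for x, _ in data.values())
```

For balanced data such as these, `N` is simply $m\,n_i\,o_{ij}$; in general it is $\sum_i\sum_j o_{ij}$.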
For this three-level case we have three bases: $$\{\zgblk(\cdot):1\le k\le \Kgbl\},\quad \{\zgrpk^g(\cdot):1\le k\le \Kgrp^g\} \quad\mbox{and}\quad\{\zgrpk^h(\cdot):1\le k\le \Kgrp^h\}.$$ The variance and covariance matrix parameters are analogous to those of the two-level model. For example, $\bSigmag$ and $\bSigmah$ are both unstructured $2\times2$ matrices corresponding to the linear components of the $g_i$ and $h_{ij}$ respectively. The following notation is useful for setting up the required design matrices: if $\bM_1,\ldots,\bM_d$ is a set of matrices each having the same number of columns then $$\stack{1\le i\le d}(\bM_i)\equiv \left[ \begin{array}{c} \bM_1\\ \vdots\\ \bM_d \end{array} \right].$$ We then define, for $1\le i\le m$ and $1\le j\le n_i$, $$\bx_i\equiv\stack{1\le j\le n_i}\big(\bx_{ij}\big) \quad\mbox{and}\quad \bx_{ij}\equiv\stack{1\le k\le\, o_{ij}}\big(x_{ijk}\big).$$

Best Linear Unbiased Prediction
-------------------------------

Model (\[eq:threeLevelfg\]) is expressible as a Gaussian response linear mixed model as follows: $$\by|\bu\sim N(\bX\bbeta+\bZ\,\bu,\sigeps^2\,\bI),\quad \bu\sim N(\bzero,\bG), \label{eq:threeLevFreq}$$ where the design matrices are $$\bX=\stack{1 \le i \le m}\big(\bX_i\big) \quad\mbox{with}\quad \bX_i=\stack{1 \le j \le n_i}\big(\bX_{ij}\big) \quad\mbox{and}\quad \bX_{ij}\equiv[\bone\ \bx_{ij}] $$ and $$\bZ\equiv\Big[\bZgbl\, \blockdiag{1\le i\le m} \Big[\stack{1\le j\le n_i}([\bX_{ij}\ \bZLoneGrpij])\, \blockdiag{1\le j\le n_i}([\bX_{ij}\ \bZLtwoGrpij]) \Big]\Big],$$ where $$\bZgbl\equiv\stack{1\le i\le m}\big(\stack{1\le j\le n_i}(\bZgblij)\big)$$ and the matrices $\bZgblij$, $\bZLoneGrpij$ and $\bZLtwoGrpij$, $1\le i\le m$, $1\le j\le n_i$, contain, respectively, spline basis functions for the global mean function $f$, the $i$th level one group deviation functions $g_i$ and $(i,j)$th level two group deviation functions $h_{ij}$.
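The stack operator is ordinary vertical concatenation, so the design blocks above can be assembled directly. A minimal numpy sketch, with made-up $x_{ijk}$ values for one level 3 group having $n_i=2$ level 2 groups:

```python
import numpy as np

def stack(mats):
    # The paper's stack operator: matrices with equal numbers of
    # columns, concatenated vertically.
    return np.vstack(mats)

# Made-up predictor column vectors x_i1 and x_i2.
x_i1 = np.array([[0.1], [0.4], [0.7]])   # o_i1 = 3
x_i2 = np.array([[0.2], [0.9]])          # o_i2 = 2

# x_i = stack over j of x_ij, shape (o_i1 + o_i2, 1).
x_i = stack([x_i1, x_i2])

# X_ij = [1  x_ij], stacked the same way to form X_i.
X_i = stack([np.hstack([np.ones_like(x_ij), x_ij]) for x_ij in (x_i1, x_i2)])
```

The same pattern, applied once more over $i$, yields the full design matrix $\bX$.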
Specifically, [ $$\begin{aligned} \bZgblij&\equiv&[\begin{array}{ccc} \zgblo(\bx_{ij}) & \cdots & \zgblKgbl(\bx_{ij}) \end{array}],\quad\bZLoneGrpij= [\zLoneGrpo(\bx_{ij}) \cdots \zLoneGrpK(\bx_{ij})]\\[2ex] \mbox{and}\ \bZLtwoGrpij&\equiv& [\zLtwoGrpo(\bx_{ij}) \cdots \zLtwoGrpK(\bx_{ij})] \quad\mbox{for $1\le i\le m$ and $1\le j\le n_i$.}\end{aligned}$$ ]{} The fixed and random effects vectors are $$\bbeta\equiv\left[\begin{array}{c} \beta_0\\[1ex] \beta_1 \end{array} \right] \quad \mbox{and} \quad \bu\equiv \left[ \begin{array}{c} \buGbl\\[1ex] {\displaystyle\stack{1\le i\le m}}\left( \left[ \begin{array}{c} \left[ \begin{array}{c} \buLoneLini\\ \buLoneGrpi \end{array} \right] \\[2ex] \left[ \begin{array}{c} \buLtwoLinio\\ \buLtwoGrpio \end{array} \right] \\[2ex] \vdots \\[2ex] \left[ \begin{array}{c} \buLtwoLinini\\ \buLtwoGrpini \end{array} \right] \end{array} \right] \right) \end{array} \right] \quad\mbox{where}\quad \buLoneLini\equiv \left[\begin{array}{c} \uLiniz^g\\[1ex] \uLinio^g \end{array} \right]$$ with $\buLoneGrpi$, $\buLtwoLinij$ and $\buLtwoGrpij$ defined similarly and the covariance matrix of $\bu$ is $$\begin{array}{c} \bG=\Cov(\bu) =\left[ \begin{array}{cc} \sigmaGbl^2\bI & \bO \\[1ex] \bO & \displaystyle{\blockdiag{1\le i\le m}} \left[ \begin{array}{ccc} \bSigmag & \bO & \bO \\ \bO &\sigmaGrpg^2\bI & \bO \\ \bO & \bO & \bI_{n_i}\otimes \left[ \begin{array}{cc} \bSigmah & \bO \\ \bO & \sigmaGrph^2\bI \end{array} \right] \end{array} \right] \end{array} \right]. \end{array} \label{eqn:threeLevBLUPCov}$$ We define matrices in a similar way to what is given in (\[eq:CDRmatBLUPdefs\]). The best linear unbiased predictor of $[ \bbeta \ \bu ]$ and corresponding covariance matrix are as shown in (\[eq:BLUPandCov\]), but, with entries as described in this section. This covariance matrix grows quadratically in both $m$ and the $n_{i}$s, and so, storage becomes impractical for large numbers of level 2 and level 3 groups. 
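The block structure of $\bG$ can be reproduced numerically. In the following numpy-only sketch every variance parameter, covariance matrix and basis size is hypothetical; only the layout mirrors (\[eqn:threeLevBLUPCov\]), with the quadratic growth of the dimension in $m$ and the $n_i$ visible in the final line.

```python
import numpy as np

def block_diag(*mats):
    # Simple dense block-diagonal helper.
    r = sum(M.shape[0] for M in mats)
    c = sum(M.shape[1] for M in mats)
    out = np.zeros((r, c))
    i = j = 0
    for M in mats:
        out[i:i + M.shape[0], j:j + M.shape[1]] = M
        i += M.shape[0]
        j += M.shape[1]
    return out

# Hypothetical variance parameters and spline basis sizes.
sig2_gbl, sig2_g, sig2_h = 1.0, 0.5, 0.25
K_gbl, K_g, K_h = 5, 4, 3
Sigma_g = np.array([[1.0, 0.2], [0.2, 0.5]])   # unstructured 2x2
Sigma_h = np.array([[0.8, 0.1], [0.1, 0.4]])
m, n = 2, [2, 3]                               # n[i] = n_i

# Inner block repeated n_i times via the Kronecker product.
inner_h = block_diag(Sigma_h, sig2_h * np.eye(K_h))
level3_blocks = [
    block_diag(Sigma_g, sig2_g * np.eye(K_g), np.kron(np.eye(n[i]), inner_h))
    for i in range(m)
]
G = block_diag(sig2_gbl * np.eye(K_gbl), *level3_blocks)

# dim = K_gbl + sum_i {2 + K_g + n_i (2 + K_h)}
dim = K_gbl + sum(2 + K_g + n[i] * (2 + K_h) for i in range(m))
```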
However, only certain sub-blocks are required for the addition of pointwise confidence intervals to curve estimates. In particular, we only require the non-zero sub-blocks of the general three-level sparse matrix given in Section 3 of Nolan & Wand (2018) that correspond to $(\bC^T\RBLUP^{-1}\bC+\DBLUP)^{-1}$. In the case of the three-level Gaussian response linear model, Nolan & Wand’s $$\begin{array}{l} \bA_{11} \mbox{ sub-block corresponds to a } (2 + \Kgbl) \times (2 + \Kgbl) \mbox{ matrix } \Cov\left(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right); \\[2ex] \bA_{22,i} \mbox{ sub-block corresponds to a } (2 + \Kgrpg) \times (2 + \Kgrpg) \mbox{ matrix } \Cov\left(\left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right] \right); \\[1ex] \bA_{12,i} \mbox{ sub-block corresponds to a } (2 + \Kgbl) \times (2 + \Kgrpg) \mbox{ matrix } \\[1ex] \qquad \qquad \qquad E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right]^T \right\}, \ 1\le i \le m; \\[1ex] \bA_{22,ij} \mbox{ sub-block corresponds to a } (2 + \Kgrph) \times (2 + \Kgrph) \mbox{ matrix } \Cov \left( \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right] \right); \\[1ex] \bA_{12,ij} \mbox{ sub-block corresponds to a } (2 + \Kgbl) \times (2 + \Kgrph) \mbox{ matrix } \\[1ex] \qquad \qquad \qquad E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right]^T \right\}; \\[1ex] \bA_{12,\iCOMMAj} \mbox{ sub-block corresponds to a } (2 + \Kgrpg) \times (2 + \Kgrph) \mbox{ matrix } \\[1ex] \qquad \qquad \qquad E\left\{ \left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right] \left[
\begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right]^T \right\}, \ 1 \le i \le m, \ 1 \le j \le n_{i}. \end{array}$$ As described in Nolan, Menictas & Wand (2019), the $\SolveThreeLevelSparseLeastSquares$ algorithm arises in the special case where $\bx$ is the minimizer of the least squares problem given in equation (\[eqn:sparseLeastSquares\]), where $\bB$ has the three-level sparse form and $\bb$ is partitioned according to that shown in equation (7) of Nolan & Wand (2018). This algorithm can be used for fitting three-level group-specific curve models by making use of Result \[res:threeLevelBLUP\]. Computation of $[\bbetahat^T\ \ \buhat^T]^T$ and each of the sub-blocks of $\mbox{\rm Cov}([\bbetahat^T\ \ (\buhat-\bu)^T]^T)$ listed in (\[eq:CovMain\]) is expressible as the three-level sparse matrix least squares form: $$\left\Vert\bb-\bB\left[ \begin{array}{c} \bbeta\\ \bu \end{array} \right] \right\Vert^2$$ where the non-zero sub-blocks $\bB$ and $\bb$, according to the notation in Section 3.1 of Nolan & Wand (2018), are for $1\le i\le m$ and $1\le j\le n_i$: $$\bvecij \equiv \left[ \begin{array}{c} \sigeps^{-1}\by_{ij}\\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right], \qquad \Bmatij\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZgblij\\[1ex] \bO & \ndotmh \sigmaGbl^{-1}\bI_{\Kgbl} \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right],$$ $$\Bmatdotij\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZLoneGrpij \\[1ex] \bO & \bO \\[1ex] n_{i}^{-1/2}\bSigmag^{-1/2} & \bO \\[1ex] \bO & n_{i}^{-1/2} \sigmaGrpg^{-1} \bI_{\Kgrpg} \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array}\right] \quad\mbox{and}\quad \Bmatdotdotij\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZLtwoGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bSigmah^{-1/2} & \bO \\[1ex]
\bO & \sigmaGrph^{-1}\bI_{\Kgrph} \end{array}\right] $$ with each of these matrices having $\oadj_{ij}=o_{ij}+\Kgbl+2+\Kgrpg+2+\Kgrph$ rows and with $\Bmati$ having $p=2+\Kgbl$ columns, $\Bmatdoti$ having $q_1=2+\Kgrpg$ columns and $\Bmatdotdotij$ having $q_2=2+\Kgrph$ columns. The solutions are $$\left[ \begin{array}{c} \bbetahat\\ \buHatGbl \end{array} \right]=\xveco,\quad \Cov\left(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right)=\AUoo,$$ $$\left[ \begin{array}{c} \buHatLoneLini\\[1ex] \buHatLoneGrpi \end{array} \right]=\xvectCi,\quad E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right]^T \right\}=\AUotCi, $$ $$\Cov\left(\left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right] \right)=\AUttCi,\ \ 1\le i\le m,$$ $$\left[ \begin{array}{c} \buHatLtwoLinij\\[1ex] \buHatLtwoGrpij \end{array} \right]=\bx_{2,ij},\quad E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right]^T \right\}=\bA^{12,ij}, $$ $$E\left\{ \left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right] \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right]^T \right\}=\bA^{12,\iCOMMAj}$$ and $$\Cov \left( \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right] \right)=\bA^{22,ij}, \quad 1\le i\le m,\ 1\le j\le n_i.$$ \[res:threeLevelBLUP\] - Inputs: $\by_{ij}(o_{ij}\times1),\ \bX_{ij}(o_{ij}\times 2),\ \bZgblij(o_{ij}\times \Kgbl),\ \bZLoneGrpij(o_{ij}\times \Kgrpg),$ - $\bZLtwoGrpij(o_{ij}\times \Kgrph), 1\le i\le m, \ 1\le j\le n_{i}$; $\quad\sigeps^2,\sigmaGbl^2,\sigmaGrpg^2,\sigmaGrph^2>0$, - 
$\bSigmag(2\times2), \ \bSigmah(2\times 2), \mbox{symmetric and positive definite.}$ - For $i=1,\ldots,m$: - For $j=1,\ldots,n_{i}$: - $\begin{array}{l} \bvecij\thickarrow \left[ \begin{array}{c} \sigeps^{-1}\by_{ij}\\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right], \ \Bmatij\thickarrow \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZgblij\\[1ex] \bO & \ndotmh \sigmaGbl^{-1}\bI_{\Kgbl} \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right] \end{array}$ - $\begin{array}{l} \Bmatdotij\thickarrow \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZLoneGrpij \\[1ex] \bO & \bO \\[1ex] n_{i}^{-1/2}\bSigmag^{-1/2} & \bO \\[1ex] \bO & n_{i}^{-1/2} \sigmaGrpg^{-1} \bI_{\Kgrpg} \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array}\right], \ \Bmatdotdotij\thickarrow \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZLtwoGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bSigmah^{-1/2} & \bO \\[1ex] \bO & \sigmaGrph^{-1}\bI_{\Kgrph} \end{array}\right] \end{array}$ - $\Ssc_3\thickarrow\SolveThreeLevelSparseLeastSquares\Big(\big\{( \bvecij,\Bmatij,\Bmatdotij,\Bmatdotdotij):1\le i\le m, $ - $\qquad \qquad 1 \le j\le n_{i} \big\}\Big)$ - $\left[ \begin{array}{c} \bbetahat\\ \buHatGbl \end{array} \right]\thickarrow\mbox{$\xveco$ component of $\Ssc_3$}$   ;   $\Cov\left(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right)\thickarrow\mbox{$\AUoo$ component of $\Ssc_3$}$ - For $i=1,\ldots,m$: - $\left[ \begin{array}{c} \buHatLoneLini\\[1ex] \buHatLoneGrpi \end{array} \right]\thickarrow\mbox{$\xvectCi$ component of $\Ssc_3$}$ - $\Cov\left(\left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right] \right)\thickarrow\mbox{$\AUttCi$ component of $\Ssc_3$}$ - $E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} 
\buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right]^T \right\}\thickarrow\mbox{$\AUotCi$ component of $\Ssc_3$}$ - *continued on a subsequent page* $\ldots$ <!-- --> - - - For $j=1,\ldots,n_{i}$: - - $\left[ \begin{array}{c} \buHatLtwoLinij\\[1ex] \buHatLtwoGrpij \end{array} \right]\thickarrow\mbox{$\bx_{2,ij}$ component of $\Ssc_3$}$ - $E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right]^T \right\}\thickarrow\mbox{$\bA^{12,ij}$ component of $\Ssc_3$}$ - $E\left\{ \left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right] \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right]^T \right\}\thickarrow\mbox{$\bA^{12,\iCOMMAj}$ component of $\Ssc_3$}$ - $\Cov \left( \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right] \right)\thickarrow\mbox{$\bA^{22,ij}$ component of $\Ssc_3$}$ - Output: $$\begin{array}{c}\Bigg(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl \end{array} \right],\ \Cov\left(\left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right]\right), \Bigg\{\Bigg( \left[ \begin{array}{c} \buHatLini\\[1ex] \buHatGrpi \end{array} \right],\, \Cov\left(\left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right]\right),\,\\[3ex] E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLini-\buLini\\[1ex] \buHatGrpi-\buGrpi \end{array} \right]^T \right\} \Bigg):\ 1\le i\le m, \\[3ex] \Bigg( \left[ \begin{array}{c} \buHatLtwoLinij\\[1ex] \buHatLtwoGrpij \end{array} \right],\, E\left\{ \left[ \begin{array}{c} \bbetahat\\ \buHatGbl-\buGbl \end{array} \right] \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} 
\right]^T \right\}, \\[3ex] E\left\{ \left[ \begin{array}{c} \buHatLoneLini-\buLoneLini\\[1ex] \buHatLoneGrpi-\buLoneGrpi \end{array} \right] \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right]^T \right\}, \, \Cov \left( \left[ \begin{array}{c} \buHatLtwoLinij-\buLtwoLinij\\[1ex] \buHatLtwoGrpij-\buLtwoGrpij \end{array} \right] \right) \Bigg): \\[3ex] 1 \le i \le m, \, 1 \le j \le n_{i} \Big\} \Big) \end{array}$$ A derivation of Result \[res:threeLevelBLUP\] is given in Section \[sec:drvResultThree\] of the web-supplement. Result \[res:threeLevelBLUP\] combined with Theorem 4 of Nolan & Wand (2018) leads to Algorithm \[alg:threeLevBLUP\]. The $\SolveThreeLevelSparseLeastSquares$ algorithm is given in Section \[sec:Solve3Lev\].

Mean Field Variational Bayes
----------------------------

A Bayesian extension of (\[eq:threeLevFreq\]) and (\[eqn:threeLevBLUPCov\]) is: $$\begin{array}{c} \by|\bbeta, \bu, \sigsqeps\sim N(\bX\bbeta+\bZ\,\bu,\sigeps^2\,\bI),\quad \bu|\sigmaGbl^2,\sigmaGrpg^2,\bSigmag, \sigmaGrph^2,\bSigmah\sim N(\bzero,\bG), \quad\mbox{$\bG$ as defined in (\ref{eqn:threeLevBLUPCov}),} \\[1ex] \bbeta\sim N(\bmu_{\bbeta},\bSigma_{\bbeta}),\quad\sigeps^2|\aeps\sim\mbox{Inverse-$\chi^2$}(\nuEps,1/\aeps), \quad\aeps\sim\mbox{Inverse-$\chi^2$}(1,1/(\nuEps\sEps^2)),\\[2ex] \quad\sigmaGbl^2|\aGbl\sim\mbox{Inverse-$\chi^2$}(\nuGbl,1/\aGbl), \quad\aGbl\sim\mbox{Inverse-$\chi^2$}(1,1/(\nuGbl\sGbl^2)),\\[2ex] \quad\sigmaGrpg^2|\aGrpg\sim\mbox{Inverse-$\chi^2$}(\nuGrpg,1/\aGrpg), \quad\aGrpg\sim\mbox{Inverse-$\chi^2$}(1,1/(\nuGrpg\sGrpg^2)),\\[2ex] \quad\sigmaGrph^2|\aGrph\sim\mbox{Inverse-$\chi^2$}(\nuGrph,1/\aGrph), \quad\aGrph\sim\mbox{Inverse-$\chi^2$}(1,1/(\nuGrph\sGrph^2)),\\[2ex] \bSigmag|\ASigmag\sim\mbox{Inverse-G-Wishart}\big(\Gfull,\nuSigmag+2,\ASigmag^{-1}\big),\\[2ex] \ASigmag\sim\mbox{Inverse-G-Wishart}(\Gdiag,1,\bLambda_{\ASigmag}),\quad \bLambda_{\ASigmag}\equiv\{\nuSigmag\diag(\sSigmagOne^2,\sSigmagTwo^2)\}^{-1},\\[2ex]
\bSigmah|\ASigmah\sim\mbox{Inverse-G-Wishart}\big(\Gfull,\nuSigmah+2,\ASigmah^{-1}\big),\\[2ex] \ASigmah\sim\mbox{Inverse-G-Wishart}(\Gdiag,1,\bLambda_{\ASigmah}),\quad \bLambda_{\ASigmah}\equiv\{\nuSigmah\diag(\sSigmahOne^2,\sSigmahTwo^2)\}^{-1}. \end{array} \label{eq:threeLevBayes}$$ The following mean field restriction is imposed on the joint posterior density function of all parameters in (\[eq:threeLevBayes\]): $$\begin{array}{l} \pDens(\bbeta,\bu,\aeps,\aGbl,\aGrpg,\ASigmag,\aGrph,\ASigmah,\sigeps^2,\sigmaGbl^2,\sigmaGrpg^2,\bSigmag, \sigmaGrph^2,\bSigmah|\by) \\ \qquad \qquad \approx \qDens(\bbeta,\bu,\aeps,\aGbl,\aGrpg,\ASigmag,\aGrph,\ASigmah)\, \qDens(\sigeps^2,\sigmaGbl^2,\sigmaGrpg^2,\bSigmag,\sigmaGrph^2,\bSigmah). \end{array} \label{eq:producRestrict3lev}$$ The optimal $\qDens$-density functions for the parameters of interest are [ $$\begin{aligned} &&\qDens^*(\bbeta,\bu)\ \mbox{has a $N\big(\bmu_{\qDens(\bbeta,\bu)},\bSigma_{\qDens(\bbeta,\bu)}\big)$ distribution,}\\[1ex] &&\qDens^*(\sigeps^2)\ \mbox{has an $\mbox{Inverse-$\chi^2$} \big(\xi_{\qDens(\sigeps^2)},\lambda_{\qDens(\sigeps^2)}\big)$ distribution,}\\[1ex] &&\qDens^*(\sigmaGbl^2)\ \mbox{has an $\mbox{Inverse-$\chi^2$} \big(\xi_{\qDens(\sigmaGbl^2)},\lambda_{\qDens(\sigmaGbl^2)}\big)$ distribution,}\\[1ex] &&\qDens^*(\sigmaGrpg^2)\ \mbox{has an $\mbox{Inverse-$\chi^2$} \big(\xi_{\qDens(\sigmaGrpg^2)},\lambda_{\qDens(\sigmaGrpg^2)}\big)$ distribution}\\[1ex] &&\qDens^*(\sigmaGrph^2)\ \mbox{has an $\mbox{Inverse-$\chi^2$} \big(\xi_{\qDens(\sigmaGrph^2)},\lambda_{\qDens(\sigmaGrph^2)}\big)$ distribution}\\[1ex] &&\qDens^*(\bSigmag)\ \mbox{has an $\mbox{Inverse-G-Wishart}(\Gfull,\xi_{\qDens(\bSigmag)},\bLambda_{\qDens(\bSigmag)})$ distribution}\\[1ex] \mbox{and}&&\qDens^*(\bSigmah)\ \mbox{has an $\mbox{Inverse-G-Wishart}(\Gfull,\xi_{\qDens(\bSigmah)},\bLambda_{\qDens(\bSigmah)})$ distribution.}\end{aligned}$$ ]{} The optimal $\qDens$-density parameters are determined through an iterative 
coordinate ascent algorithm, details of which are given in Section \[sec:drvAlgFour\] of the web-supplement. As in the two-level case, the updates for $\bmu_{\qDens(\bbeta,\bu)}$ and $\bSigma_{\qDens(\bbeta,\bu)}$ may be written in the same form as (\[eq:muSigmaMFVBupd\]) but with a three-level version of the $\bC$ matrix and $$\begin{array}{l} \DMFVB \equiv \\[1ex] \left[ {\setlength\arraycolsep{1pt} \begin{array}{ccc} \bSigma_{\bbeta}^{-1} & \bO & \bO \\[1ex] \bO & \bmu_{\qDens(1/{\sigmaGbl^{2}})}\bI & \bO \\[1ex] \bO & \bO & \bI_{m} \otimes \left[ {\setlength\arraycolsep{1pt} \begin{array}{ccc} \bM_{\qDens(\bSigmag^{-1})} & \bO & \bO \\ \bO &\bmu_{\qDens(1/\sigmaGrpg^2)}\bI & \bO \\ \bO & \bO & \bI_{n_i}\otimes \left[ \begin{array}{cc} \bM_{\qDens(\bSigmah^{-1})} & \bO \\ \bO & \bmu_{\qDens(1/\sigmaGrph^2)}\bI \end{array} \right] \end{array} } \right] \end{array} } \right]. \end{array} \label{eqn:DMFVB}$$ For large numbers of level 2 and level 3 groups, $\bSigma_{\qDens(\bbeta, \bu)}$’s size becomes infeasible to deal with. However, only relatively small sub-blocks of $\bSigma_{\qDens(\bbeta,\bu)}$ are needed for variational inference regarding the variance and covariance parameters. These sub-block positions correspond to the non-zero sub-block positions of a general three-level sparse matrix defined in Section 3 of Nolan & Wand (2018).
Here, Nolan & Wand’s $$\begin{array}{l} \bA_{11} \mbox{ sub-block corresponds to a } (2 + \Kgbl) \times (2 + \Kgbl) \mbox{ matrix } \bSigma_{\qDens(\bbeta, \buGbl)}; \\[2ex] \bA_{22,i} \mbox{ sub-block corresponds to a } (2 + \Kgrpg) \times (2 + \Kgrpg) \mbox{ matrix } \bSigma_{\qDens(\buLoneLini, \buLoneGrpi)}; \\[1ex] \bA_{12,i} \mbox{ sub-block corresponds to a } (2 + \Kgbl) \times (2 + \Kgrpg) \mbox{ matrix } \\[1ex] \qquad \qquad \qquad E\left\{ \left( \left[ \begin{array}{c} \bbeta\\ \buGbl \end{array} \right] - \bmu_{\qDens(\bbeta,\buGbl)} \right) \left( \left[ \begin{array}{c} \buLoneLini\\[1ex] \buLoneGrpi \end{array} \right] - \bmu_{\qDens(\buLoneLini,\buLoneGrpi)} \right)^T \right\}, \ 1\le i \le m; \\[1ex] \bA_{22,ij} \mbox{ sub-block corresponds to a } (2 + \Kgrph) \times (2 + \Kgrph) \mbox{ matrix } \bSigma_{\qDens(\buLtwoLinij,\buLtwoGrpij)}; \\[1ex] \bA_{12,ij} \mbox{ sub-block corresponds to a } (2 + \Kgbl) \times (2 + \Kgrph) \mbox{ matrix } \\[1ex] \qquad \qquad \qquad E\left\{ \left( \left[ \begin{array}{c} \bbeta\\ \buGbl \end{array} \right] - \bmu_{\qDens(\bbeta, \buGbl)} \right) \left( \left[ \begin{array}{c} \buLtwoLinij\\[1ex] \buLtwoGrpij \end{array} \right] - \bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)} \right)^T \right\}; \\[1ex] \bA_{12,\iCOMMAj} \mbox{ sub-block corresponds to a } (2 + \Kgrpg) \times (2 + \Kgrph) \mbox{ matrix } \\[1ex] \qquad \qquad \qquad E\left\{ \left( \left[ \begin{array}{c} \buLoneLini\\[1ex] \buLoneGrpi \end{array} \right] - \bmu_{\qDens(\buLoneLini, \buLoneGrpi)} \right) \left( \left[ \begin{array}{c} \buLtwoLinij\\[1ex] \buLtwoGrpij \end{array} \right] - \bmu_{\qDens(\buLtwoLinij, \buLtwoGrpij)} \right)^T \right\}, \\[1ex] 1 \le i \le m, \ 1 \le j \le n_{i}. \end{array} \label{eq:subBlocksThreeLevMfvb}$$ We appeal to Result \[res:threeLevelMFVB\] for a streamlined mean field variational Bayes algorithm.
The mean field variational Bayes updates of $\bmu_{\qDens(\bbeta,\bu)}$ and each of the sub-blocks of $\bSigma_{\qDens(\bbeta,\bu)}$ in (\[eq:subBlocksThreeLevMfvb\]) are expressible as a three-level sparse matrix least squares problem of the form: $$\left\Vert\bb-\bB\left[ \begin{array}{c} \bbeta\\ \bu \end{array} \right] \right\Vert^2$$ where the non-zero sub-blocks $\bB$ and $\bb$, according to the notation in Section 3.1 of Nolan & Wand (2018), are for $1\le i\le m$ and $1\le j\le n_i$: $$\bb_{ij}\equiv \left[ \begin{array}{c} \mu_{\qDens(1/\sigeps^2)}^{1/2}\by_{ij}\\[1ex] \ndotmh \bSigma_{\bbeta}^{-1/2} \bmu_{\bbeta}\\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right], \ \ \bB_{ij}\equiv \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_{ij} & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgblij\\[1ex] \ndotmh \bSigma_{\bbeta}^{-1/2} & \bO \\[1ex] \bO &\ndotmh \mu_{\qDens(1/\sigmaGbl^2)}^{1/2}\bI_{\Kgbl}\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right],$$ $$\bBdot_{ij}\equiv \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_{ij} & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZLoneGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] n_i^{-1/2}\bM_{\qDens(\bSigmag^{-1})}^{1/2} & \bO \\[1ex] \bO &n_i^{-1/2}\mu_{\qDens(1/\sigmaGrpg^2)}^{1/2}\bI_{\Kgrpg}\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\ \end{array} \right] \quad\mbox{and}\quad \bBdotdot_{ij}\equiv \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_{ij} & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZLtwoGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bM_{\qDens(\bSigmah^{-1})}^{1/2} & \bO \\[1ex] \bO &\mu_{\qDens(1/\sigmaGrph^2)}^{1/2}\bI_{\Kgrph}\\ \end{array} \right] $$ with each of these matrices having $\oadj_{ij}=o_{ij}+2+\Kgbl+2+\Kgrpg+2+\Kgrph$ rows and with $\Bmati$ having $p=2+\Kgbl$ columns, $\Bmatdoti$ having $q_1=2+\Kgrpg$ columns and $\Bmatdotdotij$ having $q_2=2+\Kgrph$ columns.
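A quick entry count shows the payoff of storing only the listed sub-blocks rather than the dense covariance matrix. All sizes below are hypothetical and chosen only for illustration; the sub-block inventory follows (\[eq:subBlocksThreeLevMfvb\]) for a balanced design.

```python
# Hypothetical basis sizes and numbers of groups (balanced case).
K_gbl, K_g, K_h = 25, 10, 10
m, n_i = 100, 20

p = 2 + K_gbl    # columns of the B_ij blocks
q1 = 2 + K_g     # columns of the Bdot_ij blocks
q2 = 2 + K_h     # columns of the Bdotdot_ij blocks

# Dimension of the full (beta, u) vector and of its dense covariance matrix.
d = p + m * (q1 + n_i * q2)
dense_entries = d * d

# Entries in the sub-blocks that streamlined inference actually needs:
# A11, plus A22_i and A12_i for each i, plus A22_ij, A12_ij and
# A12_{i,j} for each (i, j).
streamlined_entries = (p * p
                       + m * (q1 * q1 + p * q1)
                       + m * n_i * (q2 * q2 + p * q2 + q1 * q2))

ratio = dense_entries / streamlined_entries  # several hundred under these sizes
```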
The solutions are $$\bmu_{\qDens(\bbeta,\buGbl)} =\xveco,\quad \bSigma_{\qDens(\bbeta,\buGbl)} =\AUoo,$$ $$\bmu_{\qDens(\buLoneLini,\buLoneGrpi)} =\xvectCi,\quad E_q\left\{ \left[ \begin{array}{c} \bbeta-\bmu_{\qDens(\bbeta)}\\ \buGbl-\bmu_{\qDens(\buGbl)} \end{array} \right] \left[ \begin{array}{c} \buLoneLini-\bmu_{\qDens(\buLoneLini)}\\[1ex] \buLoneGrpi-\bmu_{\qDens(\buLoneGrpi)} \end{array} \right]^T \right\}=\AUotCi, $$ $$\bSigma_{\qDens(\buLoneLini,\buLoneGrpi)} =\AUttCi,\ \ 1\le i\le m,$$ $$\bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)} =\bx_{2,ij},\quad E_q\left\{ \left[ \begin{array}{c} \bbeta-\bmu_{\qDens(\bbeta)}\\ \buGbl-\bmu_{\qDens(\buGbl)} \end{array} \right] \left[ \begin{array}{c} \buLtwoLinij-\bmu_{\qDens(\buLtwoLinij)}\\[1ex] \buLtwoGrpij-\bmu_{\qDens(\buLtwoGrpij)} \end{array} \right]^T \right\}=\bA^{12,ij}, $$ $$E_q\left\{ \left[ \begin{array}{c} \buLoneLini-\bmu_{\qDens(\buLoneLini)}\\[1ex] \buLoneGrpi-\bmu_{\qDens(\buLoneGrpi)} \end{array} \right] \left[ \begin{array}{c} \buLtwoLinij-\bmu_{\qDens(\buLtwoLinij)}\\[1ex] \buLtwoGrpij-\bmu_{\qDens(\buLtwoGrpij)} \end{array} \right]^T \right\}=\bA^{12,\iCOMMAj}$$ and $$\bSigma_{\qDens(\buLtwoLinij,\buLtwoGrpij)} =\bA^{22,ij}, \quad 1\le i\le m,\ 1\le j\le n_i.$$ \[res:threeLevelMFVB\] - Data Inputs: $\by_{ij}(o_{ij}\times1),\ \bX_{ij}(o_{ij}\times 2),\ \bZgblij (o_{ij} \times \Kgbl), \bZLoneGrpij(o_{ij}\times \Kgrpg),$ - $\bZLtwoGrpij(o_{ij}\times \Kgrph)\ 1\le i\le m,\ 1\le j\le n_{i}.$ - Hyperparameter Inputs: $\bmu_{\bbeta}(2\times1)$, $\bSigma_{\bbeta}(2\times 2)\ \mbox{symmetric and positive definite}$, - $s_{\varepsilon},\nu_{\varepsilon},s_{\mbox{\rm\tiny gbl}}, \nu_{\mbox{\rm\tiny gbl}},\sSigmagOne,\sSigmagTwo,\nuSigmag, \sGrpg, \nuGrpg, \sSigmahOne,\sSigmahTwo,\nuSigmah, \sGrph, \nuGrph>0$. 
- For $i=1,\ldots,m:$ - - For $j=1,\ldots,n_{i}:$ - - $\bCgblij\thickarrow[\bX_{ij}\ \bZgblij]$   ;   $\bCLoneGrpij\thickarrow[\bX_{ij}\ \bZLoneGrpij]$    ;   $\bCLtwoGrpij\thickarrow[\bX_{ij}\ \bZLtwoGrpij]$ - Initialize: $\muq{1/\sigsqeps}$, $\muq{1/\sigma_{\mbox{\rm\tiny gbl}}^{2}}$, $\muq{1/\sigma_{\mbox{\rm\tiny grp, g}}^{2}}$, $\muq{1/\sigma_{\mbox{\rm\tiny grp, h}}^{2}}$, $\muq{1/\aeps}$, $\muq{1/a_{\mbox{\rm\tiny gbl}}}$, - $\muq{1/a_{\mbox{\rm\tiny grp, g}}}$, $\muq{1/a_{\mbox{\rm\tiny grp, h}}} > 0$, $\bM_{\qDens(\bSigma_{\mbox{\rm\tiny g}}^{-1})} (2 \times 2), \bM_{\qDens(\bSigma_{\mbox{\rm\tiny h}}^{-1})} (2 \times 2),$ - $\bM_{\qDens(\bA^{-1}_{\mbox{\rm\tiny g}})} (2 \times 2), \bM_{\qDens(\bA^{-1}_{\mbox{\rm\tiny h}})} (2 \times 2)$ symmetric and positive definite. - $\xi_{\qDens(\sigeps^2)}\thickarrow \nu_{\varepsilon} + \sumim \sum_{j=1}^{n_{i}} o_{ij}$   ;   $\xi_{\qDens(\sigmaGbl^2)}\thickarrow\nu_{\mbox{\rm\tiny gbl}}+\Kgbl$   ;   $\xi_{\qDens(\bSigma_{\mbox{\rm\tiny g}})}\thickarrow\nuSigmag+2+m$ - $\xi_{\qDens(\bSigma_{\mbox{\rm\tiny h}})}\thickarrow\nuSigmah+2+\sum_{i=1}^{m} n_{i}$    ;    $\xi_{\qDens(\sigmaGrpg^2)}\thickarrow \nu_{\mbox{\rm\tiny grp, g}} + m\Kgrpg$ - $\xi_{\qDens(\sigmaGrph^2)}\thickarrow \nu_{\mbox{\rm\tiny grp, h}} + \Kgrph \sum_{i=1}^{m}n_{i}$   ;    $\xi_{\qDens(a_{\varepsilon})}\thickarrow \nuEps + 1$   ;    $\xi_{\qDens(a_{\mbox{\rm\tiny gbl}})}\thickarrow \nuGbl + 1$ - $\xi_{\qDens(a_{\mbox{\rm\tiny grp, g}})}\thickarrow \nuGrpg + 1$    ;    $\xi_{\qDens(a_{\mbox{\rm\tiny grp, h}})}\thickarrow \nuGrph + 1$    ;     $\xi_{\qDens(\bA_{\bSigmag})}\thickarrow \nuSigmag + 2$   ;    $\xi_{\qDens(\bA_{\bSigmah})}\thickarrow \nuSigmah + 2$ - Cycle: - For $i = 1,\ldots, m$: - - For $j = 1, \ldots, n_{i}$: - - $\bvecij\thickarrow\left[ \begin{array}{c} \mu_{\qDens(1/\sigeps^2)}^{1/2}\by_{ij}\\[1.5ex] n_{\boldsymbol{\cdot}}^{-1/2} \bSigma_{\bbeta}^{-1/2}\bmu_{\bbeta} \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero 
\\[1ex] \bzero \end{array} \right],\ \Bmatij\thickarrow \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_{ij} & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgblij\\[1ex] \ndotmh \bSigma_{\bbeta}^{-1/2} & \bO \\[1ex] \bO &\ndotmh \mu_{\qDens(1/\sigmaGbl^2)}^{1/2}\bI_{\Kgbl}\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right],$ - $\Bmatdotij\thickarrow \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_{ij} & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZLoneGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] n_i^{-1/2}\bM_{\qDens(\bSigmag^{-1})}^{1/2} & \bO \\[1ex] \bO &n_i^{-1/2}\mu_{\qDens(1/\sigmaGrpg^2)}^{1/2}\bI_{\Kgrpg}\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\ \end{array} \right]$ - *continued on a subsequent page* $\ldots$ <!-- --> - - $\Bmatdotdotij\thickarrow \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZLtwoGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bM_{\qDens(\bSigmah^{-1})}^{1/2} & \bO \\[1ex] \bO &\mu_{\qDens(1/\sigmaGrph^2)}^{1/2}\bI_{\Kgrph}\\ \end{array} \right]$ - - $\Ssc_4\thickarrow\SolveThreeLevelSparseLeastSquares\Big(\big\{( \bvecij,\Bmatij,\Bmatdotij,\Bmatdotdotij):1\le i\le m, $ - $\qquad 1 \le j\le n_{i} \big\}\Big)$ - $\bmu_{\qDens(\bbeta,\buGbl)}\thickarrow\mbox{$\xveco$ component of $\Ssc_4$}$    ;   $\bSigma_{\qDens(\bbeta,\buGbl)}\thickarrow\mbox{$\AUoo$ component of $\Ssc_4$}$ - $\bmu_{\qDens(\buGbl)}\thickarrow\mbox{last $\Kgbl$ rows of $\bmu_{\qDens(\bbeta,\buGbl)}$}$ - $\bSigma_{\qDens(\buGbl)}\thickarrow\mbox{bottom-right $\Kgbl\times\Kgbl$ sub-block of $\bSigma_{\qDens(\bbeta,\buGbl)}$}$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow\muq{1/\aeps}$  ;  $\Lambda_{\qDens(\bSigma_{\mbox{\rm\tiny g}})}\thickarrow \bM_{\qDens(\ASigmag^{-1})}$  ;  $\Lambda_{\qDens(\bSigma_{\mbox{\rm\tiny h}})}\thickarrow \bM_{\qDens(\ASigmah^{-1})}$ - $\lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny grp, 
g}})}\thickarrow\mu_{\qDens(1/a_{\mbox{\rm\tiny grp, g}})}$  ;  $\lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny grp, h}})}\thickarrow\mu_{\qDens(1/a_{\mbox{\rm\tiny grp, h}})}$ - For $i = 1,\ldots, m$: - $\bmu_{\qDens(\buLoneLini,\buLoneGrpi)} \thickarrow\mbox{$\xvectCi$ component of $\Ssc_4$}$ - $\bSigma_{\qDens(\buLoneLini,\buLoneGrpi)} \thickarrow\mbox{$\AUttCi$ component of $\Ssc_4$}$ - $\bmu_{\qDens(\buLoneLini)}\thickarrow\mbox{first $2$ rows of $\bmu_{\qDens(\buLoneLini,\buLoneGrpi)}$}$ - $\bSigma_{\qDens(\buLoneLini)}\thickarrow\mbox{top left $2 \times 2$ sub-block of $\bSigma_{\qDens(\buLoneLini,\buLoneGrpi)}$}$ - $\bmu_{\qDens(\buLoneGrpi)}\thickarrow\mbox{last $\Kgrpg$ rows of $\bmu_{\qDens(\buLoneLini,\buLoneGrpi)}$}$ - $\bSigma_{\qDens(\buLoneGrpi)}\thickarrow\mbox{bottom right $\Kgrpg \times \Kgrpg$ sub-block of $\bSigma_{\qDens(\buLoneLini,\buLoneGrpi)}$}$ - $E_{\qDens}\left\{\left(\left[\begin{array}{c}\bbeta\\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c}\buLoneLini\\ \buLoneGrpi\end{array}\right] -\bmuq{\buLoneLini,\buLoneGrpi)}\right)^T\right\}$ - $\thickarrow\mbox{$\AUotCi$ component of $\Ssc_4$}$ - For $j = 1,\ldots, n_{i}$: - $\bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)} \thickarrow\mbox{$\xvectCij$ component of $\Ssc_4$}$ - $\bSigma_{\qDens(\buLtwoLinij,\buLtwoGrpij)} \thickarrow\mbox{$\AUttCij$ component of $\Ssc_4$}$ - $\bmu_{\qDens(\buLtwoLinij)}\thickarrow\mbox{first $2$ rows of $\bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)}$}$ - $\bSigma_{\qDens(\buLtwoLinij)}\thickarrow\mbox{top left $2\times 2$ sub-block of $\bSigma_{\qDens(\buLtwoLinij,\buLtwoGrpij)}$}$ - $\bmu_{\qDens(\buLtwoGrpij)}\thickarrow\mbox{last $\Kgrph$ rows of $\bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)}$}$ - $\bSigma_{\qDens(\buLtwoGrpij)}\thickarrow\mbox{bottom right $\Kgrph \times \Kgrph$ sub-block of $\bSigma_{\qDens(\buLtwoLinij,\buLtwoGrpij)}$}$ - $E_{\qDens}\left\{\left(\left[\begin{array}{c}\bbeta\\ \buGbl\end{array}\right]
-\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c} \buLtwoLinij\\ \buLtwoGrpij \end{array}\right] -\bmuq{\buLtwoLinij,\buLtwoGrpij}\right)^T\right\}$ - $\thickarrow\mbox{$\AUotCij$ component of $\Ssc_4$}$ - $E_{\qDens}\left\{\left(\left[\begin{array}{c}\buLoneLini\\ \buLoneGrpi \end{array}\right] -\bmuq{\buLoneLini,\buLoneGrpi}\right) \left(\left[\begin{array}{c}\buLtwoLinij\\ \buLtwoGrpij \end{array}\right] -\bmuq{\buLtwoLinij,\buLtwoGrpij}\right)^T\right\}$ - $\thickarrow\mbox{$\AUotCicommaj$ component of $\Ssc_4$}$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +\big\Vert\by_{ij}-\bCgblij\bmu_{\qDens(\bbeta,\buGbl)} -\bCLoneGrpij\bmu_{\qDens(\buLoneLini,\buLoneGrpi)}$ - $-\bCLtwoGrpij\bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)}\big\Vert^2$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +\mbox{tr}(\bCgblij^T\bCgblij\bSigma_{\qDens(\bbeta,\buGbl)}) +\mbox{tr}((\bCLoneGrpij)^T\bCLoneGrpij\bSigma_{\qDens(\buLoneLini,\buLoneGrpi)})$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +\mbox{tr}((\bCLtwoGrpij)^T\bCLtwoGrpij\bSigma_{\qDens(\buLtwoLinij,\buLtwoGrpij)})$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +2\,\mbox{tr}\left[(\bCLoneGrpij)^T\bCgblij\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \right. \right. $ - $\left.\left. \times \left(\left[\begin{array}{c}\buLoneLini\\ \buLoneGrpi\end{array}\right] -\bmuq{\buLoneLini,\buLoneGrpi}\right)^T\right\}\right]$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +2\,\mbox{tr}\left[(\bCLtwoGrpij)^T\bCgblij\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \right. \right. $ - $\left.\left.
\times \left(\left[\begin{array}{c}\buLtwoLinij\\ \buLtwoGrpij\end{array}\right] -\bmuq{\buLtwoLinij,\buLtwoGrpij)}\right)^T\right\}\right]$ - $\lambda_{\qDens(\sigsqeps)}\thickarrow \lambda_{\qDens(\sigsqeps)} +2\,\mbox{tr}\left[(\bCLoneGrpij)^T\bCLtwoGrpij\,E_{\qDens} \left\{\left(\left[\begin{array}{c}\buLoneLini\\ \buLoneGrpi\end{array}\right] -\bmuq{\buLoneLini,\buLoneGrpi}\right) \right. \right. $ - $\left.\left. \times \left(\left[\begin{array}{c}\buLtwoLinij\\ \buLtwoGrpij\end{array}\right] -\bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)}\right)^T\right\}\right]$ - $\bLambda_{\qDens(\bSigmah)}\thickarrow\bLambda_{\qDens(\bSigmah)}+ \bmu_{\qDens(\buLtwoLinij)}\bmu_{\qDens(\buLtwoLinij)}^T+ \bSigma_{\qDens(\buLtwoLinij)}$ - $\lambda_{\qDens(\sigmaGrph^{2})} \thickarrow \lambda_{\qDens(\sigmaGrph^{2})} + \Vert \bmu_{\qDens(\buLtwoGrpij)} \Vert^2 + \mbox{tr} \left( \bSigma_{\qDens(\buLtwoGrpij)} \right)$ - $\bLambda_{\qDens(\bSigmag)}\thickarrow\bLambda_{\qDens(\bSigmag)}+ \bmu_{\qDens(\buLoneLini)}\bmu_{\qDens(\buLoneLini)}^T+ \bSigma_{\qDens(\buLoneLini)}$ - $\lambda_{\qDens(\sigmaGrpg^{2})} \thickarrow \lambda_{\qDens(\sigmaGrpg^{2})} + \Vert \bmu_{\qDens(\buLoneGrpi)} \Vert^2 + \mbox{tr} \left( \bSigma_{\qDens(\buLoneGrpi)} \right)$ - $\lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny gbl}})} \thickarrow \mu_{\qDens(1/a_{\mbox{\rm\tiny gbl}})} + \Vert \bmu_{\qDens(\bu_{\mbox{\rm\tiny gbl}})} \Vert^2 + \mbox{tr} \left( \bSigma_{\qDens(\bu_{\mbox{\rm\tiny gbl}})} \right)$ - $\muq{1/\sigsqeps} \leftarrow \xi_{\qDens(\sigeps)}/\lambda_{\qDens(\sigsqeps)}$    ;    $\muq{1/\sigma^{2}_{\mbox{\rm\tiny gbl}}} \leftarrow \xi_{\qDens(\sigma^{2}_{\mbox{\rm\tiny gbl}})}/ \lambda_{\qDens(\sigma^{2}_{\mbox{\rm\tiny gbl}})}$ - $\MqSigmag \leftarrow(\xi_{\qDens(\bSigmag)} - 2 + 1) \bLambda^{-1}_{\qDens(\bSigmag)}$    ;    $\MqSigmah \leftarrow(\xi_{\qDens(\bSigmah)} - 2 + 1) \bLambda^{-1}_{\qDens(\bSigmah)}$ - $\muq{1/\sigmaGrpg^{2}} \leftarrow 
\xi_{\qDens(\sigmaGrpg^{2})}/\lambda_{\qDens(\sigmaGrpg^{2})}$    ;    $\muq{1/\sigmaGrph^{2}} \leftarrow \xi_{\qDens(\sigmaGrph^{2})}/\lambda_{\qDens(\sigmaGrph^{2})}$ - $\lambda_{\qDens(a_{\varepsilon})}\thickarrow\muq{1/\sigma_{\varepsilon}^{2}} +1/(\nu_{\varepsilon} s_{\varepsilon}^2)$   ;   $\muq{1/a_{\varepsilon}} \thickarrow \xi_{\qDens(a_{\varepsilon})}/ \lambda_{\qDens(a_{\varepsilon})}$ - $\bM_{\qDens(\ASigmag^{-1})}\thickarrow \xi_{\qDens(\ASigmag)}\bLambda_{\qDens(\ASigmag)}^{-1}$   ;  $\bM_{\qDens(\ASigmah^{-1})}\thickarrow \xi_{\qDens(\ASigmah)}\bLambda_{\qDens(\ASigmah)}^{-1}$ - $\bLambda_{\qDens(\ASigmag)}\thickarrow \diag\big\{\mbox{diagonal}\big(\bM_{\qDens(\bSigmag^{-1})}\big)\big\}+\{\nuSigmag\diag(\sSigmagOne^2,\sSigmagTwo^2)\}^{-1}$ - $\bLambda_{\qDens(\ASigmah)}\thickarrow \diag\big\{\mbox{diagonal}\big(\bM_{\qDens(\bSigmah^{-1})}\big)\big\}+\{\nuSigmah\diag(\sSigmahOne^2,\sSigmahTwo^2)\}^{-1}$ - $\lambda_{\qDens(a_{\mbox{\rm\tiny gbl}})}\thickarrow\muq{1/\sigma_{\mbox{\rm\tiny gbl}}^{2}} +1/(\nu_{\mbox{\rm\tiny gbl}} s_{\mbox{\rm\tiny gbl}}^2)$   ;   $\muq{1/a_{\mbox{\rm\tiny gbl}}} \thickarrow \xi_{\qDens(a_{\mbox{\rm\tiny gbl}})}/ \lambda_{\qDens(a_{\mbox{\rm\tiny gbl}})}$ - $\lambda_{\qDens(\aGrpg)}\thickarrow\muq{1/\sigmaGrpg^{2}} +1/(\nu_{\mbox{\rm\tiny grp, $g$}} s_{\mbox{\rm\tiny grp, $g$}}^2)$   ;   $\muq{1/\aGrpg} \thickarrow \xi_{\qDens(\aGrpg)}/\lambda_{\qDens(\aGrpg)}$ - $\lambda_{\qDens(\aGrph)}\thickarrow\muq{1/\sigmaGrph^{2}} +1/(\nu_{\mbox{\rm\tiny grp, $h$}} s_{\mbox{\rm\tiny grp, $h$}}^2)$   ;   $\muq{1/\aGrph} \thickarrow \xi_{\qDens(\aGrph)}/\lambda_{\qDens(\aGrph)}$ - until the increase in $\underline{\pDens}(\by;\qDens)$ is negligible. 
- Outputs: $\bmu_{\qDens(\bbeta,\buGbl)}$, $\bSigma_{\qDens(\bbeta,\buGbl)}$, $\Big\{\bmu_{\qDens(\buLoneLini,\buLoneGrpi)}, \bSigma_{\qDens(\buLoneLini,\buLoneGrpi)},$ - $E_{\qDens}\left\{\left(\left[\begin{array}{c}\bbeta\\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c}\buLoneLini\\ \buLoneGrpi\end{array}\right] -\bmuq{\buLoneLini,\buLoneGrpi}\right)^T\right\}:1\le i\le m,$ - $ E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c}\buLtwoLinij\\ \buLtwoGrpij\end{array}\right] -\bmuq{\buLtwoLinij,\buLtwoGrpij}\right)^T\right\}, $ - $E_{\qDens} \left\{\left(\left[\begin{array}{c}\buLoneLini\\ \buLoneGrpi\end{array}\right]-\bmuq{\buLoneLini,\buLoneGrpi}\right) \left(\left[\begin{array}{c}\buLtwoLinij\\ \buLtwoGrpij\end{array}\right] -\bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)}\right)^T\right\},$ - $ \bmu_{\qDens(\buLtwoLinij,\buLtwoGrpij)}, \bSigma_{\qDens(\buLtwoLinij,\buLtwoGrpij)}: 1 \le i \le m, 1 \le j \le n_{i} \Big\}, \xi_{\qDens(\sigeps)}, \ \lambda_{\qDens(\sigsqeps)}, \ \xi_{\qDens(\sigmaGbl^2)}, $ - $\lambda_{\qDens(\sigmaGbl^2)}, \xi_{\qDens(\bSigmag)}, \ \bLambda^{-1}_{\qDens(\bSigmag)}, \ \xi_{\qDens(\bSigmah)}, \ \bLambda^{-1}_{\qDens(\bSigmah)}, \ \xi_{\qDens(\sigmaGrpg^2)}, \ \lambda_{\qDens(\sigmaGrpg^2)}, \ \xi_{\qDens(\sigmaGrph^2)}, \ \lambda_{\qDens(\sigmaGrph^2)}.$ Algorithm \[alg:threeLevMFVB\] makes use of Result \[res:threeLevelMFVB\] to facilitate streamlined computation of all variational parameters in the three-level group specific curves model. Figure \[fig:MNSWintroWithFit\] illustrates Algorithm \[alg:threeLevMFVB\] by showing the fits to the Figure \[fig:MNSWintro\] ultrasound data. Posterior mean curves and (narrow) 99% pointwise credible intervals are shown.
As discussed in the next section, such fits can be obtained rapidly and accurately, and Algorithm \[alg:threeLevMFVB\] is scalable to much larger data sets of the type illustrated by Figures \[fig:MNSWintro\] and \[fig:MNSWintroWithFit\]. Accuracy and Speed Assessment {#sec:accAndSpeed} ============================= In this section we provide some assessment of the accuracy and speed of streamlined variational inference for group-specific curve models. Accuracy Assessment ------------------- Mean field restrictions such as (\[eq:producRestrict\]) and (\[eq:producRestrict3lev\]) imply that there is some loss of accuracy in inference produced by Algorithms \[alg:twoLevMFVB\] and \[alg:threeLevMFVB\]. However, at least for the Gaussian response case treated here, approximate parameter orthogonality between the coefficient parameters and the covariance parameters from likelihood theory implies that such restrictions are mild and mean field accuracy is high. Figure \[fig:ultrasoundAcc\] corroborates this claim by assessing the accuracy of the mean function estimates and 95% credible intervals at the median values of frequency for each panel in Figure \[fig:MNSWintroWithFit\]. As a benchmark we use Markov chain Monte Carlo-based inference via the `rstan` package (Guo *et al.*, 2018). After a warmup of size $1,000$ we retained $5,000$ Markov chain Monte Carlo samples from the posterior distributions of the mean functions at the median frequency values and used kernel density estimation to approximate the corresponding posterior density functions. For a generic univariate parameter $\theta$, the accuracy of an approximation $\qDens(\theta)$ to $\pDens(\theta|\by)$ is defined to be $$\mbox{accuracy}\equiv\, 100\left\{1-\smhalf\infint\big|\qDens(\theta)-\pDens(\theta|\by)\big|\,d\theta\right\}\%.
\label{eq:accurDefn}$$ The percentages in the top right-hand panel of Figure \[fig:ultrasoundAcc\] correspond to (\[eq:accurDefn\]) with $\pDens(\theta|\by)$ replaced by the above-mentioned kernel density estimate. Accuracy is seen to be excellent in this case, with percentages between 97% and 99% for all 40 curves. Speed Assessment ---------------- We also conducted some simulation studies to assess the speed of streamlined variational inference for higher level group-specific curve models, in terms of both comparative advantage over naïve implementation and absolute performance. The focus of these studies was variational inference in the two-level case and, to probe maximal speed potential, Algorithm \[alg:twoLevMFVB\] was implemented in the low-level computer language `Fortran 77`. An implementation of the naïve counterpart of Algorithm \[alg:twoLevMFVB\], involving storage of and direct calculations with the full $\bSigma_{\qDens(\bbeta,\bu)}$ matrix, was also carried out. We then simulated data according to model (\[eq:twoLevelfg\]) with $\sigeps=0.2$, $$f(x)=3\sqrt{x(1.3-x)}\,\Phi(6x-3)\quad\mbox{and}\quad g_i(x)=\alpha_1\alpha_2\sin(2\pi x^{\alpha_3})$$ where, for each $i$, $\alpha_1$, $\alpha_2$ and $\alpha_3$ are, respectively, random draws from the $N(\quarter,\quarter)$ distribution and the sets $\{-1,1\}$ and $\{1,2,3\}$. The level-2 sample sizes $n_i$ were generated randomly from the set $\{30,31,\ldots,60\}$ and the level-1 sample size $m$ ranged over the set $\{100,200,300,400,500\}$. All $x_{ij}$ data were generated from a Uniform distribution over the unit interval. Table \[tab:StreamVsNaive\] summarizes the timings based on 100 replications with the number of mean field variational Bayes iterations fixed at 50. The study was run on a `MacBook Air` laptop with a 2.2 gigahertz processor and 8 gigabytes of random access memory.
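This simulation design is straightforward to reproduce. The following sketch generates data from the two-level model just described; it is illustrative Python rather than the `Fortran 77` code used in the study, and the function name `simulate_two_level`, the seeding mechanism and the reading of $N(\quarter,\quarter)$ as having mean $\quarter$ and variance $\quarter$ are our assumptions.

```python
import numpy as np
from statistics import NormalDist

Phi = np.vectorize(NormalDist().cdf)  # standard normal cumulative distribution function

def f(x):
    # mean function f(x) = 3*sqrt(x(1.3 - x))*Phi(6x - 3)
    return 3.0 * np.sqrt(x * (1.3 - x)) * Phi(6.0 * x - 3.0)

def simulate_two_level(m, sigma_eps=0.2, seed=None):
    """Simulate (x, y) data for m groups under the two-level curve model."""
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(m):
        n_i = int(rng.integers(30, 61))       # level-2 sample size from {30,...,60}
        x = rng.uniform(0.0, 1.0, size=n_i)   # x_ij ~ Uniform(0, 1)
        a1 = rng.normal(0.25, np.sqrt(0.25))  # alpha_1 ~ N(1/4, 1/4), 2nd arg read as variance
        a2 = rng.choice([-1.0, 1.0])          # alpha_2 drawn from {-1, 1}
        a3 = rng.choice([1.0, 2.0, 3.0])      # alpha_3 drawn from {1, 2, 3}
        g = a1 * a2 * np.sin(2.0 * np.pi * x ** a3)  # group-specific deviation g_i
        y = f(x) + g + rng.normal(0.0, sigma_eps, size=n_i)
        data.append((x, y))
    return data
```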
Table \[tab:StreamVsNaive\]: average (standard deviation) elapsed times in seconds for the naïve and streamlined implementations, and their ratios.

| $m$ | naïve | streamlined | naïve/streamlined |
|----:|------:|------------:|------------------:|
| 100 | 75 (1.21) | 0.748 (0.0334) | 100 |
| 200 | 660 (7.72) | 1.490 (0.0491) | 442 |
| 300 | 2210 (22.00) | 2.260 (0.0567) | 974 |
| 400 | 5180 (92.20) | 3.040 (0.0718) | 1700 |
| 500 | NA | 3.780 (0.0593) | NA |

For $m$ ranging from $100$ to $400$ we see that the naïve to streamlined ratios increase from about $100$ to $1,700$. When $m=500$ the naïve implementation fails to run due to its excessive storage demands. In contrast, the streamlined fits are produced in about $3$ seconds. It is clear that streamlined variational inference is to be preferred and is the only option for large numbers of groups. We then obtained timings for the streamlined algorithm for $m$ becoming much larger, taking on values $100$, $500$, $2,500$ and $12,500$. The iterations in Algorithm \[alg:twoLevMFVB\] were stopped when the relative increase in the marginal log-likelihood fell below $10^{-5}$. The average and standard deviation times in seconds over 100 replications are shown in Table \[tab:StreamAbsolute\]. We see that the computational times are approximately linear in $m$. Even with twelve and a half thousand groups, Algorithm \[alg:twoLevMFVB\] is able to deliver fitting and inference on a contemporary laptop computer in about one and a half minutes.

Table \[tab:StreamAbsolute\]: average (standard deviation) elapsed times in seconds for the streamlined implementation.

| $m=100$ | $m=500$ | $m=2,500$ | $m=12,500$ |
|--------:|--------:|----------:|-----------:|
| 0.635 (0.183) | 2.900 (0.391) | 16.90 (1.92) | 95.00 (4.92) |

Acknowledgments {#acknowledgments .unnumbered} =============== This research was partially supported by the Australian Research Council Discovery Project DP140100441. The ultrasound data was provided by the Bioacoustics Research Laboratory, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Illinois, U.S.A. References {#references .unnumbered} ========== Atay-Kayis, A. and Massam, H. (2005). A Monte Carlo method for computing marginal likelihood in nondecomposable Gaussian graphical models. *Biometrika*, [**92**]{}, 317–335.
Bates, D., Mächler, M., Bolker, B. and Walker, S. (2015). Fitting linear mixed-effects models using `lme4`. *Journal of Statistical Software*, [**67(1)**]{}, 1–48. Bishop, C.M. (2006). [*Pattern Recognition and Machine Learning.*]{} New York: Springer. Brumback, B.A. and Rice, J.A. (1998). Smoothing spline models for the analysis of nested and crossed samples of curves (with discussion). [*Journal of the American Statistical Association*]{}, [**93**]{}, 961–994. Donnelly, C.A., Laird, N.M. and Ware, J.H. (1995). Prediction and creation of smooth curves for temporally correlated longitudinal data. [*Journal of the American Statistical Association*]{}, [**90**]{}, 984–989. Durban, M., Harezlak, J., Wand, M.P. and Carroll, R.J. (2005). Simple fitting of subject-specific curves for longitudinal data. [*Statistics in Medicine*]{}, [**24**]{}, 1153–1167. Goldsmith, J., Zipunnikov, V. and Schrack, J. (2015). Generalized multilevel function-on-scalar regression and principal component analysis. *Biometrics*, [**71**]{}, 344–353. Guo, J., Gabry, J. and Goodrich, B. (2018). `rstan`: R interface to Stan. R package version 2.18.2. `http://mc-stan.org`. Huang, A. and Wand, M.P. (2013). Simple marginally noninformative prior distributions for covariance matrices. *Bayesian Analysis*, [**8**]{}, 439–452. Lee, C.Y.Y. and Wand, M.P. (2016). Variational inference for fitting complex Bayesian mixed effects models to health data. *Statistics in Medicine*, [**35**]{}, 165–188. Nolan, T.H., Menictas, M. and Wand, M.P. (2019). Streamlined computing for variational inference with higher level random effects. Unpublished manuscript submitted to the arXiv.org e-Print archive; on hold as of 11th March 2019. Soon to be posted also on `http://matt-wand.utsacademics.info/statsPapers.html`. Nolan, T.H. and Wand, M.P. (2018). Solutions to sparse multilevel matrix problems. Unpublished manuscript available at `https://arxiv.org/abs/1903.03089`. Pinheiro, J.C. and Bates, D.M. (2000).
[*Mixed-Effects Models in S and S-PLUS*]{}. New York: Springer. Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D. and R Core Team (2018). `nlme`: Linear and nonlinear mixed effects models. R package version 3.1. `http://cran.r-project.org/package=nlme`. Pratt, J.H., Jones, J.J., Miller, J.Z., Wagner, M.A. and Fineberg, N.S. (1989). Racial differences in aldosterone excretion and plasma aldosterone concentrations in children. [*New England Journal of Medicine*]{}, [**321**]{}, 1152–1157. Robinson, G.K. (1991). That BLUP is a good thing: the estimation of random effects. [*Statistical Science*]{}, [**6**]{}, 15–51. Trail, J.B., Collins, L.M., Rivera, D.E., Li, R., Piper, M.E. and Baker, T.B. (2014). Functional data analysis for dynamical system identification of behavioral processes. [*Psychological Methods*]{}, [**19(2)**]{}, 175–187. Verbyla, A.P., Cullis, B.R., Kenward, M.G. and Welham, S.J. (1999). The analysis of designed experiments and longitudinal data by using smoothing splines (with discussion). [*Applied Statistics*]{}, [**48**]{}, 269–312. Wahba, G. (1990). [*Spline Models for Observational Data.*]{} Philadelphia: Society for Industrial and Applied Mathematics. Wand, M.P. and Ormerod, J.T. (2008). On semiparametric regression with O'Sullivan penalized splines. [*Australian and New Zealand Journal of Statistics*]{}, [**50**]{}, 179–198. Wand, M.P. and Ormerod, J.T. (2011). Penalized wavelets: embedding wavelets into semiparametric regression. [*Electronic Journal of Statistics*]{}, [**5**]{}, 1654–1717. Wang, Y. (1998). Mixed effects smoothing spline analysis of variance. [*Journal of the Royal Statistical Society, Series B*]{}, [**60**]{}, 159–174. Wirtzfeld, L.A., Ghoshal, G., Rosado-Mendez, I.M., Nam, K., Park, Y., Pawlicki, A.D., Miller, R.J., Simpson, D.G., Zagzebski, J.A., Oelze, M.I., Hall, T.J. and O'Brien, W.D. (2015). Quantitative ultrasound comparison of MAT and 4T1 mammary tumors in mice and rats across multiple imaging systems.
*Journal of Ultrasound Medicine*, [**34**]{}, 1373–1383. Zhang, D., Lin, X., Raz, J. and Sowers, M. (1998). Semi-parametric stochastic mixed models for longitudinal data. [*Journal of the American Statistical Association*]{}, [**93**]{}, 710–719. Web-Supplement for: **Streamlined Variational Inference for** **Higher Level Group-Specific Curve Models** By M. Menictas$\null^1$, T.H. Nolan$\null^1$, D.G. Simpson$\null^2$ and M.P. Wand$\null^1$ *University of Technology Sydney$\null^1$ and University of Illinois$\null^2$* Derivation of Result \[res:twoLevelBLUP\] {#sec:drvResultOne} ========================================= Straightforward algebra can be used to verify that $$\begin{array}{c} \bC^T\RBLUP^{-1}\bC+\DBLUP = \bB^T \bB \mbox{ and } \bC^T\RBLUP^{-1}\by = \bB^T \bb \end{array}$$ where $\bB$ and $\bb$ have sparse forms (\[eq:BandbForms\]) with non-zero sub-blocks equal to $$\bveci\equiv \left[ \begin{array}{c} \sigeps^{-1}\by_i\\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right], \quad \Bmati\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_i & \sigeps^{-1}\bZgbli\\[1ex] \bO & m^{-1/2}\sigmaGbl^{-1}\bI_{\Kgbl}\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right] \quad\mbox{and}\quad \Bmatdoti\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_i & \sigeps^{-1}\bZgrpi \\[1ex] \bO & \bO \\[1ex] \bSigma^{-1/2} & \bO \\[1ex] \bO & \sigmaGrp^{-1}\bI_{\Kgrp} \end{array} \right]. $$ Therefore, in view of (\[eq:BLUPandCov\]) and (\[eq:CovMain\]), $$\left[\begin{array}{c} \bbetahat\\ \buhat \end{array} \right]=(\bB^T\bB)^{-1}\bB^T\bb \quad\mbox{and}\quad \mbox{\rm Cov}\left(\left[\begin{array}{c} \bbetahat\\ \buhat-\bu \end{array} \right]\right)=(\bB^T\bB)^{-1}.$$ Derivation of Algorithm \[alg:twoLevBLUP\] {#sec:drvAlgOne} ========================================== Algorithm \[alg:twoLevBLUP\] is simply a proceduralization of Result \[res:twoLevelBLUP\]. 
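The practical content of these displays is that, once $\bC^T\RBLUP^{-1}\bC+\DBLUP=\bB^T\bB$ and $\bC^T\RBLUP^{-1}\by=\bB^T\bb$ are established, the best linear unbiased predictor is just the least-squares solution for the stacked pair $(\bB,\bb)$, with covariance $(\bB^T\bB)^{-1}$, and hence is delivered by any numerically stable least-squares routine without forming the normal equations. A minimal numerical check of this equivalence, with a small dense random matrix standing in for the sparse $\bB$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((12, 4))  # dense stand-in for the sparse stacked design B
b = rng.standard_normal(12)       # stand-in for the stacked vector b

# QR-based least-squares solution, as computed by sparse least-squares solvers:
theta_ls, *_ = np.linalg.lstsq(B, b, rcond=None)

# Normal-equations form (B^T B)^{-1} B^T b from the result above:
BtB = B.T @ B
theta_ne = np.linalg.solve(BtB, B.T @ b)
cov = np.linalg.inv(BtB)  # plays the role of Cov([betahat; uhat - u])

assert np.allclose(theta_ls, theta_ne)
```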
The Inverse G-Wishart and Inverse $\chi^2$ Distributions {#sec:IGWandICS} ======================================================== The Inverse G-Wishart corresponds to the matrix inverses of random matrices that have a *G-Wishart* distribution (e.g. Atay-Kayis Massam, 2005). For any positive integer $d$, let $G$ be an undirected graph with $d$ nodes labeled $1,\ldots,d$ and set $E$ consisting of sets of pairs of nodes that are connected by an edge. We say that the symmetric $d\times d$ matrix $\bM$ *respects* $G$ if $$\bM_{ij}=0\quad\mbox{for all}\quad \{i,j\}\notin E.$$ A $d\times d$ random matrix $\bX$ has an Inverse G-Wishart distribution with graph $G$ and parameters $\xi>0$ and symmetric $d\times d$ matrix $\bLambda$, written $$\bX\sim\mbox{Inverse-G-Wishart}(G,\xi,\bLambda)$$ if and only if the density function of $\bX$ satisfies $$\pDens(\bX)\propto |\bX|^{-(\xi+2)/2}\exp\{-\smhalf\tr(\bLambda\,\bX^{-1})\}$$ over arguments $\bX$ such that $\bX$ is symmetric and positive definite and $\bX^{-1}$ respects $G$. Two important special cases are $$G=\Gfull\equiv\mbox{totally connected $d$-node graph},$$ for which the Inverse G-Wishart distribution coincides with the ordinary Inverse Wishart distribution, and $$G=\Gdiag\equiv\mbox{totally disconnected $d$-node graph},$$ for which the Inverse G-Wishart distribution coincides with a product of independent Inverse Chi-Squared random variables. The subscripts of $\Gfull$ and $\Gdiag$ reflect the fact that $\bX^{-1}$ is a full matrix and $\bX^{-1}$ is a diagonal matrix in each special case. The $G=\Gfull$ case corresponds to the ordinary Inverse Wishart distribution. However, with message passing in mind, we will work with the more general Inverse G-Wishart family throughout this article. In the $d=1$ special case the graph $G=\Gfull=\Gdiag$ and the Inverse G-Wishart distribution reduces to the Inverse Chi-Squared distributions. 
We write $$x\sim\mbox{Inverse-$\chi^2$}(\xi,\lambda)$$ for this $\mbox{Inverse-G-Wishart}(\Gdiag,\xi,\lambda)$ special case with $d=1$ and $\lambda>0$ scalar. Derivation of Result \[res:twoLevelMFVB\] {#sec:drvResultTwo} ========================================= It is straightforward to verify that the $\bmu_{\qDens(\bbeta,\bu)}$ and $\bSigma_{\qDens(\bbeta,\bu)}$ updates, given at (\[eq:muSigmaMFVBupd\]), may be written as $$\bmu_{\qDens(\bbeta,\bu)}\thickarrow(\bB^T\bB)^{-1}\bB^T\bb \quad\mbox{and}\quad \bSigma_{\qDens(\bbeta,\bu)}\thickarrow(\bB^T\bB)^{-1}$$ where $\bB$ and $\bb$ have the forms (\[eq:BandbForms\]) with $$\bveci\equiv\left[\begin{array}{c} \mu_{\qDens(1/\sigeps^2)}^{1/2}\by_i\\[2ex] m^{-1/2}\bSigma_{\bbeta}^{-1/2}\bmu_{\bbeta}\\[2ex] \bzero\\[1ex] \bzero\\[1ex] \bzero \end{array} \right], \quad\Bmati\equiv\left[\begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgbli\\[2ex] m^{-1/2}\bSigma_{\bbeta}^{-1/2}& \bO \\[2ex] \bO & m^{-1/2}\mu_{\qDens(1/\sigmaGbl^2)}^{1/2}\bI_{\Kgbl}\\[2ex] \bO & \bO \\[2ex] \bO & \bO \\[2ex] \end{array} \right]$$ $$\Bmatdoti\equiv \left[\begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_i & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgrpi \\[2ex] \bO & \bO \\[2ex] \bO & \bO \\[2ex] \bM_{\qDens(\bSigma^{-1})}^{1/2} & \bO \\[2ex] \bO & \mu_{\qDens(1/\sigmaGrp^2)}^{1/2}\bI_{\Kgrp} \end{array} \right].$$ Result \[res:twoLevelMFVB\] immediately follows from Theorem 2 of Nolan Wand (2018). Derivation of Algorithm \[alg:twoLevMFVB\] {#sec:drvAlgTwo} ========================================== We provide expressions for the $\qDens$-densities for mean field variational Bayesian inference for the parameters in (\[eq:twoLevBayes\]), with product density restriction (\[eq:producRestrict\]). 
Arguments analogous to those given in, for example, Appendix C of Wand Ormerod (2011) lead to: $$\qDens(\bbeta,\bu)\ \mbox{is a $N(\bmu_{\qDens(\bbeta,\bu)},\bSigma_{\qDens(\bbeta,\bu)})$ density function}$$ where $$\bSigma_{\qDens(\bbeta,\bu)}=(\bC^T\RMFVB^{-1}\bC+\DMFVB)^{-1} \quad \mbox{and} \quad \bmu_{\qDens(\bbeta,\bu)}=\bSigma_{\qDens(\bbeta,\bu)}(\bC^T\RMFVB^{-1}\by + \oMFVB)$$ with $\RMFVB$, $\DMFVB$ and $\oMFVB$ defined via (\[eq:MFVBmatDefns\]), $$\qDens(\sigsqeps)\ \mbox{is an $\mbox{Inverse-$\chi^2$} \left(\xi_{\qDens(\sigsqeps)},\lambda_{\qDens(\sigsqeps)}\right)$ density function}$$ where $\xi_{\qDens(\sigsqeps)}=\nuEps+\sumim n_i$ and $$\begin{aligned} \lambda_{\qDens(\sigsqeps)}&=&\mu_{\qDens(1/\aeps)}+\sumim\, E_{\qDens}\left\{ \Bigg \Vert \by_{i} - \bCgbli \left[ \begin{array}{c} \bbeta \\[1ex] \buGbl \end{array} \right] - \bCgrpi \left[ \begin{array}{c} \buLini \\[1ex] \buGrpi \end{array} \right] \Bigg \Vert^2 \right\} \\[1ex] & = & \mu_{\qDens(1/\aeps)}+\sumim\,\left[ \Bigg \Vert\,E_{\qDens} \left( \by_{i} - \bCgbli \left[ \begin{array}{c} \bbeta \\[1ex] \buGbl \end{array} \right] - \bCgrpi \left[ \begin{array}{c} \buLini \\[1ex] \buGrpi \end{array} \right] \right) \Bigg \Vert^2 \right. \\[1ex] & & \left. \qquad \qquad \qquad \qquad + \mbox{tr} \left\{\mbox{\rm Cov}_{\qDens} \left(\bCgbli\left[\begin{array}{c} \bbeta \\[1ex] \buGbli \end{array} \right] + \bCgrpi \left[ \begin{array}{c} \buLini \\[1ex] \buGrpi \end{array} \right] \right) \right\} \right] \\[1ex] & = & \mu_{\qDens(1/\aeps)} + \displaystyle{\sum_{i=1}^{m}} \left\{ \Bigg \Vert E_{\qDens} \left( \by_{i} - \bCgbli \left[ \begin{array}{c} \bbeta \\[1ex] \buGbl \end{array} \right] - \bCgrpi \left[ \begin{array}{c} \buLini \\[1ex] \buGrpi \end{array} \right] \right) \Bigg \Vert^2 \right. \\[3ex] & & \qquad \qquad \qquad \quad \left. + \mbox{tr}(\bCgbli^T\bCgbli\bSigma_{\qDens(\bbeta,\buGbl)}) + \mbox{tr}(\bCgrpi^T\bCgrpi\bSigma_{\qDens(\buLini,\buGrpi)}) \right. 
\\[1ex] & & \qquad \qquad \qquad \quad \left. + 2 \, \mbox{tr}\left[\bCgrpi^T\bCgbli\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \times \right. \right. \right. \\[1ex] & & \left. \left. \left. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \quad \left(\left[\begin{array}{c}\buLini\\ \buGrpi\end{array}\right] -\bmuq{\buLini,\buGrpi}\right)^T\right\}\right] \right\} $$ where $\bCgbli \equiv [ \bX_{i} \, \bZgbli ]$, $\bCgrpi \equiv [ \bX_{i} \, \bZgrpi ]$, and with reciprocal moment $\mu_{\qDens(1/\sigsqeps)}=\xi_{\qDens(\sigsqeps)}/\lambda_{\qDens(\sigsqeps)},$ $$\qDens(\sigmaGbl^2)\ \mbox{is an $\mbox{Inverse-$\chi^2$} \left(\xi_{\qDens(\sigmaGbl^2)},\lambda_{\qDens(\sigmaGbl^2)}\right)$ density function}$$ where $\xi_{\qDens(\sigmaGbl^2)}=\nuGbl+\Kgbl$ and $$\lambda_{\qDens(\sigmaGbl^2)}= \mu_{\qDens(1/{\aGbl})} + \Vert \bmu_{\qDens(\buGbl)} \Vert^2 + \mbox{tr} \left( \bSigma_{\qDens(\buGbl)} \right),$$ with reciprocal moment $\mu_{\qDens(1/{\sigmaGbl^2})}=\xi_{\qDens(\sigmaGbl^2)} / \lambda_{\qDens(\sigmaGbl^2)},$ $$\qDens(\sigmaGrp^2)\ \mbox{is an $\mbox{Inverse-$\chi^2$} \left(\xi_{\qDens(\sigmaGrp^2)},\lambda_{\qDens(\sigmaGrp^2)}\right)$ density function}$$ where $\xi_{\qDens(\sigmaGrp^2)}=\nuGrp+m\Kgrp$ and $$\lambda_{\qDens(\sigmaGrp^2)} = \mu_{\qDens(1/{\aGrp})} + \sum_{i=1}^m \left\{ \Vert \bmu_{\qDens(\buGrpi)} \Vert^2 + \mbox{tr} \left( \bSigma_{\qDens(\buGrpi)} \right) \right\},$$ with reciprocal moment $\mu_{\qDens(1/{\sigmaGrp^2})}=\xi_{\qDens(\sigmaGrp^2)} / \lambda_{\qDens(\sigmaGrp^2)},$ $$\qDens(\bSigma)\ \mbox{is an $\mbox{Inverse-G-Wishart} \left(\Gfull,\xi_{\qDens(\bSigma)},\bLambda_{\qDens(\bSigma)}\right)$ density function}$$ where $\xi_{\qDens(\bSigma)}=\nuSigma+2+m$ $$\bLambda_{\qDens(\bSigma)}=\bM_{\qDens(\bA_{\bSigma}^{-1})} +\sumim\left(\bmu_{\qDens(\buLini)}\bmu_{\qDens(\buLini)}\trans + \bSigma_{\qDens(\buLini)}\right),$$ with inverse moment 
$\bM_{\qDens(\bSigma^{-1})}=(\xi_{\qDens(\bSigma)}-1)\bLambda_{\qDens(\bSigma)}^{-1}$, $$\qDens(\aeps)\ \mbox{is an $\mbox{Inverse-$\chi^2$} (\xi_{\qDens(\aeps)},\lambda_{\qDens(\aeps)})$ density function}$$ where $\xi_{\qDens(\aeps)}=\nuEps+1$, $$\lambda_{\qDens(\aeps)}=\mu_{\qDens(1/\sigsqeps)}+1/(\nuEps\sEps^2)$$ with reciprocal moment $\mu_{\qDens(1/\aeps)}=\xi_{\qDens(\aeps)}/\lambda_{\qDens(\aeps)}$, $$\qDens(\aGbl)\ \mbox{is an $\mbox{Inverse-$\chi^2$} (\xi_{\qDens(\aGbl)},\lambda_{\qDens(\aGbl)})$ density function}$$ where $\xi_{\qDens(\aGbl)}=\nuGbl+1$, $$\lambda_{\qDens(\aGbl)}=\mu_{\qDens(1/\sigmaGbl^2)}+1/(\nuGbl\sGbl^2)$$ with reciprocal moment $\mu_{\qDens(1/\aGbl)}=\xi_{\qDens(\aGbl)}/\lambda_{\qDens(\aGbl)}$, $$\qDens(\aGrp)\ \mbox{is an $\mbox{Inverse-$\chi^2$} (\xi_{\qDens(\aGrp)},\lambda_{\qDens(\aGrp)})$ density function}$$ where $\xi_{\qDens(\aGrp)}=\nuGrp+1$, $$\lambda_{\qDens(\aGrp)}=\mu_{\qDens(1/\sigmaGrp^2)}+1/(\nuGrp\sGrp^2)$$ with reciprocal moment $\mu_{\qDens(1/\aGrp)}=\xi_{\qDens(\aGrp)}/\lambda_{\qDens(\aGrp)}$ and $$\qDens(\ASigma)\ \mbox{is an $\mbox{Inverse-G-Wishart} \left(\Gdiag,\xi_{\qDens(\ASigma)},\bLambda_{\qDens(\ASigma)}\right)$ density function}$$ where $\xi_{\qDens(\ASigma)}=\nuSigma+2$, $$\bLambda_{\qDens(\ASigma)}=\diag\big\{\mbox{diagonal}\big(\bM_{\qDens(\bSigma^{-1})}\big)\big\} +\bLambda_{\ASigma}$$ with inverse moment $\bM_{\qDens(\ASigma^{-1})}=\xi_{\qDens(\ASigma)}\bLambda_{\qDens(\ASigma)}^{-1}$. 
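It is worth noting that the $\qDens(\sigsqeps)$ and $\qDens(\aeps)$ updates above couple only through the reciprocal moments $\mu_{\qDens(1/\sigsqeps)}$ and $\mu_{\qDens(1/\aeps)}$, so this part of the coordinate ascent is a two-scalar fixed-point iteration. The sketch below isolates that pair; the residual term `ss` and the hyperparameter values are illustrative placeholders, not quantities from the paper.

```python
# Fixed-point iteration for the coupled Inverse-chi^2 factors q(sigma_eps^2)
# and q(a_eps). Here `ss` stands in for the expected residual sum of squares
# appearing in lambda_{q(sigma_eps^2)}; all numbers are placeholders.
nu_eps, s_eps2 = 2.0, 1.0      # hyperparameters nu_eps and s_eps^2 (illustrative)
n_total, ss = 100, 25.0        # total sample size and residual term (placeholder)

xi_sigma = nu_eps + n_total    # xi_{q(sigma_eps^2)} = nu_eps + sum_i n_i
xi_a = nu_eps + 1.0            # xi_{q(a_eps)} = nu_eps + 1

mu_inv_a = 1.0                 # initialise mu_{q(1/a_eps)}
for _ in range(50):
    lam_sigma = mu_inv_a + ss                        # lambda_{q(sigma_eps^2)} update
    mu_inv_sigma = xi_sigma / lam_sigma              # mu_{q(1/sigma_eps^2)}
    lam_a = mu_inv_sigma + 1.0 / (nu_eps * s_eps2)   # lambda_{q(a_eps)} update
    mu_inv_a = xi_a / lam_a                          # mu_{q(1/a_eps)}
```

The rapid convergence of this pair of scalar updates is one reason the full coordinate ascent schemes converge quickly in practice.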
Marginal Log-Likelihood Lower Bound and Derivation {#sec:lowerBound} ================================================== The expression for the lower bound on the marginal log-likelihood for Algorithm \[alg:twoLevMFVB\] is\ $$\begin{array}{l} \log \underline{\pDens}(\by; \qDens) = \\[2ex] -\frac{1}{2} \log (\pi) \displaystyle{\sum_{i=1}^{m}} n_{i} - \smhalf \log |\bSigma_{\bbeta}| - \smhalf \mbox{tr} \left( \bSigma_{\bbeta}^{-1} \left\{ \left( \bmu_{\qDens (\bbeta)} - \bmu_{\bbeta} \right) \left( \bmu_{\qDens (\bbeta)} - \bmu_{\bbeta} \right)^T + \bSigma_{\qDens (\bbeta)} \right\} \right) \\[1ex] -\smhalf \mbox{tr} \left( \bM_{\qDens (\bSigma^{-1})} \left\{ \displaystyle{\sum_{i=1}^{m}} \left( \bmu_{\qDens (\buLini} \bmu_{\qDens (\buLini)}^T + \bSigma_{\qDens (\buLini)} \right) \right\} \right) + \smhalf \{ 2 + \Kgbl + m (2 + \Kgrp) \} \\[1ex] - \smhalf \mu_{\qDens(1/{\sigmaGbl^2})} \left\{ \Vert \bmu_{\qDens ( \buGbl )} \Vert^2 + \mbox{tr} ( \bSigma_{\qDens(\buGbl)} ) \right\} - \smhalf \mu_{\qDens ( 1/{\sigmaGrp^2} )} \displaystyle{\sum_{i=1}^{m}} \left\{ \Vert \bmu_{\qDens ( \buGrpi ))} \Vert^2 + \mbox{tr} (\bSigma_{\qDens(\buGrpi)}) \right\} \\[1ex] + \smhalf \log |\bSigma_{\bbeta}| + \{ \nuSigma + m + 1 + \smhalf (\nuEps + \nuGbl + \Kgbl + \nuGrp + m \Kgrp) \} \log(2) - \log \Gamma (\frac{\nuEps}{2}) \\[1ex] - \smhalf \mu_{\qDens(1/{\aeps})} \mu_{\qDens (1/{\sigsqeps})} - \smhalf \xi_{\qDens (\sigsqeps)} \log ( \lambda_{\qDens(\sigsqeps)}) + \log \{ \Gamma (\smhalf \xi_{\qDens (\sigsqeps)}) \} + \smhalf \lambda_{\qDens(\sigsqeps)} \mu_{\qDens(1/\sigsqeps)} - \smhalf \log (\nuEps \sEps^2 ) \\[1ex] - 3 \log \{ \Gamma ( \smhalf ) \} - \frac{1}{2 \nuEps \sEps^2} \mu_{\qDens (1/\aeps)} - \smhalf \xi_{\qDens (\aeps)} \log (\lambda_{\qDens (\aeps)}) + \log \{ \Gamma (\smhalf \xi_{\qDens(\aeps)}) \} + \smhalf \lambda_{\qDens (\aeps)} \mu_{\qDens(1/ \aeps)} \\[1ex] - \log \Gamma (\frac{\nuGbl}{2}) - \smhalf \mu_{\qDens(1/{\aGbl})} \mu_{\qDens (1/{\sigmaGbl^2})} - 
\smhalf \xi_{\qDens (\sigmaGbl^2)} \log ( \lambda_{\qDens(\sigmaGbl^2)}) + \log \{ \Gamma (\smhalf \xi_{\qDens (\sigmaGbl^2)}) \} - \smhalf \log (\nuGbl \sGbl^2 ) \\[1ex] + \smhalf \lambda_{\qDens(\sigmaGbl^2)} \mu_{\qDens(1/\sigmaGbl^2)} - \{ 1/(2 \nuGbl \sGbl^2) \} \mu_{\qDens (1/\aGbl)} - \smhalf \xi_{\qDens (\aGbl)} \log (\lambda_{\qDens (\aGbl)}) - \smhalf \mu_{\qDens(1/{\aGrp})} \mu_{\qDens (1/{\sigmaGrp^2})} \\[1ex] + \log \{ \Gamma (\smhalf \xi_{\qDens(\aGbl)}) \} + \smhalf \lambda_{\qDens (\aGbl)} \mu_{\qDens(1/ \aGbl)} - \log \Gamma (\frac{\nuGrp}{2}) + \log \{ \Gamma (\smhalf \xi_{\qDens (\sigmaGrp^2)}) \} - \smhalf \log (\nuGrp \sGrp^2 ) \\[1ex] - \smhalf \xi_{\qDens (\sigmaGrp^2)} \log ( \lambda_{\qDens(\sigmaGrp^2)}) + \smhalf \lambda_{\qDens(\sigmaGrp^2)} \mu_{\qDens(1/\sigmaGrp^2)} - \{ 1/(2 \nuGrp \sGrp^2) \} \mu_{\qDens (1/\aGrp)} - \smhalf \xi_{\qDens (\aGrp)} \log (\lambda_{\qDens (\aGrp)}) \\[1ex] + \log \{ \Gamma (\smhalf \xi_{\qDens(\aGrp)}) \} + \smhalf \lambda_{\qDens (\aGrp)} \mu_{\qDens(1/ \aGrp)} - \smhalf \mbox{tr} ( \bM_{\qDens(\bASigma^{-1})} \bM_{\qDens(\bSigma^{-1})} ) + \smhalf \mbox{tr} (\bLambda_{\qDens(\bSigma)} \bM_{\qDens(\bSigma^{-1})}) \\[1ex] + \displaystyle{\sum_{j=1}^{2}} \log \Gamma (\smhalf (\xi_{\qDens(\bASigma)} + 2 - j)) - \displaystyle{\sum_{j=1}^{2}} \log \Gamma (\smhalf (\nuSigma + 4 - j)) - \smhalf (\xi_{\qDens(\bSigma)} - 1) \log |\bLambda_{\qDens(\bSigma)}| \\[1ex] + \displaystyle{\sum_{j=1}^{2}} \log \Gamma (\smhalf (\xi_{\qDens(\bSigma)} + 2 - j)) - \smhalf \displaystyle{\sum_{j=1}^{2}} 1/(\nuSigma \sSigmaj^2) \left( \bM_{\qDens(\bASigma^{-1})} \right)_{jj} - \displaystyle{\sum_{j=1}^{2}} \log \Gamma (\smhalf (3-j)) \\[1ex] - \smhalf (\xi_{\qDens(\bASigma)} - 1) \log |\bLambda_{\qDens(\bASigma)}| + \smhalf \mbox{tr} (\bLambda_{\qDens(\bASigma)} \bM_{\qDens(\bASigma^{-1})}) \\[1ex] - \frac{1}{2} \mu_{\qDens (1/{\sigsqeps})} \displaystyle{\sum_{i=1}^{m}} \left\{ \Bigg \Vert E_{\qDens} \left( \by_{i} - \bCgbli 
\left[ \begin{array}{c} \bbeta \\[1ex] \buGbli \end{array} \right] - \bCgrpi \left[ \begin{array}{c} \buLini \\[1ex] \buGrpi \end{array} \right] \right) \Bigg \Vert^2 \right. \\[3ex] \qquad \qquad \qquad \quad \left. + \mbox{tr}(\bCgbli^T\bCgbli\bSigma_{\qDens(\bbeta,\buGbl)}) +\mbox{tr}(\bCgrpi^T\bCgrpi\bSigma_{\qDens(\buLini,\buGrpi)}) \right. \\[1ex] \qquad \qquad \qquad \quad \left. + 2 \, \mbox{tr}\left[\bCgrpi^T\bCgbli\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \times \right. \right. \right. \\[1ex] \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left. \left. \left. \left(\left[\begin{array}{c}\buLini\\ \buGrpi\end{array}\right] -\bmuq{\buLini,\buGrpi}\right)^T\right\}\right] \right\}. \end{array} \label{eq:lowerBound}$$ *Derivation:* The lower bound on the marginal log-likelihood follows from the expansion: $$\begin{array}{lcl} \log \underline{\pDens}(\by; \qDens) & = & E_{\qDens} \{ \log \pDens (\by, \bbeta, \bu, \sigsqeps, \aeps, \sigmaGbl^2, \aGbl, \sigmaGrp^2, \aGrp, \bSigma, \ASigma ) \\[1ex] & & - \, \log \qDens^{*} ( \bbeta, \bu, \sigsqeps, \aeps, \sigmaGbl^2, \aGbl, \sigmaGrp^2, \aGrp, \bSigma, \ASigma ) \} \end{array}$$ $$\begin{array}{lcl} & = & E_{\qDens} \{ \log \pDens (\by \, | \, \bbeta, \bu, \sigsqeps) \} \\[1ex] & & + \, E_{\qDens} \{ \log \pDens (\bbeta, \bu \, | \, \sigmaGbl^2, \sigmaGrp^2, \bSigma) \} - E_{\qDens} \{ \log \qDens^{*} (\bbeta, \bu) \} \\[1ex] & & + \, E_{\qDens} \{ \log \pDens (\sigsqeps \, | \, \aeps) \} - E_{\qDens} \{ \log \qDens^{*} (\sigsqeps) \} + E_{\qDens} \{ \log \pDens (\aeps) \} - E_{\qDens} \{ \log \qDens^{*} (\aeps) \} \\[1ex] & & + \, E_{\qDens} \{ \log \pDens (\sigmaGbl^2 \, | \, \aGbl) \} - E_{\qDens} \{ \log \qDens^{*} (\sigmaGbl^2) \} + E_{\qDens} \{ \log \pDens (\aGbl) \} - E_{\qDens} \{ \log \qDens^{*} (\aGbl) \} \\[1ex] & & + \, E_{\qDens} \{ \log \pDens (\sigmaGrp^2 \, | \, \aGrp) \} - E_{\qDens} \{
\log \qDens^{*} (\sigmaGrp^2) \} + E_{\qDens} \{ \log \pDens (\aGrp) \} - E_{\qDens} \{ \log \qDens^{*} (\aGrp) \} \\[1ex] & & + \, E_{\qDens} \{ \log \pDens (\bSigma \, | \, \bA_{\mbox{\rm\tiny $\bSigma$}}) \} - E_{\qDens} \{ \log \qDens^{*} (\bSigma) \} + E_{\qDens} \{ \log \pDens (\bA_{\mbox{\rm\tiny $\bSigma$}}) \} - E_{\qDens} \{ \log \qDens^{*} (\bA_{\mbox{\rm\tiny $\bSigma$}})\}. \end{array} \label{eq:lowerBoundSetup}$$ First we note that $$\log \pDens (\by \, | \, \bbeta, \bu, \sigsqeps) = -\frac{1}{2} \log (2 \pi) \displaystyle{\sum_{i=1}^{m}} n_{i} - \frac{1}{2} \log (\sigsqeps) \displaystyle{\sum_{i=1}^{m}} n_{i} - \frac{1}{2 \sigsqeps} \displaystyle{\sum_{i=1}^{m}} \Vert \by - \bX \bbeta - \bZ \bu \Vert^2$$ where $$\begin{array}{l} \Vert \by - \bX \bbeta - \bZ \bu \Vert^2 \\[1ex] \quad = {\bBigg@{4}}\Vert \left[ \begin{array}{c} \by_{1} \\ \vdots \\ \by_{m} \end{array} \right] - \left[ \begin{array}{c} \bX_{1} \\ \vdots \\ \bX_{m} \end{array} \right] \bbeta - \left[ \begin{array}{c} \bZgblo \\ \vdots \\ \bZgblm \end{array} \right] \buGbl - \displaystyle \blockdiag{1\le i\le m}([\bX_i\ \bZgrpi]) \left[ \begin{array}{c} \buLini\\ \buGrpi \end{array} \right]_{1\le i\le m} {\bBigg@{4}}\Vert^2 \\[5ex] \quad = \displaystyle \sum_{i=1}^{m} \Vert \by_{i} - \bX_{i} \bbeta - \bZgbli \buGbli - \bX_{i} \buLini - \bZgrpi \buGrpi \Vert^2 \\[3ex] \quad = \displaystyle \sum_{i=1}^{m} \Bigg \Vert \by_{i} - \bCgbli \left[ \begin{array}{c} \bbeta \\[1ex] \buGbli \end{array} \right] - \bCgrpi \left[ \begin{array}{c} \buLini \\[1ex] \buGrpi \end{array} \right] \Bigg \Vert^2 \end{array}$$ and $$\begin{array}{c} \bCgbli \equiv [ \bX_{i} \, \bZgbli ], \quad \bCgrpi \equiv [ \bX_{i} \, \bZgrpi ]. 
\end{array}$$ Therefore, $$\begin{array}{l} E_{\qDens} \{ \log \pDens (\by \, | \, \bbeta, \bu, \sigsqeps) \} \\[1ex] \quad = -\frac{1}{2} \log (2 \pi) \displaystyle{\sum_{i=1}^{m}} n_{i} - \frac{1}{2} E_{\qDens} \{ \log (\sigsqeps) \} \displaystyle{\sum_{i=1}^{m}} n_{i} \\[1ex] \quad \quad - \frac{1}{2} \mu_{\qDens (1/{\sigsqeps})} \displaystyle{\sum_{i=1}^{m}} \left\{ \Bigg \Vert E_{\qDens} \left( \by_{i} - \bCgbli \left[ \begin{array}{c} \bbeta \\[1ex] \buGbli \end{array} \right] - \bCgrpi \left[ \begin{array}{c} \buLini \\[1ex] \buGrpi \end{array} \right] \right) \Bigg \Vert^2 \right. \\[3ex] \quad \quad \quad \left. + \mbox{tr}(\bCgbli^T\bCgbli\bSigma_{\qDens(\bbeta,\buGbl)}) +\mbox{tr}(\bCgrpi^T\bCgrpi\bSigma_{\qDens(\buLini,\buGrpi)}) \right. \\[1ex] \quad \quad \quad \left. + 2 \, \mbox{tr}\left[\bCgrpi^T\bCgbli\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right)\left(\left[\begin{array}{c}\buLini\\ \buGrpi\end{array}\right] -\bmuq{\buLini,\buGrpi}\right)^T\right\}\right] \right\} \end{array}$$ The remaining expectations in (\[eq:lowerBoundSetup\]) are: $$\begin{array}{lcl} E_{\qDens} \{ \log \pDens (\bbeta, \bu \, | \, \sigmaGbl^2, \sigmaGrp^2, \bSigma) \} & = & - \smhalf \{ 2 + \Kgbl + m (2 + \Kgrp) \} \log (2\pi) - \smhalf \log |\bSigma_{\bbeta}| \\[1ex] & & -\frac{\Kgbl}{2} E_{\qDens} \{ \log ( \sigmaGbl^2 ) \} - \frac{m}{2} E_{\qDens} \{ \log |\bSigma| \} - \frac{m \Kgrp}{2} E_{\qDens} \{ \log ( \sigmaGrp^2 ) \} \\[1ex] & & - \smhalf \mbox{tr} \left( \bSigma_{\bbeta}^{-1} \left\{ \left( \bmu_{\qDens (\bbeta)} - \bmu_{\bbeta} \right) \left( \bmu_{\qDens (\bbeta)} - \bmu_{\bbeta} \right)^T + \bSigma_{\qDens (\bbeta)} \right\} \right) \\[1ex] & & - \smhalf \mu_{\qDens(1/{\sigmaGbl^2})} \left\{ \Vert \bmu_{\qDens ( \buGbl )} \Vert^2 + \mbox{tr} ( \bSigma_{\qDens(\buGbl)} ) \right\} \\[1ex] & & -\smhalf \mbox{tr} \left( \bM_{\qDens (\bSigma^{-1})} \left\{
\displaystyle{\sum_{i=1}^{m}} \left( \bmu_{\qDens (\buLini)} \bmu_{\qDens (\buLini)}^T + \bSigma_{\qDens (\buLini)} \right) \right\} \right) \\[1ex] & & - \smhalf \mu_{\qDens ( 1/{\sigmaGrp^2} )} \displaystyle{\sum_{i=1}^{m}} \left\{ \Vert \bmu_{\qDens ( \buGrpi )} \Vert^2 + \mbox{tr} (\bSigma_{\qDens(\buGrpi)}) \right\} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \qDens^{*} (\bbeta, \bu) \} & = & - \smhalf \{ 2 + \Kgbl + m (2 + \Kgrp) \} - \smhalf \{ 2 + \Kgbl + m (2 + \Kgrp) \} \log (2\pi) \\[1ex] & & - \smhalf \log |\bSigma_{\bbeta}| \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \pDens (\sigsqeps \, | \, \aeps) \} & = & -\smhalf \nuEps E_{\qDens} \{ \log (2 \aeps) \} - \log \Gamma (\nuEps/2) - (\smhalf \nuEps + 1) E_{\qDens} \{ \log(\sigsqeps) \} \\[1ex] & & - \smhalf \mu_{\qDens(1/{\aeps})} \mu_{\qDens (1/{\sigsqeps})} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \qDens^{*} (\sigsqeps) \} & = & \smhalf \xi_{\qDens (\sigsqeps)} \log ( \lambda_{\qDens(\sigsqeps)}/2) - \log \{ \Gamma (\smhalf \xi_{\qDens (\sigsqeps)}) \} - (\smhalf \xi_{\qDens(\sigsqeps)} + 1) E_{\qDens} \{ \log (\sigsqeps) \} \\[1ex] & & - \smhalf \lambda_{\qDens(\sigsqeps)} \mu_{\qDens(1/\sigsqeps)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \pDens (\aeps) \} & = & -\smhalf \log (2 \nuEps \sEps^2 ) - \log \{ \Gamma ( \smhalf ) \} - (\smhalf + 1) E_{\qDens} \{ \log (\aeps) \} \\[1ex] & & - \{ 1/(2 \nuEps \sEps^2) \} \mu_{\qDens (1/\aeps)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \qDens^{*} (\aeps) \} & = & \smhalf \xi_{\qDens (\aeps)} \log (\lambda_{\qDens (\aeps)}/2) - \log \{ \Gamma (\smhalf \xi_{\qDens(\aeps)}) \} - (\smhalf \xi_{\qDens (\aeps)} + 1) E_{\qDens} \{ \log (\aeps) \} \\[1ex] & & - \smhalf \lambda_{\qDens (\aeps)} \mu_{\qDens(1/ \aeps)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \pDens (\sigmaGbl^2 \, | \, \aGbl) \} & = & -\smhalf \nuGbl E_{\qDens} \{ \log (2 \aGbl) \} - \log \Gamma (\nuGbl/2) - (\smhalf \nuGbl + 1)
E_{\qDens} \{ \log(\sigmaGbl^2) \} \\[1ex] & & - \smhalf \mu_{\qDens(1/{\aGbl})} \mu_{\qDens (1/{\sigmaGbl^2})} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \qDens^{*} (\sigmaGbl^2) \} & = & \smhalf \xi_{\qDens (\sigmaGbl^2)} \log ( \lambda_{\qDens(\sigmaGbl^2)}/2) - \log \{ \Gamma (\smhalf \xi_{\qDens (\sigmaGbl^2)}) \} - (\smhalf \xi_{\qDens(\sigmaGbl^2)} + 1) E_{\qDens} \{ \log (\sigmaGbl^2) \} \\[1ex] & & - \smhalf \lambda_{\qDens(\sigmaGbl^2)} \mu_{\qDens(1/\sigmaGbl^2)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \pDens (\aGbl) \} & = & -\smhalf \log (2 \nuGbl \sGbl^2 ) - \log \{ \Gamma ( \smhalf ) \} - (\smhalf + 1) E_{\qDens} \{ \log (\aGbl) \} \\[1ex] & & - \{ 1/(2 \nuGbl \sGbl^2) \} \mu_{\qDens (1/\aGbl)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \qDens^{*} (\aGbl) \} & = & \smhalf \xi_{\qDens (\aGbl)} \log (\lambda_{\qDens (\aGbl)}/2) - \log \{ \Gamma (\smhalf \xi_{\qDens(\aGbl)}) \} - (\smhalf \xi_{\qDens (\aGbl)} + 1) E_{\qDens} \{ \log (\aGbl) \} \\[1ex] & & - \smhalf \lambda_{\qDens (\aGbl)} \mu_{\qDens(1/ \aGbl)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \pDens (\sigmaGrp^2 \, | \, \aGrp) \} & = & -\smhalf \nuGrp E_{\qDens} \{ \log (2 \aGrp) \} - \log \Gamma (\nuGrp/2) - (\smhalf \nuGrp + 1) E_{\qDens} \{ \log(\sigmaGrp^2) \} \\[1ex] & & - \smhalf \mu_{\qDens(1/{\aGrp})} \mu_{\qDens (1/{\sigmaGrp^2})} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \qDens^{*} (\sigmaGrp^2) \} & = & \smhalf \xi_{\qDens (\sigmaGrp^2)} \log ( \lambda_{\qDens(\sigmaGrp^2)}/2) - \log \{ \Gamma (\smhalf \xi_{\qDens (\sigmaGrp^2)}) \} - (\smhalf \xi_{\qDens(\sigmaGrp^2)} + 1) E_{\qDens} \{ \log (\sigmaGrp^2) \} \\[1ex] & & - \smhalf \lambda_{\qDens(\sigmaGrp^2)} \mu_{\qDens(1/\sigmaGrp^2)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \pDens (\aGrp) \} & = & -\smhalf \log (2 \nuGrp \sGrp^2 ) - \log \{ \Gamma ( \smhalf ) \} - (\smhalf + 1) E_{\qDens} \{ \log (\aGrp) \} \\[1ex] & & - \{ 1/(2 \nuGrp \sGrp^2) \} 
\mu_{\qDens (1/\aGrp)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens} \{ \log \qDens^{*} (\aGrp) \} & = & \smhalf \xi_{\qDens (\aGrp)} \log (\lambda_{\qDens (\aGrp)}/2) - \log \{ \Gamma (\smhalf \xi_{\qDens(\aGrp)}) \} - (\smhalf \xi_{\qDens (\aGrp)} + 1) E_{\qDens} \{ \log (\aGrp) \} \\[1ex] & & - \smhalf \lambda_{\qDens (\aGrp)} \mu_{\qDens(1/ \aGrp)} \end{array}$$ $$\begin{array}{lcl} E_{\qDens}[\log\{\pDens (\bSigma | \bASigma)\}] & = & - \smhalf (\nuSigma + 1) E_{\qDens} \{ \log | \bASigma | \} - \smhalf (\nuSigma + 4) E_{\qDens} \{ \log | \bSigma | \} - \smhalf \log(\pi) \\ & & - \smhalf \mbox{tr} ( \bM_{\qDens(\bASigma^{-1})} \bM_{\qDens(\bSigma^{-1})} ) - (\nuSigma+3) \log(2) - \sum_{j=1}^{2} \log \Gamma (\smhalf (\nuSigma + 4 - j)) \end{array}$$ $$\begin{array}{lcl} E_{\qDens} [\log\{\qDens(\bSigma)\}] &=& \smhalf (\xi_{\qDens(\bSigma)} - 1) \log |\bLambda_{\qDens(\bSigma)}| - \smhalf (\xi_{\qDens(\bSigma)} + 2) E_{\qDens} \{ \log |\bSigma| \} - \smhalf \mbox{tr} (\bLambda_{\qDens(\bSigma)} \bM_{\qDens(\bSigma^{-1})}) \\ & & - (\xi_{\qDens(\bSigma)} + 1) \log (2) - \smhalf \log(\pi) - \sum_{j=1}^{2} \log \Gamma (\smhalf (\xi_{\qDens(\bSigma)} + 2 - j)) \end{array}$$ $$\begin{array}{lcl} E_{\qDens}[\log\{\pDens (\bASigma)\}] &=& - \frac{3}{2} E_{\qDens} \{ \log |\bASigma| \} - \smhalf \sum_{j=1}^{2} 1/(\nuSigma \sSigmaj^2) \left( \bM_{\qDens(\bASigma^{-1})} \right)_{jj} - 2 \log (2) - \smhalf \log(\pi) \\ & & - \sum_{j=1}^{2} \log \Gamma (\smhalf (3-j)) \end{array}$$ $$\begin{array}{lcl} E_{\qDens}[\log\{\qDens(\bASigma)\}] &=& \smhalf (\xi_{\qDens(\bASigma)} - 1) \log |\bLambda_{\qDens(\bASigma)}| - \smhalf (\xi_{\qDens(\bASigma)} + 2) E_{\qDens} \{ \log |\bASigma| \} - \smhalf \mbox{tr} (\bLambda_{\qDens(\bASigma)} \bM_{\qDens(\bASigma^{-1})}) \\ & & - (\xi_{\qDens(\bASigma)} + 1) \log (2) - \smhalf \log(\pi) - \sum_{j=1}^{2} \log \Gamma (\smhalf (\xi_{\qDens(\bASigma)} + 2 - j)) \end{array}$$ In the summation of each of these $\log\pDensUnder(\by;\qDens)$ 
terms, note that the coefficient of $E_{\qDens} \{\log(\sigsqeps)\}$ is $$-\smhalf \sum_{i=1}^{m} n_{i} -\smhalf\nuEps-1+\smhalf\xi_{\qDens(\sigsqeps)}+1= -\smhalf \sum_{i=1}^{m} n_{i} -\smhalf\nuEps-1+\smhalf(\nuEps+\sum_{i=1}^{m} n_{i})+1=0.$$ The coefficient of $E_{\qDens} \{ \log (\sigmaGbl^2) \}$ is $$-\smhalf \Kgbl - \smhalf \nuGbl - 1 + \smhalf \xi_{\qDens (\sigmaGbl^2)} + 1 = -\smhalf \Kgbl - \smhalf \nuGbl - 1 + \smhalf (\nuGbl + \Kgbl) + 1 = 0.$$ The coefficient of $E_{\qDens} \{ \log (\sigmaGrp^2) \}$ is $$-\smhalf m \Kgrp - \smhalf \nuGrp - 1 + \smhalf \xi_{\qDens (\sigmaGrp^2)} + 1 = -\smhalf m \Kgrp - \smhalf \nuGrp - 1 + \smhalf (\nuGrp + m \Kgrp) + 1 = 0.$$ The coefficient of $E_{\qDens}\{\log|\bSigma|\}$ is $$-\frac{m}{2} - \smhalf(\nuSigma + 4) + \smhalf(\xi_{\qDens(\bSigma)} + 2) = -\smhalf(m + \nuSigma + 4) + \smhalf(m + \nuSigma + 4)=0.$$ The coefficient of $E_{\qDens} \{\log(\aeps)\}$ is $$-\smhalf\nuEps-\smhalf-1+\smhalf\xi_{\qDens(\aeps)}+1=-\smhalf\nuEps-\smhalf-1 +\smhalf(\nuEps+1)+1=0.$$ The coefficient of $E_{\qDens} \{ \log (\aGbl) \}$ is $$-\smhalf\nuGbl-\smhalf-1+\smhalf\xi_{\qDens(\aGbl)}+1=-\smhalf\nuGbl-\smhalf-1 +\smhalf(\nuGbl+1)+1=0.$$ The coefficient of $E_{\qDens} \{ \log (\aGrp) \}$ is $$-\smhalf\nuGrp-\smhalf-1+\smhalf\xi_{\qDens(\aGrp)}+1=-\smhalf\nuGrp-\smhalf-1 +\smhalf(\nuGrp+1)+1=0.$$ The coefficient of $E_{\qDens}\{\log|\bASigma|\}$ is $$- \smhalf(\nuSigma + 1) - \frac{3}{2} + \smhalf(\xi_{\qDens(\bASigma)} + 2) = -\smhalf(\nuSigma + 2) + \smhalf(\nuSigma + 2)=0.$$ Therefore these terms can be dropped, and the resulting cancellations lead to the lower bound expression in (\[eq:lowerBound\]).
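These cancellations are simple enough to check numerically. The sketch below (plain Python; the particular values of the hyperparameters $\nuEps$, $\nuGbl$, $\nuGrp$, $\nuSigma$ and the dimensions are arbitrary stand-ins, not values from the model) evaluates each coefficient using the $\xi$ parameters stated in the $\qDens$-density updates:

```python
import random

# Numeric spot-check of the cancellations above, with arbitrary
# positive stand-ins for the hyperparameters and dimensions.
random.seed(2018)
nu_eps, nu_gbl, nu_grp, nu_Sigma = (random.uniform(1, 10) for _ in range(4))
K_gbl, K_grp, m, sum_n = 3, 2, 5, 47   # sum_n stands in for sum_i n_i

# xi parameters as given in the q-density updates
xi_sigsq_eps = nu_eps + sum_n
xi_sigsq_gbl = nu_gbl + K_gbl
xi_sigsq_grp = nu_grp + m * K_grp
xi_Sigma = nu_Sigma + m + 2
xi_A_Sigma = nu_Sigma + 2

coefs = [
    -0.5 * sum_n - 0.5 * nu_eps - 1 + 0.5 * xi_sigsq_eps + 1,      # E{log(sigsq_eps)}
    -0.5 * K_gbl - 0.5 * nu_gbl - 1 + 0.5 * xi_sigsq_gbl + 1,      # E{log(sigmaGbl^2)}
    -0.5 * m * K_grp - 0.5 * nu_grp - 1 + 0.5 * xi_sigsq_grp + 1,  # E{log(sigmaGrp^2)}
    -0.5 * m - 0.5 * (nu_Sigma + 4) + 0.5 * (xi_Sigma + 2),        # E{log|Sigma|}
    -0.5 * (nu_Sigma + 1) - 1.5 + 0.5 * (xi_A_Sigma + 2),          # E{log|A_Sigma|}
]
assert all(abs(c) < 1e-10 for c in coefs)
```

Each coefficient vanishes for any choice of positive hyperparameters, which is what permits dropping the corresponding expectations.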
Derivation of Result \[res:threeLevelBLUP\] {#sec:drvResultThree} =========================================== If $\bB$ and $\bb$ have the same forms given by equation (7) in Nolan & Wand (2018) with $$\bvecij \equiv \left[ \begin{array}{c} \sigeps^{-1}\by_{ij}\\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right], \qquad \Bmatij\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZgblij\\[1ex] \bO &\ndotmh \sigmaGbl^{-1}\bI_{\Kgbl} \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right],$$ $$\Bmatdotij\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZLoneGrpij \\[1ex] \bO & \bO \\[1ex] n_{i}^{-1/2}\bSigmag^{-1/2} & \bO \\[1ex] \bO & n_{i}^{-1/2} \sigmaGrpg^{-1} \bI_{\Kgrpg} \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array}\right] \quad\mbox{and}\quad \Bmatdotdotij\equiv \left[ \begin{array}{cc} \sigeps^{-1}\bX_{ij} & \sigeps^{-1}\bZLtwoGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bSigmah^{-1/2} & \bO \\[1ex] \bO & \sigmaGrph^{-1}\bI_{\Kgrph} \end{array}\right], $$ then straightforward algebra leads to $$\begin{array}{c} \bB^T \bB = \bC^T\RBLUP^{-1}\bC+\DBLUP \mbox{ and } \bB^T \bb = \bC^T\RBLUP^{-1}\by \end{array}$$ where $$\begin{array}{c} \bC\equiv[\bX\ \bZ],\quad\DBLUP\equiv\left[ \begin{array}{cc} \bO & \bO \\[1ex] \bO & \bG^{-1} \end{array} \right]\quad\mbox{and}\quad\RBLUP\equiv\sigeps^2\bI, \end{array}$$ and $\bG$ as defined in (\[eqn:threeLevBLUPCov\]). The remainder of the derivation of Result 3 is analogous to that of Result 1. Derivation of Algorithm \[alg:threeLevBLUP\] {#sec:drvAlgThree} ============================================ Algorithm \[alg:threeLevBLUP\] is simply a proceduralization of Result 3.
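The key identity here — that stacking error-weighted data rows on top of penalty rows reproduces the ridge-type normal equations — is easy to confirm numerically. The following sketch uses generic random matrices and a fully positive-definite penalty $\bD$ rather than the paper's particular blocks (in which the fixed-effects part of $\DBLUP$ is zero); it checks $\bB^T\bB=\bC^T\bR^{-1}\bC+\bD$ and $\bB^T\bb=\bC^T\bR^{-1}\by$ for $\bR=\sigma^2\bI$:

```python
import numpy as np

# Generic check that B = [R^{-1/2} C ; D^{1/2}], b = [R^{-1/2} y ; 0]
# gives B^T B = C^T R^{-1} C + D and B^T b = C^T R^{-1} y.
rng = np.random.default_rng(0)
n, d = 12, 5
C = rng.standard_normal((n, d))
y = rng.standard_normal(n)
sig2 = 1.7                                   # R = sig2 * I
Dhalf = np.diag(rng.uniform(0.5, 2.0, d))    # D = Dhalf @ Dhalf (positive definite here)
D = Dhalf @ Dhalf
B = np.vstack([C / np.sqrt(sig2), Dhalf])
b = np.concatenate([y / np.sqrt(sig2), np.zeros(d)])
assert np.allclose(B.T @ B, C.T @ C / sig2 + D)
assert np.allclose(B.T @ b, C.T @ y / sig2)

# Hence the least-squares minimizer equals the ridge/BLUP-type solution.
x_ls = np.linalg.lstsq(B, b, rcond=None)[0]
x_ridge = np.linalg.solve(C.T @ C / sig2 + D, C.T @ y / sig2)
assert np.allclose(x_ls, x_ridge)
```

This is why the BLUP computations can be delegated to a sparse least-squares solver acting on $\bB$ and $\bb$.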
Derivation of Result \[res:threeLevelMFVB\] {#sec:drvResultFour} =========================================== It is straightforward to verify that the $\bmu_{\qDens(\bbeta,\bu)}$ and $\bSigma_{\qDens(\bbeta,\bu)}$ updates, given at (\[eq:muSigmaMFVBupd\]) but with $\DMFVB$ as given in (\[eqn:DMFVB\]), may be written as $$\bmu_{\qDens(\bbeta,\bu)}\thickarrow(\bB^T\bB)^{-1}\bB^T\bb \quad\mbox{and}\quad \bSigma_{\qDens(\bbeta,\bu)}\thickarrow(\bB^T\bB)^{-1}$$ where $\bB$ and $\bb$ have the forms given by equation (7) in Nolan & Wand (2018) with $$\bb_{ij}\equiv \left[ \begin{array}{c} \mu_{\qDens(1/\sigeps^2)}^{1/2}\by_{ij}\\[1ex] \ndotmh \bSigma_{\bbeta}^{-1/2} \bmu_{\bbeta}\\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \bzero \\[1ex] \end{array} \right], \ \ \bB_{ij}\equiv \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_{ij} & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZgblij\\[1ex] \ndotmh \bSigma_{\bbeta}^{-1/2} & \bO \\[1ex] \bO &\ndotmh \mu_{\qDens(1/\sigmaGbl^2)}^{1/2}\bI_{\Kgbl}\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \end{array} \right],$$ $$\bBdot_{ij}\equiv \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_{ij} & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZLoneGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] n_i^{-1/2}\bM_{\qDens(\bSigmag^{-1})}^{1/2} & \bO \\[1ex] \bO &n_i^{-1/2}\mu_{\qDens(1/\sigmaGrpg^2)}^{1/2}\bI_{\Kgrpg}\\[1ex] \bO & \bO \\[1ex] \bO & \bO \\ \end{array} \right] \ \ \mbox{and}\ \ \bBdotdot_{ij}\equiv \left[ \begin{array}{cc} \mu_{\qDens(1/\sigeps^2)}^{1/2}\bX_{ij} & \mu_{\qDens(1/\sigeps^2)}^{1/2}\bZLtwoGrpij \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bO & \bO \\[1ex] \bM_{\qDens(\bSigmah^{-1})}^{1/2} & \bO \\[1ex] \bO &\mu_{\qDens(1/\sigmaGrph^2)}^{1/2}\bI_{\Kgrph}\\ \end{array} \right]. $$ Result \[res:threeLevelMFVB\] immediately follows from Theorem 4 of Nolan & Wand (2018).
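The least-squares embedding of the MFVB update can be checked directly in the same generic way: appending penalty rows $\bD^{1/2}$, with matching pseudo-responses carrying the prior-mean offset, makes $(\bB^T\bB)^{-1}\bB^T\bb$ coincide with $\bSigma_{\qDens}(\bC^T\bR^{-1}\by+\bo)$. The sketch below uses random stand-ins and takes $\bD$ invertible for simplicity (in the paper the offset rows arise from $\bSigma_{\bbeta}^{-1/2}\bmu_{\bbeta}$ specifically):

```python
import numpy as np

# Generic check that
#   Sigma_q = (C^T R^{-1} C + D)^{-1},  mu_q = Sigma_q (C^T R^{-1} y + o)
# match (B^T B)^{-1} and (B^T B)^{-1} B^T b for
#   B = [R^{-1/2} C ; D^{1/2}],  b = [R^{-1/2} y ; D^{-1/2} o].
rng = np.random.default_rng(1)
n, d = 15, 4
C = rng.standard_normal((n, d))
y = rng.standard_normal(n)
mu_recip = 0.8                               # plays the role of mu_{q(1/sigeps^2)}
Dhalf = np.diag(rng.uniform(0.5, 2.0, d))
D = Dhalf @ Dhalf
o = rng.standard_normal(d)                   # prior-mean offset vector

Sigma_q = np.linalg.inv(mu_recip * C.T @ C + D)
mu_q = Sigma_q @ (mu_recip * C.T @ y + o)

B = np.vstack([np.sqrt(mu_recip) * C, Dhalf])
b = np.concatenate([np.sqrt(mu_recip) * y, np.linalg.solve(Dhalf, o)])
assert np.allclose(np.linalg.inv(B.T @ B), Sigma_q)
assert np.allclose(np.linalg.lstsq(B, b, rcond=None)[0], mu_q)
```

Since $\bB^T\bb=\mu\,\bC^T\by+\bD^{1/2}\bD^{-1/2}\bo=\bC^T\bR^{-1}\by+\bo$, the two update formulas agree exactly.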
Derivation of Algorithm \[alg:threeLevMFVB\] {#sec:drvAlgFour} ============================================ We provide expressions for the $\qDens$-densities for mean field variational Bayesian inference for the parameters in (\[eq:threeLevBayes\]) with product density restriction (\[eq:producRestrict3lev\]). $$\qDens(\bbeta,\bu)\ \mbox{is a $N(\bmu_{\qDens(\bbeta,\bu)},\bSigma_{\qDens(\bbeta,\bu)})$ density function}$$ where $$\bSigma_{\qDens(\bbeta,\bu)}=(\bC^T\RMFVB^{-1}\bC+\DMFVB)^{-1} \quad \mbox{and} \quad \bmu_{\qDens(\bbeta,\bu)}=\bSigma_{\qDens(\bbeta,\bu)}(\bC^T\RMFVB^{-1}\by + \oMFVB)$$ with $\RMFVB\equiv\mu_{\qDens(1/\sigeps^2)}^{-1}\bI$, $\oMFVB\equiv\left[ \begin{array}{c} \bSigma_{\bbeta}^{-1}\bmu_{\bbeta}\\[1ex] \bzero \end{array} \right]$ and $\DMFVB$ as given in (\[eqn:DMFVB\]). $$\qDens(\sigsqeps)\ \mbox{is an $\mbox{Inverse-$\chi^2$} \left(\xi_{\qDens(\sigsqeps)},\lambda_{\qDens(\sigsqeps)}\right)$ density function}$$ where $\xi_{\qDens(\sigsqeps)}=\nuEps+\sumim \sum_{j=1}^{n_{i}} o_{ij}$ and $$\begin{aligned} \lambda_{\qDens(\sigsqeps)}&=&\mu_{\qDens(1/\aeps)}+\sumim \sum_{j=1}^{n_{i}} \, E_{\qDens}\left\{ \Bigg \Vert \by_{ij} - \bCgblij \left[ \begin{array}{c} \bbeta \\[1ex] \buGbl \end{array} \right] - \bCLoneGrpij \left[ \begin{array}{c} \buLoneLini \\[1ex] \buLoneGrpi \end{array} \right] - \bCLtwoGrpij \left[ \begin{array}{c} \buLtwoLinij \\[1ex] \buLtwoGrpij \end{array} \right] \Bigg \Vert^2 \right\} \end{aligned}$$ $$\begin{aligned} & = & \mu_{\qDens(1/\aeps)}+\sumim \sum_{j=1}^{n_{i}} \, \left[ \Bigg \Vert\,E_{\qDens} \left( \by_{ij} - \bCgblij \left[ \begin{array}{c} \bbeta \\[1ex] \buGbl \end{array} \right] - \bCLoneGrpij \left[ \begin{array}{c} \buLoneLini \\[1ex] \buLoneGrpi \end{array} \right] - \bCLtwoGrpij \left[ \begin{array}{c} \buLtwoLinij \\[1ex] \buLtwoGrpij \end{array} \right] \right) \Bigg \Vert^2 \right. \\[1ex] & & \left.
\qquad \qquad \qquad \qquad + \mbox{tr} \left\{ \mbox{\rm Cov}_{\qDens} \left( \bCgblij \left[ \begin{array}{c} \bbeta \\[1ex] \buGbl \end{array} \right] + \bCLoneGrpij \left[ \begin{array}{c} \buLoneLini \\[1ex] \buLoneGrpi \end{array} \right] + \bCLtwoGrpij \left[ \begin{array}{c} \buLtwoLinij \\[1ex] \buLtwoGrpij \end{array} \right] \right) \right\} \right] \end{aligned}$$ $$\begin{aligned} & = & \mu_{\qDens(1/\aeps)} + \displaystyle{\sum_{i=1}^{m} \sum_{j=1}^{n_{i}}} \left\{ \Bigg \Vert\,E_{\qDens} \left( \by_{ij} - \bCgblij \left[ \begin{array}{c} \bbeta \\[1ex] \buGbl \end{array} \right] - \bCLoneGrpij \left[ \begin{array}{c} \buLoneLini \\[1ex] \buLoneGrpi \end{array} \right] - \bCLtwoGrpij \left[ \begin{array}{c} \buLtwoLinij \\[1ex] \buLtwoGrpij \end{array} \right] \right) \Bigg \Vert^2 \right. \\[3ex] & & \quad \left. + \mbox{tr}(\bCgblij^T\bCgblij \bSigma_{\qDens(\bbeta,\buGbl)}) + \mbox{tr}((\bCLoneGrpij)^T\bCLoneGrpij \bSigma_{\qDens(\buLoneLini,\buLoneGrpi)}) + \mbox{tr}((\bCLtwoGrpij)^T\bCLtwoGrpij \bSigma_{\qDens(\buLtwoLinij,\buLtwoGrpij)}) \right. \\[1ex] & & \quad \left. + 2 \, \mbox{tr}\left[(\bCLoneGrpij)^T\bCgblij\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c}\buLoneLini\\ \buLoneGrpi\end{array}\right] -\bmuq{\buLoneLini,\buLoneGrpi}\right)^T\right\}\right] \right. \\[1ex] & & \quad \left. + 2 \, \mbox{tr}\left[(\bCLtwoGrpij)^T\bCgblij\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\bbeta \\ \buGbl\end{array}\right] -\bmu_{\qDens(\bbeta,\buGbl)}\right) \left(\left[\begin{array}{c}\buLtwoLinij\\ \buLtwoGrpij\end{array}\right] -\bmuq{\buLtwoLinij,\buLtwoGrpij}\right)^T\right\}\right] \right. \\[1ex] & & \quad \left.
+ 2 \, \mbox{tr}\left[(\bCLoneGrpij)^T\bCLtwoGrpij\,E_{\qDens}\left\{\left( \left[\begin{array}{c}\buLoneLini\\ \buLoneGrpi\end{array}\right] -\bmuq{\buLoneLini,\buLoneGrpi}\right) \left(\left[\begin{array}{c}\buLtwoLinij\\ \buLtwoGrpij\end{array}\right] -\bmuq{\buLtwoLinij,\buLtwoGrpij}\right)^T\right\}\right] \right\} \end{aligned}$$ where $\bCgblij \equiv [ \bX_{ij} \, \bZgblij ]$, $\bCLoneGrpij \equiv [ \bX_{ij} \, \bZLoneGrpij ]$, $\bCLtwoGrpij \equiv [ \bX_{ij} \, \bZLtwoGrpij ]$ and with reciprocal moment $\mu_{\qDens(1/\sigsqeps)}=\xi_{\qDens(\sigsqeps)}/\lambda_{\qDens(\sigsqeps)},$ $$\qDens(\sigmaGbl^2)\ \mbox{is an $\mbox{Inverse-$\chi^2$} \left(\xi_{\qDens(\sigmaGbl^2)},\lambda_{\qDens(\sigmaGbl^2)}\right)$ density function}$$ where $\xi_{\qDens(\sigmaGbl^2)}=\nuGbl+\Kgbl$ and $$\lambda_{\qDens(\sigmaGbl^2)}= \mu_{\qDens(1/{\aGbl})} + \Vert \bmu_{\qDens(\buGbl)} \Vert^2 + \mbox{tr} \left( \bSigma_{\qDens(\buGbl)} \right),$$ with reciprocal moment $\mu_{\qDens(1/{\sigmaGbl^2})}=\xi_{\qDens(\sigmaGbl^2)} / \lambda_{\qDens(\sigmaGbl^2)},$ $$\qDens(\sigmaGrpg^2)\ \mbox{is an $\mbox{Inverse-$\chi^2$} \left(\xi_{\qDens(\sigmaGrpg^2)},\lambda_{\qDens(\sigmaGrpg^2)}\right)$ density function}$$ where $\xi_{\qDens(\sigmaGrpg^2)}=\nuGrpg+m\Kgrpg$ and $$\lambda_{\qDens(\sigmaGrpg^2)} = \mu_{\qDens(1/{\aGrpg})} + \sum_{i=1}^m \left\{ \Vert \bmu_{\qDens(\buLoneGrpi)} \Vert^2 + \mbox{tr} \left( \bSigma_{\qDens(\buLoneGrpi)} \right) \right\},$$ with reciprocal moment $\mu_{\qDens(1/{\sigmaGrpg^2})}=\xi_{\qDens(\sigmaGrpg^2)} / \lambda_{\qDens(\sigmaGrpg^2)},$ $$\qDens(\bSigmag)\ \mbox{is an $\mbox{Inverse-G-Wishart} \left(\Gfull,\xi_{\qDens(\bSigmag)},\bLambda_{\qDens(\bSigmag)}\right)$ density function}$$ where $\xi_{\qDens(\bSigmag)}=\nuSigmag+2+m$ and $$\bLambda_{\qDens(\bSigmag)}=\bM_{\qDens(\bA_{\bSigmag}^{-1})} +\sumim\left(\bmu_{\qDens(\buLoneLini)}\bmu_{\qDens(\buLoneLini)}\trans + \bSigma_{\qDens(\buLoneLini)}\right),$$ with inverse moment
$\bM_{\qDens(\bSigmag^{-1})}=(\xi_{\qDens(\bSigmag)}-1)\bLambda_{\qDens(\bSigmag)}^{-1}$, $$\qDens(\sigmaGrph^2)\ \mbox{is an $\mbox{Inverse-$\chi^2$} \left(\xi_{\qDens(\sigmaGrph^2)},\lambda_{\qDens(\sigmaGrph^2)}\right)$ density function}$$ where $\xi_{\qDens(\sigmaGrph^2)}=\nuGrph+\Kgrph \sum_{i=1}^{m} n_{i}$ and $$\lambda_{\qDens(\sigmaGrph^2)} = \mu_{\qDens(1/{\aGrph})} + \sum_{i=1}^m \sum_{j=1}^{n_{i}} \left\{ \Vert \bmu_{\qDens(\buLtwoGrpij)} \Vert^2 + \mbox{tr} \left( \bSigma_{\qDens(\buLtwoGrpij)} \right) \right\},$$ with reciprocal moment $\mu_{\qDens(1/{\sigmaGrph^2})}=\xi_{\qDens(\sigmaGrph^2)} / \lambda_{\qDens(\sigmaGrph^2)},$ $$\qDens(\bSigmah)\ \mbox{is an $\mbox{Inverse-G-Wishart} \left(\Gfull,\xi_{\qDens(\bSigmah)},\bLambda_{\qDens(\bSigmah)}\right)$ density function}$$ where $\xi_{\qDens(\bSigmah)}=\nuSigmah+2+\sum_{i=1}^m n_{i}$ and $$\bLambda_{\qDens(\bSigmah)}=\bM_{\qDens(\bA_{\bSigmah}^{-1})} +\sumim \sum_{j=1}^{n_{i}} \left(\bmu_{\qDens(\buLtwoLinij)}\bmu_{\qDens(\buLtwoLinij)}\trans + \bSigma_{\qDens(\buLtwoLinij)}\right),$$ with inverse moment $\bM_{\qDens(\bSigmah^{-1})}=(\xi_{\qDens(\bSigmah)}-1)\bLambda_{\qDens(\bSigmah)}^{-1}$, $$\qDens(\aeps)\ \mbox{is an $\mbox{Inverse-$\chi^2$} (\xi_{\qDens(\aeps)},\lambda_{\qDens(\aeps)})$ density function}$$ where $\xi_{\qDens(\aeps)}=\nuEps+1$, $$\lambda_{\qDens(\aeps)}=\mu_{\qDens(1/\sigsqeps)}+1/(\nuEps\sEps^2)$$ with reciprocal moment $\mu_{\qDens(1/\aeps)}=\xi_{\qDens(\aeps)}/\lambda_{\qDens(\aeps)}$, $$\qDens(\aGbl)\ \mbox{is an $\mbox{Inverse-$\chi^2$} (\xi_{\qDens(\aGbl)},\lambda_{\qDens(\aGbl)})$ density function}$$ where $\xi_{\qDens(\aGbl)}=\nuGbl+1$, $$\lambda_{\qDens(\aGbl)}=\mu_{\qDens(1/\sigmaGbl^2)}+1/(\nuGbl\sGbl^2)$$ with reciprocal moment $\mu_{\qDens(1/\aGbl)}=\xi_{\qDens(\aGbl)}/\lambda_{\qDens(\aGbl)}$, $$\qDens(\aGrpg)\ \mbox{is an $\mbox{Inverse-$\chi^2$} (\xi_{\qDens(\aGrpg)},\lambda_{\qDens(\aGrpg)})$ density function}$$ where $\xi_{\qDens(\aGrpg)}=\nuGrpg+1$, 
$$\lambda_{\qDens(\aGrpg)}=\mu_{\qDens(1/\sigmaGrpg^2)}+1/(\nuGrpg\sGrpg^2)$$ with reciprocal moment $\mu_{\qDens(1/\aGrpg)}=\xi_{\qDens(\aGrpg)}/\lambda_{\qDens(\aGrpg)}$ and $$\qDens(\ASigmag)\ \mbox{is an $\mbox{Inverse-G-Wishart} \left(\Gdiag,\xi_{\qDens(\ASigmag)},\bLambda_{\qDens(\ASigmag)}\right)$ density function}$$ where $\xi_{\qDens(\ASigmag)}=\nuSigmag+2$, $$\bLambda_{\qDens(\ASigmag)}=\diag\big\{\mbox{diagonal}\big(\bM_{\qDens(\bSigmag^{-1})}\big)\big\} +\bLambda_{\ASigmag}$$ with inverse moment $\bM_{\qDens(\ASigmag^{-1})}=\xi_{\qDens(\ASigmag)}\bLambda_{\qDens(\ASigmag)}^{-1}$, $$\qDens(\aGrph)\ \mbox{is an $\mbox{Inverse-$\chi^2$} (\xi_{\qDens(\aGrph)},\lambda_{\qDens(\aGrph)})$ density function}$$ where $\xi_{\qDens(\aGrph)}=\nuGrph+1$, $$\lambda_{\qDens(\aGrph)}=\mu_{\qDens(1/\sigmaGrph^2)}+1/(\nuGrph\sGrph^2)$$ with reciprocal moment $\mu_{\qDens(1/\aGrph)}=\xi_{\qDens(\aGrph)}/\lambda_{\qDens(\aGrph)}$ and $$\qDens(\ASigmah)\ \mbox{is an $\mbox{Inverse-G-Wishart} \left(\Gdiag,\xi_{\qDens(\ASigmah)},\bLambda_{\qDens(\ASigmah)}\right)$ density function}$$ where $\xi_{\qDens(\ASigmah)}=\nuSigmah+2$, $$\bLambda_{\qDens(\ASigmah)}=\diag\big\{\mbox{diagonal}\big(\bM_{\qDens(\bSigmah^{-1})}\big)\big\} +\bLambda_{\ASigmah}$$ with inverse moment $\bM_{\qDens(\ASigmah^{-1})}=\xi_{\qDens(\ASigmah)}\bLambda_{\qDens(\ASigmah)}^{-1}$. The <span style="font-variant:small-caps;">SolveTwoLevelSparseLeastSquares</span> Algorithm {#sec:Solve2Lev} =========================================================================================== The <span style="font-variant:small-caps;">SolveTwoLevelSparseLeastSquares</span> algorithm is listed in Nolan [*et al.*]{} (2018) and is based on Theorem 2 of Nolan & Wand (2018). Given its centrality to Algorithms \[alg:twoLevBLUP\] and \[alg:twoLevMFVB\], we list it again here.
The algorithm solves a sparse version of the least squares problem: $$\min_{\bx}\Vert\bb-\bB\bx\Vert^2$$ which has solution $\bx=\bA^{-1}\bB^T\bb$, where $\bA=\bB^T\bB$ and where $\bB$ and $\bb$ have the following structure: $$\bB\equiv \left[ \arraycolsep=2.2pt\def\arraystretch{1.6} \begin{array}{c|c|c|c|c} \setstretch{4.5} \Bmato &\Bmatdoto &\bO &\cdots&\bO\\ \hline \Bmatt &\bO &\Bmatdott&\cdots&\bO\\ \hline \vdots &\vdots &\vdots &\ddots&\vdots\\ \hline \Bmatm &\bO &\bO &\cdots &\Bmatdotm \end{array} \right] \quad\mbox{and}\quad \bb=\left[ \arraycolsep=2.2pt\def\arraystretch{1.6} \begin{array}{c} \setstretch{4.5} \bveco \\ \hline \bvect \\ \hline \vdots \\ \hline \bvecm \\ \end{array} \right]. \label{eq:BandbFormsReprise}$$ The sub-matrices corresponding to the non-zero blocks of $\AtLev$ are labelled according to: $$\AtLev^{-1}= \left[ \arraycolsep=2.2pt\def\arraystretch{1.6} \begin{array}{c|c|c|c|c} \setstretch{4.5} \AUoo & \AUotCo & \AUotCt &\ \ \cdots\ \ &\AUotCm \\ \hline \AUotCoT & \AUttCo & \bigX & \cdots & \bigX \\ \hline \AUotCtT & \bigX & \AUttCt & \cdots & \bigX \\ \hline \vdots & \vdots & \vdots & \ddots & \vdots \\ \hline \AUotCmT & \bigX & \bigX & \cdots &\AUttCm \\ \end{array} \right]. \label{eq:AtLevInv}$$ with $\bigX$ denoting sub-blocks that are not of interest. The <span style="font-variant:small-caps;">SolveTwoLevelSparseLeastSquares</span> algorithm is given in Algorithm \[alg:SolveTwoLevelSparseLeastSquares\]. - Inputs: $\big\{\big(\bveci(\nadj_i\times1), \ \Bmati(\nadj_i\times p),\ \Bmatdoti(\nadj_i\times q)\big):\ 1\le i\le m\big\}$ - $\bomega_3\thickarrow\mbox{NULL}$   ;   $\bOmega_4\thickarrow\mbox{NULL}$ - For $i=1,\ldots,m$: - Decompose $\Bmatdoti=\bQ_i\left[\begin{array}{c} \bR_i\\ \bzero \end{array} \right]$ such that $\bQ_i^{-1}=\bQ_i^T$ and $\bR_i$ is upper-triangular.
- $\cveczi\thickarrow\bQ_i^T\bveci\ \ \ ;\ \ \ \Cmatzi\thickarrow\bQ_i^T\Bmati$ - $\cvecoi\thickarrow\mbox{first $q$ rows of}\ \cveczi$   ;    $\cvecti\thickarrow\mbox{remaining rows of}\ \cveczi$   ;    $\bomega_3\thickarrow \left[ \begin{array}{c} \bomega_3\\ \cvecti \end{array} \right]$ - $\Cmatoi\thickarrow\mbox{first $q$ rows of}\ \Cmatzi$   ;    $\Cmatti\thickarrow\mbox{remaining rows of}\ \Cmatzi$   ;    $\bOmega_4\thickarrow \left[ \begin{array}{c} \bOmega_4\\ \Cmatti \end{array} \right]$ - Decompose $\bOmega_4=\bQ\left[\begin{array}{c} \bR\\ \bzero \end{array} \right]$ such that $\bQ^{-1}=\bQ^T$ and $\bR$ is upper-triangular. - $\bc\thickarrow\mbox{first $p$ rows of $\bQ^T\bomega_3$}$    ;   $\xveco\thickarrow\bR^{-1}\bc$   ;    $\AUoo\thickarrow\bR^{-1}\bR^{-T}$ - For $i=1,\ldots,m$: - $\xvectCi\thickarrow\bR_i^{-1}(\cvecoi-\Cmatoi\xveco)$   ;    $\AUotCi\thickarrow\,-\AUoo(\bR_i^{-1}\Cmatoi)^T$ - $\AUttCi\thickarrow\bR_i^{-1}(\bR_i^{-T} - \Cmatoi\AUotCi)$ - Output: $\Big(\xveco,\AUoo,\big\{\big(\xvectCi,\AUttCi,\AUotCi\big):\ 1\le i\le m\big\}\Big)$ The <span style="font-variant:small-caps;">SolveThreeLevelSparseLeastSquares</span> Algorithm {#sec:Solve3Lev} ============================================================================================= The <span style="font-variant:small-caps;">SolveThreeLevelSparseLeastSquares</span> algorithm, listed in Nolan [*et al.*]{} (2018), is a proceduralization of Theorem 4 of Nolan & Wand (2018). Since it is central to Algorithms \[alg:threeLevBLUP\] and \[alg:threeLevMFVB\], we list it here.
The <span style="font-variant:small-caps;">SolveThreeLevelSparseLeastSquares</span> algorithm is concerned with solving the sparse three-level version of $$\min_{\bx}\Vert\bb-\bB\bx\Vert^2$$ with the solution $\bx=\bA^{-1}\bB^T\bb$, where $\bA=\bB^T\bB$ and where $\bB$ and $\bb$ have the following structure: $$\bB\equiv\Big[\stack{1\le i\le m}\Big\{\stack{1\le j\le n_i}({\bB_{ij}})\Big\}\ \Big\vert \blockdiag{1\le i\le m}\Big\{\big[\stack{1\le j\le n_i}({{\accentset{\mbox{\smalldot}}{\pmb{B}}}_{ij}})\ \big\vert \ \blockdiag{1\le j\le n_i}({{\accentset{\mbox{\smalldot\smalldot}}{\pmb{B}}}_{ij}})\big]\Big\} \Big] \label{eq:catweasleOne}$$ and $$\bb\equiv\stack{1\le i\le m}\Big\{\stack{1\le j\le n_i}(\bb_{ij})\Big\}. \label{eq:catweasleTwo}$$ The three-level sparse matrix inverse problem involves determination of the sub-blocks of $\bA^{-1}$ corresponding to the non-zero sub-blocks of $\bA$. Our notation for these sub-blocks is illustrated by $$\bA^{-1} = \left[\arraycolsep=2.2pt\def\arraystretch{1.6} \begin{array}{c | c | c | c | c | c | c | c} \setstretch{4.5} {\bA^{11}} & {\bA^{12,1}} & {\bA^{12,11}} & {\bA^{12,12}} & {\bA^{12,2}} & {\bA^{12,21}} & {\bA^{12,22}} & {\bA^{12,23}} \\ \hline {\bA^{12,1\,T}} & {\bA^{22,1}} & {\bA^{12,1,1}} & {\bA^{12,1,2}} & \bigX & \bigX & \bigX & \bigX \\ \hline {\bA^{12,11\,T}} & {\bA^{12,1,1\,T}} & {\bA^{22,11}} & \bigX & \bigX & \bigX & \bigX & \bigX \\ \hline {\bA^{12,12\,T}} & {\bA^{12,1,2\,T}} & \bigX & {\bA^{22,12}} & \bigX & \bigX & \bigX & \bigX \\ \hline {\bA^{12,2\,T}} & \bigX & \bigX & \bigX & {\bA^{22,2}} & {\bA^{12,2,1}} & {\bA^{12,2,2}} & {\bA^{12,2,3}} \\ \hline {\bA^{12,21\,T}} & \bigX & \bigX & \bigX & {\bA^{12,2,1\,T}} & {\bA^{22,21}} & \bigX & \bigX \\ \hline {\bA^{12,22\,T}} & \bigX & \bigX & \bigX & {\bA^{12,2,2\,T}} & \bigX & {\bA^{22,22}} & \bigX \\ \hline {\bA^{12,23\,T}} & \bigX & \bigX & \bigX & {\bA^{12,2,3\,T}} & \bigX & \bigX & {\bA^{22,23}} \end{array} \right] \label{eq:catweasleThree}$$ for the $m=2$, $n_1=2$ and $n_2=3$ case.
The $\bigX$ symbol denotes sub-blocks that are not of interest. The <span style="font-variant:small-caps;">SolveThreeLevelSparseLeastSquares</span> algorithm is given in Algorithm \[alg:SolveThreeLevelSparseLeastSquares\]. - Inputs: $\big\{\big(\bb_{ij}(\oadj_{ij}\times1), \ \bB_{ij}(\oadj_{ij}\times p),\ \bBdot_{ij}(\oadj_{ij}\times q_1), \ \bBdotdot_{ij}(\oadj_{ij}\times q_2)\big):\ 1\le i\le m,\ 1\le j\le n_i\big\}$ - $\bomega_7\thickarrow\mbox{NULL}$   ;   $\bOmega_8\thickarrow\mbox{NULL}$ - For $i=1,\ldots,m$: - $\bomega_9\thickarrow\mbox{NULL}$   ;   $\bOmega_{10}\thickarrow\mbox{NULL}$    ;   $\bOmega_{11}\thickarrow\mbox{NULL}$ - For $j=1,\ldots,n_i$: - Decompose $\bBdotdot_{ij}=\bQ_{ij}\left[\begin{array}{c} \bR_{ij}\\ \bzero \end{array} \right]$ such that $\bQ_{ij}^{-1}=\bQ_{ij}^T$ and $\bR_{ij}$ is upper-triangular. - $\bd_{0ij}\thickarrow\bQ_{ij}^T\bb_{ij}\ \ \ ;\ \ \ \bD_{0ij}\thickarrow\bQ_{ij}^T\bB_{ij} \ \ \ ;\ \ \ \bDdot_{0ij}\thickarrow\bQ_{ij}^T\bBdot_{ij} $ - $\bd_{1ij}\thickarrow\mbox{1st $q_2$ rows of}\ \bd_{0ij}$  ;   $\bd_{2ij}\thickarrow\mbox{remaining rows of}\ \bd_{0ij}$  ;   $\bomega_9\thickarrow \left[ \begin{array}{c} \bomega_9\\ \bd_{2ij} \end{array} \right]$ - $\bD_{1ij}\thickarrow\mbox{1st $q_2$ rows of}\ \bD_{0ij}$ ;  $\bD_{2ij}\thickarrow\mbox{remaining rows of}\ \bD_{0ij}$ ;  $\bOmega_{10}\thickarrow \left[ \begin{array}{c} \bOmega_{10}\\ \bD_{2ij} \end{array} \right]$ - $\bDdot_{1ij}\thickarrow\mbox{1st $q_2$ rows of}\ \bDdot_{0ij}$ ;  $\bDdot_{2ij}\thickarrow\mbox{remaining rows of}\ \bDdot_{0ij}$ ;  $\bOmega_{11}\thickarrow \left[ \begin{array}{c} \bOmega_{11}\\ \bDdot_{2ij} \end{array} \right]$ - Decompose $\bOmega_{11}=\bQ_i\left[\begin{array}{c} \bR_i\\ \bzero \end{array} \right]$ such that $\bQ_i^{-1}=\bQ_i^T$ and $\bR_i$ is upper-triangular.
- $\bc_{0i}\thickarrow\bQ_i^T\bomega_9\ \ \ ;\ \ \ \bC_{0i}\thickarrow\bQ_i^T\bOmega_{10}$ - $\bc_{1i}\thickarrow\mbox{1st $q_1$ rows of}\ \bc_{0i}$  ;   $\bc_{2i}\thickarrow\mbox{remaining rows of}\ \bc_{0i}$  ;   $\bomega_7\thickarrow \left[ \begin{array}{c} \bomega_7\\ \bc_{2i} \end{array} \right]$ - $\bC_{1i}\thickarrow\mbox{1st $q_1$ rows of}\ \bC_{0i}$  ;   $\bC_{2i}\thickarrow\mbox{remaining rows of}\ \bC_{0i}$  ;   $\bOmega_8\thickarrow \left[ \begin{array}{c} \bOmega_8\\ \bC_{2i} \end{array} \right]$ - Decompose $\bOmega_8=\bQ\left[\begin{array}{c} \bR\\ \bzero \end{array} \right]$ so that $\bQ^{-1}=\bQ^T$ and $\bR$ is upper-triangular. - $\bc\thickarrow\mbox{first $p$ rows of $\bQ^T\bomega_7$}$    ;   $\xveco\thickarrow\bR^{-1}\bc$   ;    $\AUoo\thickarrow\bR^{-1}\bR^{-T}$ - For $i=1,\ldots,m$: - $\xvectCi\thickarrow\bR_i^{-1}(\bc_{1i}-\bC_{1i}\xveco)$   ;    $\AUotCi\thickarrow\,-\AUoo(\bR_i^{-1}\Cmatoi)^T$ - $\AUttCi\thickarrow\bR_i^{-1}(\bR_i^{-T} - \Cmatoi\AUotCi)$ - For $j=1,\ldots,n_i$: - $\bx_{2,ij}\leftarrow\bR_{ij}^{-1} (\bd_{1ij} - \bD_{1ij} \bx_1 - \bDdot_{1ij} \bx_{2,i})$ - ${\bA^{12,ij}}\leftarrow - \left\{ \bR_{ij}^{-1}(\bD_{1ij} {\bA^{11}} + \bDdot_{1ij} {\bA^{12,i\,T}}) \right\}^T$ - ${\bA^{12,\iCOMMAj}}\leftarrow - \left\{\bR_{ij}^{-1}(\bD_{1ij} {\bA^{12,i}} + \bDdot_{1ij} {\bA^{22,i}}) \right\}^T$ - ${\bA^{22,ij}}\leftarrow\bR_{ij}^{-1}\big(\bR_{ij}^{-T}-\bD_{1ij}{\bA^{12,ij}}-\bDdot_{1ij}{\bA^{12,\iCOMMAj}} \big)$ - Output: $\Big(\xveco,\AUoo,\big\{\big(\xvectCi,\AUttCi,\AUotCi):\ 1\le i\le m\big\},$\ $\null\qquad\qquad\big\{\big(\bx_{2,ij},\bA^{22,ij},\bA^{12,ij},\bA^{12,\iCOMMAj} \big):\ 1\le i\le m,\ 1\le j\le n_i\big\}\Big)$
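Stripped of the three-level blocking, the numerical kernel repeated at every level above is the same three steps: an orthogonal decomposition $\bB=\bQ\left[\begin{smallmatrix}\bR\\ \bzero\end{smallmatrix}\right]$, the triangular solve $\bx=\bR^{-1}\bc$ with $\bc$ the first $p$ rows of $\bQ^T\bb$, and the inverse block $\bR^{-1}\bR^{-T}=(\bB^T\bB)^{-1}$. A minimal NumPy sketch of that kernel for a single dense block (illustrative, not the blocked algorithm itself):

```python
import numpy as np

def qr_least_squares(B, b):
    """Solve min_x ||b - B x||^2 via B = Q [R; 0] and also return (B^T B)^{-1}."""
    m, p = B.shape
    Q, R = np.linalg.qr(B, mode="complete")  # Q is m x m; R is stacked over zeros
    R = R[:p, :]                             # upper-triangular p x p block
    c = (Q.T @ b)[:p]                        # first p rows of Q^T b
    x = np.linalg.solve(R, c)                # x = R^{-1} c
    R_inv = np.linalg.solve(R, np.eye(p))
    A_inv = R_inv @ R_inv.T                  # (B^T B)^{-1} = R^{-1} R^{-T}
    return x, A_inv

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
x, A_inv = qr_least_squares(B, b)
```

The QR route returns the same $\bx$ and $\bA^{-1}$ blocks as the normal equations $\bx=(\bB^T\bB)^{-1}\bB^T\bb$, but avoids explicitly forming $\bB^T\bB$; the blocked algorithm above computes the same quantities while exploiting sparsity.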
--- abstract: 'Spiking neural networks (SNNs) have garnered a great amount of interest for supervised and unsupervised learning applications. This paper deals with the problem of training multi-layer feedforward SNNs. The non-linear integrate-and-fire dynamics employed by spiking neurons make it difficult to train SNNs to generate desired spike trains in response to a given input. To tackle this, first the problem of training a multi-layer SNN is formulated as an optimization problem such that its objective function is based on the deviation in membrane potential rather than the spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), hand-crafted for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. Next, it is reformulated to efficiently train multi-layer SNNs, and is shown to be effectively performing spatio-temporal error backpropagation. Thus, the new algorithm is a key step towards building deep spiking neural networks capable of efficient event-triggered learning.' author: - | Navin Anwani[^1]\ Department of Electrical Engineering\ Indian Institute of Technology Bombay\ Mumbai, 400076, India\ `[email protected]`\ Bipin Rajendran\ Department of Electrical and Computer Engineering\ New Jersey Institute of Technology\ NJ, 07102 USA\ `[email protected]`\ title: 'Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation' --- Introduction ============ The human brain assimilates multi-modal sensory data and uses it to *learn* and perform complex cognitive tasks such as pattern detection, recognition and completion. This ability is attributed to the dynamics of approximately $10^{11}$ neurons interconnected through a network of $10^{15}$ synapses in the human brain. 
This has motivated the study of neural networks in the brain and attempts to mimic their learning and information processing capabilities to create *smart learning machines*. Neurons, the fundamental information processing units in the brain, communicate with each other by transmitting action potentials or spikes through their synapses. The process of learning in the brain emerges from synaptic plasticity, viz. the modification of the strength of synapses triggered by the spiking activity of the corresponding neurons. Spiking neurons are the third generation of artificial neuron models, which closely mimic the dynamics of biological neurons. Unlike previous generations, both the inputs and the output of a spiking neuron are signals in time. Specifically, these signals are point processes of spikes in the membrane potential of the neuron, also called spike trains. Spiking neural networks (SNNs) are computationally more powerful than previous generations of artificial neural networks as they incorporate the temporal dimension into the information representation and processing capabilities of neural networks [@maass1997networks; @Bohte2004; @CROTTY2005]. Owing to the incorporation of the temporal dimension, SNNs naturally lend themselves to the processing of signals in time, such as audio, video and speech. Information can be encoded in spike trains using temporal codes, rate codes or population codes [@BialekSpikes; @Gerstner1997; @Prescott2008]. Temporal encoding uses the exact spike arrival times for information representation and has far more representational capacity than rate or population codes [@thorpe2001spike]. However, one of the major hurdles in developing temporal encoding based applications of SNNs is the lack of efficient learning algorithms to train them with the desired accuracy. 
In recent years, there has been significant progress in the development of neuromorphic computing chips, which are specialized hardware implementations that emulate SNN dynamics, inspired by the parallel, event-driven operation of the brain. Some notable examples are the TrueNorth chip from IBM [@merolla2014million], the Zeroth processor from Qualcomm [@Gehlhaar:ASPLOS2014] and the Loihi chip from Intel [@intelLoihi]. Hence, a breakthrough in learning algorithms for SNNs is apt and timely, to complement the progress of neuromorphic computing hardware. The present success of deep learning based methods can be traced back to the breakthroughs in learning algorithms for second generation artificial neural networks (ANNs) [@hinton2006fast]. As we will discuss in section \[secLitRev\], there has been work on learning algorithms for SNNs in the recent past, but those methods have not found wide acceptance as they suffer from computational inefficiencies and/or a lack of reliable and fast convergence. One of the main reasons for the unsatisfactory performance of the algorithms developed so far is that those efforts have been centered around adapting high-level concepts from learning algorithms for ANNs or from neuroscience and porting them to SNNs. In this work, we utilize properties specific to spiking neurons in order to develop a supervised learning algorithm for temporal encoding applications with spike-induced weight updates. A supervised learning algorithm named *NormAD* for single layer SNNs was proposed in [@anwani2015normad]. For a spike domain training problem, it was demonstrated to converge at least an order of magnitude faster than the previous state-of-the-art. Recognizing the importance of multi-layer SNNs for supervised learning, in this paper we extend the idea to derive a NormAD based supervised learning rule for multi-layer feedforward spiking neural networks. 
It is a spike-domain analogue of the error backpropagation rule commonly used for ANNs and can be interpreted to be a realization of spatio-temporal error backpropagation. The derivation comprises first formulating the training problem for a multi-layer feedforward SNN as a non-convex optimization problem. Next, the Normalized Approximate Descent based optimization, introduced in [@anwani2015normad], is employed to obtain an iterative weight adaptation rule. The new learning rule is successfully validated by employing it to train $2$-layer feedforward SNNs for a spike domain formulation of the XOR problem and $3$-layer feedforward SNNs for general spike domain training problems. This paper is organized as follows. We begin with a summary of learning methods for SNNs documented in the literature in section \[secLitRev\]. Section \[secSpikingNeurons\] provides a brief introduction to spiking neurons and the mathematical model of the Leaky Integrate-and-Fire (LIF) neuron, also setting the notations we use later in the paper. The supervised learning problem for feedforward spiking neural networks is discussed in section \[supLearn\], starting with the description of a generic training problem for SNNs. Next we present a brief mathematical description of a feedforward SNN with one hidden layer and formulate the corresponding training problem as an optimization problem. Then Normalized Approximate Descent based optimization is employed to derive the spatio-temporal error backpropagation rule in section \[multiLayerNormAD\]. Simulation experiments to demonstrate the performance of the new learning rule for some exemplary supervised training problems are discussed in section \[validate\]. Section \[secConclusion\] concludes the development with a discussion on directions for future research that can leverage the algorithm developed here towards the goal of realizing event-triggered deep spiking neural networks. 
Related Work {#secLitRev} ============ One of the earliest attempts to demonstrate supervised learning with spiking neurons is the SpikeProp algorithm [@bohte2002error]. However, it is restricted to single spike learning, thereby limiting its information representation capacity. SpikeProp was then extended in [@booij2005gradient] to neurons firing multiple spikes. In these studies, the training problem was formulated as an optimization problem with the objective function in terms of the difference between desired and observed spike arrival instants, and gradient descent was used to adjust the weights. However, since spike arrival time is a discontinuous function of the synaptic strengths, the optimization problem is non-convex and gradient descent is prone to local minima. The biologically observed spike time dependent plasticity (STDP) has been used to derive weight update rules for SNNs in [@ponulak2010supervised; @DL-ReSuMe; @paugam2007supervised]. ReSuMe and DL-ReSuMe took cues from both STDP as well as the Widrow-Hoff rule to formulate a supervised learning algorithm [@ponulak2010supervised; @DL-ReSuMe]. Though these algorithms are biologically inspired, the training time necessary to converge is a concern, especially for real-world applications in large networks. The ReSuMe algorithm has been extended to multi-layer feedforward SNNs using backpropagation in [@Sporea]. Another notable spike-domain learning rule is PBSNLR [@xu2013new], which is an offline learning rule for the spiking perceptron neuron (SPN) model using the perceptron learning rule. The PSD algorithm [@yu2013precise] uses the Widrow-Hoff rule to empirically determine an equivalent learning rule for spiking neurons. The SPAN rule [@mohemmed2012span] converts input and output spike signals into analog signals and then applies the Widrow-Hoff rule to derive a learning algorithm. The SWAT algorithm [@Wade2010] uses STDP and the BCM rule to derive a weight adaptation strategy for SNNs. 
The Normalized Spiking Error Back-Propagation (NSEBP) method proposed in [@NSEBP] is based on approximations of the simplified Spike Response Model for the neuron. The multi-STIP algorithm proposed in [@Lin2016] defines an inner product for spike trains to approximate a learning cost function. As opposed to the above approaches which attempt to develop weight update rules for fixed network topologies, there are also some efforts in developing feed-forward networks based on evolutionary algorithms where new neuronal connections are progressively added and their weights and firing thresholds updated for every class label in the database [@Schliebs2013; @SOLTIC2010]. Recently, an algorithm to learn precisely timed spikes using a leaky integrate-and-fire neuron was presented in [@MEMMESHEIMER2014925]. The algorithm converges only when a synaptic weight configuration to the given training problem exists, and can not provide a close approximation, if the exact solution does not exist. To overcome this limitation, another algorithm to learn spike sequences with finite precision is also presented in the same paper. It allows a window of width $\epsilon$ around the desired spike instant within which the output spike could arrive and performs training only on the first deviation from such desired behavior. While it mitigates the non-linear accumulation of error due to interaction between output spikes, it also restricts the training to just one discrepancy per iteration. Backpropagation for training deep networks of LIF neurons has been presented in [@10.3389/fnins.2016.00508], derived assuming an impulse-shaped post-synaptic current kernel and treating the discontinuities at spike events as noise. It presents remarkable results on MNIST and N-MNIST benchmarks using rate coded outputs, while in the present work we are interested in training multi-layer SNNs with temporally encoded outputs i.e., representing information in the timing of spikes. 
Many previous attempts to formulate supervised learning as an optimization problem employ an objective function formulated in terms of the difference between desired and observed spike arrival times [@bohte2002error; @booij2005gradient; @xu2013supervised; @florian2012chronotron]. We will see in section \[secSpikingNeurons\] that a leaky integrate-and-fire (LIF) neuron can be described as a non-linear spatio-temporal filter, spatial filtering being the weighted summation of the synaptic inputs to obtain the total incoming synaptic current and temporal filtering being the leaky integration of the synaptic current to obtain the membrane potential. Thus, it can be argued that in order to train multi-layer SNNs, we would need to backpropagate error in space as well as in time, and as we will see in section \[multiLayerNormAD\], this is indeed the case for the proposed algorithm. Note that while the membrane potential can directly control the output spike timings, it is also relatively more tractable through synaptic inputs and weights compared to spike timing. This observation is leveraged to derive a spatio-temporal error backpropagation algorithm by treating supervised learning as an optimization problem, with the objective function formulated in terms of the membrane potential. Spiking Neurons {#secSpikingNeurons} =============== Spiking neurons are simplified models of biological neurons, e.g., of the Hodgkin-Huxley equations describing the dependence of the membrane potential of a neuron on its membrane current and the conductivity of ion channels [@hodgkin1952quantitative]. A spiking neuron is modeled as a multi-input system that receives inputs in the form of sequences of spikes, which are then transformed to analog current signals at its input synapses. 
The synaptic currents are superposed inside the neuron and the result is then transformed by its non-linear integrate-and-fire dynamics to a membrane potential signal with a sequence of stereotyped events in it, called action potentials or spikes. Despite the continuous-time variations in the membrane potential of a neuron, it communicates with other neurons through the synaptic connections by chemically inducing a particular current signal in the post-synaptic neuron each time it spikes. Hence, the output of a neuron can be completely described by the time sequence of spikes issued by it. This is called *spike based information representation* and is illustrated in Fig. \[FigSpkNeuron\]. The output, also called a spike train, is modeled as a point process of spike events. Though the internal dynamics of an individual neuron is straightforward, a network of neurons can exhibit complex dynamical behaviors. The processing power of neural networks is attributed to the massively parallel synaptic connections among neurons. ![Illustration of spike based information representation: a spiking neuron assimilates multiple input spike trains to generate an output spike train. Figure adapted from [@anwani2015normad].[]{data-label="FigSpkNeuron"}](spikingNeuron.png) Synapse {#secSynapse} ------- The communication between any two neurons is spike induced and is accomplished through a directed connection between them known as a synapse. In the cortex, each neuron can receive spike-based inputs from thousands of other neurons. If we model an incoming spike at a synapse as a unit impulse, then the behavior of the synapse to translate it to an analog current signal in the post-synaptic neuron can be modeled by a linear time invariant system with transfer function $w\alpha(t)$. Thus, if a pre-synaptic neuron issues a spike at time $t^{f}$, the post-synaptic neuron receives a current $i(t)=w\alpha(t-t^{f})$. 
Here the waveform $\alpha(t)$ is known as the post-synaptic current kernel and the scaling factor $w$ is called the weight of the synapse. The weight varies from synapse-to-synapse and is representative of its conductance, whereas $\alpha(t)$ is independent of synapse and is commonly modeled as $$\label{Eqkernel} \alpha(t)= \left[ \exp({-t}/{\tau_1})-\exp({-t}/{\tau_2})\right]u(t),$$ where $u(t)$ is the Heaviside step function and $\tau_1 > \tau_2$. Note that the synaptic weight $w$ can be positive or negative, depending on which the synapse is said to be excitatory or inhibitory respectively. Further, we assume that the synaptic currents do not depend on the membrane potential or reversal potential of the post-synaptic neuron. Let us assume that a neuron receives inputs from $n$ synapses and spikes arrive at the $i^{th}$ synapse at instants $t^i_1, t^i_2, \ldots$. Then, the input signal at the $i^{th}$ synapse (before scaling by synaptic weight $w_{i}$) is given by the expression $$\begin{aligned} c_i(t)=\sum_{f}\alpha(t-t^{i}_{f}). \label{eqcoft}\end{aligned}$$ The synaptic weights of all input synapses to a neuron are usually represented in a compact form as a weight vector ${\mathbf{w}}=\begin{bmatrix}w_1 & w_2 & \cdots & w_n \end{bmatrix}^T$, where $w_i$ is the weight of the $i^{th}$ synapse. The synaptic weights perform spatial filtering over the input signals resulting in an aggregate synaptic current received by the neuron: $$\begin{aligned} I(t)={\mathbf{w}}^T{\mathbf{c}}(t), \label{eqIoft}\end{aligned}$$ where $ {\mathbf{c}}(t)=\begin{bmatrix}c_1(t) & c_2(t) & \cdots & c_n(t)\end{bmatrix}^T$. A simplified illustration of the role of synaptic transmission in overall spike based information processing by a neuron is shown in Fig. \[Figkernel\], where an incoming spike train at a synaptic input is translated to an analog current with an amplitude depending on weight of the synapse. 
The resultant current at the neuron from all its upstream synapses is transformed non-linearly to generate its membrane potential with instances of spikes viz., sudden surge in membrane potential followed by an immediate drop. ![Illustration of a simplified synaptic transmission and neuronal integration model: exemplary spikes (stimulus) arriving at a synapse, the resultant current being fed to the neuron through the synapse and the resultant membrane potential of the post-synaptic neuron.[]{data-label="Figkernel"}](a.PNG) ![Illustration of a simplified synaptic transmission and neuronal integration model: exemplary spikes (stimulus) arriving at a synapse, the resultant current being fed to the neuron through the synapse and the resultant membrane potential of the post-synaptic neuron.[]{data-label="Figkernel"}](b.PNG) ![Illustration of a simplified synaptic transmission and neuronal integration model: exemplary spikes (stimulus) arriving at a synapse, the resultant current being fed to the neuron through the synapse and the resultant membrane potential of the post-synaptic neuron.[]{data-label="Figkernel"}](c.PNG) ### Synaptic Plasticity {#synaptic-plasticity .unnumbered} The response of a neuron to stimuli greatly depends on the conductance of its input synapses. Conductance of a synapse (the synaptic weight) changes based on the spiking activity of the corresponding pre- and post-synaptic neurons. A neural network’s ability to learn is attributed to this activity dependent synaptic plasticity. Taking cues from biology, we will also constrain the learning algorithm we develop to have spike-induced synaptic weight updates. 
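The synaptic transmission model above — the double-exponential kernel $\alpha(t)$ and the aggregate current $I(t)={\mathbf{w}}^T{\mathbf{c}}(t)$ — can be sketched numerically as follows. The time constants, spike times and weights below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def alpha_kernel(t, tau1=5e-3, tau2=1.25e-3):
    """Post-synaptic current kernel alpha(t) = [exp(-t/tau1) - exp(-t/tau2)] u(t)."""
    return (np.exp(-t / tau1) - np.exp(-t / tau2)) * (t >= 0)

def synaptic_current(t_grid, spike_times_per_synapse, w):
    """Aggregate current I(t) = w^T c(t), with c_i(t) = sum_f alpha(t - t_f^i)."""
    c = np.array([
        sum(alpha_kernel(t_grid - tf) for tf in spikes) if spikes else np.zeros_like(t_grid)
        for spikes in spike_times_per_synapse
    ])
    return w @ c

t = np.arange(0.0, 0.1, 1e-4)       # 100 ms at 0.1 ms resolution
spikes = [[0.010, 0.030], [0.020]]  # spike trains of two input synapses (seconds)
w = np.array([1.0, -0.5])           # one excitatory, one inhibitory weight
I = synaptic_current(t, spikes, w)
```

Since $\tau_1 > \tau_2$, the kernel is zero at the spike instant, rises, and then decays, so the current before the first spike is exactly zero and each spike leaves a transient of fixed shape scaled by the synaptic weight.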
Leaky Integrate-and-Fire (LIF) Neuron {#secLIF} ------------------------------------- In leaky integrate-and-fire (LIF) model of spiking neurons, the transformation from aggregate input synaptic current $I(t)$ to the resultant membrane potential $V(t)$ is governed by the following differential equation and reset condition [@stein1967some]: $$\begin{aligned} \label{lif1} C_m & \frac{dV(t)}{dt}=-g_L(V(t)-E_L)+I(t),\\ \nonumber &V(t)\longrightarrow E_L \quad \text{when } V(t)\geq V_T.\end{aligned}$$ Here, $C_m$ is the membrane capacitance, $E_L$ is the leak reversal potential, and $g_L$ is the leak conductance. If $V(t)$ exceeds the threshold potential $V_T$, a spike is said to have been issued at time $t$. The expression $V(t)\longrightarrow E_L$ when $V(t)\geq V_T$ denotes that $V(t)$ is reset to $E_{L}$ when it exceeds the threshold $V_T$. Assuming that the neuron issued its latest spike at time $t_{l}$, Eq.  can be solved for any time instant $t>t_l$, until the issue of the next spike, with the initial condition $V(t_{l}) = E_{L}$ as $$\begin{aligned} \label{lif2} V&(t)=E_L+\left(I(t)u(t-t_{l})\right) \ast h(t)\\ \nonumber &V(t)\longrightarrow E_L \quad \text{when } V(t)\geq V_T,\end{aligned}$$ where ‘$\ast$’ denotes linear convolution and $$\begin{aligned} h(t)=\frac{1}{C_m}\exp({-t}/{\tau_L})u(t), \label{eqhoft}\end{aligned}$$ with $\tau_L=C_m/g_L$ is the neuron’s leakage time constant. Note from Eq.  that the aggregate synaptic current $I(t)$ obtained by spatial filtering of all the input signals is first gated with a unit step located at $t=t_{l}$ and then fed to a leaky integrator with impulse response $h(t)$, which performs temporal filtering. So the LIF neuron acts as a non-linear spatio-temporal filter and the non-linearity is a result of the reset at every spike. Using Eq.  
and the membrane potential can be represented in a compact form as $$V(t)=E_L + {\mathbf{w}}^T{\mathbf{d}}(t), \label{lif6}$$ where ${\mathbf{d}}(t)=\begin{bmatrix}d_1(t) & d_2(t) & \cdots & d_n(t)\end{bmatrix}^T$ and $$d_i(t)= \left(c_i(t)u(t-t_{l})\right)*h(t). \label{eqdoft}$$ From Eq. , it is evident that ${\mathbf{d}}(t)$ carries all the information about the input necessary to determine the membrane potential. It should be noted that ${\mathbf{d}}(t)$ depends on weight vector ${\mathbf{w}}$, since $d_i(t)$ for each $i$ depends on the last spiking instant $t_{l}$, which in turn is dependent on the weight vector ${\mathbf{w}}$. The neuron is said to have spiked only when the membrane potential $V(t)$ reaches the threshold $V_{T}$. Hence, minor changes in the weight vector ${\mathbf{w}}$ may eliminate an already existing spike or introduce new spikes. Thus, spike arrival time $t_{l}$ is a discontinuous function of ${\mathbf{w}}$. Therefore, Eq.  implies that $V(t)$ is also discontinuous in weight space. Supervised learning problem for SNNs is generally framed as an optimization problem with the cost function described in terms of the spike arrival time or membrane potential. However, the discontinuity of spike arrival time as well as $V(t)$ in weight space renders the cost function discontinuous and hence the optimization problem non-convex. Commonly used steepest descent methods can not be applied to solve such non-convex optimization problems. In this paper, we extend the optimization method named *Normalized Approximate Descent*, introduced in [@anwani2015normad] for single layer SNNs to multi-layer SNNs. Refractory Period ----------------- After issuing a spike, biological neurons can not immediately issue another spike for a short period of time. This short duration of inactivity is called the absolute refractory period ($\Delta_{abs}$). 
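The LIF dynamics above (without the refractory period) can be reproduced with a simple Euler discretization of the membrane equation and its reset condition. The sketch below is illustrative; the parameter values are assumptions rather than the ones used in the paper:

```python
import numpy as np

def simulate_lif(I, dt=1e-4, Cm=300e-12, gL=30e-9, EL=-70e-3, VT=-50e-3):
    """Euler integration of C_m dV/dt = -g_L (V - E_L) + I(t), with V -> E_L on spiking.
    Parameter values are illustrative assumptions."""
    V = np.empty(len(I))
    v = EL
    spike_times = []
    for k, i_k in enumerate(I):
        v += dt * (-gL * (v - EL) + i_k) / Cm
        if v >= VT:                 # threshold crossing: record spike, reset
            spike_times.append(k * dt)
            v = EL
        V[k] = v
    return V, spike_times

dt = 1e-4
I = np.full(int(0.1 / dt), 1e-9)    # constant 1 nA drive for 100 ms
V, spikes = simulate_lif(I, dt=dt)
```

With these values the leakage time constant is $\tau_L = C_m/g_L = 10\,$ms and the steady-state potential under the constant drive lies above threshold, so the neuron fires periodically: the membrane potential charges, hits $V_T$, and is reset to $E_L$.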
This aspect of spiking neurons has been omitted in the above discussion for simplicity, but can be easily incorporated in our model by replacing $t_{l}$ with $(t_{l} + \Delta_{abs})$ in the equations above. Armed with a compact representation of the membrane potential in Eq. , we are now set to derive a synaptic weight update rule to accomplish supervised learning with spiking neurons. Supervised Learning using Feedforward SNNs {#supLearn} ========================================== Supervised learning is the process of obtaining an approximate model of an unknown system based on available training data, where the training data comprises a set of inputs to the system and the corresponding outputs. The learned model should not only fit the training data well but should also generalize well to unseen samples from the same input distribution. The first requirement, viz. obtaining a model that best fits the given training data, is called the training problem. Next we discuss the training problem in the spike domain, solving which is a stepping stone towards solving the more constrained supervised learning problem. ![Spike domain training problem: Given a set of $n$ input spike trains fed to the SNN through its $n$ inputs, determine the weights of the synaptic connections constituting the SNN so that the observed output spike train is as close as possible to the given desired spike train.[]{data-label="supLearnfig"}](trainingProblem.png) Training Problem {#probTrain} ---------------- A canonical training problem for a spiking neural network is illustrated in Fig. \[supLearnfig\]. There are $n$ inputs to the network such that $s_{in,i}(t)$ is the spike train fed at the $i^{th}$ input. 
Let the desired output spike train corresponding to this set of input spike trains be given in the form of an impulse train as $$\begin{aligned} s_{d}(t)=\sum_{i=1}^f \delta (t-t^{i}_{d}).\end{aligned}$$ Here, $\delta(t)$ is the Dirac delta function and $ t_{d}^{1},\, t_{d}^{2},\, ...,\, t_{d}^{f} $ are the desired spike arrival instants over a duration $T$, also called an epoch. The aim is to determine the weights of the synaptic connections constituting the SNN so that its output $s_{o}(t)$ in response to the given input is as close as possible to the desired spike train $s_{d}(t)$. NormAD based iterative synaptic weight adaptation rule was proposed in [@anwani2015normad] for training single layer feedforward SNNs. However, there are many systems which can not be modeled by any possible configuration of single layer SNN and necessarily require a multi-layer SNN. Hence, now we aim to obtain a supervised learning rule for multi-layer spiking neural networks. The change in weights in a particular iteration of training can be based on the given set of input spike trains, desired output spike train and the corresponding observed output spike train. Also, the weight adaptation rule should be constrained to have spike induced weight updates for computational efficiency. For simplicity, we will first derive the weight adaptation rule for training a feedforward SNN with one hidden layer and then state the general weight adaptation rule for feedforward SNN with an arbitrary number of layers. Performance Metric {#performance-metric .unnumbered} ------------------ Training performance can be assessed by the correlation between desired and observed outputs. It can be quantified in terms of the cross-correlation between low-pass filtered versions of the two spike trains. 
The correlation metric which was introduced in [@vanRossum] and is commonly used in characterizing the spike based learning efficiency [@anwani2015normad; @resume] is defined as $$\begin{aligned} C=\frac{\langle L(s_{d}(t)),L(s_{o}(t)) \rangle}{\|L(s_{d}(t))\| \cdot \|L(s_{o}(t))\|}.\end{aligned}$$ Here, $L(s(t))$ is the low-pass filtered spike train $s(t)$ obtained by convolving it with a one-sided falling exponential i.e., $$L(s(t)) = s(t)*(\exp({-t}/{\tau_{LP}})u(t)),$$ with $\tau_{LP}=5\,$ms. ![Feedforward SNN with one hidden layer ($n \rightarrow m \rightarrow 1$) also known as 2-layer feedforward SNN.[]{data-label="multiLayerSNN"}](multiLayerNet.png) Feedforward SNN with One Hidden Layer {#multiLayerIntro} ------------------------------------- A fully connected feedforward SNN with one hidden layer is shown in Fig. \[multiLayerSNN\]. It has $n$ neurons in the input layer, $m$ neurons in the hidden layer and $1$ in the output layer. It is also called a 2-layer feedforward SNN, since the neurons in input layer provide spike based encoding of sensory inputs and do not actually implement the neuronal dynamics. We denote this network as a $n \rightarrow m \rightarrow 1$ feedforward SNN. This basic framework can be extended to the case where there are multiple neurons in the output layer or the case where there are multiple hidden layers. The weight of the synapse from the $j^{th}$ neuron in the input layer to the $i^{th}$ neuron in the hidden layer is denoted by $w_{h,ij}$ and that of the synapse from the $i^{th}$ neuron in the hidden layer to the neuron in output layer is denoted by $w_{o,i}$. All input synapses to the $i^{th}$ neuron in the hidden layer can be represented compactly as an $n$-dimensional vector ${\mathbf{w}}_{h,i} = \begin{bmatrix}w_{h,i1} & w_{h,i2} & \cdots & w_{h,in}\end{bmatrix}^T$. 
Similarly input synapses to the output neuron are represented as an $m$-dimensional vector ${\mathbf{w}}_o = \begin{bmatrix}w_{o,1} & w_{o,2} & \cdots & w_{o,m}\end{bmatrix}^T$. Let $s_{in,j}\left(t\right)$ denote the spike train fed by the $j^{th}$ neuron in input layer to neurons in hidden layer. Hence, from Eq. , the signal fed to the neurons in the hidden layer from the $j^{th}$ input (before scaling by synaptic weight) $c_{h,j}\left(t\right)$ is given as $$\begin{aligned} c_{h,j}\left(t\right)= s_{in,j}\left(t\right)*\alpha \left(t\right).\end{aligned}$$ Assuming $t_{h,i}^{last}$ as the latest spiking instant of the $i^{th}$ neuron in the hidden layer, define ${\mathbf{d}}_{h,i}\left(t\right)$ as $$\begin{aligned} \label{eqd_hoft} {\mathbf{d}}_{h,i}\left(t\right) = \left({\mathbf{c}}_{h}\left(t\right)u\left(t-t_{h,i}^{last}\right)\right)*h\left(t\right),\end{aligned}$$ where ${\mathbf{c}}_{h}\left(t\right)= \begin{bmatrix}c_{h,1} & c_{h,2} & \cdots & c_{h,n}\end{bmatrix}^T$. From Eq. , membrane potential of the $i^{th}$ neuron in hidden layer is given as $$\begin{aligned} V_{h,i}\left(t\right)=E_{L}+{\mathbf{w}}_{h,i}^T {\mathbf{d}}_{h,i}\left(t\right).\end{aligned}$$ Accordingly, let $s_{h,i}\left(t\right)$ be the spike train produced at the $i^{th}$ neuron in the hidden layer. The corresponding signal fed to the output neuron is given as $$\begin{aligned} c_{o,i}\left(t\right)= s_{h,i}\left(t\right)*\alpha \left(t\right). \label{eq_c_oi_of_t}\end{aligned}$$ Defining ${\mathbf{c}}_{o}\left(t\right)= \begin{bmatrix}c_{o,1} & c_{o,2} & \cdots & c_{o,m}\end{bmatrix}^T$ and denoting the latest spiking instant of the output neuron by $t_{o}^{last}$ we can define $$\begin{aligned} \label{eqd_o_oft} {\mathbf{d}}_{o}\left(t\right) &= \left({\mathbf{c}}_{o}\left(t\right)u\left(t-t_{o}^{last}\right)\right)*h\left(t\right).\end{aligned}$$ Hence, from Eq. 
, the membrane potential of the output neuron is given as $$\begin{aligned} V_{o}\left(t\right)=E_{L}+{\mathbf{w}}_{o}^T {\mathbf{d}}_{o}\left(t\right)\end{aligned}$$ and the corresponding output spike train is denoted $s_{o}\left( t \right)$. Mathematical Formulation of the Training Problem {#multiLayerProbFormulation} ------------------------------------------------ To solve the training problem employing an $n \rightarrow m \rightarrow 1$ feedforward SNN, effectively we need to determine synaptic weights $W_h = \begin{bmatrix}{\mathbf{w}}_{h,1} & {\mathbf{w}}_{h,2} & \cdots & {\mathbf{w}}_{h,m}\end{bmatrix}^T$ and ${\mathbf{w}}_o$ constituting its synaptic connections, so that the output spike train $s_{o}\left(t\right)$ is as close as possible to the desired spike train $s_{d}\left(t\right)$ when the SNN is excited with the given set of input spike trains $s_{in,i}\left(t\right)$, $i \in \lbrace1,2,...,n \rbrace$. Let $V_{d}\left(t\right)$ be the corresponding ideally desired membrane potential of the output neuron, such that the respective output spike train is $s_{d}\left(t\right)$. Also, for a particular configuration $W_h$ and ${\mathbf{w}}_o$ of synaptic weights of the SNN, let $V_{o}\left(t\right)$ be the observed membrane potential of the output neuron in response to the given input and $s_{o}\left(t\right)$ be the respective output spike train. We define the cost function for training as $$J\left(W_h, {\mathbf{w}}_o\right)=\frac{1}{2} \int_{0}^T \left(\Delta V_{d,o}(t)\right)^2 |e\left(t\right)|dt, \label{costMulti}$$ where $$\Delta V_{d,o}(t) = V_{d}\left(t\right)-V_{o}\left(t\right) \label{deltaV_do}$$ and $$e\left(t\right)= s_{d}\left(t\right)-s_{o}\left(t\right). \label{eqeoft}$$ That is, the cost function is determined by the difference $\Delta V_{d,o}(t)$, only at the instants in time where there is a discrepancy between the desired and observed spike trains of the output neuron. 
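Numerically, the cost $J$ amounts to accumulating $(\Delta V_{d,o}(t))^2$ only at the grid points where the desired and observed spike trains disagree. A minimal sketch, with the Dirac impulses of $s_d$ and $s_o$ approximated by 0/1 indicator arrays on the time grid and all signal values chosen purely for illustration:

```python
import numpy as np

def training_cost(V_d, V_o, s_d, s_o, dt):
    """J = 0.5 * integral of (V_d - V_o)^2 |e(t)| dt, with e = s_d - s_o.
    Spike trains are 0/1 indicator arrays on the time grid, so |e(t)| is 1
    exactly where the desired and observed spike trains disagree."""
    e = s_d.astype(int) - s_o.astype(int)
    dV = V_d - V_o
    return 0.5 * np.sum(dV**2 * np.abs(e)) * dt

dt = 1e-4
t = np.arange(0.0, 0.05, dt)
V_d = np.full_like(t, -0.055)   # hypothetical desired membrane potential (V)
V_o = np.full_like(t, -0.060)   # hypothetical observed membrane potential (V)
s_d = np.zeros_like(t, dtype=bool); s_d[100] = True   # desired spike at 10 ms
s_o = np.zeros_like(t, dtype=bool); s_o[150] = True   # observed spike at 15 ms
J = training_cost(V_d, V_o, s_d, s_o, dt)
```

When the observed spike train matches the desired one exactly, $e(t)\equiv 0$ and the cost vanishes regardless of the membrane potential trace, consistent with the definition above.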
Thus, the training problem can be expressed as the following optimization problem: $$\begin{aligned} & \min & & J\left(W_h, {\mathbf{w}}_o\right) \\ & \text{s.t.} & & W_h \in \mathbb{R}^{m \times n}, {\mathbf{w}}_o \in \mathbb{R}^{m} \end{aligned} \label{multilayerOptiProb}$$ Note that the optimization with respect to ${\mathbf{w}}_o$ is the same as training a single-layer SNN, provided the spike trains from neurons in the hidden layer are known. In addition, we need to derive the weight adaptation rule for the synapses feeding the hidden layer viz., the weight matrix $W_{h}$, such that spikes in the hidden layer are most suitable to generate the desired spikes at the output. The cost function is dependent on the membrane potential $V_{o}\left(t\right)$, which is discontinuous with respect to ${\mathbf{w}}_o$ as well as $W_{h}$. Hence the optimization problem  is non-convex and susceptible to local minima when solved with a steepest descent algorithm. NormAD based Spatio-Temporal Error Backpropagation {#multiLayerNormAD} ================================================== In this section we apply Normalized Approximate Descent to the optimization problem to derive a spike domain analogue of error backpropagation. First we derive the training algorithm for SNNs with a single hidden layer, and then we provide its generalized form to train feedforward SNNs with an arbitrary number of hidden layers. NormAD – Normalized Approximate Descent --------------------------------------- Following the approach introduced in [@anwani2015normad], we use three steps viz., (i) Stochastic Gradient Descent, (ii) Normalization and (iii) Gradient Approximation, as elaborated below to solve the optimization problem .
### Stochastic Gradient Descent Instead of trying to minimize the aggregate cost over the epoch, we try to minimize the instantaneous contribution to the cost at each instant $t$ for which $e(t) \neq 0$, independent of that at any other instant, and expect that it minimizes the total cost $J\left(W_{h}, {\mathbf{w}}_o \right)$. The instantaneous contribution to the cost at time $t$ is denoted as $J\left(W_h, {\mathbf{w}}_o, t\right)$ and is obtained by restricting the limits of the integral in Eq.  to an infinitesimally small interval around time $t$: $$J\left(W_h, {\mathbf{w}}_o , t\right)= \begin{cases} \frac{1}{2}\left(\Delta V_{d,o}(t)\right)^2 & \ e\left(t\right) \neq 0 \\ 0 & \text{otherwise.} \end{cases}$$ Thus, using stochastic gradient descent, the prescribed change in any weight vector ${\mathbf{w}}$ at time $t$ is given as: $$\begin{aligned} \Delta {\mathbf{w}}(t) &= \begin{cases} - k(t) \cdot \nabla_{{\mathbf{w}}}J\left(W_h, {\mathbf{w}}_o , t\right) & \ e\left(t\right) \neq 0\\ 0 & \text{otherwise.} \end{cases}\end{aligned}$$ Here $k(t)$ is a time-dependent learning rate. The change aggregated over the epoch is, therefore $$\begin{aligned} \nonumber \Delta {\mathbf{w}} & =\int_{t=0}^{T} - k(t)\cdot \nabla_{{\mathbf{w}}}J\left(W_h, {\mathbf{w}}_o , t\right) \cdot |e(t)| dt \nonumber\\ & =\int_{t=0}^{T} k(t) \cdot \Delta V_{d,o}(t)\cdot \nabla_{{\mathbf{w}}}V_o\left(t\right) \cdot |e(t)| dt. \label{eqMultilayerDelw}\end{aligned}$$ Minimizing the instantaneous cost only for time instants when $e(t)\neq 0$ also renders the weight updates spike-induced, i.e., they are non-zero only when there is either an observed or a desired spike in the output neuron. ### Normalization Observe that in Eq. , the gradient of the membrane potential $\nabla_{{\mathbf{w}}}V_o(t)$ is scaled with the error term $\Delta V_{d,o}(t)$, which serves two purposes.
First, it determines the sign of the weight update at time $t$ and second, it gives more importance to weight updates corresponding to the instants with higher magnitude of error. But $V_{d}(t)$, and hence the error $\Delta V_{d,o}(t)$, is not known. Also, the dependence of the error on ${\mathbf{w}}_{h,i}$ is non-linear, so we eliminate the error term $\Delta V_{d,o}(t)$ for neurons in the hidden layer by choosing $k\left(t\right)$ such that $$\begin{aligned} | k\left(t\right) \cdot \Delta V_{d,o}(t)|=r_h, \label{eqHiddenkoft}\end{aligned}$$ where $r_h$ is a constant. From Eq. , we obtain the weight update for the $i^{th}$ neuron in the hidden layer as $$\begin{aligned} \Delta {\mathbf{w}}_{h,i} &= r_h \int_{t=0}^T \nabla_{{\mathbf{w}}_{h,i}}V_o\left(t\right) e\left(t\right) dt, \label{eqMultilayerNormed}\end{aligned}$$ since ${\mathop{\mathrm{sgn}}}\left(\Delta V_{d,o}(t)\right) = {\mathop{\mathrm{sgn}}}\left( e(t)\right)$. For the output neuron, we eliminate the error term by choosing $k\left(t\right)$ such that $$\begin{aligned} \|k(t) \cdot \Delta V_{d,o}(t)\cdot \nabla_{{\mathbf{w}}_{o}}V_o\left(t\right)\| = r_{o},\end{aligned}$$ where $r_o$ is a constant. From Eq. , we get the weight update for the output neuron as $$\begin{aligned} \Delta {\mathbf{w}}_{o} &= r_{o} \int_{t=0}^T \frac{\nabla_{{\mathbf{w}}_{o}}V_o\left(t\right)}{\|\nabla_{{\mathbf{w}}_{o}}V_o\left(t\right)\|} e\left(t\right) dt. \label{eqOutNormed}\end{aligned}$$ Now, we proceed to determine the gradients $\nabla_{{\mathbf{w}}_{h,i}}V_o\left(t\right)$ and $\nabla_{{\mathbf{w}}_{o}}V_o\left(t\right)$.
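Before doing so, note that in discrete time the normalized update takes the following generic shape. This is our own sketch with our own names: `grad_V` stands for whichever per-time-step gradient (exact or approximated) is available.

```python
import numpy as np

def normad_update(grad_V, e, r, dt):
    """Normalized, spike-induced update: Delta_w = r * integral
    (grad_V / ||grad_V||) e dt. grad_V has shape (T, m), one gradient
    vector per time step; e is the (T,) signed spike error."""
    norms = np.linalg.norm(grad_V, axis=1, keepdims=True)
    norms[norms == 0.0] = 1.0  # a zero gradient row contributes no update
    return r * np.sum((grad_V / norms) * e[:, None], axis=0) * dt
```

Because only the direction of the gradient enters, rescaling `grad_V` by any positive factor leaves the update unchanged, which is precisely what makes the unknown magnitude of $\Delta V_{d,o}(t)$ irrelevant.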
### Gradient Approximation We use an approximation of $V_o\left(t\right)$ which is affine in ${\mathbf{w}}_{o}$ and given as $$\begin{aligned} \widehat{{\mathbf{d}}}_o\left(t\right) &= {\mathbf{c}}_{o}\left(t\right)*\widehat{h}\left(t\right) \label{eqApproxVecdo}\\ \Rightarrow V_o\left(t\right) &\approx \widehat{V}_o\left(t\right) = E_L + {\mathbf{w}}^T_{o} \widehat{{\mathbf{d}}}_o\left(t\right), \label{eqApproxVo}\end{aligned}$$ where $\widehat{h}\left(t\right)= \left(\nicefrac{1}{C_m}\right)\exp\left(-t/\tau_L'\right)u\left(t\right)$ with $\tau_L' \leq \tau_L$. Here, $\tau_L'$ is a hyper-parameter of the learning rule that needs to be determined empirically. Similarly $V_{h,i}\left(t\right)$ can be approximated as $$\begin{aligned} \widehat{{\mathbf{d}}}_{h}\left(t\right) &= {\mathbf{c}}_{h}\left(t\right)*\widehat{h}\left(t\right)\\ \Rightarrow V_{h,i}\left(t\right) &\approx \widehat{V}_{h,i}\left(t\right) = E_L + {\mathbf{w}}^T_{h,i} \widehat{{\mathbf{d}}}_{h}\left(t\right). \label{eqApproxVhi}\end{aligned}$$ Note that $\widehat{V}_{h,i}\left(t\right)$ and $\widehat{V}_{o}\left(t\right)$ are linear in the weight vectors ${\mathbf{w}}_{h,i}$ and ${\mathbf{w}}_{o}$ of the corresponding input synapses, respectively. From Eq. , we approximate $\nabla_{{\mathbf{w}}_{o}}V_o\left(t\right)$ as $$\begin{aligned} \nonumber \nabla_{{\mathbf{w}}_{o}}V_o\left(t\right) &\approx \nabla_{{\mathbf{w}}_{o}}\widehat{V}_o\left(t\right) \\ &= \widehat{{\mathbf{d}}}_o\left(t\right). \label{eqGradWoVo}\end{aligned}$$ Similarly $\nabla_{{\mathbf{w}}_{h,i}}V_o\left(t\right)$ can be approximated as $$\begin{aligned} \nonumber \nabla_{{\mathbf{w}}_{h,i}}V_o\left(t\right) &\approx \nabla_{{\mathbf{w}}_{h,i}}\widehat{V}_o\left(t\right) \\ &= w_{o,i} \left(\nabla_{{\mathbf{w}}_{h,i}} \widehat{d}_{o,i}\left(t\right)\right),\end{aligned}$$ since only $\widehat{d}_{o,i}\left(t\right)$ depends on ${\mathbf{w}}_{h,i}$. Thus, from Eq.
, we get $$\begin{aligned} \nabla_{{\mathbf{w}}_{h,i}}V_o\left(t\right) &\approx w_{o,i} \left(\nabla_{{\mathbf{w}}_{h,i}} c_{o,i}\left(t\right) * \widehat{h}\left(t\right)\right). \label{eqNablaVo}\end{aligned}$$ We know that $c_{o,i}\left(t\right) = \sum_{s} \alpha \left(t-t_{h,i}^{s} \right)$, where $t_{h,i}^{s}$ denotes the $s^{th}$ spiking instant of the $i^{th}$ neuron in the hidden layer. Using the chain rule of differentiation, we get $$\begin{aligned} \nabla_{{\mathbf{w}}_{h,i}} c_{o,i}\left(t\right) \approx \left( \sum_{s} \delta\left(t-t_{h,i}^s\right) \frac{\widehat{{\mathbf{d}}}_{h}\left(t_{h,i}^s\right)}{V_{h,i}'\left(t_{h,i}^s\right)} \right)* \alpha ' \left(t\right). \label{eqGradcoi}\end{aligned}$$ Refer to the appendix \[app\_grad\_approx\] for a detailed derivation of Eq. . Using Eq.  and , we obtain an approximation to $\nabla_{{\mathbf{w}}_{h,i}}V_o\left(t\right)$ as $$\begin{aligned} \nabla_{{\mathbf{w}}_{h,i}} V_o\left(t\right) &\approx w_{o,i} \cdot \left( \sum_{s}\delta\left(t-t_{h,i}^s\right) \frac{\widehat{{\mathbf{d}}}_{h}\left(t_{h,i}^s\right)}{V_{h,i}'\left(t_{h,i}^s\right)} \right)* \left(\alpha ' \left(t\right)* \widehat{h}\left(t\right) \right). \label{eqNablaVo2}\end{aligned}$$ Note that the key enabling idea in the derivation of the above learning rule is the use of the inverse of the time rate of change of the neuronal membrane potential to capture the dependency of its spike time on its membrane potential, as shown in the appendix \[app\_grad\_approx\] in detail. Spatio-Temporal Error Backpropagation ------------------------------------- Incorporating the approximation from Eq.  into Eq. , we get the weight adaptation rule for ${\mathbf{w}}_{o}$ as $$\begin{aligned} \Delta {\mathbf{w}}_{o} = r_{o}\int_{0}^T \frac{ \widehat{{\mathbf{d}}}_{o}\left(t\right)}{\| \widehat{{\mathbf{d}}}_{o}\left(t\right)\|} e\left(t\right)dt. \label{del_wo}\end{aligned}$$ Similarly incorporating the approximation made in Eq.  into Eq.
, we obtain the weight adaptation rule for ${\mathbf{w}}_{h,i}$ as $$\begin{aligned} \Delta {\mathbf{w}}_{h,i} &= r_h \cdot w_{o,i} \cdot \int_{t=0}^T \left( \left( \sum_{s} \delta\left(t-t_{h,i}^s\right) \frac{\widehat{{\mathbf{d}}}_{h}\left(t_{h,i}^s\right)}{V_{h,i}'\left(t_{h,i}^s\right)} \right)* \alpha ' \left(t\right)* \widehat{h}\left(t\right) \right) e\left(t\right) dt. \label{eqBackPdw}\end{aligned}$$ Thus, the adaptation rule for the weight matrix $W_{h}$ is given as $$\begin{aligned} \Delta W_{h} &= r_h \cdot \int_{t=0}^T \left( \left( U_{h}(t) {\mathbf{w}}_{o} \widehat{{\mathbf{d}}}_{h}^{T}\left(t\right) \right)* \alpha ' \left(t\right)* \widehat{h}\left(t\right) \right) e\left(t\right) dt, \label{eqBackPdw1}\end{aligned}$$ where $U_{h}(t)$ is an $m \times m$ diagonal matrix with the $i^{th}$ diagonal entry given as $$\begin{aligned} u_{h,ii}(t) = \frac{ \sum_{s}\delta\left(t-t_{h,i}^s\right)}{V_{h,i}'\left(t\right)}.\end{aligned}$$ Note that Eq.  requires ${\mathcal{O}}(mn)$ convolutions to compute $\Delta W_{h}$. Using the identity (derived in appendix \[app\_conv\_proof\]) $$\begin{aligned} \int_{t} \left( x\left(t\right) * y\left(t\right) \right) z\left(t\right) dt = \int_{t} \left( z\left(t\right) * y\left(-t\right) \right) x\left(t\right) dt,\end{aligned}$$ equation  can be equivalently written in the following form, which lends itself to a more efficient implementation involving only ${\mathcal{O}}(1)$ convolutions. $$\begin{aligned} \Delta W_{h} &= r_h \cdot \int_{t=0}^T \left( e\left(t\right) * \alpha ' \left(-t\right)* \widehat{h}\left(-t\right) \right) U_{h}(t) {\mathbf{w}}_{o} \widehat{{\mathbf{d}}}_{h}^{T} \left(t\right) dt. \label{eqBackPdw2}\end{aligned}$$ Rearranging the terms as follows brings forth the inherent process of *spatio-temporal backpropagation of error* happening during NormAD based training.
$$\begin{aligned} \Delta W_{h} &= r_h \cdot \int_{t=0}^T U_{h}(t) \left( \left( {\mathbf{w}}_{o} e\left(t\right) \right) * \alpha ' \left(-t\right)* \widehat{h}\left(-t\right) \right) \widehat{{\mathbf{d}}}_{h}^{T} \left(t\right) dt.\end{aligned}$$ Here spatial backpropagation is done through the weight vector ${\mathbf{w}}_{o}$ as $${\mathbf{e}}^{spat}_{h}(t) = {\mathbf{w}}_{o} e(t)$$ and then temporal backpropagation by convolution with time reversed kernels $\alpha ' (t)$ and $\widehat{h} (t)$ and sampling with $U_{h}(t)$ as $${\mathbf{e}}^{temp}_{h}(t) = U_{h}(t) \left({\mathbf{e}}^{spat}_{h}(t) * \alpha ' \left(-t\right) * \widehat{h}\left(-t\right)\right).$$ It will be more evident when we generalize it to SNNs with arbitrarily many hidden layers. From Eq. , note that the weight update for synapses of a neuron in hidden layer depends on its own spiking activity thus suggesting the spike-induced nature of weight update. However, in case all the spikes of the hidden layer vanish in a particular training iteration, there will be no spiking activity in the output layer and as per Eq.  the weight update $\Delta {\mathbf{w}}_{h,i}={\mathbf{0}}$ for all subsequent iterations. To avoid this, regularization techniques such as constraining the average spike rate of neurons in the hidden layer to a certain range can be used, though it has not been used in the present work. ![Fully connected feedforward SNN with $L$ layers ($N_{0} \rightarrow N_{1} \rightarrow N_{2} \cdots N_{L-1} \rightarrow 1$)[]{data-label="figMultiLayerSNN_cartoon"}](multiLayer_nn_2.png) ### Generalization to Deep SNNs {#spatTempBackProp} For the case of feedforward SNNs with two or more hidden layers, the weight update rule for output layer remains the same as in Eq. . Here, we provide the general weight update rule for any particular hidden layer of an arbitrary fully connected feedforward SNN $N_{0} \rightarrow N_{1} \rightarrow N_{2} \cdots N_{L-1} \rightarrow 1$ with L layers as shown in Fig. 
\[figMultiLayerSNN\_cartoon\]. This can be obtained by the straightforward extension of the derivation for the case with a single hidden layer discussed above. For this discussion, the subscript *h* or *o* indicating the layer of the corresponding neuron in the previous discussion is replaced by the layer index to accommodate an arbitrary number of layers. The iterative weight update rule for synapses connecting neurons in layer $l-1$ to neurons in layer $l$ viz., $W_{l}$ $(0 < l < L)$ is given as follows: $$\begin{aligned} \label{eqSptTmpBackP} \Delta W_{l} = r_h \int_{t=0}^{T} {\mathbf{e}}^{temp}_{l}(t) {\mathbf{\widehat{d}}}_{l}^{T}(t) dt \qquad \text{for } 0<l<L,\end{aligned}$$ where $${\mathbf{e}}^{temp}_{l}(t) = \begin{cases} U_{l}(t) \left({\mathbf{e}}^{spat}_{l}(t) * \alpha ' \left(-t\right) * \widehat{h}\left(-t\right)\right) & 0<l<L\\ e(t) & l = L, \end{cases} \label{eqTemporalBackProp}$$ performs *temporal backpropagation* following the *spatial backpropagation* as $${\mathbf{e}}^{spat}_{l}(t) = W_{l+1}^{T} {\mathbf{e}}^{temp}_{l+1}(t) \qquad \text{for } 0<l<L. \label{eqSpaialBackProp}$$ Here $U_{l}(t)$ is an $N_{l} \times N_{l}$ diagonal matrix with the $n^{th}$ diagonal entry given as $$\begin{aligned} \label{eqInvDiffV} u_{l,nn}(t) = \frac{ \sum_{s}\delta\left(t-t_{l,n}^s\right)}{V_{l,n}'\left(t\right)},\end{aligned}$$ where $V_{l,n}\left(t\right)$ is the membrane potential of the $n^{th}$ neuron in layer $l$ and $t_{l,n}^s$ is the time of its $s^{th}$ spike. From Eq. , note that temporal backpropagation through layer $l$ requires ${\mathcal{O}}\left(N_{l}\right)$ convolutions. Numerical validation {#validate} ==================== In this section we validate the applicability of NormAD based spatio-temporal error backpropagation to the training of multi-layer SNNs. The algorithm comprises Eqs.  - .
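The reduction from ${\mathcal{O}}(mn)$ to ${\mathcal{O}}(1)$ convolutions in the derivation above rests on the identity $\int_t (x*y)\,z\,dt = \int_t (z*y(-t))\,x\,dt$. Before turning to the learning experiments, here is a small numerical check of its discrete analogue (our own sketch; the index arithmetic only tracks the supports of the full convolutions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x, y, z = rng.standard_normal((3, N))

# LHS: sum_t (x*y)(t) z(t). The full convolution x*y is supported on
# 0..2N-2 while z is supported on 0..N-1, so only the first N samples
# of the full convolution contribute.
lhs = np.dot(np.convolve(x, y)[:N], z)

# RHS: sum_t (z * y(-t))(t) x(t). Time-reversing y shifts its support
# to -(N-1)..0, so the samples at t = 0..N-1 sit at indices N-1..2N-2
# of the full convolution of z with the reversed kernel.
rhs = np.dot(np.convolve(z, y[::-1])[N - 1:], x)

assert np.isclose(lhs, rhs)
```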
XOR Problem {#XORtraining} ----------- The XOR problem is a prominent example of a non-linear classification problem that cannot be solved with a single-layer neural network architecture and hence necessarily requires a multi-layer network. Here, we present how the proposed NormAD based training was employed to solve a spike domain formulation of the XOR problem for a multi-layer SNN. The XOR problem is similar to the one used in [@bohte2002error] and represented by Table \[tableXOR\]. There are $3$ input neurons and $4$ different input spike patterns given in the $4$ rows of the table, where temporal encoding is used to represent logical $0$ and $1$. The numbers in the table represent the arrival time of spikes at the corresponding neurons. The bias input neuron always spikes at $t=0\,$ms. The other two inputs can have two types of spiking activity viz., presence or absence of a spike at $t=6\,$ms, representing logical $1$ and $0$ respectively. The desired output is coded such that an early spike (at $t=10\,$ms) represents a logical $1$ and a late spike (at $t=16\,$ms) represents a logical $0$.

  ------ ----------- ----------- -----------------
   Bias   Input $1$   Input $2$   spike time (ms)
    0         -           -             16
    0         -           6             10
    0         6           -             10
    0         6           6             16
  ------ ----------- ----------- -----------------

  : XOR Problem set-up from [@bohte2002error], which uses the arrival time of a spike to encode logical $0$ and $1$.[]{data-label="tableXOR"}

In the network reported in [@bohte2002error], the three input neurons had $16$ synapses with axonal delays of $0,1,2,...,15\,$ms respectively. Instead of having multiple synapses we use a set of $18$ different input neurons for each of the three inputs such that when the first neuron of the set spikes, the second one spikes after $1\,$ms, the third one after another $1\,$ms and so on. Thus, there are $54$ input neurons comprising three sets with $18$ neurons in each set.
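The delayed-copies input encoding just described can be sketched as follows. This is our own code; the function name and the set layout are our choices, with neurons 0-17 forming the bias set and neurons 18-35 and 36-53 the two input sets.

```python
def xor_input_raster(in1, in2):
    """Spike times (ms) of the 54 input neurons for one XOR pattern.
    Neuron k of a set fires k ms after the set's trigger; the bias set
    is triggered at t=0 ms, an input set at t=6 ms for logical 1 and
    not at all for logical 0."""
    spikes = {}
    for s, (active, t0) in enumerate([(1, 0), (in1, 6), (in2, 6)]):
        for k in range(18):
            spikes[18 * s + k] = [t0 + k] if active else []
    return spikes
```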
So, a $54 \rightarrow 54 \rightarrow 1$ feedforward SNN is trained to perform the XOR operation in our implementation. Input spike rasters corresponding to the $4$ input patterns are shown in Fig. \[figXorRaster\] (left). ![XOR problem: Input spike raster (left) and corresponding output spike raster (right - blue dots) obtained during NormAD based training of a $54 \rightarrow 54 \rightarrow 1$ SNN with vertical red lines marking the position of desired spikes. The output spike raster is plotted for one in every 5 training iterations for clarity.[]{data-label="figXorRaster"}](xorRaster.png) Weights of synapses from the input layer to the hidden layer were initialized randomly using a Gaussian distribution, with $80\%$ of the synapses having positive mean weight (excitatory) and the rest $20\%$ of the synapses having negative mean weight (inhibitory). The network was trained using NormAD based spatio-temporal error backpropagation. Figure \[figXorRaster\] plots the output spike raster (on right) corresponding to each of the four input patterns (on left), for an exemplary initialization of the weights from the input to the hidden layer. As can be seen, convergence was achieved in less than $120$ training iterations in this experiment. The necessity of a multi-layer SNN for solving an XOR problem is well known, but to demonstrate the effectiveness of NormAD based training of hidden layers as well, we conducted two experiments. For $100$ independent random initializations of the synaptic weights to the hidden layer, the SNN was trained with (i) a non-plastic hidden layer, and (ii) a plastic hidden layer. The output layer was trained using Eq.  in both the experiments. Figures \[meanCorrXor\] and \[stdDevCorrXor\] show the mean and standard deviation respectively of spike correlation against training iteration number for the two experiments.
For the case with a non-plastic hidden layer, the mean correlation reached close to 1, but the non-zero standard deviation represents a sizable number of experiments which did not converge even after $800$ training iterations. When the synapses in the hidden layer were also trained, convergence was obtained for all the $100$ initializations within $400$ training iterations. The convergence criterion used in these experiments was reaching the perfect spike correlation metric of $1.0$. ![Plots of mean and standard deviation of spike correlation metric over $100$ different initializations of $54 \rightarrow 54 \rightarrow 1$ SNN, trained for the XOR problem with non-plastic hidden layer (red asterisk) and plastic hidden layer (blue circles).[]{data-label="figXorCorr"}](xor_corr_mean_allLayers_100nets.png) ![Plots of mean and standard deviation of spike correlation metric over $100$ different initializations of $54 \rightarrow 54 \rightarrow 1$ SNN, trained for the XOR problem with non-plastic hidden layer (red asterisk) and plastic hidden layer (blue circles).[]{data-label="figXorCorr"}](xor_corr_std_allLayers_100nets.png) Training SNNs with 2 Hidden Layers {#multilayerTraining} ---------------------------------- Next, to demonstrate spatio-temporal error backpropagation through multiple hidden layers, we applied the algorithm to train $100 \rightarrow 50 \rightarrow 25 \rightarrow 1$ feedforward SNNs for general spike based training problems. The weights of synapses feeding the output layer were initialized to $0$, while synapses feeding the hidden layers were initialized using a uniform random distribution and with $80\%$ of them excitatory and the rest $20\%$ inhibitory. Each training problem comprised $n=100$ input spike trains and one desired output spike train, all generated to have Poisson distributed spikes with arrival rate $20\,$s$^{-1}$ for inputs and $10\,$s$^{-1}$ for the output, over an epoch duration $T=500\,$ms.
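Such training problems can be generated by accumulating exponential inter-spike intervals; a minimal sketch (our own, with an assumed seeded numpy generator):

```python
import numpy as np

def poisson_spike_train(rate_hz, T_s, rng):
    """Homogeneous Poisson spike train on [0, T): accumulate exponential
    inter-spike intervals with mean 1/rate until the epoch ends."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_hz)
        if t >= T_s:
            return np.array(spikes)
        spikes.append(t)

rng = np.random.default_rng(42)
inputs = [poisson_spike_train(20.0, 0.5, rng) for _ in range(100)]  # n = 100 inputs
target = poisson_spike_train(10.0, 0.5, rng)                        # desired output
```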
Figure \[figMultilayerRaster\] shows the progress of training for an exemplary training problem by plotting the output spike rasters for various training iterations overlaid on plots of vertical red lines denoting the positions of desired spikes. ![Illustrating NormAD based training of an exemplary problem for 3-layer $100 \rightarrow 50 \rightarrow 25 \rightarrow 1$ SNN. The output spike rasters (blue dots) obtained during one in every 20 training iterations (for clarity) are shown, overlaid on plots of vertical red lines marking positions of the desired spikes.[]{data-label="figMultilayerRaster"}](multilayer_training_raster.png) To assess the gain of training hidden layers using NormAD based spatio-temporal error backpropagation, we ran a set of $3$ experiments. For $100$ different training problems for the same SNN architecture as described above, we studied the effect of (i) training only the output layer weights, (ii) training only the outer 2 layers and (iii) training all the $3$ layers. ![Plots showing cumulative number of training problems for which convergence was achieved out of the total $100$ different training problems for 3-layer $100 \rightarrow 50 \rightarrow 25 \rightarrow 1$ SNNs.[]{data-label="figHistMultiLayerCumulativeTrained"}](multilayer_cumulativeTrained_100nets.png) Figure \[figHistMultiLayerCumulativeTrained\] plots the cumulative number of SNNs trained against number of training iterations for the $3$ cases, where the criterion for completion of training is reaching a correlation metric of $0.98$ or above. Figures \[meanCorr\] and \[stdDevCorr\] show plots of mean and standard deviation respectively of spike correlation against training iteration number for the $3$ experiments. As can be seen, in the third experiment when all $3$ layers were trained, all $100$ training problems converged within $6000$ training iterations.
In contrast, the first $2$ experiments have non-zero standard deviation even until $10000$ training iterations indicating non-convergence for some of the cases. In the first experiment, where only synapses feeding the output layer were trained, convergence was achieved only for $71$ out of $100$ training problems after $10000$ iterations. However, when the synapses feeding the top two layers or all three layers were trained, the number of cases reaching convergence rose to $98$ and $100$ respectively, thus demonstrating the effectiveness of the proposed NormAD based training method for multi-layer SNNs. ![Plots of mean and standard deviation of spike correlation metric while partially or completely training 3-layer $100 \rightarrow 50 \rightarrow 25 \rightarrow 1$ SNNs for $100$ different training problems.[]{data-label="figCorrMultiLayerCmp1"}](multilayer_corr_allLayers_100nets.png) ![Plots of mean and standard deviation of spike correlation metric while partially or completely training 3-layer $100 \rightarrow 50 \rightarrow 25 \rightarrow 1$ SNNs for $100$ different training problems.[]{data-label="figCorrMultiLayerCmp1"}](multilayer_corr_std_allLayers_100nets.png) Conclusion {#secConclusion} ========== We developed NormAD based spatio-temporal error backpropagation to train multi-layer feedforward spiking neural networks. It is the spike domain analogue of the error backpropagation algorithm used in second-generation neural networks. The derivation was accomplished by first formulating the corresponding training problem as a non-convex optimization problem and then employing Normalized Approximate Descent based optimization to obtain the weight adaptation rule for the SNN. The learning rule was validated by applying it to train $2$ and $3$-layer feedforward SNNs for a spike domain formulation of the XOR problem and general spike domain training problems respectively.
The main contribution of this work is hence the development of a learning rule for spiking neural networks with an arbitrary number of hidden layers. One of the major hurdles in achieving this has been the problem of backpropagating errors through the non-linear leaky integrate-and-fire dynamics of a spiking neuron. We have tackled this by introducing temporal error backpropagation and quantifying the dependence of the time of a spike on the corresponding membrane potential by the inverse temporal rate of change of the membrane potential. This, together with the spatial backpropagation of errors, constitutes NormAD based training of multi-layer SNNs. The problem of local convergence while training second generation deep neural networks is tackled by unsupervised *pretraining* prior to the application of error backpropagation [@hinton2006fast; @Erhan:2010:WUP:1756006.1756025]. Development of such unsupervised pretraining techniques for deep SNNs is a topic of future research, as NormAD could be applied in principle to develop SNN based autoencoders. Gradient Approximation {#app_grad_approx} ====================== Derivation of Eq. \[eqGradcoi\] is presented below: $$\begin{aligned} \nabla_{{\mathbf{w}}_{h,i}} c_{o,i}\left(t\right) \nonumber &= \sum_{s} \frac{\partial \alpha \left(t-t_{h,i}^s \right)}{\partial t_{h,i}^s} \cdot \nabla_{{\mathbf{w}}_{h,i}} t_{h,i}^s \qquad (\text{from Eq. \eqref{eq_c_oi_of_t}})\\ &= \sum_{s} - \alpha ' \left(t-t_{h,i}^s \right) \cdot \nabla_{{\mathbf{w}}_{h,i}} t_{h,i}^s \label{eqGradcoiOrigin}\end{aligned}$$ To compute $\nabla_{{\mathbf{w}}_{h,i}} t_{h,i}^s$, let us assume that a small change $\delta w_{h,ij}$ in $w_{h,ij}$ led to changes in $V_{h,i}(t)$ and $t_{h,i}^s$ by $\delta V_{h,i}(t)$ and $\delta t_{h,i}^s$ respectively, i.e., $$\begin{aligned} V_{h,i}(t_{h,i}^s + \delta t_{h,i}^s) + \delta V_{h,i}(t_{h,i}^s + \delta t_{h,i}^s) = V_{T}. \label{eq_delta_spike}\end{aligned}$$ From Eq.
, $\delta V_{h,i}(t)$ can be approximated as $$\begin{aligned} \delta V_{h,i}(t) \approx \delta w_{h,ij} \cdot \widehat{d}_{h,j}(t),\end{aligned}$$ hence from Eq. above $$\begin{aligned} \nonumber V_{h,i}(t_{h,i}^s) + \delta t_{h,i}^s V_{h,i}'(t_{h,i}^s) + \delta w_{h,ij} \cdot \widehat{d}_{h,j}(t_{h,i}^s + \delta t_{h,i}^s) &\approx V_{T}\end{aligned}$$ $$\begin{aligned} & \nonumber \implies \frac{\delta t_{h,i}^s}{\delta w_{h,ij}} \approx \frac{-\widehat{d}_{h,j}(t_{h,i}^s + \delta t_{h,i}^s)}{V_{h,i}'(t_{h,i}^s)} \qquad (\text{since } V_{h,i}(t_{h,i}^s) = V_{T} )\\ & \nonumber \implies \frac{\partial t_{h,i}^s}{\partial w_{h,ij}} \approx \frac{-\widehat{d}_{h,j}(t_{h,i}^s)}{V_{h,i}'(t_{h,i}^s)}\\ & \implies \nabla_{{\mathbf{w}}_{h,i}} t_{h,i}^s \approx \frac{-\widehat{{\mathbf{d}}}_{h}\left(t_{h,i}^s\right)}{V_{h,i}'\left(t_{h,i}^s\right)}. \label{eq_grad_sp_time}\end{aligned}$$ Thus, using Eq.  in Eq. , we get $$\begin{aligned} \nabla_{{\mathbf{w}}_{h,i}} c_{o,i}\left(t\right) \nonumber &\approx \sum_{s} \alpha ' \left(t-t_{h,i}^s \right)\frac{\widehat{{\mathbf{d}}}_{h}\left(t_{h,i}^s\right)}{V_{h,i}'\left(t_{h,i}^s\right)}\\ &\approx \left( \sum_{s} \delta\left(t-t_{h,i}^s\right) \frac{\widehat{{\mathbf{d}}}_{h}\left(t_{h,i}^s\right)}{V_{h,i}'\left(t_{h,i}^s\right)} \right)* \alpha ' \left(t\right). \label{eqGradcoiAppendix}\end{aligned}$$ Note that the approximation in Eq.  is an important step towards obtaining the weight adaptation rule for hidden layers, as it now allows us to approximately model the dependence of the spiking instant of a neuron on its inputs using the inverse of the time derivative of its membrane potential.
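The key step above, $\nabla_{{\mathbf{w}}} t^s \approx -\widehat{{\mathbf{d}}}(t^s)/V'(t^s)$, can be sanity-checked numerically on a toy single-synapse potential $V(t) = w\,d(t)$ crossing a threshold $V_T$ (our own sketch; $d$ is a stand-in alpha-like response, not the paper's exact kernel):

```python
import math

def d(t, tau=1.0):
    """Stand-in alpha-like post-synaptic response, increasing on [0, tau]."""
    return (t / tau) * math.exp(1.0 - t / tau)

def d_prime(t, tau=1.0):
    return math.exp(1.0 - t / tau) * (1.0 - t / tau) / tau

def spike_time(w, V_T=0.8):
    """First threshold crossing of V(t) = w*d(t) on the rising branch,
    located by bisection on [0, tau]."""
    lo, hi = 1e-9, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if w * d(mid) < V_T:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w, eps = 1.0, 1e-6
t_star = spike_time(w)
fd = (spike_time(w + eps) - spike_time(w - eps)) / (2.0 * eps)  # dt*/dw, numerically
formula = -d(t_star) / (w * d_prime(t_star))                    # the expression above
```

The central finite difference and the closed-form expression agree, and both are negative: increasing the weight makes the neuron cross threshold earlier.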
Proof of the Convolution Identity {#app_conv_proof} ================================= Given three functions $x(t)$, $y(t)$ and $z(t)$, we show that $$\begin{aligned} \int_{t} \left( x\left(t\right) * y\left(t\right) \right) z\left(t\right) dt = \int_{t} \left( z\left(t\right) * y\left(-t\right) \right) x\left(t\right) dt.\end{aligned}$$ By the definition of linear convolution, $$\begin{aligned} \int_{t} \left( x\left(t\right) * y\left(t\right) \right) z\left(t\right) dt &= \int_{t} \left( \int_{u} x\left(u\right) y\left(t - u\right) du \right) z\left(t\right) dt.\end{aligned}$$ Changing the order of integration, we get $$\begin{aligned} \int_{t} \left( x\left(t\right) * y\left(t\right) \right) z\left(t\right) dt &= \int_{u} x\left(u\right) \left( \int_{t} y\left(t - u\right) z\left(t\right) dt \right) du\\ &= \int_{u} x\left(u\right) \left( y\left(-u\right) * z\left(u\right) \right) du\\ &= \int_{t} \left( z\left(t\right) * y\left(-t\right) \right) x\left(t\right) dt.\end{aligned}$$ Acknowledgment {#acknowledgment .unnumbered} ============== This research was supported in part by the U.S. National Science Foundation through the grant 1710009. The authors acknowledge the invaluable insights gained during their stay at the Indian Institute of Technology, Bombay, where the initial part of this work was conceived and conducted as a part of a master’s thesis project. We also acknowledge the reviewer comments which helped us expand the scope of this work and bring it to its present form. [10]{} Wolfgang Maass. Networks of spiking neurons: the third generation of neural network models. , 10(9):1659–1671, 1997. Sander M. Bohte. The evidence for neural information processing with precise spike-times: A survey. 3(2):195–206, May 2004. Patrick Crotty and William B Levy. Energy-efficient interspike interval codes. , 65:371 – 378, 2005. Computational Neuroscience: Trends in Research 2005. William Bialek, David Warland, and Rob de Ruyter van Steveninck. . MIT Press, Cambridge, Massachusetts, 1996. Wulfram Gerstner, Andreas K. Kreiter, Henry Markram, and Andreas V. M. Herz.
Neural codes: Firing rates and beyond. , 94(24):12740–12741, 1997. Steven A. Prescott and Terrence J. Sejnowski. Spike-rate coding and spike-time coding are affected oppositely by different adaptation mechanisms. , 28(50):13649–13661, 2008. Simon Thorpe, Arnaud Delorme, and Rufin Van Rullen. Spike-based strategies for rapid processing. , 14(6):715–725, 2001. Paul A Merolla, John V Arthur, Rodrigo Alvarez-Icaza, Andrew S Cassidy, Jun Sawada, Filipp Akopyan, Bryan L Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. , 345(6197):668–673, 2014. Jeff Gehlhaar. Neuromorphic processing: A new frontier in scaling computer architecture. In [*Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS)*]{}, pages 317–318, 2014. Michael Mayberry. Intel’s new self-learning chip promises to accelerate artificial intelligence, Sept 2017. Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. , 18(7):1527–1554, 2006. Navin Anwani and Bipin Rajendran. NormAD - normalized approximate descent based supervised learning rule for spiking neurons. In [*Neural Networks (IJCNN), 2015 International Joint Conference on*]{}, pages 1–8. IEEE, 2015. Sander M Bohte, Joost N Kok, and Han La Poutre. Error-backpropagation in temporally encoded networks of spiking neurons. , 48(1):17–37, 2002. Olaf Booij and Hieu tat Nguyen. A gradient descent rule for spiking neurons emitting multiple spikes. , 95(6):552–558, 2005. Filip Ponulak and Andrzej Kasinski. Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. , 22(2):467–510, 2010. A. Taherkhani, A. Belatreche, Y. Li, and L. P. Maguire. DL-ReSuMe: A delay learning-based remote supervised method for spiking neurons. , 26(12):3137–3149, Dec 2015.
H[é]{}l[è]{}ne Paugam-Moisy, R[é]{}gis Martinez, and Samy Bengio. A supervised learning approach based on STDP and polychronization in spiking neuron networks. In [*ESANN*]{}, pages 427–432, 2007. I. Sporea and A. Grüning. Supervised learning in multilayer spiking neural networks. , 25(2):473–509, Feb 2013. Yan Xu, Xiaoqin Zeng, and Shuiming Zhong. A new supervised learning algorithm for spiking neurons. , 25(6):1472–1511, 2013. Qiang Yu, Huajin Tang, Kay Chen Tan, and Haizhou Li. Precise-spike-driven synaptic plasticity: Learning hetero-association of spatiotemporal spike patterns. , 8(11):e78318, 2013. Ammar Mohemmed, Stefan Schliebs, Satoshi Matsuda, and Nikola Kasabov. SPAN: Spike pattern association neuron for learning spatio-temporal spike patterns. , 22(04), 2012. J. J. Wade, L. J. McDaid, J. A. Santos, and H. M. Sayers. S[W]{}[A]{}[T]{}: A spiking neural network training algorithm for classification problems. , 21(11):1817–1830, Nov 2010. Xiurui Xie, Hong Qu, Guisong Liu, Malu Zhang, and Jürgen Kurths. An efficient supervised training algorithm for multilayer spiking neural networks. , 11(4):1–29, 04 2016. Xianghong Lin, Xiangwen Wang, and Zhanjun Hao. Supervised learning in multilayer spiking neural networks with inner products of spike trains. , 2016. Stefan Schliebs and Nikola Kasabov. Evolving spiking neural network—a survey. , 4(2):87–98, Jun 2013. Snjezana Soltic and Nikola Kasabov. Knowledge extraction from evolving spiking neural networks with rank order population coding. , 20(06):437–445, 2010. PMID: 21117268. Raoul-Martin Memmesheimer, Ran Rubin, Bence P. Ölveczky, and Haim Sompolinsky. Learning precisely timed spikes. , 82(4):925 – 938, 2014. Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. Training deep spiking neural networks using backpropagation. , 10:508, 2016. Yan Xu, Xiaoqin Zeng, Lixin Han, and Jing Yang. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks. , 43:99–113, 2013.
R[ă]{}zvan V Florian. The chronotron: a neuron that learns to fire temporally precise spike patterns. , 7(8):e40233, 2012. Alan L Hodgkin and Andrew F Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. , 117(4):500, 1952. Richard B Stein. Some models of neuronal variability. , 7(1):37, 1967. M. C. W. van Rossum. A novel spike distance. , 13(4):751–763, 2001. Filip Ponulak and Andrzej Kasiński. Supervised learning in spiking neural networks with [R]{}e[S]{}u[M]{}e: Sequence learning, classification, and spike shifting. , 22(2):467–510, February 2010. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why does unsupervised pre-training help deep learning? , 11:625–660, March 2010. [^1]: Corresponding author
{ "pile_set_name": "ArXiv" }
---
abstract: 'We will make the case that *pedal coordinates* (instead of polar or Cartesian coordinates) provide a more natural setting in which to study force problems of classical mechanics in the plane. We will show that the trajectory of a test particle under the influence of central and Lorentz-like forces can be translated into pedal coordinates at once, without the need of solving any differential equation. This will allow us to generalize Newton's theorem of revolving orbits to include nonlocal transforms of curves. Finally, we apply the developed methods to solve the "dark Kepler problem", i.e. the central force problem where, in addition to the central body, the gravitational influences of dark matter and dark energy are assumed.'
address: ' Mathematical Institute, Silesian University in Opava, Na Rybnicku 1, 746 01 Opava, Czech Republic'
author:
- Petr Blaschke
title: 'Pedal coordinates, Dark Kepler and other force problems'
---

[^1]

Introduction {#Intro}
============

Since the time of Isaac Newton it has been known that conic sections offer a full description of the trajectories of the so-called Kepler problem – i.e. the central force problem where the force varies inversely as the square of the distance: $ F\propto \frac{1}{r^2}. $ There is also another force problem for which the trajectories are fully described: Hooke's law, where the force varies in proportion to the distance: $ F\propto r. $ (This law is usually used in the context of material science, but it can also be interpreted as a problem of celestial mechanics, since by Newton's shell theorem such a force is exactly the gravity produced inside a spherically symmetric, homogeneous bulk of dark matter.) Solutions of Hooke's law are also conic sections, but with the distinction that the origin is now at the center (instead of the focus) of the conic section. But save for the law of gravity and Hooke's law, there seems to be no other force problem whose trajectories are fully described by known curves.
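The claim that Kepler trajectories are conic sections is easy to check numerically. The sketch below is ours, not the paper's (plain Python, leapfrog integration; the initial data, step size and tolerance are illustrative choices): a bound orbit of the inverse-square problem is compared pointwise against the conic $r = p/(1+e\cos\theta)$ determined by its initial conditions.

```python
import math

def kepler_orbit(r0, v0, dt, steps):
    """Leapfrog integration of planar motion under F = -r/|r|^3
    (units with GM = 1, unit mass); returns the list of (x, y) points."""
    (x, y), (vx, vy) = r0, v0
    r3 = (x * x + y * y) ** 1.5
    vx -= 0.5 * dt * x / r3          # half-step kick to start the scheme
    vy -= 0.5 * dt * y / r3
    pts = []
    for _ in range(steps):
        x += dt * vx                 # drift
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx -= dt * x / r3            # kick
        vy -= dt * y / r3
        pts.append((x, y))
    return pts

# Start at perihelion r = 1 with tangential speed 1.2 (a bound orbit).
pts = kepler_orbit((1.0, 0.0), (0.0, 1.2), dt=1e-3, steps=15000)
L = 1.0 * 1.2                        # angular momentum  x*vy - y*vx  at t = 0
p = L * L                            # semi-latus rectum (GM = 1)
e = p / 1.0 - 1.0                    # from r_perihelion = p / (1 + e)
# Worst relative deviation of the computed points from the conic section:
worst = max(abs(math.hypot(x, y) - p / (1 + e * math.cos(math.atan2(y, x))))
            / math.hypot(x, y) for x, y in pts)
```

With these numbers the orbit is an ellipse of eccentricity $e = 0.44$ with the focus at the origin, and `worst` stays far below one percent: the "conic sections describe Kepler trajectories" statement in numerical form.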
Indeed, by Bertrand's theorem [@Bertrand] such a description would be problematic at best, since the theorem states that *no other* central force problem (i.e. one where the force depends only on the distance from the center and points towards it) has the property that all bounded trajectories are also closed. But Newton himself (in *Philosophiæ Naturalis Principia Mathematica*) proved that this effort is not hopeless after all, by showing that there is another force, inversely proportional to the cube of the distance: $ F\propto \frac{1}{r^3}, $ whose contribution can be understood purely geometrically. More precisely, he observed that if a curve is given as a solution of the central force problem $F(r)$, then adding an additional force of the form $ \frac{L^2}{m r^3}(1-k^2), $ where $L$ is the particle's angular momentum and $m$ its mass, is equivalent to taking the curve's $k$-th harmonic. The $k$-th harmonic of a curve is obtained simply by multiplying the angle of every point on the curve by a constant $k$. For example, the following picture shows the second harmonic ($k=2$) and the third subharmonic ($k=\frac{1}{3}$) of an ellipse, where the pole of the polar coordinates is at one focus:

[Figure: the second harmonic ($k=2$) and the third subharmonic ($k=\frac{1}{3}$) of an ellipse, with the pole at one focus.]
(11.237810037040271,1.9451691713705621) – (11.333142140198866,1.98698051286055) – (11.427105959527797,2.031435815505111) – (11.51962888809619,2.078487909350986) – (11.610640021056813,2.12808813356675) – (11.70007016581088,2.180186382151466) – (11.787851851924287,2.2347311488009702) – (11.873919340807284,2.2916695709940877) – (11.958208635167669,2.350947473358291) – (12.040657488244195,2.4125094103697116) – (12.121205412825452,2.4762987084398285) – (12.199793690057046,2.542257507437227) – (12.27636537803852,2.6103268016901593) – (12.350865320209845,2.680446480512414) – (12.42324015352596,2.7525553682921418) – (12.493438316416771,2.8265912641806286) – (12.561410056529244,2.9024909814155633) – (12.627107438247027,2.9801903863106807) – (12.690484349982741,3.059624436941457) – (12.751496511237304,3.1407272215544775) – (12.810101479420284,3.2234319967257443) – (12.866258656424908,3.3076712252915494) – (12.9199292949511,3.393376614073394) – (12.971076504569798,3.4804791514170272) – (13.019665257521655,3.5689091445639027) – (13.065662394243166,3.6585962568715464) – (13.109036628613381,3.7494695448984983) – (13.149758552914456,3.841457495367641) – (13.187800642499305,3.9344880620207565) – (13.223137260159938,4.0284887023757685) – (13.25574466019026,4.123386414397372) – (13.285600992137308,4.219107773090472) – (13.312686304235282,4.315578967025098) – (13.336982546516971,4.412725834800506) – (13.358473573597603,4.51047390145562) – (13.377145147126438,4.608748414831976) – (13.392984937901877,4.707474381895098) – (13.405982527646255,4.806576605019302) – (13.416129410436849,4.905979718240588) – (13.423418993790209,5.00560822348221) – (13.427846599397203,5.105386526756292) – (13.429409463506687,5.205238974345705) – (13.428106736956313,5.3050898889691664) – (13.423939484849122,5.404863605933115) – (13.41691068587545,5.50448450927328) – (13.407025231279771,5.603877067889189) – (13.394289923472835,5.702965871674664) – (13.3787134742898,5.801675667647613) – 
(13.360306502895522,5.899931396082366) – (13.339081533338717,5.997658226648155) – (13.315052991756982,6.0947815945576425) – (13.288237203235328,6.191227236729501) – (13.258652388321071,6.286921227969836) – (13.226318659198567,6.381790017177194) – (13.191258015527424,6.4757604635769574) – (13.153494339948478,6.568759872991084) – (13.113053393261886,6.660716034150113) – (13.06996280928233,6.751557255054852) – (13.02425208937629,6.841212399396201) – (12.975952596686913,6.929610923042264) – (12.925097550052156,7.016682910602894) – (12.871722017621895,7.1023591120829135) – (12.81586291018019,7.186570979636356) – (12.757558974178728,7.269250704435152) – (12.696850784487669,7.350331253667134) – (12.633780736870108,7.429746407679472) – (12.56839304018628,7.507430797285243) – (12.500733708333556,7.583319941252266) – (12.430850551927993,7.657350283995089) – (12.358793169733133,7.729459233492703) – (12.284612939841061,7.799585199456574) – (12.208363010610508,7.86766763177533) – (12.130098291366206,7.93364705926469) – (12.049875442862765,7.997465128753454) – (11.967752867515822,8.059064644538564) – (11.88379069940201,8.118389608244785) – (11.798050794028274,8.175385259127106) – (11.710596717869727,8.229998114856745) – (11.62149373767375,8.282176012834285) – (11.530808809526304,8.331868152076657) – (11.438610567674598,8.379025135727735) – (11.344969313097977,8.423599014245408) – (11.249957001816458,8.465543329321656) – (11.15364723292378,8.5048131585955) – (11.056115236328445,8.541365161222345) – (10.957437860183251,8.575157624367286) – (10.857693557979589,8.606150510693567) – (10.756962375278988,8.634305506921766) – (10.655325936049374,8.659586073539312) – (10.552867428568721,8.681957495744268) – (10.449671590852763,8.701386935711845) – (10.345824695557457,8.71784348627661) – (10.241414534299862,8.731298226127798) – (10.136530401333559,8.741724276620072) – (10.031263076506525,8.749096860306505) – (9.925704807420168,8.753393361305445) – (9.819949290698373,8.754593387617408) – 
(9.714091652264479,8.75267883551294) – (9.608228426512264,8.74763395611665) – (9.502457534244076,8.739445424317022) – (9.396878259235015,8.728102410135694) – (9.29159122326691,8.713596652693457) – (9.186698359459015,8.695922536913788) – (9.082302883704227,8.675077173107335) – (8.978509264000154,8.651060479583096) – (8.875423187442909,8.623875268433531) – (8.773151524628735,8.593527334641259) – (8.67180229118371,8.560025548654675) – (8.571484606115042,8.52338195257791) – (8.472308646648914,8.48361186011755) – (8.374385599188807,8.440733960423355) – (8.277827605995462,8.394770425953602) – (8.182747707154016,8.345747024486315) – (8.089259777356618,8.293693235386023) – (7.997478456988261,8.238642370220813) – (7.907519076961504,8.180631697806362) – (7.8194975767004795,8.119702573731686) – (7.7335304146272765,8.05590057439489) – (7.649734470453985,7.989275635545854) – (7.568226938531572,7.919882195296282) – (7.489125211452536,7.847779341514215) – (7.412546753048101,7.7730309634704735) – (7.338608959863061,7.69570590754679) – (7.267429010132277,7.6158781367495045) – (7.199123699223106,7.53362689369702) – (7.133809260448269,7.44903686666368) – (7.071601170094311,7.3621983581655055) – (7.0126139354531,7.273207455463769) – (6.956960864588822,7.182166202239728) – (6.904753816521632,7.089182770556221) – (6.856102930463538,6.994371632069292) – (6.811116332703622,6.897853727283372) – (6.769899819710535,6.799756631456353) – (6.732556516002908,6.700214715555548) – (6.699186505335328,6.59936930044047) – (6.669886433762207,6.497368802203555) – (6.644749083177415,6.39436886633494) – (6.623862913988441,6.290532488092035) – (6.607311575673379,6.186030116149157) – (6.595173384093371,6.081039736278165) – (6.587520764595916,5.975746931468681) – (6.584419660152626,5.870344914538786) – (6.585928904033533,5.765034528916834) – (6.592099556835675,5.660024212895898) – (6.602974208062591,5.55552992228029) – (6.618586242900394,5.451775005963989) – (6.63895907536119,5.348990028612494) – 
(6.664105349572563,5.247412534271435) – (6.694026111687629,5.147286744409142) – (6.728709955679429,5.048863183628801) – (6.768132147168591,4.952398226074765) – (6.812253730417579,4.85815355542392) – (6.861020624707038,4.766395531316039) – (6.914362717487898,4.677394455158453) – (6.972192962970152,4.591423728462863) – (7.034406496156225,4.508758897260706) – (7.100879773738492,4.429676576723285) – (7.17146975473735,4.35445325091061) – (7.2460131352326735,4.283363943613392) – (7.324325653004736,4.216680757560828) – (7.406201479313344,4.1546712808635835) – (7.4914127163579165,4.0975968614648535) – (7.579709020125898,4.045710752593325) – (7.670817369289977,3.999256134755099) – (7.764442001492531,3.9584640226610346) – (7.860264538685902,3.923551068644655) – (7.957944323107145,3.8947172775525516) – (8.057118984880532,3.8721436517381904) – (8.157405261087849,3.8559897885979395) – (8.258400084358822,3.846391456972951) – (8.35968195655455,3.8434581826038734) – (8.460812619902454,3.8472708765478165) – (8.561339033968192,3.8578795439161384) – (8.660795662117936,3.8753011133178363) – (8.75870706566132,3.8995174298399684) – (8.854590797732296,3.9304734561006645) – (8.947960582257693,3.9680757267122715) – (9.038329756215091,4.012191101241431) – (9.125214945963284,4.062645859318793) – (9.208139940945433,4.1192251788296215) – (9.28663972075326,4.181673034042062) – (9.360264584660671,4.24969254508194) – (9.428584326560323,4.32294680337412) – (9.491192393044695,4.401060189632492) – (9.547709958430072,4.483620191844829) – (9.597789848069352,4.570179720678974) – (9.641120240539692,4.660259909100295) – (9.67742808037232,4.7533533720557175) – (9.706482135997819,4.848927891198919) – (9.728095642519031,4.9464304791806715) – (9.742128475723433,5.045291768386704) – (9.74848881225427,5.144930660538252) – (9.74877659187954,5.194858095281946) – (9.74867683067413,5.201098684624378) – (9.748615639872442,5.204218779751434) – (9.748599164081945,5.204998779696963) – 
(9.74859074947715,5.20538877590072) – (9.74859048462252,5.2054009632410265) – (9.748590418390886,5.2054040100757115) – (9.748590416321033,5.205404105289294)(7.334188721272083,7.690909387218463); (-1.122,9.476) circle (2.5pt); (-1.034,9.993) node [$q = 1$]{}; (-1.0362758620689665,10.422) circle (2.5pt); (-0.946,10.939) node [$p = 2$]{}; (12.26,5.26) circle (2.5pt); (8.58,5.18) circle (2.5pt); (10.031307086719227,7.578003217901507) circle (2.5pt); (10.736,8.057) node [$r$]{}; (11.382320546472164,5.24092001187983) circle (1.5pt); (10.31,6.187) node [$\alpha$]{}; (7.334188721272083,7.690909387218463) circle (1.5pt); (9.14,6.859) node [$k\alpha$]{}; (4.95,0.61) rectangle (14.454,9.674); (0,0) – (1.2453642667683522:0.66) arc (1.2453642667683522:58.81705235472486:0.66) – cycle; (0,0) – (1.2453642667683522:1.32) arc (1.2453642667683522:20.435926962753854:1.32) – cycle; (-1.122,9.476) – (3.278,9.476); (-1.188,10.422) – (3.212,10.422); (10.42,5.22) ellipse (3.0093012461645294cm and 2.3809019278767845cm); (8.58,5.18) circle (2.8029826422986392cm); plot(,[(-13.057096900389318–2.3980032179015085\*)/1.4513070867192344]{}); plot(,[(–18.376–0.08\*)/3.68]{}); (11.206571983299975,6.158688566179811)(9.142294570689792,6.2047310600619525) – (9.142294563732095,6.2047310638798) – (9.142294549816715,6.204731071515512) – (9.142294521985944,6.204731086786924) – (9.142294466324408,6.20473111732976) – (9.142294355001354,6.20473117841546) – (9.14229413235529,6.204731300586941) – (9.142293687063354,6.20473154493025) – (9.142292796480248,6.204732033618268) – (9.142291015317111,6.204733010999913) – (9.142287453003156,6.2047349657856445) – (9.142280328424478,6.204738875446835) – (9.14226607946408,6.204746695128191) – (9.142237582331033,6.204762335926764) – (9.142180591215475,6.204793623267614) – (9.142066621582934,6.204856220926146) – (9.141838732683341,6.204981508166293) – (9.141383156115682,6.205232450465079) – (9.140472806062032,6.205735807344194) – (9.13865530351076,6.206748418260883) – 
(9.135032970229148,6.208797292536236) – (9.127838040027013,6.212990154050129) – (9.113639462170463,6.221760236315846) – (9.085945362893064,6.240867149895145) – (9.054561780390777,6.265949261026407) – (9.02406469300306,6.294041613016844) – (8.994220589802843,6.325204816565728) – (8.964798725716337,6.35947889840794) – (8.9355740118104,6.396883163015096) – (8.906329600146426,6.437416407264063) – (8.876859111153234,6.481057441638486) – (8.846968475924399,6.527765862842477) – (8.816477388466591,6.57748301961184) – (8.78522038193294,6.630133115190607) – (8.753047557287397,6.6856243951903185) – (8.719825002428387,6.743850377042526) – (8.685434944900809,6.80469108576806) – (8.64977568266595,6.8680142693314) – (8.61276133587191,6.933676574725478) – (8.574321459076293,7.001524672723609) – (8.53440054872583,7.071396324759571) – (8.492957475548753,7.143121389646639) – (8.449964866351197,7.216522770928826) – (8.4054084548564,7.2914173077398825) – (8.359286416879254,7.367616613317795) – (8.311608701381179,7.444927865970291) – (8.26239636581938,7.523154557476181) – (8.211680921661687,7.602097203778069) – (8.159503693926059,7.681554022485556) – (8.105915197052658,7.761321581250081) – (8.050974528253033,7.841195420555399) – (7.99474877863533,7.920970653935969) – (7.937312461813706,8.00044254811801) – (7.878746959318899,8.079407085093688) – (7.819139981889357,8.15766150769787) – (7.758585045600171,8.235004849863675) – (7.697180961749648,8.311238452389073) – (7.635031339446579,8.386166464749738) – (7.572244099905978,8.459596333240793) – (7.508931001553169,8.531339275517624) – (7.445207175144166,8.601210741430227) – (7.381190668226814,8.669030859901296) – (7.317001998385687,8.734624871482966) – (7.252763714830582,8.797823546135833) – (7.188599968000298,8.858463585704317) – (7.124636086958752,8.916388010511046) – (7.060998164457786,8.971446529457603) – (6.99781264963001,9.023495892997143) – (6.935205948355098,9.072400228334425) – (6.873304031414424,9.118031356208704) – 
(6.812232050611715,9.160269088623629) – (6.752113963092096,9.199001506904336) – (6.693072164138663,9.23412521948421) – (6.635227128765106,9.265545598851695) – (6.578697062455431,9.29317699711984) – (6.523597561427662,9.3169429397178) – (6.470041282818459,9.336776296743064) – (6.4181376251996864,9.352619431555926) – (6.36799241984718,9.3644243262425) – (6.319707633186001,9.37215268361944) – (6.273381080836229,9.375776005501985) – (6.229106153678858,9.375275647006431) – (6.186971556352955,9.370642846708702) – (6.147061058583372,9.361878732531622) – (6.109453259722965,9.348994303284814) – (6.0742213668749265,9.332010385832527) – (6.065792677539017,9.32712723267109) – (6.0647498815309895,9.326498982409404) – (6.064489558121819,9.326341300390254) – (6.064424500752313,9.326301841174574) – (6.064422467860883,9.326300607824589) – (6.064422213750101,9.32630045365478) – (6.064422150222427,9.326300415112287)(13.428590415577982,5.285404139469087) – (13.428590415577982,5.285404139469087) – (13.428590415577982,5.285404139469087) – (13.428590414718084,5.28540417902439) – (13.428590413338164,5.2854042425001095) – (13.428590410578282,5.28540436945155) – (13.42859040505837,5.285404623354427) – (13.428590394017942,5.285405131160169) – (13.428590371934666,5.285406146771599) – (13.428590327758432,5.285408177994249) – (13.428590239367228,5.285412240438706) – (13.42859006242992,5.285420365324254) – (13.428589707935629,5.285436615081872) – (13.428588996468417,5.285469114543188) – (13.428587563619457,5.285534113249987) – (13.4285846582634,5.285664109799052) – (13.428578688918975,5.285924099429451) – (13.428566115702695,5.286444064742556) – (13.4285384311761,5.287483938963831) – (13.428472909891237,5.289563456873804) – (13.428301259851146,5.293721531267684) – (13.427795546354794,5.302033520104916) – (13.426134674433104,5.318638346163779) – (13.420218052238141,5.351751339517745) – (13.408912046834184,5.390289345868323) – (13.392906128040515,5.428526077504469) – 
(13.372237890188321,5.466389697192695) – (13.346955987857354,5.503809396220135) – (13.317120013391303,5.540715602142044) – (13.282800346884978,5.577040184477085) – (13.244077978935458,5.612716658073183) – (13.20104430649998,5.647680383882546) – (13.153800902251902,5.681868766902838) – (13.102459257871995,5.7152214510628205) – (13.047140501755214,5.747680510854686) – (12.9879750916526,5.7791906395423815) – (12.925102482803581,5.809699333805116) – (12.858670772144912,5.8391570747083) – (12.788836319208613,5.867517504930435) – (12.715763344341767,5.894737602213932) – (12.63962350489469,5.920777849050725) – (12.560595450031196,5.945602398659687) – (12.478864354813243,5.969179237362476) – (12.394621434202275,5.991480343517251) – (12.308063437599223,6.012481843225671) – (12.219392124514211,6.032164163087396) – (12.128813721912776,6.050512180337427) – (12.036538363728601,6.067515370764438) – (11.942779512959516,6.083167954871618) – (11.84775336667462,6.097469042804155) – (11.751678244152455,6.1104227786272975) – (11.65477395824301,6.122038484593179) – (11.557261169897734,6.132330806079975) – (11.459360725641085,6.141319857918443) – (11.36129297756391,6.149031372832736) – (11.26327708520277,6.1554968527063805) – (11.165530298432586,6.160753723330849) – (11.06826722024491,6.164845493190333) – (10.971699048017179,6.167821916666813) – (10.876032791608054,6.16973916179509) – (10.781470466354747,6.170659982335115) – (10.688208258819355,6.170653893431658) – (10.59643566296149,6.169797349467725) – (10.506334584341156,6.168173921853359) – (10.418078410030049,6.16587447338844) – (10.331831042195954,6.162997324459608) – (10.247745893906021,6.159648404644671) – (10.165964846669405,6.155941381279788) – (10.086617170725274,6.151997754191383) – (10.0098184112071,6.147946903132313) – (9.935669246212983,6.143926071561304) – (9.864254326607822,6.140080267399314) – (9.795641112166015,6.13656205849852) – (9.72987872445628,6.133531238076182) – (9.666996843587754,6.131154333706341) – 
(9.607004683335067,6.129603933131662) – (9.549890086791232,6.129057801726667) – (9.495618791869541,6.129697770488143) – (9.44413392174907,6.131708380435254) – (9.39535575858652,6.135275279540554) – (9.349181858256513,6.140583381693149) – (9.305487558378221,6.147814813133878) – (9.264126920622735,6.1571466891506805) – (9.224934131068611,6.168748780887152) – (9.187725359828228,6.182781146826465) – (9.152301054908692,6.19939181371065) – (9.143699132799282,6.203963362664502) – (9.142630633122526,6.204546827622007) – (9.142363739010914,6.2046931130746845) – (9.142297029879659,6.204729710654542) – (9.142294945312054,6.204730854497911) – (9.142294684741497,6.20473099747905) – (9.142294619598873,6.204731033224363) – (9.142294587027564,6.204731051097025) – (9.142294578884737,6.2047310555651904)(11.206571983299977,6.15868856617981); (6.419144847644928,6.9653337793163494)(7.411409584594495,5.154595852596448) – (7.411409584766991,5.1545958446619835) – (7.411409585111961,5.1545958287930524) – (7.411409585801915,5.154595797055192) – (7.4114095871818115,5.1545957335794705) – (7.41140958994157,5.154595606628028) – (7.411409595460996,5.15459535272514) – (7.4114096064994515,5.154594844919356) – (7.411409628574766,5.1545938293077525) – (7.411409672719,5.154591798084407) – (7.411409760981876,5.15458773563716) – (7.4114099374053035,5.154579610740439) – (7.411410289842799,5.154563360938079) – (7.411410993080423,5.154530861297637) – (7.411412393006209,5.154465861873348) – (7.411415166659974,5.154335862447042) – (7.411420609177032,5.154075861250681) – (7.411431075055229,5.153555849220326) – (7.411450330237571,5.1525157845079725) – (7.411482134735206,5.150435475668782) – (7.411518924204396,5.146274005888255) – (7.411485264719881,5.137946582797805) – (7.410989439612766,5.1212652125435625) – (7.408289416703878,5.087727962244758) – (7.402259462534804,5.048007926970382) – (7.39317931575289,5.007550498555966) – (7.381113241416359,4.966123145256971) – (7.36614194789287,4.923506022610257) – 
(7.348361261486166,4.879494545645807) – (7.327880648100749,4.83390152010159) – (7.304821648181664,4.786558810784325) – (7.279316286462761,4.737318550738905) – (7.2515055094254635,4.686053916026749) – (7.221537692409399,4.632659506532967) – (7.189567246563905,4.57705138307857) – (7.1557533445491055,4.51916681566594) – (7.120258773972333,4.458963797846682) – (7.0832489194754045,4.396420379089413) – (7.044890868330102,4.33153386176425) – (7.005352630263072,4.2643199029418355) – (6.964802459770773,4.194811554418153) – (6.923408268079025,4.12305826779237) – (6.8813371118145605,4.049124885411219) – (6.838754746078916,3.9730906327497215) – (6.795825230686284,3.8950481233982477) – (6.752710579639862,3.81510238425626) – (6.709570445322652,3.733369905727234) – (6.666561830262094,3.649977719571516) – (6.6238388206253305,3.565062505499671) – (6.5815523367740845,3.4787697264744217) – (6.539849897237695,3.3912527919403788) – (6.49887539334614,3.302672247734341) – (6.458768872508436,3.2131949911759827) – (6.419666328736827,3.1229935097422397) – (6.381699499517891,3.0322451417433602) – (6.344995668533308,2.941131357509474) – (6.309677474049568,2.849837059736113) – (6.275862723041691,2.758549901805713) – (6.243664211302189,2.667459623084528) – (6.213189549924617,2.5767574003802753) – (6.184540998649144,2.486635214927429) – (6.1578153066242045,2.3972852344399254) – (6.1331035611789115,2.3088992099315417) – (6.110491045221575,2.22166788715088) – (6.090057103883916,2.1357804326096175) – (6.071875021022457,2.05142387429963) – (6.056011906170477,1.9687825572964477) – (6.042528592508313,1.8880376145346274) – (6.031479546388489,1.8093664531147677) – (6.022912788916364,1.7329422565640518) – (6.016869830048146,1.6589335035220056) – (6.013385615626858,1.587503503362726) – (6.012488487734183,1.5188099492939822) – (6.014200158692174,1.4530044894939307) – (6.01853569900471,1.3902323168580621) – (6.02550353948388,1.3306317779331862) – (6.035105487762275,1.2743340016128273) – 
(6.04733675934814,1.2214625481592098) – (6.062186023336821,1.1721330791026654) – (6.079635462849456,1.1264530485492994) – (6.099660850227972,1.0845214164033763) – (6.122231636974805,1.0464283839821649) – (6.147311058386247,1.01225515246857) – (6.174856252790058,0.9820737046109693) – (6.204818395261264,0.9559466100408445) – (6.237142845654739,0.9339268545371757) – (6.24558612501107,0.9290689720323888) – (6.246651603701635,0.9284800093293155) – (6.246918322040526,0.928333403653351) – (6.2469850234084685,0.9282967919264937) – (6.246987107966604,0.9282956480658653) – (6.24698736853697,0.9282955050843764) – (6.246987433679581,0.9282954693390435)(6.064422129766742,9.326300402701737) – (6.064422129766742,9.326300402701737) – (6.064422129766742,9.326300402701737) – (6.064422095940794,9.326300382179394) – (6.064422041659169,9.326300349246486) – (6.064421933095936,9.326300283380638) – (6.06442171596955,9.326300151648816) – (6.0644212817170935,9.326299888184653) – (6.064420413213432,9.32629936125426) – (6.064418676211134,9.326298307385194) – (6.064415202226632,9.326296199613939) – (6.064408254338001,9.326291983938962) – (6.064394358882243,9.326283552059092) – (6.064366569256741,9.326266686179753) – (6.064310995149919,9.326232945942753) – (6.064199867514053,9.326165431556069) – (6.06397769456162,9.326030267136954) – (6.063533677999525,9.325759395755696) – (6.062646962770455,9.325215483141736) – (6.060878808075225,9.324118981089569) – (6.057363635039348,9.32189129059735) – (6.050418098298507,9.317297335443854) – (6.0368683330669555,9.307556565137808) – (6.0111499507245565,9.285876123335392) – (5.9834280609157275,9.256815832264952) – (5.958317039357972,9.22383593416065) – (5.935860301754833,9.187004905285079) – (5.916094842960585,9.146400286096746) – (5.899051118307939,9.102108471301525) – (5.884752940477095,9.054224477090118) – (5.873217392000189,9.002851685953097) – (5.864454753456068,8.948101569501153) – (5.858468447370219,8.890093389750916) – 
(5.8552549977932316,8.828953879365857) – (5.854804005488908,8.764816901367036) – (5.857098138620152,8.697823088848999) – (5.8621131387768415,8.628119465252182) – (5.8698178421459195,8.555859045753323) – (5.880174215579444,8.481200420340032) – (5.893137407271874,8.404307319133494) – (5.908655811713933,8.32534816051386) – (5.926671148546813,8.244495582585921) – (5.947118554898244,8.161925958496656) – (5.969926690741199,8.077818896081173) – (5.995017856777663,7.992356722268033) – (6.022308124314504,7.905723952618626) – (6.05170747656762,7.8181067463065546) – (6.083119960804531,7.729692346762246) – (6.116443850717383,7.640668508112993) – (6.1515718184085,7.551222907440268) – (6.188391115372828,7.461542542752799) – (6.226783761878207,7.37181311643669) – (6.266626744179364,7.282218403792427) – (6.307792219059639,7.192939606105153) – (6.350147725280786,7.104154687521239) – (6.3935564016431226,7.016037694825069) – (6.437877211523016,6.928758059031555) – (6.482965173972103,6.842479877541015) – (6.528671601742916,6.757361175456527) – (6.5748443469611075,6.6735531445571326) – (6.621328055607774,6.591199358376828) – (6.667964432520544,6.5104349618898905) – (6.714592519281702,6.431385834487703) – (6.761048988147115,6.3541677253012985) – (6.8071684560879815,6.278885360539511) – (6.852783824067867,6.205631523450061) – (6.897726647847806,6.134486108857466) – (6.941827547872526,6.065515156084904) – (6.984916667086203,5.998769866530234) – (7.0268241867693915,5.934285615337895) – (7.0673809115512825,5.872080970569091) – (7.1064189354561265,5.812156738062796) – (7.143772400960938,5.754495055771278) – (7.179278362295858,5.699058567612293) – (7.21277776229677,5.645789713527324) – (7.2441165287068285,5.594610179007929) – (7.273146790649276,5.545420553176567) – (7.299728208905055,5.498100248697367) – (7.32372940467399,5.452507738288942) – (7.345029461017338,5.408481160287581) – (7.363519459872313,5.365839338533686) – (7.3791040064780615,5.324383249118147) – 
(7.391702683660287,5.283897948096768) – (7.401251372256526,5.244154950829484) – (7.407703372498075,5.204915026722051) – (7.411030265462337,5.165931345308236) – (7.411372148988362,5.156196087763185) – (7.411401103351324,5.154979007420425) – (7.411407863488906,5.154674727613312) – (7.41140952362066,5.154598657021051) – (7.411409575307046,5.154596279810864) – (7.4114095817670265,5.154595982659573) – (7.41140958338199,5.154595908371749) – (7.411409584189465,5.154595871227837) – (7.411409584391334,5.154595861941859)(6.419144847644928,6.9653337793163494); plot(,[(–5.2084949756711225–0.9786885661798106\*)/2.6265719832999803]{}); (1.2453642667683522:1.32) arc (1.2453642667683522:20.435926962753854:1.32); (1.2453642667683522:1.21) arc (1.2453642667683522:20.435926962753854:1.21); (8.114283169055097,2.4159776545038385)(9.186295844715714,4.1806730873415985) – (9.186295851500914,4.180673091458217) – (9.186295865071324,4.180673099691434) – (9.186295892212142,4.180673116157883) – (9.18629594649378,4.180673149090769) – (9.186296055057076,4.180673214956511) – (9.186296272183712,4.180673346687918) – (9.186296706437195,4.180673610150393) – (9.186297574944987,4.180674137073978) – (9.186299311963888,4.180675190915679) – (9.186302786014966,4.180677298577195) – (9.186309734170218,4.180681513812726) – (9.186323630693119,4.1806899439337295) – (9.186351424588544,4.180706802775599) – (9.186407015778316,4.180740514859037) – (9.186518211757093,4.180807916626811) – (9.186740658139627,4.180942630583024) – (9.187185768829089,4.181211700314593) – (9.188076863700395,4.1817484081478336) – (9.189862561754035,4.182816106070335) – (9.193448105566457,4.184928701575508) – (9.200676695253106,4.189063263152065) – (9.215371098216771,4.196974551140591) – (9.245765220403058,4.2114048878600965) – (9.283178757074419,4.22604281200321) – (9.32275599124405,4.238407888427189) – (9.364666168780797,4.248672038177299) – (9.409059326390793,4.2570150789818015) – (9.456064726703433,4.263622291339096) – 
[Figure: an orbit and its harmonic, with the labels $p = 1$, $q = 3$, $r$, $\alpha$, $k\alpha$ marked.]

Notice that these curves are closed but this does not contradict Bertrand’s theorem, since not *all* harmonics are closed (only those with rational $k$). This theorem (of revolving orbits) remained largely forgotten until 1997, when it was studied in the works [@Bell1], [@Bell2]. A generalization was discovered by Mahomed and Vawda in 2000 [@Mahomed]. They assumed that the radial distance $r$ and the angle $\varphi$ change according to the rule: $$\label{Mahomedtr} r\to \frac{ar}{1-br},\qquad \varphi \to \frac{1}{k}\varphi,$$ where $a,b$ are given constants. They proved that such a transform of the solution is equivalent to changing the force as follows: $$\label{Mohamedeq} F(r)\to \frac{a^3}{(1-br)^2}F{\left( \frac{ar}{1-br} \right)}+\frac{L^2}{m r^3}(1-k^2)-\frac{bL^2}{mr^2},$$ where again $m$ is the particle mass and $L$ its angular momentum. This result is complete as far as “point” transformations are considered. But – as we will prove – it can be generalized to a quite large class of *nonlocal* transforms.
More precisely: \[nonlocalrevolvingorbits\] Consider a transform $T_f$, depending on a smooth function $f$, that maps a planar curve given in polar coordinates $h(r,\varphi)=0$ into the curve $h(\tilde r,\tilde\varphi)=0$ according to the rule: $$\tilde r=f(r),\qquad \tilde \varphi=\frac{1}{k}{\int \limits_{\varphi_0}^{ \varphi}}\frac{f^\prime( r(t)) r(t)^2}{f^2( r(t))}{\rm d}t.$$ Let a plane curve $\gamma$ be a solution to the central force problem $$\ddot x=F(r)\frac{x}{r},\qquad x\cdot \dot x^\perp=L,$$ where $L$ is the angular momentum, $\dot x^\perp$ is the vector perpendicular to $\dot x$ and $r:={\left| x \right|}$. Then the transformed curve $T_f (\gamma)$ is a solution to the central force problem: $$\ddot x={\left( f^\prime F(f)-\frac{L^2 k^2}{r^3}+\frac{L^2 f^\prime}{f^3} \right)}\frac{x}{r},\qquad x\cdot \dot x^\perp=Lk,$$ where $f\equiv f(r)$. Notice that the previous two results are included in this theorem as special cases for $f(r):= r$ (Newton) and $f(r):= \frac{ar}{1-br}$ ([@Mahomed])[^2]. In fact, those are the only instances of $f$ for which the transform $T_f$ is local, i.e. for which the integrand is independent of $r$; up to a constant factor, which can be absorbed into $k$, this means $$\frac{f^\prime r^2}{f^2}=1.$$ Nonlocal transforms are quite natural to consider. Take for example the transform $A_\omega$ which rotates each point on a given curve around some fixed point by an amount proportional to the area swept by the radius vector from some initial angle $\varphi_0$, i.e. $$\tilde r= r,\qquad \tilde \varphi= \varphi- \varphi_0+\omega{\int \limits_{ \varphi_0}^{ \varphi}} r^2(t){\rm d}t.$$ The conservation of angular momentum in central force problems tells us exactly that such an area is proportional to the time it takes the test particle to travel from angle $\varphi_0$ to $\varphi$. Thus the transform $A_\omega$ describes how curves given by a central force problem change when passing to a rotating frame of reference (which is done on a regular basis in celestial mechanics).
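The locality claim above can be checked with a short numerical sketch (an author-added illustration with arbitrarily chosen parameters, not part of the original derivation): for $f(r)=ar/(1-br)$ the integrand $f^\prime r^2/f^2$ is constant in $r$, while a generic $f$ is not.

```python
# Author-added sanity check: the integrand f'(r) r^2 / f(r)^2 of the transform
# T_f is constant in r for the Mahomed-Vawda family f(r) = a r / (1 - b r),
# so T_f is local there (the constant 1/a can be absorbed into k).

def integrand(f, df, r):
    return df(r) * r**2 / f(r)**2

a, b = 2.0, 0.3
f  = lambda r: a * r / (1 - b * r)
df = lambda r: a / (1 - b * r)**2          # f'(r)

# constant in r, equal to 1/a (up to floating error)
print([integrand(f, df, r) for r in (0.1, 0.5, 1.0, 2.0)])

# a generic f, e.g. f(r) = r^2, gives an r-dependent integrand 2/r
g, dg = lambda r: r**2, lambda r: 2 * r
print([integrand(g, dg, r) for r in (0.5, 1.0, 2.0)])
```

Any other member of the family $ar/(1-br)$ behaves the same way, which is exactly why the transform reduces to a point transformation there.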
The main goal of this article is to make the case that results similar to Theorem \[nonlocalrevolvingorbits\] and the problem of classifying orbits are best studied in the so-called *pedal coordinates*. Pedal coordinates ([@Yates; @Edwards]) describe the position of a point $x$ on a given curve $\gamma$ by two numbers $(r,p)$ – where $r$ is the distance of $x$ from the origin and $p$ is the distance of the origin to the tangent of $\gamma$ at $x$. In pedal coordinates, we can even treat more general force problems than central ones. Specifically, we can include the situation when the force has a Lorentz-like (or magnetic) component – i.e. a component pointing in the direction perpendicular to the velocity of a test particle, with magnitude depending on the distance. In fact, as we show, solutions of such force problems can be translated into pedal coordinates *algebraically, without any integration whatsoever*, showing that they are indeed “natural” coordinates for the job. More precisely, we prove the following theorem: \[cfth\] Consider a dynamical system: $$\label{dynsys} \ddot x=F^\prime{\left( {\left| x \right|}^2 \right)}x+2 G^\prime{\left( {\left| x \right|}^2 \right)}{\dot x}^\perp,$$ describing the evolution of a test particle (with position $x$ and velocity $\dot x$) in the plane in the presence of a central potential $F$ and a Lorentz-like potential $G$. The quantities: $$L=x\cdot \dot x^\perp+G{\left( {\left| x \right|}^2 \right)}, \qquad c={\left| \dot x \right|}^2-F{\left( {\left| x \right|}^2 \right)},$$ are conserved in this system. Then the curve traced by $x$ is given in pedal coordinates by $$\frac{{\left( L-G(r^2) \right)}^2}{p^2}=F(r^2)+c,$$ with the pedal point at the origin.
Furthermore, the curve’s image is located in the region given by $$\label{regine} \frac{{\left( L-G(r^2) \right)}^2}{r^2}\leq F(r^2)+c.$$ The structure of the paper is the following: In Section \[pedalsec\] we give an introduction to pedal coordinates since – in the author’s opinion – this subject is largely forgotten, and hence we do not assume any background knowledge on the reader’s part. Theorem \[cfth\] tells us that answering the question:\ *For a given curve $\gamma$, what forces do we have to impose on a test particle to move along it?*\ is straightforward if we are provided with the pedal equation of $\gamma$. In Section \[MTPCS\] we derive an efficient method for translating quite general curves into pedal coordinates, provided that they are given as solutions of an autonomous differential equation (of any order) in polar coordinates. This will be the statement of Proposition \[MPC\] – a result which (to the best of the author’s knowledge) is new. Characterizing orbits in terms of known curves, of course, depends on which curves are known. There are entire books filled with interesting curves (for example [@Yates], [@Lawrence]), but instead of memorizing them all, it is often better to look for some “transforms” that connect them, e.g. pedal, circle inverse, parallel, dual, etc. Some of these classical transforms will be introduced in Sections \[pedalsec\] and \[transsec\], along with a method for translating known transforms from polar coordinates into pedal coordinates (Theorem \[transth\]). Along these lines, in Section \[evolutesec\] we also provide pedal analogues of more advanced transforms like the evolute, involute, contrapedal and catacaustic – which the author was also unable to find in the literature (Proposition \[evolute\]). In Section \[CentralForce\] we give a proof of Theorem \[cfth\] and also (basically as a corollary) a proof of Theorem \[nonlocalrevolvingorbits\]. Finally, we apply these results to concrete examples.
First, we focus on the relativistic version of the Kepler problem in Section \[RKP\], where we successfully classify solutions in terms of $sn$-spirals, introduced in Section \[spiralssec\] – a generalization of the famous sinusoidal spirals. We also tackle the “dark Kepler problem” in Section \[DKP\] (in addition to the central body, gravitational influences of dark matter and dark energy are allowed), where we show that a particular solution is the Cartesian oval, as seen from a rotating frame of reference. Furthermore, we show that additional solutions can be obtained using a nonlocal transform that can be constructed from the rotating frame transforms $A_\omega$ and point transforms. The author would like to thank Miroslav Engliš and Michal Marvan for careful reading of the manuscript and suggesting numerous improvements.

Pedal coordinates {#pedalsec}
=================

Remember that the “pedal coordinates” of a point $x$ on a differentiable curve $\gamma$ in the plane are given by two positive real numbers $(r,p)$, where $r$ is the distance of $x$ from some given point $O$ (the so-called *pedal point*) and $p$ is the distance of $O$ from the tangent line of $\gamma$ at $x$.

[Figure: the pedal coordinates $(r,p)$ of a point $x$ on a curve $\gamma$, together with the contrapedal distance $p_c$ and the points $P(x)$ and $P_c(x)$.]

It is useful to measure also the distance of $O$ to the normal (the “contrapedal coordinate” $p_c$), even though it is not an independent quantity: it relates to $(r,p)$ as $$p_c:=\sqrt{r^2-p^2}.$$ For every point $x\in \gamma$ we can define two additional points, denoted in the picture $P(x)$ and $P_c(x)$, and thus create two additional curves. Let us denote by $P(\gamma)$ the *pedal curve* – i.e. the locus of points $P(x)$.
And by $P_c(\gamma)$ the *contrapedal curve* – i.e. the locus of points $P_c(x)$. In fact, one of the main advantages of pedal coordinates is that the operation of forming the pedal curve (which would in general require solving a differential equation in Cartesian coordinates) can be done by simple algebraic manipulation. Concretely (see [@williamson p. 228]), for any curve $\gamma$ given in pedal coordinates by the equation $$f(p,r)=0,$$ the pedal curve $P(\gamma)$ satisfies the equation $$f{\left( r,\frac{r^2}{p} \right)}=0.$$ The contrapedal curve $P_c(\gamma)$ is much harder to obtain, but for specific examples it can be done as well (as we will see). Even forming the curve’s harmonics, introduced in Section \[Intro\], is easily done in pedal coordinates: $$\begin{aligned} \label{harmtrans} f{\left( \frac{1}{p^2},r \right)}=0&\stackrel{H_k}{\longrightarrow} & f{\left( \frac{k^2}{p^2}-\frac{k^2-1}{r^2},r \right)}=0.\end{aligned}$$ Many additional “transforms” (i.e. operations that bring curves into different curves) can be described easily in pedal coordinates, for example: $$\begin{aligned} \label{translist}f(p,r)=0 &\stackrel{S_\alpha}{\longrightarrow}& f(\alpha p, \alpha r)=0,\\ f(p,r)=0 &\stackrel{I_R}{\longrightarrow}& f{\left( \frac{R p}{r^2},\frac{R}{r} \right)}=0,\\ f{\left( p,r^2 \right)}=0 &\stackrel{E_c}{\longrightarrow}& f{\left( p-c,r^2-2pc+c^2 \right)}=0,\\ f(p,r)=0 &\stackrel{D_R=I_R P}{\longrightarrow}& f{\left( \frac{R }{r},\frac{R}{p} \right)}=0,\\ f{\left( \frac{1}{p^2},r \right)}=0 &\stackrel{F_c= P E_c P^{-1}}{\longrightarrow}& f{\left( \frac{1}{(r-c)^4}{\left( \frac{r^4}{p^2}-2cr+c^2 \right)},r-c \right)}=0,\\ \label{translistp} f{\left( \frac{1}{p^2},r \right)}=0 &\stackrel{E^\star_c:=D_1 E_c D_1 =I_1 F_c I_1}{\longrightarrow}& f{\left( \frac{1}{p^2}-\frac{2c}{r}+c^2,\frac{r}{1-cr} \right)}=0,\end{aligned}$$ where: $S_\alpha$ is the scaling of a curve by a factor $\alpha$.
$I_R$ is the circle inverse with respect to a circle centered at the pedal point with radius $R$. $D_R$ is the dual curve, i.e. a curve in the dual projective space consisting of the set of lines tangent to the original curve. $E_c$ is the parallel (or equidistant) curve at distance $c$. $F_c$ maps every point $x\to x+c\frac{x}{{\left| x \right|}}$, i.e. shifts a curve away from the pedal point $O$ by the fixed amount $c$ (which can be negative). And finally, $E^\star_c$ changes the radial component of a curve by $r\to \frac{r}{1-cr}$. These transformation formulas are not hard to derive, as we will see in Section \[transsec\].

Selected curves
---------------

The simplest curves translate into pedal coordinates as follows:

### Line

The pedal equation of a line is obviously $$p=a,$$ where $a\geq 0$ is the distance of the line from the pedal point $O$. It is important to note that the other coordinate $r$ is *not completely* arbitrary. In addition to being non-negative, it must always satisfy $$p\leq r;$$ in other words, the defining property of Euclidean space that the “shortest path is the straight line” must be obeyed. This puts on the radius $r$ the constraint $$r\geq a.$$

### Point

The complement of a line is the curve given by $$r=a,\qquad a\geq0,$$ which is *not* a circle but actually a point at distance $a$ from $O$. ($r=a$ is a circle in polar coordinates since the other coordinate – the angle $\varphi$ – is arbitrary. In pedal coordinates the other coordinate satisfies $p\leq a$ – which is consistent with the picture that a curve consisting of a single point has an arbitrary tangent line at this point.)

### Circle

Combining these two equations into one $$p=R,\qquad r=R,$$ we arrive at the pedal equation of a circle (with center at the pedal point and radius $R$).
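The transform rules listed above lend themselves to direct numerical verification. The following author-added sketch (the values of $R$ and $d$ are arbitrary) checks the $I_R$ rule on a line $p=d$: by the table, its circle inverse must satisfy $Rp/r^2=d$, i.e. it is a circle passing through the pedal point.

```python
import math

# Author-added check of the I_R rule: inverting the line p = d (taken here as
# the vertical line x = d) in a circle of radius R centered at the pedal point
# O must give a curve satisfying R*p/r^2 = d, by the substitution rule above.

R, d = 2.0, 0.7

def inverted_point_and_tangent(t):
    # point (d, t) on the line, its circle inverse, and the tangent vector of
    # the inverted curve (computed from the explicit derivative in t)
    s = d * d + t * t
    ix, iy = R * d / s, R * t / s
    ux = -2 * R * d * t / s**2
    uy = R * (d * d - t * t) / s**2
    return (ix, iy), (ux, uy)

for t in (-1.3, -0.2, 0.4, 2.5):
    (ix, iy), (ux, uy) = inverted_point_and_tangent(t)
    r2 = ix * ix + iy * iy
    p = abs(ix * uy - iy * ux) / math.hypot(ux, uy)  # distance from O to tangent
    assert abs(R * p / r2 - d) < 1e-9
print("I_R rule verified: the inverse of the line p = d satisfies R*p/r^2 = d")
```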
Unlike in Cartesian coordinates, where a change of origin is accomplished trivially by translation, the position of the pedal point heavily influences the form of the pedal equation, and there is no simple formula telling us how a change of pedal point changes the pedal equation. For example, the pedal equation of a circle with pedal point on the circle is $$2Rp=r^2,$$ and for an arbitrary pedal point it reads $$2Rp=r^2+R^2-a^2,$$ where $a\geq 0$ is the distance of the center of the circle to the pedal point. The same equation can be described using the contrapedal coordinate as $$p_c^2+(p-R)^2=a^2.$$ (These equations are not obvious, but they can be neatly derived once we understand how curvature translates into pedal coordinates.) We can also consider the equation $$p=r,$$ which describes *all* the concentric circles with center at the pedal point. At this point it should be clear that pedal coordinates do not tell us everything about a curve; they actually describe many curves at once – if you choose to differentiate between them. The equation $p=a$ is valid for *any* line at distance $a$, and $r=a$ for any point at distance $a$, etc. Obviously, pedal coordinates do not care about rotation around the pedal point or about the curve’s parametrization, but it is actually not easy to tell in general the nature of the ambiguity associated to a pedal equation – in fact, it differs from equation to equation. As we will see in the next section, what we can tell is that all curves described by a given pedal equation are solutions to a nonlinear first order autonomous differential equation in polar coordinates. This is actually *an advantage* of pedal coordinates over other systems if you are interested only in the general shape of the curve and do not want to be distracted by details.
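The circle equations above can be confirmed pointwise with a small numerical sketch (author-added; $R$ and $a$ are chosen arbitrarily, with the pedal point inside the circle):

```python
import math

# Author-added check of the circle pedal equations; R and a are arbitrary,
# with the pedal point O inside the circle (a < R).
R, a = 1.0, 0.6

for t in (0.0, 0.8, 1.7, 2.9, 4.2):
    x, y = a + R * math.cos(t), R * math.sin(t)   # circle centered at (a, 0)
    ux, uy = -math.sin(t), math.cos(t)            # unit tangent
    r2 = x * x + y * y
    p = abs(x * uy - y * ux)                      # distance from O to tangent
    assert abs(2 * R * p - (r2 + R * R - a * a)) < 1e-12
    assert abs((r2 - p * p) + (p - R)**2 - a * a) < 1e-12   # p_c^2 + (p-R)^2
print("2Rp = r^2 + R^2 - a^2 and p_c^2 + (p - R)^2 = a^2 verified")
```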
### Logarithmic spiral

The next curve satisfying a linear equation in pedal coordinates, $$p=a r,\qquad 0\leq a\leq 1,$$ is the logarithmic spiral, where $a={\left| \sin \alpha \right|}$ and $\alpha$ is the angle between the tangent and the radial line, which is constant for logarithmic spirals.

### Circle involute and Spiral of Archimedes

The curve which is sort of a contrapedal version of a line, i.e. which satisfies $$p_c=a, \qquad a\geq 0,$$ can be shown to be the “involute of a circle” (with pedal point at the center).[^3] This curve is often mistaken for the Spiral of Archimedes, which is actually its pedal, i.e. $$p_c=a\qquad \stackrel{P}{\longrightarrow} \qquad \frac{1}{p^2}=\frac{1}{r^2}+\frac{c^2}{r^4}.$$ The difference between those two curves can be understood as follows: While the legs of the circle involute are parallel: $$p_c=a\qquad \stackrel{E_c}{\longrightarrow} \qquad p_c=a,$$ the legs of the Archimedean spiral are spread constantly in a radial way, i.e. $$\frac{1}{p^2}=\frac{1}{r^2}+\frac{c^2}{r^4}\qquad \stackrel{F_c}{\longrightarrow} \qquad \frac{1}{p^2}=\frac{1}{r^2}+\frac{c^2}{r^4},$$ which is easy to see since $F_c=P E_c P^{-1}$.
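Both claims can be tested numerically from the standard parametrization of the circle involute, $x(t)=a(\cos t+t\sin t,\ \sin t-t\cos t)$ (an author-added sketch; the value of $a$ is arbitrary):

```python
import math

# Author-added check: the circle involute x(t) = a(cos t + t sin t),
# y(t) = a(sin t - t cos t) satisfies p_c = a, and its pedal curve satisfies
# the Archimedean-spiral equation 1/p^2 = 1/r^2 + a^2/r^4.
a = 0.8

for t in (0.5, 1.2, 2.4, 4.0):
    x = a * (math.cos(t) + t * math.sin(t))
    y = a * (math.sin(t) - t * math.cos(t))
    ux, uy = math.cos(t), math.sin(t)      # unit tangent of the involute
    pc = abs(x * ux + y * uy)              # distance from O to the normal line
    assert abs(pc - a) < 1e-12

    # foot of the perpendicular from O to the tangent = point of the pedal
    dot = x * ux + y * uy
    px, py = x - dot * ux, y - dot * uy
    r = math.hypot(px, py)
    dx = math.sin(t) + t * math.cos(t)     # tangent of the pedal curve
    dy = -math.cos(t) + t * math.sin(t)    # (derivative in t, up to scale)
    p = abs(px * dy - py * dx) / math.hypot(dx, dy)
    assert abs(1 / p**2 - (1 / r**2 + a**2 / r**4)) < 1e-9
print("p_c = a (circle involute) and its pedal (Spiral of Archimedes) verified")
```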
Moving to pedal coordinates {#MTPCS}
===========================

It is known that a curve given in polar coordinates $f(\varphi,r)=0$ can be expressed in pedal coordinates by eliminating $\varphi$ from the equations $$f(\varphi,r)=0,\qquad p=\frac{r^2}{\sqrt{r^2+{r_\varphi^\prime}^2}}.$$ In terms of $r_\varphi^\prime$, the second equation gives us $${\left| r_\varphi^\prime \right|}=\frac{r p_c}{p}=P(p_c).$$ Consequently, if a curve can be written as a solution of the differential equation $$f{\left( r,{\left| r_\varphi^\prime \right|} \right)}=0,$$ its pedal equation becomes simply $$f{\left( r,\frac{r}{p}p_c \right)}=0.$$ Or, alternatively, we can say that this curve is the pedal of $$f(p,p_c)=0.$$ As an example take the logarithmic spiral with spiral angle $\alpha$: $$r=a e^{\frac{\cos\alpha}{\sin\alpha} \varphi}.$$ Differentiating with respect to $\varphi$ we obtain $$r_\varphi^\prime= \frac{\cos\alpha}{\sin\alpha} ae^{\frac{\cos\alpha}{\sin\alpha} \varphi}=\frac{\cos\alpha}{\sin\alpha} r,$$ hence $${\left| r_\varphi^\prime \right|}={\left| \frac{\cos\alpha}{\sin\alpha} \right|} r,$$ and thus in pedal coordinates we get $$\frac{r}{p}p_c={\left| \frac{\cos\alpha}{\sin\alpha} \right|} r \qquad \Rightarrow \qquad {\left| \sin\alpha \right|} p_c={\left| \cos\alpha \right|} p,$$ or, using the fact that $p_c^2=r^2-p^2$, we obtain $$p={\left| \sin\alpha \right|}r,$$ as claimed in the previous section. Similarly, the Spiral of Archimedes given by $$r=a\varphi,\qquad a\geq 0,$$ can be written as a differential equation $${\left| r_\varphi^\prime \right|}=a,$$ hence we get $$P{\left( p_c=a \right)}\qquad \Rightarrow \qquad P{\left( r^2=p^2+a^2 \right)}\qquad \Rightarrow \qquad \frac{r^4}{p^2}=r^2+a^2 \qquad \Rightarrow \qquad \frac{1}{p^2}=\frac{1}{r^2}+\frac{a^2}{r^4},$$ as claimed. Notice that the first equality above tells us right away that the Spiral of Archimedes is the pedal of $p_c=a$, which, as claimed above, is indeed the involute of a circle.
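The elimination formula $p=r^2/\sqrt{r^2+{r_\varphi^\prime}^2}$ itself is easy to verify numerically: for any polar curve, the geometric distance from the origin to the tangent line must agree with the formula. A short author-added sketch with an arbitrary test curve $r(\varphi)=2+\cos\varphi$:

```python
import math

# Author-added check of p = r^2/sqrt(r^2 + r'^2) against the geometric
# distance from the origin to the tangent line, for an arbitrary test curve
# r(phi) = 2 + cos(phi).
r  = lambda phi: 2 + math.cos(phi)
dr = lambda phi: -math.sin(phi)            # r'_phi

for phi in (0.3, 1.1, 2.6, 4.0):
    rv, dv = r(phi), dr(phi)
    x, y = rv * math.cos(phi), rv * math.sin(phi)
    tx = dv * math.cos(phi) - rv * math.sin(phi)   # tangent vector of the
    ty = dv * math.sin(phi) + rv * math.cos(phi)   # polar curve
    p_geom = abs(x * ty - y * tx) / math.hypot(tx, ty)
    p_form = rv**2 / math.sqrt(rv**2 + dv**2)
    assert abs(p_geom - p_form) < 1e-12
print("p = r^2/sqrt(r^2 + r'^2) verified")
```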
This approach can be generalized as follows: \[MPC\] A curve $\gamma$ which is a solution of an $n$-th order autonomous differential equation ($n\geq 1$) $$f{\left( r,{\left| r_\varphi^\prime \right|},r_\varphi^{\prime\prime},{\left| r_\varphi^{\prime\prime\prime} \right|},\dots,r_\varphi^{(2j)},{\left| r_\varphi^{(2j+1)} \right|},\dots, r_\varphi^{(n)} \right)}=0,$$ is the pedal of a curve given in pedal coordinates by $$f(p,p_c, p_c p_c^\prime,{\left( p_c p_c^\prime \right)}^\prime p_c,\dots, (p_c\partial_p)^n p)=0.$$ In other words $$\begin{aligned} r&=P(p), & {\left| r_\varphi^\prime \right|}&=P(p_c),\\ r_\varphi^{\prime\prime}&=P(p_c p_c^\prime), & {\left| r_\varphi^{\prime\prime\prime} \right|}&=P{\left( (p_cp_c^\prime)^\prime p_c \right)},\\ &\vdots & &\vdots \\ r_\varphi^{(2j)}&=P{\left( (p_c\partial_p)^{2j}p \right)}, &{\left| r_\varphi^{(2j+1)} \right|}&=P{\left( (p_c\partial_p)^{2j+1}p \right)}, & \forall j\in\mathbb{N}_0. \end{aligned}$$ Since we know that $${r_\varphi^\prime}^2=\frac{r^2p_c^2}{p^2}=P(p_c^2),$$ it follows from the properties of the pedal operation $P$ and the chain rule that $$2P(p_c p_c^\prime)=P{\left( \frac{\partial p_c^2}{\partial p} \right)}=\frac{\partial P(p_c^2)}{\partial P(p)}=\frac{\partial \frac{r^2p_c^2}{p^2} }{\partial r}=\frac{\partial {r_\varphi^\prime}^2}{\partial r}=2r_\varphi^{\prime\prime}.$$ Hence $$P{\left( (p_c\partial_p)^{2n} p \right)}=P{\left( {\left( p_cp_c^\prime\partial_p+p_c^2 \partial_p^2 \right)}^n p \right)}={\left( P{\left( p_cp_c^\prime \right)}\partial_r+P(p_c)^2 \partial_r^2 \right)}^n r ={\left( r_\varphi^{\prime\prime}\partial_r+{r_\varphi^\prime}^2\partial_r^2 \right)}^n r=\partial_\varphi^{2n} r=r_\varphi^{(2n)},$$ and $$P{\left( (p_c\partial_p)^{2n+1} p \right)}=P{\left( (p_c\partial_p)(p_c\partial_p)^{2n} p \right)}=\frac{rp_c}{p}\partial_r P{\left( (p_c\partial_p)^{2n} p \right)}={\left| r_\varphi^\prime \right|}\partial_r r_\varphi^{(2n)}={\rm sgn}(r_\varphi^\prime)\,r_\varphi^{(2n+1)}.$$ Taking the absolute
value of the above finishes the proof. The radius of curvature $\rho$, and hence the curvature $\kappa$, of a curve $\gamma$ given in pedal coordinates can be computed as $$\rho= r r^\prime, \qquad \kappa=\frac{1}{rr^\prime},$$ where the differentiation is with respect to $p$. The signed curvature $\kappa$ is given in polar coordinates by $$\kappa:=\frac{{r^2+2{r_\varphi^\prime}^2-rr_\varphi^{\prime\prime}}}{{\left( r^2+{r_\varphi^\prime}^2 \right)}^{\frac32}}.$$ Notice that the first derivative $r_\varphi^\prime$ appears only raised to the second power, hence Proposition \[MPC\] can be applied and we get $$\kappa=P{\left( \frac{{p^2+2p_c^2-p p_c p_c^\prime}}{{\left( p^2+p_c^2 \right)}^{\frac32}} \right)}=P{\left( \frac{{2r-pr^\prime}}{r^2} \right)}=\frac{{2\frac{r^2}{p}-r\frac{\partial \frac{r^2}{p}}{\partial r}}}{\frac{r^4}{p^2}}=\frac{{2pr^2r^\prime-2r^2r^\prime p+r^3}}{p^2r^\prime\frac{r^4}{p^2}}=\frac{1}{rr^\prime}.$$ We can use the fact that the radius of curvature $\rho$ is a quantity independent of the position of the pedal point to write the pedal equation of a circle of radius $R$: $$\rho=R, \qquad \Rightarrow \qquad rr^\prime=R,$$ and integrating we get $$r^2=2Rp+c,$$ as claimed. \[Kp\] Consider as a next example the *Kepler problem* restricted to the two-dimensional plane, i.e. the solution of the system of ODEs: $$\ddot{x}=-\frac{M}{{\left| x \right|}^{3}}x,\qquad x\in\mathbb{R}^2,$$ where $M$ is the reduced mass. This equation can be written in polar coordinates as $${\left( \frac{1}{r} \right)}_\varphi^{\prime\prime}+\frac{1}{r}=\frac{M}{L^2}, \qquad \dot \varphi=\frac{L}{r^2},$$ where $L$ is the angular momentum. This can, of course, be easily solved, yielding the formula of a conic section.
But taking the circle inverse of the first equation, which in polar coordinates can be done simply by $$r\to \frac{R}{r}, \qquad \varphi\to -\varphi,$$ we get $$I_R{\left( r_\varphi^{\prime\prime}+r=R\frac{M}{L^2} \right)},$$ and moving to pedal coordinates by Proposition \[MPC\] we arrive at: $$I_RP{\left( p_cp_c^\prime+p=\frac{RM}{L^2} \right)}\qquad \Rightarrow \qquad I_RP{\left( r^2=2\frac{RM}{L^2}p+c \right)}\qquad \Rightarrow \qquad \frac{R^2}{p^2}=\frac{2M R^2}{L^2 r}+c.$$ This approach gives us not only the pedal equation of the solution but also a way to construct it. The next-to-last expression instructs us to take the pedal and then the inverse (in other words, the dual) of a circle with radius $\frac{RM}{L^2}$. It is easy to check that when the pedal point is inside the circle we get an ellipse, for an outside pedal point a hyperbola, and for a pedal point on the circle a parabola. It is easy to see that this circle is in fact the inverse of the solution’s circumcircle. \[Covalex\] A Cartesian oval is defined to be the locus of points for which a linear combination of distances from two foci is constant, i.e. $${\left| x \right|}+\alpha{\left| x-a \right|}=C.$$ These curves were first studied by Descartes, who realized that such a shape can be used to produce lenses without spherical aberration. In polar coordinates the equation becomes $$r+\alpha\sqrt{r^2-2{\left| a \right|}r\cos \varphi+{\left| a \right|}^2}=C.$$ Solving for $\cos\varphi$ we get $$2{\left| a \right|}\alpha^2\cos \varphi=r(\alpha^2-1)-\frac{b^2}{r}+2C,\qquad b^2:=C^2-\alpha^2{\left| a \right|}^2.$$ The quantity $b^2$ is indeed positive, since the origin is inside the oval, i.e. $\alpha{\left| a \right|}\leq C$.
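The derived pedal equation of the Kepler orbit can be checked directly on an ellipse with focus at the pedal point (an author-added sketch; dividing out $R^2$, the equation reads $1/p^2=2M/(L^2 r)+\mathrm{const}$, and with semi-latus rectum $\ell$ and $L^2=M\ell$, the constant is $-1/b^2$):

```python
import math

# Author-added check on the exact Kepler ellipse r(phi) = ell/(1 + e cos phi)
# with the focus at the pedal point: dividing the derived equation by R^2 and
# using L^2 = M*ell, the combination 1/p^2 - 2/(ell*r) must be the constant
# -1/b^2, where b^2 = ell^2/(1 - e^2) is the squared semi-minor axis.
ell, e = 1.0, 0.4
b2 = ell * ell / (1 - e * e)

for phi in (0.0, 0.7, 1.9, 3.1, 5.0):
    rv = ell / (1 + e * math.cos(phi))
    dv = ell * e * math.sin(phi) / (1 + e * math.cos(phi))**2   # r'_phi
    p = rv * rv / math.sqrt(rv * rv + dv * dv)                  # pedal distance
    assert abs((1 / p**2 - 2 / (ell * rv)) + 1 / b2) < 1e-9
print("Kepler pedal equation 1/p^2 = 2/(ell*r) - 1/b^2 verified")
```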
Differentiating with respect to $\varphi$ we obtain $$-2{\left| a \right|}\alpha^2\sin\varphi= {\left( \alpha^2-1+\frac{b^2}{ r^2} \right)}r^\prime_\varphi.$$ Thus we arrive at the differential equation: $$4{\left| a \right|}^2\alpha^4=4{\left| a \right|}^2\alpha^4{\left( \cos^2\varphi+\sin^2\varphi \right)}={\left( r(\alpha^2-1)-\frac{b^2}{r}+2C \right)}^2+{\left( \alpha^2-1+\frac{b^2}{ r^2} \right)}^2{r^\prime_\varphi}^2.$$ Applying Proposition \[MPC\] and rearranging we finally obtain: $$\frac{{\left( b^2-(1-\alpha^2)r^2 \right)}^2}{4p^2}=\frac{Cb^2}{r}+(1-\alpha^2)C r -{\left( (1-\alpha^2)C^2+b^2 \right)}.$$ It is worth mentioning that this equation describes two Cartesian ovals (for $\alpha$ and $-\alpha$) simultaneously.

[Figure: a Cartesian oval.]
(12.925017340154549,8.72685765795786) – (12.869660271541429,8.772270792247728) – (12.813677948863294,8.816713265270554) – (12.75708923830173,8.860175788433105) – (12.6999131671673,8.902649466416522) – (12.642168915635786,8.944125800494863) – (12.583875808412555,8.984596691685104) – (12.525053306329463,9.024054443727893) – (12.465720997876486,9.062491765896931) – (12.405898590672425,9.099901775636319) – (12.34560590287753,9.136278001024285) – (12.284862854551074,9.171614383061815) – (12.223689458958452,9.20590527778591) – (12.162105813829744,9.239145458205314) – (12.100132092574423,9.271330116058603) – (12.037788535454881,9.302454863393109) – (11.97509544072251,9.33251573396397) – (11.912073155720028,9.36150918445266) – (11.848742067952863,9.389432095503656) – (11.785122596133903,9.416281772579165) – (11.721235181204442,9.442055946630687) – (11.657100277335408,9.466752774587276) – (11.592738342911609,9.490370839659258) – (11.528169831503478,9.51290915145771) – (11.463415182828998,9.53436714592847) – (11.39849481370982,9.554744685100692) – (11.333429109024937,9.574042056649441) – (11.268238412665362,9.592259973271828) – (11.202943018493654,9.609399571876745) – (11.137563161311432,9.625462412587542) – (11.072119007838605,9.640450477557566) – (11.00663064770809,9.654366169598672) – (10.94111808447906,9.667212310621993) – (10.875601226672728,9.67899213989133) – (10.81009987883386,9.689709312088693) – (10.744633732622027,9.699367895192456) – (10.67922235793555,9.707972368167349) – (10.613885194072338,9.715527618467052) – (10.548641540930596,9.722038939348645) – (10.483510550253506,9.727512026999609) – (10.418511216921273,9.73195297747715) – (10.353662370293762,9.735368283459367) – (10.288982665608023,9.737764830809265) – (10.224490575434057,9.739149894951279) – (10.160204381192111,9.739531137059881) – (10.096142164735571,9.738916600060811) – (10.032321800003444,9.737314704445565) – (9.968760944745261,9.73473424389774) – (9.90547703232288,9.731184380732428) – 
(9.842487263592947,9.726674641148621) – (9.779808598873608,9.721214910294368) – (9.717457749998877,9.714815427143668) – (9.65545117246632,9.707486779188324) – (9.5938050576796,9.699239896940247) – (9.532535325291988,9.69008604824799) – (9.47165761565397,9.680036832425522) – (9.41118728236914,9.669104174193308) – (9.351139384962767,9.657300317431577) – (9.291528681668087,9.644637818747286) – (9.232369622333149,9.631129540851129) – (9.173676341454502,9.616788645748063) – (9.115462651341392,9.601628587739086) – (9.057742035415442,9.585663106234417) – (9.000527641650677,9.568906218377652) – (8.943832276159215,9.551372211481434) – (8.887668396927427,9.533075635273212) – (8.832048107708598,9.51403129395291) – (8.77698315207622,9.494254238058302) – (8.72248490764533,9.473759756142673) – (8.668564380466336,9.452563366260591) – (8.615232199598484,9.430680807264759) – (8.562498611868019,9.4081280299103) – (8.510373476818751,9.384921187770072) – (8.45886626186089,9.36107662795871) – (8.407986037625033,9.336610881665155) – (8.357741473529156,9.311540654496609) – (8.308140833564954,9.285882816630506) – (8.21090233118716,9.232872551960217) – (8.116328388382744,9.177717954379858) – (8.024470146314107,9.120558861845252) – (7.935371938012676,9.061536735495073) – (7.849071214785221,9.000794300002923) – (7.76559849162591,8.938475172684049) – (7.684977312878761,8.874723481829044) – (7.6072242394336245,8.809683474888402) – (7.5323488587616305,8.743499117305003) – (7.460353819099388,8.676313682993786) – (7.3912348890697634,8.608269337693205) – (7.324981043975998,8.53950671667512) – (7.261574579918471,8.47016449857026) – (7.200991256755664,8.400378977375253) – (7.143200470756595,8.330283635016785) – (7.088165457567891,8.260008717170939) – (7.035843525841369,8.18968081535848) – (6.986186321536472,8.119422458639246) – (6.939140122527631,8.049351718505616) – (6.894646162711846,7.979581830813803) – (6.852640984335317,7.910220838760376) – (6.81305681674662,7.841371261018352) – 
(6.775821979253956,7.7731297891501985) – (6.74086130522771,7.7055870183258826) – (6.708096584068873,7.638827215160841) – (6.677447017176895,7.572928126158352) – (6.648829683618805,7.507960829788957) – (6.622160010846921,7.443989634666225) – (6.597352245553166,7.381072025600389) – (6.5743199196027655,7.319258658538648) – (6.552976305965316,7.258593404571234) – (6.533234859673866,7.1991134422980645) – (6.515009639081933,7.140849396972808) – (6.498215703059327,7.083825523976823) – (6.4827694802499245,7.028059933377723) – (6.46858910709846,6.973564851609111) – (6.45559473201259,6.920346915706925) – (6.443708783733225,6.8684074950767355) – (6.432856202723211,6.8177430354415005) – (6.422964635112795,6.7683454194597585) – (6.413964589443339,6.720202338490842) – (6.398376097911457,6.627611856306156) – (6.390586300039552,6.575661614035543) – (6.383665907616533,6.525308824417128) – (6.372097753850706,6.42921132027906) – (6.363057601549593,6.338919291341213) – (6.356018923653124,6.253996413349173) – (6.350535562441563,6.173987508883124) – (6.346233291271562,6.098432264497516) – (6.342800901849147,6.026875424333429) – (6.33998143129692,5.958873844583149) – (6.337563943309664,5.894000894591616) – (6.33537610932811,5.831848711476761) – (6.33327770676967,5.772028783068356) – (6.331155061167902,5.714171272416962) – (6.3289164021125845,5.657923424012166) – (6.3264880719892185,5.602947319076556) – (6.32381151468443,5.548917181796693) – (6.320840972737627,5.495516383446378) – (6.317541831340493,5.442434248105593) – (6.313889562907265,5.389362731900914) – (6.309869244500771,5.335993026732214) – (6.305475640849958,5.282012128648568) – (6.300713867118864,5.227099410032432) – (6.2956006672529865,5.170923243610462) – (6.290166364773562,5.113137745543078) – (6.284457561979294,5.053379735299262) – (6.278540678547804,4.991266052625175) – (6.272506428212602,4.926391427202093) – (6.266475327786999,4.858327163985023) – (6.260604309856899,4.786620983955193) – 
(6.255094461043216,4.7107984398153295) – (6.250199823169939,4.6303663976696825) – (6.246237067412068,4.54481912140537) – (6.243595678160725,4.453647492438815) – (6.242748069101362,4.356351815613072) – (6.243169544104742,4.305257900773919) – (6.244258817353748,4.252458475434814) – (6.246102065208168,4.197901771703287) – (6.248791977904694,4.141540399139567) – (6.252427798086062,4.083332154624335) – (6.257115281744435,4.023240860227628) – (6.262966573277541,3.9612372165331977) – (6.270099987743958,3.897299657211105) – (6.278639695205972,3.8314151893432) – (6.288715304217416,3.763580203195268) – (6.300461343964425,3.693801234868878) – (6.314016647207691,3.6220956656209444) – (6.3295236388728,3.5484923426139083) – (6.347127537760711,3.4730321074333723) – (6.3669754812669215,3.3957682208138755) – (6.389215585082898,3.316766674547691) – (6.413995951501843,3.2361063843819244) – (6.44146364108788,3.1538792606843704) – (6.471763623061815,3.0701901566417895) – (6.505037719791152,2.9851566965726417) – (6.541423560296024,2.8989089894993225) – (6.581053556741327,2.8115892353004264) – (6.624053916574428,2.7233512325120697) – (6.670543701375963,2.634359798117152) – (6.720633941726183,2.5447901104677184) – (6.774426815544474,2.45482698684086) – (6.8320148955274895,2.3646641070920236) – (6.893480469564454,2.274503194498925) – (6.958894936404632,2.1845531642569767) – (7.028318277430615,2.0950292492597535) – (7.10179860417722,2.006152111854458) – (7.179371780234786,1.9181469492499486) – (7.261061115386482,1.831242599240209) – (7.346877129235739,1.7456706519134977) – (7.436817381165063,1.6616645720964305) – (7.530866363205626,1.579458836434914) – (7.579423195757386,1.539104439896289) – (7.628995452264351,1.4992880882670157) – (7.679577506924906,1.4600390026811556) – (7.731162918126561,1.42138631279181) – (7.783744431498968,1.3833590322376865) – (7.837313983703525,1.3459860344909904) – (7.891862706934198,1.309296029095459) – (7.947380934105222,1.2733175383000892) – 
(8.0038582047017,1.2380788740959248) – (8.061283271270046,1.2036081156605851) – (8.119644106526188,1.1699330872140774) – (8.178927911059672,1.1370813362909309) – (8.23912112161318,1.105080112430448) – (8.300209419917095,1.0739563462892745) – (8.362177742060133,1.0437366291766716) – (8.425010288377074,1.0144471930165402) – (8.4886905338358,0.9861138907362722) – (8.55320123890619,0.9587621770854549) – (8.618524460894424,0.9324170898838612) – (8.684641565725993,0.9071032317027677) – (8.75153324016246,0.8828447519781053) – (8.819179504436415,0.859665329558435) – (8.887559725290146,0.8375881556882652) – (8.956652629403852,0.8166359174280928) – (9.02643631719971,0.7968307815118798) – (9.09688827700823,0.7781943786439703) – (9.167985399583866,0.7607477882363387) – (9.239703992957484,0.744511523586948) – (9.31201979761239,0.7295055175023591) – (9.384908001973173,0.7157491083628119) – (9.458343258193622,0.7032610266352456) – (9.532299698233508,0.6920593818322673) – (9.60675095021177,0.6821616499206149) – (9.681670155024891,0.6735846611802647) – (9.757029983219573,0.6663445885149849) – (9.83280265210788,0.660456936217397) – (9.908959943114532,0.655936529188754) – (9.985473219344794,0.6527975026161116) – (10.062313443362829,0.651053292107403) – (10.139451195168563,0.650716624287761) – (10.216856690363754,0.6517995078564249) – (10.294499798495295,0.654313225107623) – (10.372350061565834,0.6582683239159125) – (10.450376712700738,0.663674610187628) – (10.528548694960838,0.6705411407799109) – (10.606834680290207,0.6788762168885136) – (10.685203088588741,0.6886873779051781) – (10.7636221068982,0.6999813957465326) – (10.842059708692057,0.7127642696541726) – (10.920483673257314,0.7270412214683857) – (10.99886160515894,0.7428166913745119) – (11.077160953775156,0.7600943341239708) – (11.155349032893602,0.7788770157294984) – (11.2333930403576,0.7991668106350439) – (11.311260077751514,0.8209649993608106) – (11.388917170114784,0.844272066623102) – (11.46633128567406,0.86908769992862) – 
(11.543469355582493,0.8954107886430825) – (11.620298293655537,0.9232394235335954) – (11.696785016092642,0.9525708967838462) – (11.772896461174271,0.9834017024811172) – (11.84859960892316,1.01572753757433) – (11.923861500719616,1.0495433033012969) – (11.998649258859919,1.0848431070839126) – (12.07293010604739,1.1216202648893496) – (12.14667138480536,1.1598673040553784) – (12.219840576801467,1.199575966577651) – (12.292405322073032,1.240737212856235) – (12.364333438142706,1.28334122589912) – (12.435592939014109,1.3273774159795693) – (12.50615205403688,1.3728344257444807) – (12.575979246631363,1.4197001357699417) – (12.645043232861788,1.4679616705612073) – (12.71331299984856,1.5176054049925902) – (12.780757824009124,1.5686169711836868) – (12.847347289117335,1.6209812658076928) – (12.913051304171685,1.6746824578271617) – (12.977840121062176,1.7297039966527628) – (13.041684352026348,1.7860286207199856) – (13.104554986884517,1.8436383664788494) – (13.166423410045136,1.902514577790983) – (13.2272614172703,1.9626379157288545) – (13.287041232192491,2.0239883687711373) – (13.345735522573106,2.0865452633883637) – (13.403317416294025,2.1502872750124906) – (13.45976051707292,2.21519243938417) – (13.515038919893998,2.281238164270845) – (13.56912722614499,2.3484012415491247) – (13.62200055845259,2.4166578596440864) – (13.673634575207348,2.4859836163185713) – (13.724005484770409,2.5563535318048) – (13.773090059354116,2.6277420622706797) – (13.820865648568018,2.700123113613328) – (13.867310192623668,2.7734700555712957) – (13.912402235190031,2.84775573614767) – (13.956120935892589,2.9229524963355944) – (13.998446082448995,2.9990321851376347) – (14.039358102434464,3.075966174870347) – (14.078838074670376,3.153725376745106) – (14.116867740229633,3.2322802567161784) – (14.153429513052028,3.311600851587092) – (14.188506490164668,3.3916567853655337) – (14.22208246150067,3.472417285857716) – (14.254141919311184,3.553851201492376) – (14.284670067165155,3.635927018364742) – 
(14.31365282853171,3.7186128774906124) – (14.34107685494041,3.80187659226053) – (14.366929533714709,3.8856856660839303) – (14.39119899527436,3.970007310213052) – (14.413874120002083,4.054808461736402) – (14.434944544671367,4.140055801731178) – (14.454400668431228,4.225715773564224) – (14.472233658344612,4.311754601330991) – (14.48843545447737,4.398138308421693) – (14.502998774534957,4.484832736204012) – (14.515917118043848,4.571803562811494) – (14.527184770075976,4.659016322026736) – (14.536796804513264,4.7464364222484825) – (14.544749086850883,4.83402916553162) – (14.551038276537843,4.921759766689014) – (14.555661828852644,5.0095933724442006) – (14.558617996313943,5.097495080623789) – (14.559905829625073,5.18542995937846) – (14.559525178151157,5.273363066421488) – (14.557476689930136,5.361259468273556) – (14.557103422470325,5.372242198780681) – (14.557054933396945,5.3736149580226105) – (14.557042747568198,5.373958144919283) – (14.557039697138539,5.374043941460771) – (14.557038171327818,5.374086839704089) – (14.557037408273496,5.3741082888188885) – (14.557037312884695,5.37411096995792) – (14.557037265189734,5.374112310527408) – (14.557037259227844,5.374112478098592) – (14.55703725668073,5.3741125496897455)(6.258483015287831,4.007714604206321); plot(,[(-21.088230279050162–2.346115484825315\*)/-0.21341450588590405]{}); (7.397922077922091,5.374112554112555) circle (2.5pt); (7.729221785966004,4.949068760696682) node [$0$]{}; (11.58839826839829,3.5905627705627716) circle (1.5pt); (6.169974831370206,9.58782845061917) node [$C$]{}; (10.31464746170143,8.871805160526913) circle (1.5pt); (8.990995670995687,5.270216450216451) circle (2.5pt); (9.424902849088935,4.9100875868317875) node [$a$]{}; (9.417420802706836,6.430497387353753) circle (1.5pt); (6.258483015287831,4.007714604206321) circle (1.5pt); (5.780163092721256,4.149954696466338) node [$x$]{}; (8.391349637773637,6.5654157362443835) circle (1.5pt); (7.963108829155374,6.761693345414294) node [$x$]{}; Transforms 
{#transsec} ========== The tools developed in the previous section allow us to translate easily some transforms from polar to pedal coordinates. Take, for instance, the circle inverse transform $I_R$. As was mentioned earlier, the circle inverse maps a curve in polar coordinates $(r,\varphi)$ into a curve in different polar coordinates $(\tilde r,\tilde \varphi)$ as follows: $$\tilde r=\frac{R}{r},\qquad \tilde\varphi =-\varphi.$$ Hence in pedal coordinates we have $$\tilde r=\frac{R}{r},$$ and $$\frac{\tilde r\tilde p_c}{\tilde p}={\left| \frac{\partial \tilde r}{\partial \tilde\varphi} \right|}={\left| \frac{\partial \frac{R}{r}}{\partial (-\varphi)} \right|}=\frac{R {\left| r^\prime_\varphi \right|}}{r^2}=\frac{R p_c}{rp}=\frac{\tilde r p_c}{p}.$$ Thus $$\frac{\tilde p_c}{\tilde p}=\frac{p_c}{p},$$ solving for $\tilde p$ using $p_c^2=r^2-p^2$ we get: $$\tilde p=\frac{R p}{r^2},$$ which implies $$f(r,p)=0 \qquad \stackrel{I_R}{\longrightarrow}\qquad f{\left( \frac{R}{r},\frac{R p}{r^2} \right)}=0,$$ as claimed. 
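This substitution rule can be sanity-checked numerically. The sketch below (not part of the original derivation) uses the polar-to-pedal formula $p=r^2/\sqrt{r^2+(r^\prime_\varphi)^2}$, which follows from ${\left| r^\prime_\varphi \right|}=rp_c/p$ together with $p_c^2=r^2-p^2$, and takes the circle $r=a\sin\varphi$ through the pedal point (pedal equation $ap=r^2$) as test curve:

```python
import numpy as np

def pedal_p(r, drdphi):
    # p = r^2 / sqrt(r^2 + (dr/dphi)^2), which follows from
    # |r'_phi| = r*p_c/p together with p_c^2 = r^2 - p^2.
    return r**2 / np.sqrt(r**2 + drdphi**2)

a, R = 2.0, 3.0
phi = np.linspace(0.3, np.pi - 0.3, 200)

# Test curve: the circle r = a*sin(phi) through the pedal point,
# whose pedal equation is a*p = r^2.
r = a * np.sin(phi)
p = pedal_p(r, a * np.cos(phi))
assert np.allclose(a * p, r**2)

# Pedal coordinates of the inverted curve r~(phi~) = R/r(-phi~),
# computed directly from its polar parametrization:
r_inv = R / r
dr_inv = R * np.cos(phi) / (a * np.sin(phi)**2)  # |d r~ / d phi~|
p_inv = pedal_p(r_inv, dr_inv)

# The substitution rule p~ = R*p/r^2 reproduces it ...
assert np.allclose(p_inv, R * p / r**2)
# ... and the image is the line p~ = R/a, as the inverse of a
# circle through the pedal point should be.
assert np.allclose(p_inv, R / a)
```

All three identities hold to machine precision, independently of the sample points.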
A generalization of this argument gives the next theorem: \[transth\] Let $T$ be a transform that maps a curve given in polar coordinates $(r,\varphi)$ into the curve $(\tilde r,\tilde \varphi)$ as follows $$\begin{aligned} \tilde r&= f(r),\\ \tilde \varphi &= {\int \limits_{\varphi_0}^{\varphi}} g(r(t))\,{\rm d}t.\end{aligned}$$ Then the transform $T$ in pedal coordinates takes the form $$\begin{aligned} h{\left( r,\frac{1}{p^2} \right)}&=0 & &\stackrel{T}{\longrightarrow} & h{\left( f(r),{\left( \frac{f^\prime(r) r^2}{f(r)^2 g(r)} \right)}^2{\left( \frac{1}{p^2}-\frac{1}{r^2} \right)}+\frac{1}{f(r)^2} \right)}=0,\end{aligned}$$ Indeed, $$f^2{\left( \frac{f^2}{\tilde p^2}-1 \right)}={\left( \frac{f \tilde p_c}{\tilde p} \right)}^2={\left( \frac{\tilde r \tilde p_c}{\tilde p} \right)}^2={\left( \frac{\partial \tilde r}{\partial\tilde\varphi} \right)}^2={\left( \frac{\partial f}{g\partial \varphi} \right)}^2=\frac{{f^\prime}^2 {\left( r^\prime_\varphi \right)}^2}{g^2}={\left( \frac{f^\prime}{g} \right)}^2 {\left( \frac{rp_c}{p} \right)}^2={\left( \frac{f^\prime r}{g} \right)}^2{\left( \frac{r^2}{p^2}-1 \right)}.$$ Solving for $\frac{1}{\tilde p^2}$ we obtain the result. By means of this theorem many transforms can be easily obtained. All the transforms listed in (\[translist\])-(\[translistp\]) and also (\[harmtrans\]) are left to the reader as an exercise. For example, the harmonic transform $H_k$ in polar coordinates is given by $$\tilde r=r,\qquad \tilde \varphi=\frac{1}{k} \varphi,$$ that is, $f(r)=r$ and $g(r)=\frac{1}{k}$ in the notation above. The only exceptional transform which does not behave so well in polar coordinates is $E_c$, but since parallel curves share the normal, the distance to the normal $p_c$ is conserved and the distance to the tangent $p$ is simply shifted by $c$. The relations between $F_c$, $E_c$ and $E^\star_c$: $$F_c=P E_c P^{-1},\qquad E^\star_c=D_1 E_c D_1=I_1 F_c I_1,$$ are then a consequence which can be verified in pedal coordinates by a little bit of algebra.
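As a symbolic cross-check of the bookkeeping in Theorem \[transth\], one can verify that it reproduces the circle inverse: with $f(r)=R/r$ and $g(r)=-1$ the right-hand side must collapse to $1/\tilde p^2$ with $\tilde p=Rp/r^2$. A minimal sketch, assuming sympy is available:

```python
import sympy as sp

r, p, R = sp.symbols('r p R', positive=True)

# Theorem [transth] with f(r) = R/r, g(r) = -1, i.e. the circle
# inverse I_R (r~ = R/r, phi~ = -phi):
f = R / r
g = -1
rhs = (sp.diff(f, r) * r**2 / (f**2 * g))**2 * (1/p**2 - 1/r**2) + 1/f**2

# It must equal 1/p~^2 for p~ = R*p/r^2:
assert sp.simplify(rhs - r**4 / (R**2 * p**2)) == 0
```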
The same goes for the properties of $H_k$: $$\begin{aligned} H_{\pm 1} &= Id, & H_k H_l &= H_{kl},& H_k I_R &= I_R H_k, & F_c H_k&= H_k F_c,\end{aligned}$$ and so on. With this theorem we can also see that the transform (\[Mahomedtr\]) assumed in the paper [@Mahomed], which generalizes Newton's theorem of revolving orbits, is the composition of the scaling $S_a$, the dual parallel $E^\star_b$ and the harmonic $H_k$: $$(\ref{Mahomedtr})=H_k E^\star_{b}S_a.$$ Similarly, the transform $T_f$ in Theorem \[nonlocalrevolvingorbits\] is a special case of Theorem \[transth\] for $$g=\frac{f^\prime r^2}{f^2 k}.$$ Hence we have $$\begin{aligned} \label{Tftransform} h{\left( r,\frac{1}{p^2} \right)}&=0 & &\stackrel{T_f}{\longrightarrow} & h{\left( f(r),\frac{k^2}{p^2}-\frac{k^2}{r^2}+\frac{1}{f(r)^2} \right)}=0.\end{aligned}$$ The circle inverse can be seen as a specific case of a more general *complex power transform* (denoted here $M_{\alpha}$) which acts in polar coordinates as $$\tilde r= r^\alpha, \qquad \tilde \varphi=\alpha \varphi={\int \limits_{0}^{\varphi}} \alpha {\rm d}t,$$ for a given real number $\alpha$. (The case $\alpha=-1$ corresponds to the circle inverse.) Using Theorem \[transth\] we have $$h(r,p)=0 \qquad \stackrel{M_\alpha}{\longrightarrow}\qquad h{\left( r^\alpha,r^{\alpha-1}p \right)}=0.$$ It is easy to check that $$\begin{aligned} M_1&=Id, & M_{-1}&=I_{1}, & M_\alpha M_\beta &= M_{\alpha\beta}.\end{aligned}$$ One advantage of pedal coordinates over the polar ones is that even a nonlocal change in the $\varphi$ variable translates algebraically into pedal coordinates.
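The composition rule $M_\alpha M_\beta=M_{\alpha\beta}$ is immediate from the substitution form, since applying $(r,p)\mapsto{\left( r^\alpha,r^{\alpha-1}p \right)}$ twice simply multiplies the exponents. A short symbolic sketch (assuming sympy):

```python
import sympy as sp

r, p = sp.symbols('r p', positive=True)
al, be = sp.symbols('alpha beta', positive=True)

def M(alpha, rr, pp):
    # Pedal substitution rule of the complex power transform:
    # (r, p) -> (r^alpha, r^(alpha-1) * p).
    return rr**alpha, rr**(alpha - 1) * pp

r1, p1 = M(be, *M(al, r, p))   # M_beta after M_alpha
r2, p2 = M(al * be, r, p)      # M_{alpha*beta}
assert sp.simplify(r1 - r2) == 0
assert sp.simplify(p1 - p2) == 0
```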
Take the transform $A_\omega$, for example, introduced in Section \[Intro\]: $$\tilde r= r,\qquad \tilde \varphi= \varphi- \varphi_0+\omega{\int \limits_{ \varphi_0}^{ \varphi}} r^2(t){\rm d}t.$$ By Theorem \[transth\] we have $$\label{Aomega} f{\left( r,\frac{1}{p^2} \right)}=0 \qquad \stackrel{A_\omega}{\longrightarrow} \qquad f{\left( r,{\left( \frac{1}{p^2}+2\omega +\omega^2 r^2 \right)}\frac{1}{(1+\omega r^2)^2} \right)}=0.$$ Combining this transform with complex powers, $B_\alpha:= M_\frac12 A_\alpha M_2$, $$\label{Balpha} f{\left( r,\frac{1}{p^2} \right)}=0 \qquad \stackrel{B_\alpha}{\longrightarrow} \qquad f{\left( r,{\left( \frac{1}{p^2}+\frac{2\alpha}{r}+\alpha^2 \right)}\frac{1}{(1+\alpha r)^2} \right)}=0,$$ we get another interesting nonlocal transform which will be important later. It can also be understood in polar coordinates in the sense of Theorem \[transth\]: $$\tilde r= r,\qquad \tilde \varphi= \varphi +\alpha \int r{\rm d} \varphi.$$ Evolute and related transforms {#evolutesec} ============================== Evolute ------- Remember that for a curve $\gamma$ its evolute $E(\gamma)$ is defined to be the locus of centers of osculating circles. It is also known that the normal of $\gamma$ becomes a tangent line of the evolute (so, alternatively, the evolute can be defined as the envelope of the normal lines).
Since the radius of curvature $\rho$ of a curve with pedal coordinates $(p,r)$ is given by $$\rho=rr^\prime,$$ the pedal coordinates $(\tilde p,\tilde r)$ of the evolute are $$\begin{aligned} \tilde r&=\sqrt{p_c^2+{\left( rr^\prime-p \right)}^2}, & &\text{or} & \tilde p_c&= p_c p_c^\prime \\ \tilde p&=p_c, & & & \tilde p&=p_c.\end{aligned}$$ We can also work out the derivatives $$\tilde p_c \tilde p_c^\prime=\tilde p_c\frac{\partial \tilde p_c}{\partial \tilde p}=p_c p_c ^\prime\frac{\partial p_c p_c^\prime}{\partial p_c}={\left( p_c p_c^\prime \right)}^\prime p_c.$$ More generally, $${\left( \tilde p_c\partial_{\tilde p} \right)}^n \tilde p={\left( p_cp_c^\prime \partial_{p_c} \right)}^n p_c={\left( p_c \partial_{p} \right)}^n p_c={\left( p_c \partial_{p} \right)}^{n+1} p.$$ Hence we arrive at the following proposition: \[evolute\] The evolute $E(\gamma)$ of a curve $\gamma$ which satisfies $$f{\left( p_c,p_c p_c^\prime,{\left( p_c p_c^\prime \right)}^\prime p_c,\dots, {\left( p_c\partial_p \right)}^n p \right)}=0,$$ where $n>1$, satisfies $$f{\left( p,p_c,p_c p_c^\prime,{\left( p_c p_c^\prime \right)}^\prime p_c,\dots, {\left( p_c\partial_p \right)}^{n-1} p \right)}=0.$$ In other words $$f{\left( p_c,p_c p_c^\prime,\dots, {\left( p_c\partial_p \right)}^n p \right)}=0,\qquad \stackrel{E}{\longrightarrow} \qquad f{\left( p,p_c,p_c p_c^\prime,\dots, {\left( p_c\partial_p \right)}^{n-1} p \right)}=0.$$ Since the circle with the pedal point at its center can be described by $$p_c=0,$$ its involute (i.e.
inverse of the evolute) is, by the previous theorem, $$p_c=0 \qquad \stackrel{E^{-1}}{\longrightarrow}\qquad p_cp_c^\prime=0 \qquad \Rightarrow\qquad p_c^2=a^2 \qquad \Rightarrow\qquad p_c=a,$$ for some constant $a\geq 0$, as claimed. We can also work out the case of the circle with a general position of the pedal point: $$p_c^2+{\left( p-R \right)}^2=a^2 \qquad \stackrel{E^{-1}}{\longrightarrow}\qquad {\left( p_cp_c^\prime \right)}^2+{\left( p_c-R \right)}^2=a^2.$$ In addition to the solution above, this separable ODE also has the solution $$p+\sqrt{a^2-(p_c-R)^2}-R\arctan\frac{p_c-R}{\sqrt{a^2-(p_c-R)^2}}=c,$$ showing once again the extreme dependence of pedal coordinates on the position of the pedal point. Contrapedal ----------- It is known that the contrapedal curve $P_c$ defined in the beginning is the pedal of the evolute, i.e. $P_c:=PE$ (see [@Zwikker p. 151]). Thus we have: $$f{\left( p_c,p_c p_c^\prime,\dots, {\left( p_c\partial_p \right)}^n p \right)}=0,\qquad \stackrel{P_c:=P E}{\longrightarrow} \qquad P{\left( f{\left( p,p_c,p_c p_c^\prime,\dots, {\left( p_c\partial_p \right)}^{n-1} p \right)}=0 \right)},$$ or using Proposition \[MPC\]: $$f{\left( p_c,p_c p_c^\prime,\dots, {\left( p_c\partial_p \right)}^n p \right)}=0\qquad \stackrel{P_c}{\longrightarrow} \qquad f{\left( r,{\left| r^\prime_\varphi \right|},r^{\prime\prime}_{\varphi},\dots, r^{(n-1)}_\varphi \right)}=0.$$ Equivalently, we can say: $$f{\left( {\left| r^\prime_\varphi \right|},r^{\prime\prime}_{\varphi},\dots, r^{(n)}_\varphi \right)}=0 \qquad \stackrel{P E P^{-1}}{\longrightarrow} \qquad f{\left( r,{\left| r^\prime_\varphi \right|},r^{\prime\prime}_{\varphi},\dots, r^{(n-1)}_\varphi \right)}=0.$$ Catacaustic ----------- For a given curve $\gamma$ the “catacaustic curve” $C$ is defined to be the envelope of rays originating from a given point (the so-called “radiant”, which, for our purposes, will coincide with the pedal point) after reflection off $\gamma$. It is known ([@Lawrence p.
60 and 207]) that the catacaustic is the same as the evolute of the orthotomic curve (which is the pedal curve magnified by a factor of 2), hence: The catacaustic curve $C$ with radiant at the pedal point is given by $$C=E S_\frac12 P.$$ One of the most beautiful features of a conic section is that rays coming from one focus are reflected to the other focus. It is hence obvious that the catacaustic curve of a conic section with the radiant (= pedal point) at one focus is a single point (the other focus). Reversing this logic, the *anticatacaustic* $C^{-1}=P^{-1} S^{-1}_{\frac12} E^{-1}=P^{-1}S_{2}E^{-1}$ of a point contains conic sections (and, as we will see, nothing more). Since the pedal equation of a point is $r=a$ or $p_c^2+p^2=a^2$ (i.e. a circle with zero radius), we thus have: $$\begin{aligned} C^{-1}{\left( p_c^2+p^2=a^2 \right)}&\Rightarrow P^{-1} S_2 E^{-1}{\left( p_c^2+p^2=a^2 \right)}\\ &\Rightarrow P^{-1}S_2 {\left( (p_cp_c^\prime)^2+p_c^2=a^2 \right)} \\ &\Rightarrow P^{-1}S_2 {\left( (p-2R)^2+p_c^2=a^2 \right)} \\ &\Rightarrow P^{-1} {\left( (p-R)^2+p_c^2=\frac{a^2}{4} \right)} \\ &\Rightarrow P^{-1} {\left( 2R p=r^2+R^2-\frac{a^2}{4} \right)} \\ &\Rightarrow 2R\frac{p^2}{r}=p^2+R^2-\frac{a^2}{4}\\ &\Rightarrow \frac{R^2-\frac{a^2}{4}}{p^2}= \frac{2R}{r}-1.\end{aligned}$$ Notice that the third equation from the end informs us that a conic section is the antipedal of a circle, giving us yet another method of construction. But since the identity $$P^{-1}= I_RPI_R$$ holds, as can be easily verified, this construction is actually very close to the one from Example \[Kp\]. $f$ spirals {#spiralssec} =========== The transforms derived in the previous two sections allow us to construct a large number of curves. It is advantageous to collect them into families. ### Sinusoidal spiral The family of curves $\sigma_n(a)$, given in polar and pedal coordinates $$r^n=a^n\sin{\left( n\varphi+\varphi_0 \right)}, \qquad a^n p =r^{n+1},$$ respectively, contains many famous curves, e.g.
$$\begin{aligned} n& & a^n p &=r^{n+1} & &\text{Curve} &\text{Pedal point:}\\ n&=0 & p&=r & &\text{Concentric circle }{\left| x \right|}=R. & \text{Center.}\\ n&=-1 & p&=a & &\text{Line.} & \text{A point at distance }a.\\ n&=1 & a p&=r^2 & &\text{Circle.} & \text{On the circle.}\\ n&=2 & a^2 p &=r^3 & &\text{Lemniscate of Bernoulli.} & \text{Center.}\\ n&=-2 & rp &=a^2 & &\text{Rectangular hyperbola.} & \text{Center.}\\ n&=-\frac12 & a^{-\frac12} p &=r^{\frac12} & &\text{Parabola.} & \text{Focus.}\\ n&=\frac12 & a^{\frac12} p &=r^{\frac32} & &\text{Cardioid.} & \text{Cusp.}\end{aligned}$$ This family, called “sinusoidal spirals”, is famously invariant under a number of transforms, for example: $$\begin{aligned} \sigma_n(a) &\stackrel{S_\alpha}{\longrightarrow} \sigma_{n}{\left( \frac{a}{\alpha} \right)} & \text{Scaling.}\\ \sigma_n(a) &\stackrel{P}{\longrightarrow} \sigma_{\frac{n}{n+1}}(a) & \text{Pedal.}\\ \sigma_n(a) &\stackrel{I_R}{\longrightarrow} \sigma_{-n}{\left( \frac{R}{a} \right)} & \text{Inverse.}\\ \sigma_n(a) &\stackrel{M_\alpha}{\longrightarrow} \sigma_{\alpha n}{\left( a^{\frac{1}{\alpha}} \right)} & \text{Complex power.}\end{aligned}$$ Combining scaling with complex power, we get $$\sigma_1(a) \qquad \stackrel{S_{\beta}M_\alpha}{\longrightarrow} \qquad \sigma_{\alpha}(a),\qquad \beta:=a^\frac{1-\alpha}{\alpha},$$ which shows that *all* sinusoidal spirals are (up to scaling) complex powers of a circle passing through the origin $\sigma_1(a)$.
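The polar and pedal forms of $\sigma_n(a)$ can be matched numerically for several values of $n$. A minimal sketch, again using $p=r^2/\sqrt{r^2+(r^\prime_\varphi)^2}$; the sampled $\varphi$-ranges are chosen so that $\sin(n\varphi)>0$:

```python
import numpy as np

a = 1.7
for n in (2.0, 0.5, -2.0):
    # range where sin(n*phi) > 0
    if n > 0:
        phi = np.linspace(0.05, np.pi / n - 0.05, 300)
    else:
        phi = np.linspace(np.pi / n + 0.05, -0.05, 300)
    s = np.sin(n * phi)
    r = a * s**(1 / n)                            # r^n = a^n sin(n*phi)
    drdphi = a * np.cos(n * phi) * s**(1 / n - 1) # dr/dphi
    p = r**2 / np.sqrt(r**2 + drdphi**2)
    # pedal equation a^n * p = r^(n+1)
    assert np.allclose(a**n * p, r**(n + 1))
```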
### Spirals A similar argument can be made about another family of curves $\varsigma_\alpha(c)$, given in polar and pedal coordinates $$r=c\varphi^\alpha, \qquad \frac{1}{p^2}=\frac{\alpha^2 c^{\frac{2}{\alpha}}}{r^{2+\frac{2}{\alpha}}}+\frac{1}{r^2},$$ respectively. Specific cases include among others: $$\begin{aligned} \alpha&=1 & \frac{1}{p^2}&=\frac{1}{r^2}+\frac{c^2}{r^4} &\text{Spiral of Archimedes}\\ \alpha&=-1 & \frac{1}{p^2}&=\frac{1}{r^2}+\frac{1}{c^2} &\text{Hyperbolic spiral}\\ \alpha&=\frac12 & \frac{1}{p^2}&=\frac{1}{r^2}+\frac{c^4}{4 r^6} &\text{Fermat spiral}\\ \alpha&=-\frac12 & \frac{1}{p^2}&=\frac{1}{r^2}+\frac{r^2}{4 c^4} &\text{Lituus.}\end{aligned}$$ Similarly, they are also invariant under complex powers and scaling $$\begin{aligned} \varsigma_\alpha(c) &\stackrel{S_\gamma }{\longrightarrow} \varsigma_{\alpha}{\left( \frac{c}{\gamma} \right)} & \text{Scaling.}\\ \varsigma_\alpha(c)&\stackrel{H_k}{\longrightarrow}\varsigma_{\alpha}{\left( \frac{c}{k^\alpha} \right)} &\text{Harmonic is scaling.}\\ \varsigma_\alpha(c) &\stackrel{ M_\beta}{\longrightarrow} \varsigma_{\alpha\beta}{\left( \beta^{\frac{\alpha}{\beta}}c^{-\frac{1}{\beta}} \right)} & \text{Complex power.}\\\end{aligned}$$ This demonstrates that all of these spirals are complex powers of the spiral of Archimedes (for example; the starting curve can of course be any spiral $\varsigma_\alpha(c)$). ### General spirals There is a pattern in the previous two examples which can be generalized, yielding more interesting families of curves. We start with a curve given in polar coordinates as $$r=f{\left( \varphi+\varphi_0 \right)},$$ for some well-behaved function $f$. Since the pedal coordinates are oblivious to any rotation, the phase factor $\varphi_0$ can be chosen arbitrarily.
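Before generalizing, the pedal equation of $\varsigma_\alpha(c)$ above can be cross-checked numerically straight from the polar form, using $1/p^2=(r^2+(r^\prime_\varphi)^2)/r^4$. A minimal sketch; the four sampled values of $\alpha$ cover the Archimedean, hyperbolic, Fermat and lituus cases:

```python
import numpy as np

c = 1.3
for alpha in (1.0, -1.0, 0.5, -0.5):
    phi = np.linspace(0.5, 6.0, 400)
    r = c * phi**alpha
    drdphi = c * alpha * phi**(alpha - 1)
    inv_p2 = (r**2 + drdphi**2) / r**4       # 1/p^2 from the polar data
    claimed = alpha**2 * c**(2 / alpha) / r**(2 + 2 / alpha) + 1 / r**2
    assert np.allclose(inv_p2, claimed)
```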
Now we make all complex powers of this curve: $$r^\alpha=f{\left( \alpha\varphi+\varphi_0 \right)},$$ and harmonics $$r^\alpha=f{\left( l\alpha\varphi+\varphi_0 \right)},$$ and scalings, and we end up with: Given a function $f$, the *$f$-spirals* are the family of curves given in polar coordinates by $$r^\alpha=\frac{c}{l} f{\left( l\alpha\varphi+\varphi_0 \right)},$$ where $\alpha,l,c,\varphi_0\in\mathbb{R}$, $l\neq 0$ are any real numbers such that the right hand side of this equation defines a real function. To get the pedal equation for this family it suffices to translate the original curve $$r=f(\varphi),$$ into pedal coordinates and then perform the transform $S_{{\left( \frac{l}{c} \right)}^{\frac{1}{\alpha}}} H_{\frac{1}{l}}M_\alpha$. A few examples are the following: $$\begin{aligned} f&=\exp & p&=\frac{r}{\sqrt{1+l^2}} &\text{logarithmic spiral}\\ f&=Id & \frac{1}{p^2}&=\frac{1}{r^2}+\frac{c^{2}}{r^{2\alpha+2}} &\text{ spirals}\\ \label{sineq}f&=\sin,\cos & \frac{1}{p^2}&=\frac{1-l^2}{r^2}+\frac{c^2}{r^{2\alpha+2}} &\text{Harmonics of sinusoidal spirals}\\ \label{sinheq}f&=\sinh & \frac{1}{p^2}&=\frac{1+l^2}{r^2}+\frac{c^2}{r^{2\alpha+2}} &\sinh\text{ spirals}\\ \label{cosheq}f&=\cosh & \frac{1}{p^2}&=\frac{1+l^2}{r^2}-\frac{c^2}{r^{2\alpha+2}} &\cosh\text{ spirals}\\ \label{sneq}f&={\rm sn} & \frac{1}{p^2}&=\frac{1-l^2(k^2+1)}{r^2}+l^4k^2c^{-2} r^{2\alpha-2}+\frac{c^2}{r^{2\alpha+2}} &\text{elliptic sinusoidal spirals}\\ \label{cneq}f&={\rm cn} & \frac{1}{p^2}&=\frac{(k^2-{k^\prime}^2)l^2+1}{r^2}-l^4k^2c^{-2} r^{2\alpha-2}+\frac{{k^\prime}^2c^2}{r^{2\alpha+2}} &\text{elliptic cosinusoidal spirals.}\end{aligned}$$ (Note that the $\exp$ and $Id$ rows agree with the $c\to 0$ limit of (\[sinheq\]) and the $l\to 0$ limit of (\[sineq\]), respectively.) All the $f$-spiral families are invariant (by construction) under scaling, complex power and harmonic transforms, but some of them happen to be invariant under a larger group of transforms; e.g. logarithmic spirals are famously invariant under pretty much everything – pedal, contrapedal, evolute, orthoptic, catacaustic, etc.
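The row (\[sineq\]) can be verified directly from the family formula: logarithmic differentiation of $r^\alpha=\frac{c}{l}\sin(l\alpha\varphi)$ gives $r^\prime_\varphi=l\,r\cot(l\alpha\varphi)$, and then $1/p^2=(r^2+{r^\prime_\varphi}^2)/r^4$ can be compared with the claimed pedal equation (a numerical sketch; parameter values are arbitrary):

```python
import math

# Harmonic sinusoidal spiral r^alpha = (c/l)*sin(l*alpha*phi);
# expected pedal equation: 1/p^2 = (1 - l^2)/r^2 + c^2/r^(2*alpha + 2).
alpha, l, c = 1.5, 0.7, 2.0  # arbitrary

def gap(phi):
    u = l * alpha * phi
    r = ((c / l) * math.sin(u)) ** (1 / alpha)
    dr = l * r * math.cos(u) / math.sin(u)       # logarithmic derivative
    lhs = (r ** 2 + dr ** 2) / r ** 4            # 1/p^2 from Proposition [MPC]
    rhs = (1 - l ** 2) / r ** 2 + c ** 2 / r ** (2 * alpha + 2)
    return abs(lhs - rhs)

err = max(gap(phi) for phi in (0.2, 0.5, 0.9, 1.3))
print(err)
```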
Only parallel curves and involutes of logarithmic spirals are slightly different: $$p={\left| \sin\alpha \right|} r \qquad \stackrel{E_c=E^{-1}}{\longrightarrow} \qquad {\left| \cos\alpha \right|}(p-c)={\left| \sin\alpha \right|} p_c.$$

### Sinusoidal spirals

Deriving the case $f=\sin$ is simple, using the fact that the function $\sin\varphi$ is a solution of the differential equation $${r^\prime_\varphi}^2+r^2=1,$$ which translates into pedal coordinates by Proposition \[MPC\] as $$\frac{1}{p^2}=\frac{1}{r^4}\qquad \stackrel{S_{{\left( \frac{l}{c} \right)}^{\frac{1}{\alpha}}} H_{\frac{1}{l}}M_\alpha}{\longrightarrow} \qquad \frac{1}{p^2}=\frac{1-l^2}{r^2}+\frac{c^2}{r^{2\alpha+2}}.$$ Hence the $\sin$-spirals are just harmonics of the usual sinusoidal spirals. Notice that $\cos\varphi$ satisfies the same differential equation as $\sin\varphi$, hence the choice $f=\cos$ gives us exactly the same result. Thus the $\cos$-spirals and the $\sin$-spirals are (in pedal coordinates) indistinguishable. This is a consequence of the fact that the $\cos$ function can be obtained from the $\sin$ function by a simple shift of the argument: $\cos\varphi=\sin{\left( \varphi+\frac12\pi \right)}$. Similarly we can ask what the $\sinh$-spirals are (i.e. $f=\sinh$). The differential equation is now $${r^\prime_\varphi}^2-r^2=1,$$ hence obtaining (\[sinheq\]). Making a similar argument for the $\cosh$-spirals we get (\[cosheq\]). This time we cannot get $\cosh$ from $\sinh$ by any (real) shift of the argument, so the pedal equations are different. But using *complex* translations we have $\sinh{\left( \varphi+\frac{{{\rm i}}\pi}{2} \right)}={{\rm i}}\cosh(\varphi)$, which gives us $\cosh$ for a purely imaginary scaling factor $c$. (Substituting $c\to {{\rm i}}c$ in (\[sinheq\]) gives us (\[cosheq\]).)
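The same one-line computation confirms (\[sinheq\]): for $r^\alpha=\frac{c}{l}\sinh(l\alpha\varphi)$ one has $r^\prime_\varphi=l\,r\coth(l\alpha\varphi)$, and numerically (arbitrary parameters):

```python
import math

# sinh-spiral r^alpha = (c/l)*sinh(l*alpha*phi);
# expected pedal equation: 1/p^2 = (1 + l^2)/r^2 + c^2/r^(2*alpha + 2).
alpha, l, c = 2.0, 0.8, 1.3  # arbitrary

def gap(phi):
    u = l * alpha * phi
    r = ((c / l) * math.sinh(u)) ** (1 / alpha)
    dr = l * r * math.cosh(u) / math.sinh(u)
    lhs = (r ** 2 + dr ** 2) / r ** 4
    rhs = (1 + l ** 2) / r ** 2 + c ** 2 / r ** (2 * alpha + 2)
    return abs(lhs - rhs)

err = max(gap(phi) for phi in (0.3, 0.8, 1.5, 2.5))
print(err)
```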
Exploiting the relations: $$-{{\rm i}}\sin({{\rm i}}\varphi)=\sinh(\varphi),\qquad \sin{\left( {{\rm i}}\varphi+\frac{\pi}{2} \right)}=\cosh(\varphi),$$ we can see that both the $\sinh$-spirals (\[sinheq\]) and the $\cosh$-spirals (\[cosheq\]) are derivable from the $\sin$-spirals (\[sineq\]). (This time substituting $l\to {{\rm i}}l$ in (\[sineq\]) gives us (\[sinheq\]).) This connection between the three families is reflected in pedal coordinates by the fact that together they solve the general equation $$\frac{1}{p^2}=\frac{a}{r^2}+\frac{b}{r^{2\alpha+2}}, \qquad \forall a,b\in\mathbb{R},\ a\not=1, \ b\not=0.$$ Indeed, for $a<1, b>0$ we have $\sin$-spirals. For $a>1, b>0$ we have $\sinh$-spirals and for $a>1, b<0$ the $\cosh$-spirals. The case $a<1, b<0$ does not define any curve in pedal coordinates, since it would imply $p>r$, which is impossible. The limiting case $a=1$ gives just the $Id$-spirals and $b=0$ gives the logarithmic spirals.

### Elliptic spirals

We are now going to make a similar argument for the elliptic version of sinusoidal spirals (i.e. “snusoidal spirals”). Remember that there are 12 Jacobian elliptic functions: $${\rm sn}(z),{\rm cn}(z), {\rm dn}(z), {\rm ns}(z), {\rm nc}(z), {\rm nd}(z), {\rm sc}(z), {\rm sd}(z), {\rm cs}(z), {\rm ds}(z), {\rm cd}(z), {\rm dc}(z),$$ all of which depend on an additional parameter $k\in (-1,1)$ (the so-called “modulus”) which we will not explicitly mention. These functions are doubly periodic with periods $4K, 4{{\rm i}}K^\prime $, where $K\equiv K(k)$ (the so-called *quarter period*) is the complete elliptic integral of the first kind $$K(k):={\int \limits_{0}^{\frac{\pi}{2}}}\frac{1}{\sqrt{1-k^2\sin^2\varphi}}{\rm d}\varphi=\frac{\pi}{2}\!\! \ _2 F_1{\left( {\begin{array}{c} \frac12\quad \frac12 \\ 1 \end{array}};k^2 \right)},$$ and $k^\prime:=\sqrt{1-k^2}$, $K^\prime:=K(k^\prime)$. These functions are connected by the formula $$pq(z)=\frac{pr(z)}{qr(z)},$$ where $p,q,r$ can be any of the letters $s,c,d,n$.
In particular $pp(z)=1$ and $pq(z)=\frac{1}{qp(z)}$. The parameter $k$ can be analytically continued beyond the interval $(-1,1)$. In fact, for fixed $z$, all elliptic functions are meromorphic with respect to $k^2$. With this in mind we can construct a $13^{\rm th}$ elliptic function (which we denote sn$^\star$), which is just the sn function with the modulus $k$ restricted to the unit circle in the complex plane; more precisely: $${\rm sn}^\star(z;\lambda):=e^{{{\rm i}}\frac{\lambda}{2}} {\rm sn}{\left( e^{-{{\rm i}}\frac{\lambda}{2}}z, e^{{{\rm i}}\lambda} \right)}.$$ The property of the sn function $${\rm sn}(z,k)= \frac{1}{k}{\rm sn}{\left( zk,\frac{1}{k} \right)},$$ ensures that ${\rm sn}^\star$ is a real-valued function (for real argument). No function other than sn has an analogous property (save ns), and without this function the picture would be incomplete, as we will see. Let us start with the sn function. For the pedal equation of the $sn$-spirals $$r^\alpha=\frac{c}{l}{\rm sn}{\left( l\alpha\varphi+\varphi_0 \right)},$$ we make use of the differential equation valid for $r={\rm sn}(\varphi)$ $${r^\prime_\varphi}^2=(1-r^2)(1-k^2 r^2),$$ which translates into pedal coordinates by Proposition \[MPC\] as $$\frac{r^4}{p^2}-r^2=(1-r^2)(1-k^2r^2),$$ and making the transform $S_{{\left( \frac{l}{c} \right)}^{\frac{1}{\alpha}}} H_{\frac{1}{l}}M_\alpha$ we get (\[sneq\]).
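The differential equation for sn can be checked without any special-function library by integrating the standard first-order Jacobi system ${\rm sn}^\prime={\rm cn}\,{\rm dn}$, ${\rm cn}^\prime=-{\rm sn}\,{\rm dn}$, ${\rm dn}^\prime=-k^2\,{\rm sn}\,{\rm cn}$ numerically (a sketch; the Runge–Kutta step and the value $k=0.6$ are arbitrary choices):

```python
# Verify numerically that r = sn(phi) satisfies r'^2 = (1 - r^2)(1 - k^2 r^2)
# along an RK4 solution of the Jacobi system sn' = cn*dn, cn' = -sn*dn,
# dn' = -k^2*sn*cn with sn(0) = 0, cn(0) = dn(0) = 1.
k = 0.6

def deriv(y):
    s, c, d = y
    return (c * d, -s * d, -k * k * s * c)

def rk4(y, h):
    def ax(u, v, f):
        return tuple(a + f * b for a, b in zip(u, v))
    k1 = deriv(y)
    k2 = deriv(ax(y, k1, h / 2))
    k3 = deriv(ax(y, k2, h / 2))
    k4 = deriv(ax(y, k3, h))
    return tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

y, h, err = (0.0, 1.0, 1.0), 1e-3, 0.0
for _ in range(1000):                 # integrate up to phi = 1 (< K(0.6))
    y = rk4(y, h)
    s, c, d = y
    err = max(err, abs((c * d) ** 2 - (1 - s * s) * (1 - k * k * s * s)))
print(err)
```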
A similar argument can be made for all the remaining elliptic functions but, fortunately, we can exploit many known relations to simplify the job: $$\begin{aligned} \label{ellrel} {\rm ns}(z)&=k\ {\rm sn}(z+{{\rm i}}K^\prime), & {\rm sn}^\star(z)&:=e^{{{\rm i}}\frac{\lambda}{2}} {\rm sn}{\left( e^{-{{\rm i}}\frac{\lambda}{2}}z, e^{{{\rm i}}\lambda} \right)} \\ \nonumber {\rm cd}(z)&={\rm sn}(z+K), & {\rm dc}(z)&=k\ {\rm sn}(z+K+{{\rm i}}K^\prime)\\ \nonumber {\rm dn}(z)&=k^\prime\ {\rm sn}({{\rm i}}z+K^\prime+{{\rm i}}K,k^\prime), & {\rm nd}(z)&={\rm sn}({{\rm i}}z+K^\prime,k^\prime),\\ \nonumber {\rm cs}(z)&={{\rm i}}k^\prime\ {\rm sn}({{\rm i}}z+{{\rm i}}K,k^\prime), & {\rm sc}(z)&=-{{\rm i}}\ {\rm sn}({{\rm i}}z,k^\prime),\\ \nonumber {\rm sd}(z)&=\frac{1}{k^\prime} {\rm sn}{\left( zk^\prime,\frac{{{\rm i}}k}{k^\prime} \right)}, & {\rm ds}(z)&={{\rm i}}k\ {\rm sn}\!{\left( zk^\prime+Kk^\prime+{{\rm i}}K^\prime k^\prime,\frac{{{\rm i}}k}{k^\prime} \right)},\\ \nonumber {\rm cn}(z)&= {\rm sn}{\left( zk^\prime+Kk^\prime,\frac{{{\rm i}}k}{k^\prime} \right)}, & {\rm nc}(z)&=-{{\rm i}}\ {\rm sn}\!{\left( zk^\prime+{{\rm i}}K^\prime k^\prime,\frac{{{\rm i}}k}{k^\prime} \right)}.\end{aligned}$$ This table tells us two things: first, the functions ${\rm sn}\equiv {\rm cd}\equiv{\rm dc}\equiv{\rm ns}$ are equivalent in the sense that they generate the same family of $f$-spirals in pedal coordinates (since they differ only by scaling and a shift of the argument). In the same way the functions ${\rm cn}\equiv{\rm sd}$, ${\rm dn}\equiv{\rm nd}$, ${\rm nc}\equiv{\rm ds}$ and ${\rm sc}\equiv{\rm cs}$ are equivalent. The function sn$^\star$ stands alone. Hence we have only 6 distinct $f$-spirals (out of the 13 Jacobian elliptic functions). Second, the pedal equation of the $f$-spirals for all elliptic functions can be obtained from the $sn$-spiral case.
For example, the relation for the cn function informs us that the $cn$-spirals (\[cneq\]) can be obtained from the $sn$-spirals (\[sneq\]) by substituting $k\to {{\rm i}}\frac{k}{k^\prime} $, $l\to k^\prime l$, $c\to k^\prime c$. Similarly, the ${\rm sn}^\star$-spirals $$\frac{1}{p^2}=\frac{1-2l^2\cos\lambda}{r^2}+\frac{l^4}{c^2}r^{2\alpha-2}+\frac{c^2}{r^{2\alpha+2}}$$ are obtained by substituting $k\to e^{{{\rm i}}\lambda} $, $l\to l e^{-{{\rm i}}\frac{\lambda}{2}}$ in (\[sneq\]). And so on. Once again this interconnectedness is reflected in pedal coordinates by the fact that together they solve the general equation: $$\label{ellspirals} \frac{1}{p^2}=\frac{a}{r^2}+\beta r^{2\alpha-2}+\frac{\gamma}{r^{2\alpha+2}}. $$ Specifically: $$\begin{aligned} a&<1-4\beta\gamma & \beta&>0 & \gamma&>0 & \text{ $sn$-spirals,}\\ a&>1+4\beta\gamma & \beta&>0 & \gamma&>0 & \text{ $sc$-spirals,}\\ (a-1)^2&\leq 4\beta \gamma & \beta&>0 & \gamma&>0 & \text{ $sn^\star$-spirals,} \\ & & \beta&<0 & \gamma&>0 & \text{ $cn$-spirals,}\\ & & \beta&>0 & \gamma&<0 & \text{ $ds$-spirals,}\\ (a-1)^2&\geq 4\beta\gamma & \beta&<0 & \gamma&<0 & \text{ $dn$-spirals,}\\ (a-1)^2&<4\beta\gamma & \beta&<0 & \gamma&<0 & \text{ no curve ($p>r$).}\end{aligned}$$ For the $sn$-spiral, the exact solution is $$c=\sqrt{\gamma},\qquad l^2=\frac{\sqrt{(1-a)^2-4\beta\gamma}+1-a}{2},\qquad k=\frac{2\sqrt{\beta\gamma}}{\sqrt{(1-a)^2-4\beta\gamma}+1-a}.$$ It is straightforward from (\[ellrel\]) to make a similar computation for all elliptic spirals. The limiting case $\beta\gamma=0$ is just the $\sin$-spiral (and its complex extensions) treated above.

Central and Lorentz-like force problems {#CentralForce}
=======================================

We are ready to prove Theorem \[cfth\].
Making the scalar product of the equation (\[dynsys\]) with $\dot x$ we obtain $$\ddot x \cdot \dot x=F^\prime{\left( {\left| x \right|}^2 \right)}x\cdot \dot x,$$ and integrating we get the first conserved quantity $${\left| \dot x \right|}^2=F{\left( {\left| x \right|}^2 \right)}+c.$$ Similarly, making the scalar product with $x^\perp$ we get $$\frac{{\rm d}}{{\rm d} t}{\left( {\dot{x}} \cdot x^\perp \right)}=\ddot x \cdot x^\perp=2 G^\prime{\left( {\left| x \right|}^2 \right)}{\dot x}^\perp\cdot x^\perp= 2 G^\prime{\left( {\left| x \right|}^2 \right)}{\dot x}\cdot x,$$ with the integral $$\dot x\cdot x^\perp=-x\cdot {\dot x}^{\perp}=G{\left( {\left| x \right|}^2 \right)}-L.$$ The pedal coordinates $p$ (distance to the tangent line) and $r$ (distance from the origin) are given by $$p=\frac{x\cdot {\dot x}^\perp}{{\left| \dot x \right|}},\qquad r={\left| x \right|}.$$ Hence we have $$p^2=\frac{{\left( G{\left( r^2 \right)}-L \right)}^2}{F{\left( r^2 \right)}+c},$$ as claimed. The inequality (\[regine\]) is a consequence of the fact that $$p\leq r.$$ For the strictly central force case (i.e. $G\equiv 0$) we might equivalently say that “the potential energy is inversely proportional to the square of the distance to the tangent”, i.e. $$F \propto\frac{1}{p^2}.$$ At the time when it was still an open issue, it was suggested that the orbits of planets are Cassini ovals, with the Sun at one focus. The person who suggested it was, allegedly, Cassini himself, and we are now in a position to see what force law we must assume for him to be right. Remember that a Cassini oval is the locus of points such that the product of distances from two foci is constant, i.e.
$${\left| x \right|}{\left| x-a \right|}=C.$$ The pedal form of this equation is easily shown to be (by an argument analogous to Example \[Covalex\]): $$\frac{{\left( 3C^2+r^4-{\left| a \right|}^2 r^2 \right)}^2 }{p^2}=4C^2{\left( \frac{2C^2}{r^2}+2r^2-{\left| a \right|}^2 \right)}.$$ Hence the corresponding dynamical system by Theorem \[cfth\] looks like $$\ddot x={\left( 8C^2 -\frac{16C^4}{{\left| x \right|}^4} \right)}x+{\left( {\left| x \right|}^2-{\left| a \right|}^2 \right)}\dot x^\perp,$$ quite different from the gravitational inverse square law. We can even work out the case of a Cassini oval with the Sun at the center, i.e. $${\left| x-a \right|}{\left| x+a \right|}=C, \qquad \text{or}\qquad {\left| x^2-a^2 \right|}=C,$$ where the quantities $x,a$ are treated as complex numbers. This is, obviously, a (complex) square of a circle, i.e. $${\left| x-a^2 \right|}=C \qquad \stackrel{M_{2}}{\longrightarrow} \qquad {\left| x^2-a^2 \right|}=C.$$ Hence its pedal equation is $$2Rp=r^2+R^2-a^2, \qquad \stackrel{M_2}{\longrightarrow}\qquad 2R pr=r^{4}+R^2-a^2,$$ or $$\frac{{\left( r^{4}+R^2-a^2 \right)}^2}{p^2}=4R^2 r^2.$$ More challenging are inverse questions, which we will tackle in some cases. Revisiting the Kepler problem from Example \[Kp\], $$\ddot{x}=-\frac{M}{{\left| x \right|}^{3}}x,$$ we can arrive at the solution immediately without the need of polar coordinates: $$\frac{L^2}{p^2}=\frac{2M}{r}+c,$$ albeit, apparently, without the information about the construction. But with a quite easy observation, making the transform (\[translistp\]) $E^\star_{\alpha}$ with $\alpha:=-\frac{M}{L^2}$ we obtain $$\frac{L^2}{p^2}=\frac{M^2}{L^2}+c,$$ which is a line at distance $\frac{L^2}{\sqrt{M^2+c L^2}}$ from the origin. Hence we have discovered that a conic section is dually parallel to a line – which is, under close inspection, the same construction as in Example \[Kp\].
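The pedal equation $\frac{L^2}{p^2}=\frac{2M}{r}+c$ is also easy to confirm by direct numerical integration of the equations of motion, evaluating $p$ from its definition as the distance from the origin to the tangent line (a sketch; the initial data are arbitrary):

```python
import math

# Integrate the Kepler problem xdd = -M*x/|x|^3 with RK4 and check that
# L^2/p^2 - 2M/r stays constant along the orbit, where L = x1*v2 - x2*v1
# and p = |L|/|v| is the distance from the origin to the tangent line.
M = 1.0

def deriv(s):
    x1, x2, v1, v2 = s
    r3 = (x1 * x1 + x2 * x2) ** 1.5
    return (v1, v2, -M * x1 / r3, -M * x2 / r3)

def rk4(s, h):
    def ax(u, v, f):
        return tuple(a + f * b for a, b in zip(u, v))
    k1 = deriv(s)
    k2 = deriv(ax(s, k1, h / 2))
    k3 = deriv(ax(s, k2, h / 2))
    k4 = deriv(ax(s, k3, h))
    return tuple(s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(4))

s, vals = (1.0, 0.0, 0.1, 1.1), []      # slightly eccentric bound orbit
for _ in range(4000):
    s = rk4(s, 1e-3)
    x1, x2, v1, v2 = s
    r = math.hypot(x1, x2)
    L = x1 * v2 - x2 * v1
    p = abs(L) / math.hypot(v1, v2)
    vals.append(L * L / p ** 2 - 2 * M / r)   # should equal the constant c
spread = max(vals) - min(vals)
print(spread)
```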
The dual curve $D_1$ of the solution is easily seen to be a circle: $$L^2r^2=2Mp+c.$$ Hence we have recovered Newton’s famous result on the curve of velocities of the solution. This observation is usually derived by studying the Laplace–Runge–Lenz vector – a conserved quantity we do not even need. In pedal coordinates this amounts to taking the dual curve. Proof of Theorem \[nonlocalrevolvingorbits\]: For the sake of simplicity we write the force $F$ as $F(r)=r F^\prime(r^2)$. $$\begin{aligned} \ddot x&=F^\prime(r^2)x \\ &\Downarrow Th \ref{cfth}\ (L=\tilde L)\\ \frac{\tilde L^2}{p^2}&= F(r^2)+ c \\ &\downarrow \ T_f \ (\ref{Tftransform}) \\ \frac{(\tilde Lk)^2}{p^2}&= F{\left( f^2 \right)}+\frac{\tilde L^2 k^2}{r^2}-\frac{\tilde L^2}{f^2}+ c \\ &\Uparrow Th \ref{cfth}\ (L=\tilde Lk)\\ \ddot x&={\left( f f^\prime F^\prime (f^2)-\frac{\tilde L^2 k^2}{r^3}+\frac{\tilde L^2 f^\prime}{f^3} \right)}\frac{x}{r}.\end{aligned}$$

Kepler problem in General relativity {#RKP}
------------------------------------

We can make the same analysis for many problems. Particularly interesting is the problem of orbits (or geodesics) around a non-rotating compact body described by the Schwarzschild solution of the Einstein equations of General relativity [@schwarzschild]: $$\label{KPGR} {r^\prime_\varphi}^2=\frac{r^4}{b^2}-{\left( 1-\frac{r_s}{r} \right)}{\left( \frac{r^4}{a^2}+a^2 \right)},$$ where $$r_s:=\frac{2G M}{c^2}, \qquad a:=\frac{L}{GM c}, \qquad b:=\frac{cL}{E}.$$ The quantity $r_s$ is the Schwarzschild radius and $a,b$ are length-scales introduced for brevity, which depend on the angular momentum $L$ and energy $E$ of a test particle.
In pedal coordinates this becomes (using Proposition \[MPC\]): $$\label{KPGRp} \frac{1}{p^2}=d+\frac{r_s}{a^2 r}+\frac{r_s}{r^3}, \qquad d:=\frac{1}{b^2}-\frac{1}{a^2}.$$ The second part of Theorem \[cfth\] informs us that the image of the trajectory is located in the region $$\frac{1}{r^2}\leq d+\frac{r_s}{a^2 r}+\frac{r_s}{r^3},$$ or $$0\leq d r^3+\frac{r_s}{a^2} r^2-r+r_s=:h(r).$$ Since the absolute term $r_s>0$ is positive, the origin is always included in the image (put $r=0$), and this ensures the existence of an unstable component (a component which includes the origin) where the trajectories will reach the origin (a.k.a. the singularity). Furthermore, this unstable region is at least $r_s$ long, since $h(r_s)=\frac{r_s^3}{b^2}\geq 0$. The overall behavior depends additionally on $N$ – the number of positive real roots of the polynomial $h$. By the classical result of Fourier and Budan [@Fourier; @Budan] this number is equal to $N= \nu-2\lambda$, where $\lambda\in\mathbb{Z}_+$ and $\nu$ is the *sign variation* of the polynomial $h$ (i.e. how many times its non-zero coefficients change sign when listed in order of increasing degree). The list of coefficients of the polynomial $h$ is $${\left( r_s,-1,\frac{r_s}{a^2}, d \right)},$$ hence the sign variation $\nu$ depends only on the sign of $d$: $$\nu=\left\{ {\begin{array}{c} 2 \\ 3 \end{array}}{\begin{array}{c} d\geq 0 \\ d<0 \end{array}}\right.,$$ thus the number of zeros is either $N=2,0$ for $d\geq 0$ or $N=3,1$ for $d<0$.
*(Four figures omitted: graphs of the polynomial $h(r)$ illustrating the cases $N=2$, $N=0$, $N=3$ and $N=1$, with the positive roots and the Schwarzschild radius $r_s$ marked.)*

We now show the following: The trajectories of the Kepler problem in General relativity (\[KPGR\]) are dually parallel to ($\alpha=\frac12$) elliptic spirals (\[ellspirals\]).
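The sign-variation count and the Fourier–Budan bound $N=\nu-2\lambda$ are simple to experiment with (a sketch; the parameter values are arbitrary illustrations of the cases $d>0$ and $d<0$):

```python
# h(r) = d*r^3 + (r_s/a^2)*r^2 - r + r_s; coefficients listed in order of
# increasing degree.  nu counts sign changes of the coefficients, N counts
# positive real roots (here by brute-force sampling; illustrative values only).
def sign_changes(coeffs):
    signs = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def positive_roots(rs, a2, d, grid=100000, rmax=40.0):
    h = lambda r: d * r ** 3 + rs / a2 * r ** 2 - r + rs
    count, prev = 0, h(1e-9)
    for i in range(1, grid):
        cur = h(i * rmax / grid)
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

results = []
for rs, a2, d in ((0.1, 1.0, 0.05), (0.1, 1.0, -0.05)):
    nu = sign_changes([rs, -1.0, rs / a2, d])
    results.append((nu, positive_roots(rs, a2, d)))
print(results)   # in each case nu and N differ by an even number 2*lambda
```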
Starting with the pedal equation for the trajectories (\[KPGRp\]) and applying the dual parallel transform $E_\gamma^\star$ we get $$\frac{1}{p^2}=d+\frac{r_s}{a^2 r}+\frac{r_s}{r^3}\qquad \stackrel{E^\star_\gamma}{\longrightarrow}\qquad \frac{1}{p^2}=\tilde a+\frac{\tilde b}{r}+\frac{\tilde d}{r^2}+\frac{r_s}{r^3},$$ where $$\begin{aligned} \tilde a&:=d-\gamma\frac{r_s}{a^2}-\gamma^2-\gamma^3 r_s,\\ \tilde b&:=\frac{r_s}{a^2}+2\gamma+3r_s \gamma^2,\\ \tilde d&:=-3\gamma r_s.\\\end{aligned}$$ Choosing $\gamma$ such that $\tilde a=0$, which is always possible since a third-degree algebraic equation always has a real solution, we see that the equation becomes the equation for elliptic spirals (\[ellspirals\]) in the case $\alpha=\frac12$. By the same argument it can be shown that any curve of the form $$\frac{1}{p^2}=a+\frac{b}{r}+\frac{c}{r^2}+\frac{d}{r^3}, \qquad d\not=0,$$ is dually parallel to ($\alpha=\frac12$) elliptic spirals.

Dark Kepler problem {#DKP}
-------------------

Consider the dynamical system: $$\ddot x=-\frac{M}{{\left| x \right|}^3}x+\frac{F}{{\left| x \right|}}x-\omega^2 x,\qquad F,M\geq 0,$$ which generalizes the Kepler problem to include, in addition to the gravitational effect of a central body (the $M$ term), also that of a homogeneous spherical bulk of dark matter around this central body (the $\omega^2$ term) and the dark energy – i.e. a constant outward repulsive force (the $F$ term).
In pedal coordinates this takes, by Theorem \[cfth\], the form $$\label{dkpzadani} \frac{L^2}{p^2}=\frac{2M}{r}+2Fr-\omega^2 r^2+c.$$ Passing to the rotating frame of reference with angular velocity $\frac{\omega}{L} $ (using $A_{\frac{\omega}{L}}$ (\[Aomega\])) this transforms to $$\frac{{\left( L+\omega r^2 \right)}^2}{p^2}=\frac{2M}{r}+2Fr+c+2\omega L.$$ Comparing with the pedal form of the Cartesian oval ${\left| x \right|}+\alpha{\left| x-a \right|}=C$ (see Example \[Covalex\]): $$\frac{{\left( b^2-(1-\alpha^2)r^2 \right)}^2}{4p^2}=\frac{Cb^2}{r}+(1-\alpha^2)C r -{\left( (1-\alpha^2)C^2+b^2 \right)},$$ where $b^2:=C^2-\alpha^2{\left| a \right|}^2$, we can see that these equations match if there exists a constant $\mu$ such that: $$\begin{aligned} 2L&=b^2\mu\\ 2\omega &=- (1-\alpha^2)\mu\\ 2M &=Cb^2 \mu^2\\ 2F &=(1-\alpha^2)C \mu^2\\ c+2\omega L&=-{\left( (1-\alpha^2)C^2+b^2 \right)}\mu^2.\\\end{aligned}$$ This is a system of 5 algebraic equations in 4 unknowns $(\alpha,C,b^2,\mu)$, which cannot, in general, be satisfied if all the equations are independent. Hence, there has to be some connection between the parameters $(M,F,L,\omega,c)$. This connection is $$FL+\omega M=0.$$ Actually, it is enough that $$F^2 L^2=\omega^2 M^2,$$ since the sign of $\omega$ (the direction of rotation) can be chosen arbitrarily. With this constraint the solution is $$\begin{aligned} b^2&=\frac{2L}{\mu},\\ \alpha^2&=1-\frac{2\omega}{\mu},\\ C&=\frac{M}{L\mu},\\ 2L\mu^2+(c+2\omega L)\mu+2\omega \frac{M^2}{L^2}&=0.\end{aligned}$$ The discriminant of the last equation is (using $\omega M=-FL$) $$D=(c+2\omega L)^2+4^2 FM\geq 0,$$ hence the solution exists and is given by $$\mu=\frac{\sqrt{D}-(c+2\omega L)}{4L}.$$ We can see that $\frac{\mu}{L}>0$, and since $\omega <0$ (by agreement) the first three equations can be solved as well. This solution assumes that $L\not=0, F\not=0$. For the special case $F=\omega=0$ (the usual Kepler problem) one gets a singular solution with $\alpha=\pm 1$, i.e.
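The discriminant computation can be double-checked numerically: under the constraint $FL+\omega M=0$, the value $\mu$ given above should solve the quadratic exactly (a sketch with arbitrary parameter values):

```python
import math

# With omega = -F*L/M (so that F*L + omega*M = 0), check that
# mu = (sqrt(D) - (c + 2*omega*L)) / (4*L),  D = (c + 2*omega*L)^2 + 16*F*M,
# solves  2*L*mu^2 + (c + 2*omega*L)*mu + 2*omega*M^2/L^2 = 0.
L, M, F, c = 1.3, 0.9, 0.4, -0.2   # arbitrary
omega = -F * L / M

D = (c + 2 * omega * L) ** 2 + 16 * F * M
mu = (math.sqrt(D) - (c + 2 * omega * L)) / (4 * L)
residual = 2 * L * mu ** 2 + (c + 2 * omega * L) * mu + 2 * omega * M ** 2 / L ** 2
print(residual)   # should vanish up to rounding
```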
an ellipse or a hyperbola. The solution in the case $L=0$ is a line segment with the origin as one of its endpoints, thus $p=0$ in pedal coordinates. This can be seen by multiplying the original equation (\[dkpzadani\]) by $p^2$ and letting $L\to 0$. The Cartesian oval therefore offers a specific solution to the dark Kepler problem (in a suitable rotating frame of reference). Furthermore, a straightforward calculation shows that the dark Kepler problem is invariant under the transforms $E^\star_{\alpha} B_\alpha $ (\[Balpha\]), specifically: $$\frac{L^2}{p^2}=\frac{2M}{r}+2Fr-\omega^2 r^2 +c\qquad \stackrel{E^\star_\alpha B_\alpha}{\longrightarrow} \qquad \frac{\tilde L^2}{p^2}=\frac{2\tilde M}{r}+2\tilde Fr-\tilde\omega^2 r^2 +\tilde c,$$ where $$\begin{aligned} \tilde \omega^2&:=\omega^2+2F\alpha-\alpha^2 c+ 2\alpha^3 M +\alpha^4 L^2,\\ \tilde F&:=F-\alpha c+3\alpha^2M+2\alpha^3 L^2 & &=\frac12 \partial_\alpha \tilde \omega^2,\\ \tilde c&:= c-6\alpha M-6\alpha^2L^2& &=-\frac12\partial^2_\alpha \tilde \omega^2,\\ \tilde M&:=M+2\alpha L^2& &=\frac{1}{12}\partial_\alpha^3 \tilde \omega^2,\\ \tilde L^2&:=L^2& &=\frac{1}{24}\partial_\alpha^4 \tilde \omega^2.\end{aligned}$$ This means that starting with parameters $(L^2,2M,2F,\omega^2,c)$ it might be possible to transform them into parameters $(L^2,\tilde M,\tilde F,\tilde \omega^2, \tilde c)$ for which it holds that $$\tilde F^2 L^2=\tilde\omega^2 \tilde M^2.$$ Remarkably, this can *always* be done. The equality above is an algebraic equation in $\alpha$ of the 6th order, but (fortunately) all the coefficients of $\alpha^6,\alpha^5,\alpha^4$ are zero.
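The derivative identities relating $(\tilde L^2,\tilde M,\tilde F,\tilde c)$ to $\tilde\omega^2$ can be confirmed by treating $\tilde\omega^2$ as a polynomial in $\alpha$ (a sketch; the sample coefficients are arbitrary):

```python
# omega~^2(alpha) = omega^2 + 2*F*alpha - c*alpha^2 + 2*M*alpha^3 + L^2*alpha^4.
# Check F~ = (1/2) d/da, c~ = -(1/2) d^2/da^2, M~ = (1/12) d^3/da^3,
# L~^2 = (1/24) d^4/da^4 at a sample point alpha.
L2, M, F, w2, c = 1.1, 0.7, 0.3, 0.5, -0.4   # arbitrary L^2, M, F, omega^2, c

poly = [w2, 2 * F, -c, 2 * M, L2]            # coefficients, increasing degree

def deriv(p):
    # formal derivative of a coefficient list
    return [i * p[i] for i in range(1, len(p))]

def ev(p, x):
    return sum(a * x ** i for i, a in enumerate(p))

alpha = 0.6
d1 = deriv(poly); d2 = deriv(d1); d3 = deriv(d2); d4 = deriv(d3)
F_t = F - alpha * c + 3 * alpha ** 2 * M + 2 * alpha ** 3 * L2
c_t = c - 6 * alpha * M - 6 * alpha ** 2 * L2
M_t = M + 2 * alpha * L2
worst = max(abs(F_t - ev(d1, alpha) / 2),
            abs(c_t + ev(d2, alpha) / 2),
            abs(M_t - ev(d3, alpha) / 12),
            abs(L2 - ev(d4, alpha) / 24))
print(worst)
```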
Hence we are left with an equation of the third order: $$\left( -4\,F{L}^{4}-2\,cM{L}^{2}-2\,{M}^{3} \right) {\alpha}^{3}+ \left( -4\,{\omega}^{2}{L}^{4}+{c}^{2}{L}^{2}-2\,FM{L}^{2}+c{M}^{2} \right) {\alpha}^{2}$$ $$+ \left( -4\,{\omega}^{2}{L}^{2}M-2\,cF{L}^{2}-2\,F{M} ^{2} \right) \alpha+ \left( FL-\omega M \right) \left( FL+\omega M \right)=0 ,$$ which always has a real solution unless the leading term $2F{L}^{4}+cM{L}^{2}+{M}^{3}$ vanishes. If the leading term vanishes the equation becomes $$(2\alpha L^2+M)^2 (F^2L^2-\omega^2 M^2)=0,$$ which evidently has a real solution as well. [1]{} Bertrand J (1873). “Théorème relatif au mouvement d’un point attiré vers un centre fixe.” C. R. Acad. Sci. 77: 849–853. Budan, François D. (1807). *Nouvelle méthode pour la résolution des équations numériques*. Paris: Courcier. J. Edwards (1892). *Differential Calculus*. London: MacMillan and Co. pp. 161 ff. Fourier, Jean Baptiste Joseph (1820). *Sur l’usage du théorème de Descartes dans la recherche des limites des racines*. Bulletin des Sciences, par la Société Philomatique de Paris: 156–165. Lawrence, J. D. *A Catalog of Special Plane Curves.* New York: Dover, 1972. Lynden-Bell, D; Lynden-Bell RM (1997). “On the Shapes of Newton’s Revolving Orbits”. Notes and Records of the Royal Society of London. 51 (2): 195–198. doi:10.1098/rsnr.1997.0016. Lynden-Bell D, Jin S (2008). “Analytic central orbits and their transformation group”. Monthly Notices of the Royal Astronomical Society. 386 (1): 245–260. arXiv:0711.3491. Bibcode:2008MNRAS.386..245L. doi:10.1111/j.1365-2966.2008.13018.x. Mahomed FM, Vawda F (2000). “Application of Symmetries to Central Force Problems”. Nonlinear Dynamics. 21 (4): 307–315. doi:10.1023/A:1008317327402. Newton I (1999). *The Principia: Mathematical Principles of Natural Philosophy* (3rd edition (1726); translated by I. Bernard Cohen and Anne Whitman, assisted by Julia Budenz ed.). Berkeley, CA: University of California Press. pp.
147–148, 246–264, 534–545. ISBN 978-0-520-08816-0. Schwarzschild, K. (1916). *Über das Gravitationsfeld eines Massenpunktes nach der Einstein’schen Theorie*. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften 1, 189–196. Benjamin Williamson (1899). *An elementary treatise on the differential calculus.* Longmans, Green, and Co. R.C. Yates (1952). “Pedal Equations”. *A Handbook on Curves and Their Properties*. Ann Arbor, MI: J. W. Edwards. Zwikker, C. *The Advanced Geometry of Plane Curves and Their Applications.* New York: Dover, 1963. [^1]: Supported by GA ČR grant no. 201/12/G028 [^2]: To get an exact match between Theorem \[nonlocalrevolvingorbits\] and equation (\[Mohamedeq\]) a little bit of scaling is needed. Concretely, in equation (\[Mohamedeq\]) set $m:=1$ and divide the right hand side by $a^2$. [^3]: It is a shame that no better name is given to such an important curve. It is as if, instead of “parabola”, we just said “antipedal of a line”.
--- abstract: 'The paper introduces an OpenMP implementation of pipelined Parareal and compares it to a standard MPI-based implementation. Both versions yield essentially identical runtimes, but, depending on the compiler, the OpenMP variant consumes about $7$% less energy. However, its key advantage is a significantly smaller memory footprint. The higher implementation complexity, including manual control of locks, might make it difficult to use in legacy codes, though.' address: - 'School of Mechanical Engineering, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK' - 'Centre for Computational Medicine in Cardiology, Institute of Computational Science, Università della Svizzera italiana, Via Giuseppe Buffi 13, CH-6900 Lugano, Switzerland' author: - Daniel Ruprecht bibliography: - 'pint\_mod.bib' - 'hpc\_mod.bib' - 'MyCodes.bib' - 'other.bib' title: A shared memory implementation of pipelined Parareal ---

Keywords: Parareal, parallel-in-time integration, pipelining, shared memory, memory footprint, energy-to-solution. MSC: 68W10, 65Y05, 68N19

Acknowledgments {#acknowledgments .unnumbered}
===============

I would like to thank the Centre for Interdisciplinary Research (ZIF) at the University of Bielefeld, Germany, for inviting me for a research visit in August 2015. The writing of this article greatly benefited from the tranquil and productive atmosphere at ZIF. I gratefully acknowledge Andrea Arteaga's support with the energy measurements.
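For readers unfamiliar with the method, the predictor-corrector iteration that both implementations parallelize can be sketched as follows. This serial emulation is purely illustrative: the propagators, step counts and test problem are arbitrary choices, not taken from the paper, whose subject is the OpenMP versus MPI *implementation* of the pipelined loop.

```python
# Serial emulation of the Parareal iteration for dy/dt = lam * y.
# In a real implementation, the fine sweeps across time slices run
# concurrently (one MPI rank or OpenMP thread per slice).

def coarse(y, t0, t1, lam):
    # one explicit Euler step: cheap, inaccurate propagator G
    return y + (t1 - t0) * lam * y

def fine(y, t0, t1, lam, substeps=64):
    # many explicit Euler steps: expensive, accurate propagator F
    dt = (t1 - t0) / substeps
    for _ in range(substeps):
        y = y + dt * lam * y
    return y

def parareal(y0, T, N, K, lam):
    ts = [n * T / N for n in range(N + 1)]
    # iteration 0: coarse prediction sweep over all N time slices
    U = [y0]
    for n in range(N):
        U.append(coarse(U[n], ts[n], ts[n + 1], lam))
    for _ in range(K):
        # these fine/coarse evaluations of the previous iterate are
        # independent across n, which is where the parallelism lives
        F_old = [fine(U[n], ts[n], ts[n + 1], lam) for n in range(N)]
        G_old = [coarse(U[n], ts[n], ts[n + 1], lam) for n in range(N)]
        new = [y0]
        for n in range(N):
            # predictor-corrector update:
            # U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
            new.append(coarse(new[n], ts[n], ts[n + 1], lam)
                       + F_old[n] - G_old[n])
        U = new
    return U
```

After $k$ iterations the first $k+1$ slice values coincide with the serial fine solution, so Parareal only pays off when it converges in far fewer iterations than there are time slices.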
--- abstract: 'We report on the optical, mechanical and structural characterization of the sputtered coating materials of Advanced LIGO, Advanced Virgo and KAGRA gravitational-waves detectors. We present the latest results of our research program aiming at decreasing coating thermal noise through doping, optimization of deposition parameters and post-deposition annealing. Finally, we propose sputtered Si$_3$N$_4$ as a candidate material for the mirrors of future detectors.' address: | *$^1$Laboratoire des Matériaux Avancés, CNRS/IN2P3, F-69622 Villeurbanne, France.\ $^2$OPTMATLAB, Università di Genova, Via Dodecaneso 33, 16146 Genova, Italy.\ $^3$Institut Lumière et Matière, CNRS, Université de Lyon, F-69622 Villeurbanne, France.\ $^*$Corresponding author: [email protected]* author: - 'Alex Amato$^{1*}$, Gianpietro Cagnoli$^1$, Maurizio Canepa$^2$, Elodie Coillet$^3$, Jerome Degallaix$^1$, Vincent Dolique$^1$, Daniele Forest$^1$, Massimo Granata$^1$, Valérie Martinez$^3$, Christophe Michel$^1$, Laurent Pinard$^1$, Benoit Sassolas$^1$, Julien Teillon$^1$' bibliography: - 'bibliography.bib' title: 'High-Reflection Coatings for Gravitational-Wave Detectors: *[State of The Art and Future Developments]{}*' --- The high-reflecting (HR) coatings of the gravitational-wave (GW) detectors Advanced LIGO[@0264-9381-32-7-074001], Advanced Virgo[@0264-9381-32-2-024001] and KAGRA[@PhysRevD.88.043007] have been deposited by the Laboratoire des Matériaux Avancés (LMA) in Lyon (Fr), where they have been the object of an extensive campaign of optical and mechanical characterization. In parallel, an intense research program is currently ongoing at the LMA, aiming at the development of low-thermal-noise optical coatings. The materials presented in this study are deposited by ion beam sputtering (IBS), using different coaters: a commercially available Veeco SPECTOR and the custom-developed DIBS and Grand Coater (GC). 
Unless specified otherwise, each coater uses different sets of parameters for the ion beam sources. Coating refractive index and thickness are measured by transmission spectrophotometry at LMA using fused silica substrates ($\varnothing$ 1", 6 mm thick) and by reflection spectroscopic ellipsometry[@PRATO20112877] at the OPTMATLAB using silicon substrates ($\varnothing$ 2", 1 mm thick). The results of the two techniques agree within 3%; the average values, used to calculate the coating density, are presented here. Structural properties are probed by Raman scattering at the Institut Lumière Matière (ILM), using fused silica substrates. Finally, the coating loss angle $\phi_c$ is measured on a Gentle Nodal Suspension[@doi:10.1063/1.3124800; @PhysRevD.93.012007] (GeNS) system at LMA, with disk-shaped resonators of fused silica ($\varnothing$ 2" and 3" with flats, 1 mm thick) and of silicon ($\varnothing$ 3", 0.5 mm thick). $\phi_c$ is evaluated using the resonant method[@nowick1972anelastic], i.e. by measuring the ring-down time of several vibrational modes of each sample. For the $i$-th mode, it reads $$\phi_{i,c} = \frac{1}{D_i}\left[\phi_{i,\text{tot}}-\phi_{i,s}(1-D_i)\right]\medspace,\qquad D_i = 1 - \left(\frac{f_{i,s}}{f_{i,\text{tot}}}\right)^2\frac{m_s}{m_{\text{tot}}}\medspace,$$ where $\phi_{i,\text{tot}}$ is the loss angle of the coated disk and $\phi_{i,s}$ is the loss angle of the substrate. $D_i$ is the so-called *dilution factor*, which can be related to $f_{i,s}$, $f_{i,\text{tot}}$, $m_s$ and $m_{\text{tot}}$[@PhysRevD.89.092004], that is, the frequencies and the mass of the sample before and after the coating deposition, respectively.

Standard materials in gravitational-wave interferometers
========================================================

HR coatings of Advanced LIGO and Advanced Virgo are Bragg reflectors of alternate titania-doped tantala (TiO$_2$:Ta$_2$O$_5$) and silica (SiO$_2$) layers[@0264-9381-32-7-074001; @0264-9381-32-2-024001]. Fig.
\[plot:LossStand\] shows the mechanical loss of these materials, which seems to follow a power-law function $\phi_c = a\cdot f^b\medspace$.

[Figure \[plot:LossStand\]: coating loss angle versus frequency for both materials, with power-law fits. Inset table: $$\begin{array}{lcc} \toprule & \text{TiO}_2\text{:Ta}_2\text{O}_5 & \text{SiO}_2 \\ \bottomrule\toprule a~(10^{-4}) & 1.5\pm0.3 & 3\pm1 \\ b & 0.11\pm0.04 & 0.25\pm0.05 \\ n@1064\text{nm} & 2.07\pm0.02 & 1.45\pm0.01 \\ \rho~[g/cm^3] & 6.65\pm0.06 & 2.15\pm0.06 \\ \bottomrule \end{array}$$]

The loss angles of the HR coatings are shown on Fig. \[plot:LossStack\], together with their properties.

[Figure \[plot:LossStack\]: loss angle versus frequency of the ETM and ITM HR coatings. Inset table: $$\begin{array}{ll} \toprule \multicolumn{2}{c}{\text{ETM}} \\ \bottomrule\toprule \text{n\textdegree~of doublets} & 18 \\ \text{ratio } \text{TiO}_2\text{:Ta}_2\text{O}_5/\text{SiO}_2 & 0.56 \\ \text{transmission} & 1 \text{ ppm} \\\bottomrule\toprule \multicolumn{2}{c}{\text{ITM}} \\ \bottomrule\toprule \text{n\textdegree~of doublets} & 8 \\ \text{ratio } \text{TiO}_2\text{:Ta}_2\text{O}_5/\text{SiO}_2 & 0.32 \\ \text{transmission} & 1.4\text{\%} \\\bottomrule \end{array}$$]

The end mirror (ETM) coating has a higher loss angle than the input mirror (ITM) coating because of its higher TiO$_2$:Ta$_2$O$_5$/SiO$_2$ ratio.
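As a worked illustration, the resonant-method formula quoted earlier translates directly into code. The sample numbers used in the accompanying check are invented for illustration; they are not measurements from this paper.

```python
# Direct transcription of the coating-loss extraction:
# phi_c = [phi_tot - phi_s * (1 - D)] / D,
# D = 1 - (f_s / f_tot)^2 * m_s / m_tot.

import math

def dilution_factor(f_s, f_tot, m_s, m_tot):
    # D_i from the mode frequencies and masses of the bare (s)
    # and coated (tot) disk
    return 1.0 - (f_s / f_tot) ** 2 * (m_s / m_tot)

def coating_loss(phi_tot, phi_s, f_s, f_tot, m_s, m_tot):
    # coating loss angle of one vibrational mode
    D = dilution_factor(f_s, f_tot, m_s, m_tot)
    return (phi_tot - phi_s * (1.0 - D)) / D

def loss_from_ringdown(f, tau):
    # loss angle from a measured ring-down time: phi = 1 / (pi * f * tau)
    return 1.0 / (math.pi * f * tau)
```

Note that the dilution factor is small for a thin coating on a thick disk, so small errors on the measured frequencies and masses propagate strongly into $\phi_c$.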
[Figure \[plot:Doping\]: (a) Ta$_2$O$_5$ coating loss angle as a function of TiO$_2$ doping; (b) loss angle versus frequency of doped and undoped Ta$_2$O$_5$ coatings.]

Optimization
============

Doping {#sec:doping}
------

The purpose of TiO$_2$ doping is to increase the Ta$_2$O$_5$ refractive index and to reduce its loss angle. Increasing the refractive-index contrast in the HR stack would make it possible to decrease the HR coating thickness at constant reflectivity. Fig. \[plot:Doping\]a shows the Ta$_2$O$_5$ coating loss as a function of doping. The current doping value in GW detectors is 18%, which yields a minimum loss but a refractive index only slightly higher than that of Ta$_2$O$_5$. As shown by Fig. \[plot:Doping\]b, the 18%-doped $\phi_{\text{TiO}_2\text{:Ta}_2\text{O}_5}$ is lower than $\phi_{\text{Ta}_2\text{O}_5}$ by $\sim$25%. Increasing the TiO$_2$ concentration will increase the TiO$_2$:Ta$_2$O$_5$ refractive index, while $\phi_{\text{TiO}_2\text{:Ta}_2\text{O}_5}$ for TiO$_2\geq$40% cannot be predicted and needs further investigation.

Deposition parameters {#sec:parameters}
---------------------

Fig. \[plot:SiO2\]a shows the coating loss of SiO$_2$ deposited by the GC and the Spector using their respective standard deposition parameters. It is clear that with different parameters the same material acquires different properties: the GC parameters yield a lower coating loss. As a further test, SiO$_2$ has been deposited in the Spector with the GC parameters. As Fig. \[plot:SiO2\]b shows, the Spector coating loss is then lower, but still higher than the GC coating loss, because of the different configuration of the coaters.
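The trade-off invoked above (a higher index contrast permits a thinner stack at constant reflectivity) can be illustrated with the textbook quarter-wave-stack admittance recursion. This lossless, normal-incidence sketch is not the transfer-matrix design procedure actually used for the coatings discussed here, and the index values in the usage check below are only indicative.

```python
# Idealized quarter-wave Bragg mirror: at the design wavelength, each
# high/low doublet multiplies the admittance seen from the incident
# medium by (n_high / n_low)^2, so reflectivity grows geometrically
# with the number of doublets and with the index contrast.

def quarter_wave_transmission(n_high, n_low, n_sub, n_doublets, n_in=1.0):
    Y = n_sub
    for _ in range(n_doublets):
        Y = Y * (n_high / n_low) ** 2
    R = ((n_in - Y) / (n_in + Y)) ** 2
    return 1.0 - R  # power transmission of the lossless stack
```

With indices close to those quoted for TiO$_2$:Ta$_2$O$_5$ and SiO$_2$, an 18-doublet stack already reaches ppm-level transmission in this idealization, and raising the high index reaches the same transmission with fewer doublets.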
[Figure \[plot:SiO2\]: loss angle versus frequency of SiO$_2$ coatings, (a) deposited by the GC and by the Spector with their respective standard parameters, (b) deposited by the Spector with the GC parameters; with power-law fits.]

Coating losses of Ta$_2$O$_5$ deposited using different coaters have different values before annealing, but converge toward a common limit value afterwards, as shown in Fig. \[plot:LossAngleTaO\], suggesting that annealing 'deletes' the deposition history of the sample. Material properties are listed in Table \[tab:fitresults\].
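The power-law parameters $a$ and $b$ quoted throughout can be obtained by an ordinary least-squares line fit in log-log space, since $\phi_c = a\cdot f^b$ is linear there. The sketch below uses invented data for illustration; it is not the fitting code used by the authors.

```python
# Fit phi = a * f^b by least squares on (log f, log phi):
# log(phi) = log(a) + b * log(f), so the slope is b and the
# intercept is log(a).

import math

def fit_power_law(freqs, phis):
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in phis]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b
```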
  ----------------- ----------------- ----------------- --------------- --------------- -----------------
                    Spector           Spector as GC     GC              Spector         DIBS
  $a~(10^{-4})$     $1.1\pm0.2$       $0.33\pm0.01$     $1.89\pm0.09$   $2.6\pm0.5$     $2.08\pm0.08$
  $b~(10^{-1})$     $-0.4\pm0.3$      $0.59\pm0.05$     $1.00\pm0.05$   $0.67\pm0.08$   $0.88\pm0.04$
  $n$               $1.474\pm0.005$   $1.468\pm0.005$   $2.03\pm0.02$   $2.08\pm0.02$   $2.014\pm0.005$
  $\rho~[g/cm^3]$   $2.34\pm0.01$     $2.33\pm0.05$     $7.34\pm0.07$   $7.5\pm0.1$     $6.9\pm0.2$
  ----------------- ----------------- ----------------- --------------- --------------- -----------------

  : Coating loss, fit parameters ($\phi_c = a\cdot f^b\medspace$), refractive index $n$ at $\lambda=1064$ nm and density $\rho$ of materials deposited in different coaters.[]{data-label="tab:fitresults"}

[Figure \[plot:AnnLosstime\]: loss angle and Raman spectra of SiO$_2$ and Ta$_2$O$_5$ coatings after annealing at constant temperature for increasing durations.]

[Figure \[plot:Anntemp\]: loss angle and Raman spectra of SiO$_2$ and Ta$_2$O$_5$ coatings after annealing at increasing temperatures for constant duration.]

[Figure \[plot:LossAngleTaO\]: loss angle versus frequency of Ta$_2$O$_5$ coatings deposited by different coaters.]

[Figure \[plot:Loss-struct\]: coating loss angle versus D$_2$ band area for different kinds of SiO$_2$.]

Post-deposition annealing
-------------------------

Annealing parameters are of fundamental importance for the purpose of reducing coating thermal noise. The problem is to find the optimal annealing temperature $T_a$ and duration $\Delta t$ while avoiding coating crystallization, which would increase optical loss by scattering and absorption. Fig. \[plot:AnnLosstime\] shows the effect of increasing $\Delta t$ at constant $T_a=$ 500 °C. SiO$_2$ loss decreases, and this behaviour has a structural counterpart. SiO$_2$ is composed of tetrahedral units arranged in rings of different size[@PhysRevB.50.118], and the area of the D$_2$ band near 600 cm$^{-1}$ is associated with the 3-fold ring population[@PhysRevLett.80.5145]. A correlation between coating loss and D$_2$ has been found, suggesting that SiO$_2$ loss increases with the 3-fold ring population[@granata2017correlated]. This correlation holds for different kinds of SiO$_2$, coating and bulk (Fig. \[plot:Loss-struct\]).
On the other hand, Ta$_2$O$_5$ loss does not change for $\Delta t\geq$ 10 h, and its structure evolves only for $\Delta t\leq$ 10 h. Fig. \[plot:Anntemp\] shows coating loss and structure for increasing $T_a$, at constant $\Delta t=$ 10 h. SiO$_2$ loss decreases and its structure evolves considerably. Surprisingly, crystallization occurs at $T_a=$ 1000 °C. For Ta$_2$O$_5$, the coating loss is roughly constant for $T_a>$ 500 °C and its structure does not change up to $T_a=$ 600 °C, when crystallization occurs.

[Figure \[plot:LossAngleSiN\]: coating loss angle versus frequency of IBS Si$_3$N$_4$ annealed at increasing temperatures, compared to TiO$_2$:Ta$_2$O$_5$.]

New material
============

TiO$_2$:Ta$_2$O$_5$ could be replaced by a material with lower mechanical loss and possibly a higher refractive index. Here silicon nitride (Si$_3$N$_4$) is proposed, which features a high refractive index[@Chao2017] and very low mechanical loss[@Liu2007]. Usually, Si$_3$N$_4$ is deposited by low-pressure chemical vapour deposition (LPCVD). However, LPCVD Si$_3$N$_4$ might suffer from hydrogen contamination and thickness-uniformity issues, which are not compatible with the stringent optical specifications required for GW detectors. Instead, IBS Si$_3$N$_4$ can be developed in the GC for deposition on large optics. Fig. \[plot:LossAngleSiN\] shows a comparison between TiO$_2$:Ta$_2$O$_5$ and IBS Si$_3$N$_4$, the latter annealed at different temperatures. Si$_3$N$_4$ loss decreases significantly at $T_a=$ 900 °C.
Thus, one could increase the annealing temperature of the entire HR stack to decrease also the SiO$_2$ loss, eventually reducing the coating loss of the whole HR stack.

Conclusions
===========

The coating materials of all present GW detectors have been extensively characterized, showing a frequency-dependent loss angle. These standard materials can be optimized in different ways. The first approach is to increase the TiO$_2$ content in TiO$_2$:Ta$_2$O$_5$. Another option is to work on the deposition parameters, in order to tune the optical and mechanical properties of the materials. In particular, the current GC configuration seems well suited to deposit low-loss SiO$_2$. In the case of Ta$_2$O$_5$, the effect of different deposition parameters is erased by the first few hours of post-deposition annealing. For 400 °C $<T_a<$ 600 °C and 10 h $<\Delta t<$ 50 h, the Ta$_2$O$_5$ loss shows little or no evolution and its structure is frozen in a stable configuration. In the case of SiO$_2$, the annealing parameters $T_a$ and $\Delta t$ have a significant impact on mechanical loss and coating structure. A correlation is found between the D$_2$ peak area in the Raman spectra, associated with the 3-fold ring population, and the mechanical loss. IBS Si$_3$N$_4$ is an interesting new candidate to replace TiO$_2$:Ta$_2$O$_5$ because of its low mechanical loss. Furthermore, Si$_3$N$_4$ can be annealed at a higher temperature than TiO$_2$:Ta$_2$O$_5$, reducing also the SiO$_2$ coating loss angle and thus the loss of the whole HR coating.

References {#references .unnumbered}
==========
--- abstract: 'Jointly identifying discrete and continuous factors of variability can help unravel complex phenomena. In neuroscience, a high-priority instance of this problem is the analysis of neuronal identity. Here, we study this problem in a variational framework by utilizing interacting autoencoding agents, designed to function in the absence of a prior distribution over the discrete variable and to reach collective decisions. We provide theoretical justification for our method and demonstrate improvements in terms of interpretability, stability, and accuracy over comparable approaches with experimental results on two benchmark datasets and a recent dataset of gene expression profiles of mouse cortical neurons. Furthermore, we demonstrate how our method can determine the neuronal cell types in an unsupervised setting, while identifying the genes implicated in regulating biologically relevant neuronal states.' author: - | Yeganeh M. Marghi, Rohan Gala, Uygar Sümbül\ Allen Institute, WA, USA\ `{yeganeh.marghi; rohang; uygars}@alleninstitute.org`\ bibliography: - 'reference.bib' title: Joint Learning of Discrete and Continuous Variability with Coupled Autoencoding Agents --- Introduction {#sec:Introduction} ============ Preliminaries {#sec:Preliminaries} ============= Coupled mix-VAE framework {#sec:Coupled mix-VAE Model} ========================= Experiments {#sec:Experiments} =========== Conclusion {#sec:Conclusion} ========== Broader Impact {#sec:Broader Impact .unnumbered} ============== We do not expect our work to have immediate societal impact. Longer term, we consider two different kinds of potential impact: (i) The basic statistical machine learning aspect of our work improves the field of unsupervised clustering and representation learning. It is important to emphasize that these algorithms are data-driven, and as such, are bound to capture the biases present in the training datasets.
We would like to warn against explicit and implicit sources of bias (e.g., gender, race, income) in the training examples. (ii) We expect that the computational tool studied in this paper will improve our understanding of the organization and function of biological systems, in particular the nervous system. Naturally, we are hopeful that our work will bring us closer to understanding how the brain works and the diseases of the nervous system. For example, many diseases are thought to have a cell type bias, where the progression of the disease may be reflected in the state of the cells. If the community finds our research useful, pharmaceutical companies may utilize it to develop drugs against certain diseases. In its current form, this is at best a remote possibility. While drug development would be a welcome outcome, we would like to warn against the fact that pharmaceutical productions are almost always protected by patents, which adversely affects the affordability and accessibility of drugs based on income and geographical location. Acknowledgements {#acknowledgements .unnumbered} ================ We wish to thank the Allen Institute for Brain Science founder, Paul G Allen, for his vision, encouragement and support.
, no. 1, 1-32 (2001) (also arXiv.org e-print archive, http://arXiv.org/abs/quant-ph/9906091)

[**On the theory of the anomalous photoelectric effect stemming from a substructure of matter waves** ]{}

[**Volodymyr Krasnoholovets**]{}

[Institute of Physics, National Academy of Sciences,\
Prospect Nauky 46, UA-03028 Kyïv, Ukraine\
(web page http://inerton.cjb.net)]{}

February 2000

The two opposite concepts – the multiphoton and the effective photon – that readily describe the photoelectric effect under strong irradiation, in the case where the energy of the incident light is essentially smaller than the ionization potential of gas atoms and the work function of the metal, are treated. Based on the submicroscopic construction of quantum mechanics developed in the author's previous papers, an analysis of the reasons for the discrepancies between the two concepts is carried out. Taking into account the main hypothesis of those works, i.e., that the electron is an extended object rather than a point-like one, the interaction between the electron and a photon flux is studied in detail. A comparison with numerous experiments is performed.

[**Key words:**]{} space, matter waves, inertons, laser radiation, photoelectric effect\
PACS: 03.75.-b Matter waves.  32.80.Fb Photoionization of atoms and ions.        42.50.Ct Quantum description of interaction of light and matter; related experiments

**Introduction**
================

The previous papers of the author \[1-3\] present a quantum theory operating at the scale $\sim 10^{-28}$ cm (the size at which all types of interactions combine, as required by the grand unification of interactions).
The theory takes into account such general directions as: the deterministic view on quantum mechanics pioneered by L. de Broglie and D. Bohm (see, e.g. Refs. 4,5), the search for a physical vacuum model in the form of a real substance (see, e.g. Refs. 6-11), the introduction in unified models of a kind of "superparticle" whose different states are the electron, muon, quark, etc. \[12\], and the model of the polaron in a solid, i.e., the notion that a moving charged carrier strongly interacts with a polar medium. The kinetics of a particle constructed in works \[1-3\] easily results in the Schrödinger and Dirac formalisms at the atomic scale. Besides, the developed theory could overcome the two main conceptual difficulties of standard nonrelativistic quantum theory. First, the theory advanced a mechanism which could naturally remove long-range action from nonrelativistic quantum mechanics. Second, the Schrödinger equation obtained in works \[1,2\] is Lorentz invariant owing to the invariant time entering the equation. The main distinctive property of the theory was the prediction of special elementary excitations of the space surrounding a moving particle. It was shown that space should always exhibit resistance to any canonical particle when it starts to move: the moving particle rubs against space, and such friction generates virtual excitations called "inertons" in papers \[1,2\]. Thus the inerton cloud around a moving particle can be identified with the volume of space $\mathfrak{V}$ that the canonical particle excites during its motion. In other words, the inerton cloud may be considered as a substructure of the matter waves, which are described by the wave $\psi$-function in the region of $\mathfrak{V}$. Nonetheless, the question arises whether one can reveal the cloud of inertons which accompanies a single canonical particle. As was deduced in Ref.
1, the amplitude of spatial oscillations of the inerton cloud, $\Lambda/\pi$, correlates with the amplitude of spatial oscillations of the particle, that is, the de Broglie wavelength of the particle $\lambda$: $$\Lambda = \lambda {\kern 1pt}c/v_0 \label{1}$$ where $v_0$ is the initial velocity of the particle and $c$ is the initial velocity of inertons (the velocity of light). If $v_0 \ll c$ then $\Lambda \gg \lambda$, and hence the disturbance of space in the form of the inerton cloud should appear in an extensive region around the particle. In this connection, the cloud of inertons may be detected, for instance, by applying a high-intensity luminous flux. To examine this assertion, let us turn to the experimental and theoretical results available on laser-induced gas ionization and on photoemission from a laser-irradiated metal.

**The two opposite concepts**
=============================

First reports on the experimental demonstration of laser-induced gas ionization at frequencies below the threshold appeared in the mid-1960s (Meyerand and Haught \[13\], Voronov and Delone \[14\], Smith and Haught \[15\] and others). Those works launched a detailed experimental and theoretical study of the new, unexpected phenomena. At present it would seem that the mechanism underlying these phenomena has been roughly understood. However, this is not the case: all the available materials can be subdivided into two different classes. Taking a critical view of the effect in which the photon energy of the incident light is essentially smaller than the ionization potential of atoms of rarefied noble gases and the work function of the metal, we shall turn to the two opposite standpoints excellently expounded in the reviews by Agostini and Petite \[16\] and Panarella \[17,18\]. At the same time it should be particularly emphasized that any improvement of the multiphoton theory is not the aim of the present work.
The author wishes only to show that something more fundamental is hidden behind the formalism of orthodox quantum mechanics that is employed as a basis for the study of matter irradiated by intense light.

*Multiphoton concept*
---------------------

The review paper by Agostini and Petite \[16\] analysed several tens of works exploiting the prevailing multiphoton theory. The multiphoton concept is based on the typical interaction Hamiltonian $${\hat H}_{\rm int}= -e{\hat {\vec z}}{\vec E}_0 \cos \omega t \label{2}$$ which specifies the interaction between the dipole moment $e \vec z$ of an atom and the incident electromagnetic field $\vec E ={\vec E}_0 \cos \omega t$. The concept starts from the standard time-dependent perturbation theory, Fermi \[19\], describing the probability per unit time of a transition of an atom from the bound state $|i>$ to a state $<{\rm c}|$ in the continuum. At the next stage the concept modifies the simple photoelectric effect to the nonlinear one (see, e.g. Keldysh \[20\] and Reiss \[21\]) in which the atom is ionized by the absorption of several photons. The $\cal N$th-order time-dependent perturbation theory changes the usual Fermi golden rule to $\cal N$-photon absorption, which produces the probability \[16\] $$w_{\cal N} = \frac{2\pi}{\hbar}\Bigl( \frac {2e^2}{\varepsilon_0 c} \Bigr)^{\cal N} \sum_{\rm c} \Big| \sum_{i, j, ..., k} \frac {<g|z|i><i|z|j>... <k|z|{\rm c}>}{(E_g + \hbar \omega - E_i)(E_g +2\hbar\omega - E_j)...} \Big|^2 \label{3}$$ where $|i>, \ |j>, \ ... ,\ |k>$ are the atomic states, $I$ is the intensity of the laser beam, and $|{\rm c}>$ are the continuum states with energy $E_g + \hbar \omega, \ E_g$ being the energy of the ground state $|g>$. The summation over intermediate states can be performed by several methods.
An estimation of the probabilities of multiphoton processes can be made utilizing the so-called generalized cross section \[16\] $$s_{\cal N}= 2\pi (8\pi \alpha)^{\cal N} r^{\kern 1pt 2 {\cal N}}{\kern 1pt}\omega^{-{\cal N}+1} \label{4}$$ where $r \sim 0.1$ nm is the effective atom radius and $\alpha =1/137$ is the fine structure constant. The Einstein law $E=\hbar \omega$ characterizing the simple photoelectric effect changes to the relation specifying the nonlinear photoelectric effect $$E_{\rm c}={\cal N} \hbar \omega - E_i. \label{5}$$ The $\cal N$-photon ionization rate (3) is proportional to $I^{\cal N}$. This prediction, as was pointed out by Agostini and Petite \[16\], has been verified experimentally up to ${\cal N}=22$ and with laser intensities up to $10^{15}$ W/cm$^2$. They noted that "$I$ must be below the saturation intensity to perform this measurement. When $I$ approaches $I_{\rm s}$, one must take into account the depletion of the neutral atom population, which modifies the intensity dependence of the ion number". It may be seen from the preceding that $I_{\rm s}\geq 10^{15}$ W/cm$^2$. At the same time we should note that the experiment does not point unambiguously to the $I^{\cal N}$ dependence. The experiment only demonstrates that in a log-log plot of $N_i$ versus light intensity $I$, where $N_i$ is the number of ionized gas atoms, all points are located along a straight line whose slope is proportional to $\cal N$. This was shown by Lompre [*et al.*]{} \[22\] for Xe, Kr, and Ar with an accuracy of about 2%. Such a result was interpreted \[22\] as the simultaneous absorption of $\cal N$ photons; the linear slope held up to $2 \times 10^{13}$ W/cm$^2$ and the maximum value was ${\cal N} = 14$.
The authors of the review \[16\] noted that the multiphoton theory was in good agreement with experiment until the experimental investigation (Martin and Mandel \[23\] and Boreham and Hora \[24\]) of the energy spectra of electrons ejected in the ionization of atoms; the kinetic energy of ejected electrons was far in excess of the prediction. Since then the multiphoton concept has advanced to the so-called above-threshold ionization (ATI). It replaced relationship (5) with $$E_{\rm c}=({\cal N}+S)\hbar \omega - E_i \label{6}$$ where $S$ is a positive integer. Several consequences were checked experimentally: branching ratios (Petite [*et al.*]{} \[25\] and Kruit [*et al.*]{} \[26\]), the intensity dependence, i.e., proportionality to $I^{{\cal N}+S}$ (Fabre [*et al.*]{} \[27\], Agostini [*et al.*]{} \[28\] and others \[16\]). An attempt to verify the nonlinear photoelectric effect on metals was undertaken by Farkas \[29\] (however, see below). During the last decade a number of further studies of the multiphoton ionization of atoms under ultra intensive laser radiation have been performed both experimentally and theoretically (see, e.g. review papers and monographs \[30-37\]). For example, papers of Avetissian [*et al.*]{} \[35,38\] deal with the relativistic theory of ATI of hydrogen-like atoms; at the same time the authors note that the idea of introducing the stimulated bremsstrahlung for the description of the photoelectron final state still remains a great problem for the ATI process. Besides, the definition of the dynamic wave function of an ejected electron remains problematic as well. Unfortunately the major deficiency of the ATI and more advanced models is the overly complicated expressions for the probability of photoelectron ejection. Such expressions need additional assumptions. Hence a distinguishing feature of the nonlinear multiphoton theory is the availability of a great many free parameters. 
Besides, all the recent experiments operate with extremely short laser pulses, which strike atoms rather than slowly exciting them. This has cast some suspicion on the application of the time dependent perturbation theory (nonrelativistic or relativistic) for the description of the ejection of photoelectrons from atoms in all cases. More likely femtosecond laser pulses create new effects which need new detailed studies (such as the scattering of electrons emitted from atoms immediately after ionization, which the eikonal approximation \[38\] tries to account for, etc.). Thus the results obtained with different lasers might be different as well. Below we will analyze only the pure multiphoton concept that became the starting point for the further complications; in other words we will treat the case of the adiabatic turning on of the electromagnetic perturbation. Notwithstanding the fact that the multiphoton methodology is widely recognized today, we should emphasize that it ignored some “subtle” experimental results obtained with the use of nanosecond and picosecond light/laser pulses in the 1960s and 1970s (perhaps assuming that such results were caused by indirect reasons). *Effective photon concept* -------------------------- In review papers Panarella \[17,18\] analysed about a hundred other experiments devoted to laser-induced gas ionization and laser-irradiated metals. Panarella explicitly described all the dramatic events connected with the construction of a reasonable mechanism which could explain unusual experimental data on the basis of standard concepts of quantum theory. Based on those experimental results he convincingly demonstrated the inconsistency of the generally accepted multiphoton methodology. In particular, Panarella studied the following series of experiments: 1) variation of the total number $N_i$ of ionized gas atoms as a function of the laser intensity $I_{\rm p}$ (see Refs. 17,18 and also Agostini [*et al.*]{} \[39\]). 
In a log-log plot the experimental points did not lie on a straight line, and the inflection point, for all gases studied, fell into the range approximately from $10^{12}$ to $10^{13}$ W cm$^{-2}$ at the laser wavelength 1.06 $\mu$m and from $10^{11}$ to $10^{12}$ W/cm$^2$ at the laser wavelength 0.53 $\mu$m (note that such an inflection point, as was mentioned in the previous subsection, should refer to the saturation intensity, whose value $I_{\rm s}$, however, is of the order of $10^{15}$ W/cm$^2$!); 2) variation of the total number $N_i$ as a function of time $t$ of the increase in intensity of the laser pulse (the experiment by Chalmeton and Papoular \[40\]); 3) variation of the breakdown intensity threshold against the gas density (see experiments by Okuda [*et al.*]{} \[41-43\]); 4) focal volume dependence of the breakdown threshold intensity (see, e.g. the experiment by Smith and Haught \[15\]); and others. All those experiments could not be explained in the framework of the multiphoton methodology. The multiphoton concept failed to interpret just the fine details revealed in the experiments. Among other things Panarella stressed that the experiment by Chalmeton and Papoular \[40\] was a crucial one. The cascade theory (see, e.g. Zel’dovich and Raizer \[44\]) was also untenable to explain a number of data (see Ref. 17). This theory conjectured that random free electrons with great energy were present in the gas and those electrons, along with newly formed electrons, generated other electrons; it was conceived that the optical field accelerated the electrons. Panarella analysed several other theoretical hypotheses which assumed the existence of higher-than-normal energy photons in the laser beam: the model based on quantum formalism, Allen \[45\], the model based on quantum potential theory, Dewdney [*et al.*]{} \[46,47\], and the model resting on the classical electromagnetic wave theory of laser line broadening, de Brito and Jobs \[48\] and de Brito \[49\]. 
The first two models operated with the Heisenberg uncertainty principle and the de Broglie-Bohm quantum potential respectively; it was expected that the deficient energy of a photon could appear due to some quantum effects. The last model suggested that the existence of separate high-energy photons in the laser beam might be stipulated by the laser line shape. Unfortunately the models could not explain the whole series of available experimental results. In contrast to those concepts, Panarella noted \[17\] that new physics should be present in the phenomena described above and proposed an effective photon theory \[17,18\]. He postulated that the photon energy expression $\varepsilon =h\nu$ had to be modified “ad hoc” into the novel one: $$\varepsilon = \frac {h\nu}{1-\beta_{\nu}f(I)} \label{7}$$ where $f(I)$ is a function of the light intensity and $\beta_{\nu}$ is a coefficient. In this manner Panarella’s theory holds that, at extremely high intensities of light, photon-photon interaction begins to play a significant role in the light beam, such that the photon energy becomes a function of the photon flux intensity. In developing the effective photon concept it was pointed out \[18\] that the number density of photons in the focal volume is much larger than ${\widetilde \lambda}^{-3}$ where $\widetilde \lambda$ is the wavelength of the laser's irradiated light. In this respect he came up with the proposal to reduce the photon wavelength in the focal volume. He assumed that it unquestionably followed from quantum electrodynamics that photons could not come any closer to each other than $\widetilde \lambda$. The effective photon concept satisfied all the available experimental facts mentioned above in this subsection. Moreover the concept was successfully applied to Panarella’s own first-class experiments on electron emission from a laser irradiated metal surface \[50,51,18\] and to other experiments (Pheps \[52\] and see also Refs. 17,18). 
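The qualitative content of relation (7) can be sketched numerically. Neither $f(I)$ nor $\beta_{\nu}$ is specified above, so the saturating form $f(I)=I/(I+I_0)$ and the parameter values below are placeholder assumptions, chosen only to display the behaviour the concept requires: $\varepsilon \to h\nu$ at low intensity and a strongly enhanced effective energy at high intensity.

```python
import math

# Panarella's effective photon energy, relation (7):
#   eps(I) = h*nu / (1 - beta_nu * f(I)).
# f(I) and beta_nu are not fixed in the text; the saturating form and the
# parameter values below are placeholder assumptions for illustration only.
H_NU = 1.17            # photon energy in eV (1.06 um light; assumed)
BETA = 0.9             # assumed coefficient beta_nu
I0 = 1e13              # assumed intensity scale, W/cm^2

def f(I):
    """Assumed saturating intensity function, 0 <= f(I) < 1."""
    return I / (I + I0)

def effective_photon_energy(I):
    return H_NU / (1.0 - BETA * f(I))

low = effective_photon_energy(1e10)    # essentially h*nu
high = effective_photon_energy(1e15)   # strongly enhanced
```

Any monotonic $f(I)$ bounded by $1/\beta_{\nu}$ would produce the same qualitative picture; the specific choice here has no experimental standing.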
Such a remarkable success of formula (7) gave rise to the confidence that some hidden reasons could be a building block for understanding the principles of effective photon formation \[18\]. An elementary consideration of photons and hence effective photons based on neutrinos has been constructed by Raychaudhuri \[53\]. Thus in this section we have given an objective account of facts and adduced two absolutely opposite views on the same phenomena. So we need to establish the reasons for the main discrepancies between the multiphoton and effective photon concepts and then develop an approach that would reconcile them. **Interaction between the photon flux and an electron’s inerton cloud** ======================================================================= First of all we need to briefly discuss such notions as the photon and the photon flux. To the question of what a photon is, quantum electrodynamics answers (see, e.g. Berestetskii [*et al.*]{} \[54\]): it is something that can be described by the equation $$c^{-2}{\kern 1pt}\partial^2 \vec A /\partial t^2 - \partial^2\vec A/\partial {\vec r}^2=0 \label{8}$$ where $\vec A$ is the vector potential that satisfies the condition $${\rm div} {\vec A}=0. \label{9}$$ The vector potential operator $\hat{\vec A}$ of the free electromagnetic field is constructed in such a way that each wave with a wavevector $\vec q$ corresponds to one photon with the energy $h\nu_{\vec q}$ in the volume $V$, that is, $\hat{\vec A}$ is normalized to $V$ in accordance with the formula (see, e.g. 
Davydov \[55,56\]) $${\hat {\vec A}}(\vec r, t) = \sum_{\vec q, \alpha} \Bigl( \frac {ch}{|\vec q| V}\Bigr)^{1/2} \ e^{i\vec q \vec r}{\kern 2pt} {\vec j}_{\alpha} \ (\vec q) \ \Bigl({\hat a}_{\vec q \alpha}(t) + {\hat a}^{\dagger}_{-\vec q \alpha} (t) \Bigr) \label{10}$$ where $c$ is the velocity of light, $h$ is Planck’s constant, $\vec q$ is the wave vector ($|\vec q| = 2 \pi/{\widetilde \lambda}$), ${\vec j}_{\alpha} (\vec q)$ is the unit vector of the $\alpha$th polarization, ${\hat a}^{\dagger}_{\vec q \alpha}$ (${\hat a}_{\vec q \alpha}$) is the Bose operator of creation (annihilation) of a photon, and $V$ is the volume containing the electromagnetic field. A pure particle formalism can also be applied to the description of the free electromagnetic field; in this case each of the particles – photons – has the energy $\varepsilon =h\nu$ and the momentum $\hbar{\vec k}$ (of magnitude $h\nu/c$). Just such a “photon language” is often more convenient. It permits one to consider a monochromatic electromagnetic field as a single mode which contains a number of photons. Now let us start by considering the origin of the disagreements between the two opposite concepts. It was a considerable success of the multiphoton concept that it incorporated $\cal N$ photons whose total energy was equal to the ionization potential of an atom, expression (5). A prerequisite for the construction of the concept was the supposition that there was a strong nonlinear interaction between a laser beam and a gas. [*Criticism*]{}: The multiphoton methodology does not take into account the threshold light intensity needed for gas ionization. The photoelectric effect, as such, is not investigated; the methodology only suggests that atoms of gas may be excited to the energy level (5) in the continuum. Besides, the methodology ignores the fact of the coherence of the electromagnetic field irradiated by the laser. 
At the same time the problem of electromagnetic radiation may be reduced to the problem of a totality of harmonic oscillators, ter Haar \[57\], which in the case of laser radiation must be regarded as coherent. This means that each of the $\cal N$ photons absorbed should enter on an equal footing, but using the $\cal N$th-order time dependent perturbation theory one adds photons successively. (The distinction between the incoherent and coherent electromagnetic field is akin, in some sense, to that between the normal and superconducting state of the same metal. Indeed in a superconductor electrons cannot be considered separately: all superconducting phenomena are caused by cooperative quantum properties of electrons. That is why, when describing superconducting phenomena, one should include the cooperation of electrons, for instance in the Meissner-Ochsenfeld effect.) The advantage of the effective photon concept is its flexibility in the analysis of experimental results. The concept assumed the existence of a threshold light intensity that launches the ionization of atoms of gas and the ejection of electrons from the metal. The effective photon was deduced from the assumption that there could not be more than one orthodox photon in a volume of space $\sim {\widetilde \lambda}^3$. Owing to the huge photon density in the laser pulse the concept conjectured that photons could interact with each other forming “effective photons” (7). The latter are absorbed as a whole and the absorption is a linear process, which is highly similar to the simple photoelectric effect. [*Criticism*]{}: Photons obey Bose-Einstein statistics, and this means that nothing prevents the volume $V$ from containing an enormous number of photons with the same energy $h \nu_{\vec q_0}$. In other words, the density of photons depends on the initial conditions of the electromagnetic field generation. 
In any event the statistics holds absolutely at the atomic (and even nuclear) scale, i.e., so long as the photon concentration in the pulse does not far exceed the concentration of atoms in a solid, $\sim 10^{23}$ cm$^{-3}$. (Note that a somewhat similar pattern is observed when the intensity of sound in a crystal is enhanced. In the original state acoustic phonons obey the Planck distribution, but when the ultrasound is switched on, the phonon density increases while the volume of the crystal remains the same.) Having described the ionization of atoms of gas and photoemission from a metal in terms of the submicroscopic approach \[1-3\], an attempt can be made to develop a theory of the anomalous photoelectric effect in which the electron’s widely spread inerton cloud simultaneously absorbs a number of coherent photons from the intensive laser pulse. Thus the theory will combine Panarella’s idea on the anomalous photoelectric effect and the idea of the multiphoton concept on the simultaneous absorption of $\cal N$ photons. We shall assume that in the first approximation atoms of gas and the metal may be considered as systems of quasi-free electrons. The Fermi velocity of $s$ and $p$ electrons in an atom is equal to (1-2)$\times 10^8$ cm/s. Setting $v_{\rm F}=v_0 \simeq 2\times 10^8$ cm/s one obtains $\lambda =h/mv_0 \simeq 0.36$ nm ($m$ is the electron mass) and then in accordance with relation (1) the amplitude of oscillations of the inerton cloud equals $\Lambda /\pi \simeq 17$ nm. The cloud has anisotropic properties: it extends over $\lambda $ along the electron path, that is, along the velocity vector $\vec v_0$, and over $2\Lambda/\pi$ in the transversal directions. 
This means that the cross section $\sigma$ of the electron together with its inerton cloud in the systems under consideration should satisfy the inequalities: $$\frac{\lambda^2}{4\pi}< \sigma < \frac{\Lambda^2}{\pi}, \ \ \ \ \ \ \ {\rm or } \ \ \ \ 10^{-16} \ {\rm cm}^2< \sigma < 1.7 \times 10^{-12} \ {\rm cm}^2; \label{11}$$ here one takes into account that the radius of the electron’s inerton cloud equals $\Lambda/\pi$. At the same time the cross-section of an atom is only $\sim 10^{-16}$ cm$^2$. The intensity of light in the (10-100)-psec focused laser pulses used for the study of gas ionization and photoemission from metals was of the order of $10^{12} - 10^{15}$  W/cm$^2$, that is, $10^{30} - 10^{33}$ photons/cm$^2$ per second. Dividing this intensity by the velocity of light one obtains the concentration of photons in the focal volume $n\simeq 3\times (10^{19} - 10^{22})$ cm$^{-3}$ and hence the mean distance between photons is $n^{-1/3}\simeq (30 - 3)$ nm. The number of photons bombarding the inerton cloud around an individual electron is $\sigma n^{2/3}$; this value can be estimated, in view of inequality (11), as $$\begin{aligned} &1&< \sigma n^{2/3} < 10^3 \ \ \ {\rm at} \ \ \ n \approx 3\times 10^{19} \ {\rm cm}^{-3} \ \ \ \ \ {\rm and} \nonumber \\ &1&< \sigma n^{2/3}< 10^5 \ \ \ {\rm at} \ \ \ n \approx 3 \times 10^{22} \ {\rm cm}^{-3}. \label{12} \end{aligned}$$ The next thing to do is to write the model interaction between the electron inerton cloud and the incident coherent light. In an ordinary classical representation the electron in the applied electromagnetic field is characterized by the energy $${\cal E}= \frac 1{2m}(\vec p - e \vec A)^2 \label{13}$$ where $\vec A$ is the vector potential of the electromagnetic field. This usually implies that the vector potential $\vec A$ in formula (13) relates to the field of one photon. 
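The order-of-magnitude estimates feeding inequalities (11) and (12) can be reproduced in a few lines. Relation (1) is not reprinted here, so the reading $\Lambda = \lambda c/v_0$ is an assumption inferred from the quoted value $\Lambda/\pi \simeq 17$ nm; the photon fluxes are those quoted in the text.

```python
import math

# Electron de Broglie wavelength and inerton-cloud amplitude (SI units).
h = 6.626e-34     # Planck constant, J s
m = 9.109e-31     # electron mass, kg
c = 3.0e8         # speed of light, m/s
v0 = 2.0e6        # v_F = 2e8 cm/s converted to m/s

lam = h / (m * v0)                       # ~0.36 nm, as quoted in the text
Lam_over_pi = lam * (c / v0) / math.pi   # ~17 nm, assuming Lambda = lam*c/v0

# Photon concentration in the focal volume: flux divided by c,
# for the fluxes 1e30 - 1e33 photons/cm^2/s quoted in the text.
def concentration(flux_per_cm2_s):
    return flux_per_cm2_s / 3.0e10       # photons per cm^3

n_lo = concentration(1e30)               # ~3e19 cm^-3
n_hi = concentration(1e33)               # ~3e22 cm^-3
```

The wavelengths come out at 0.36 nm and about 17 nm respectively, and the photon concentrations at roughly $3\times 10^{19}$ and $3\times 10^{22}$ cm$^{-3}$, in line with the values used above.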
This is confirmed by expression (11) and the supposition that the electron can be considered as a point on its classical trajectory. In the language of quantum theory this means that both the wave function of the electron and the wave function of the photon are normalized to one particle in the same volume $V$, Berestetskii [*et al.*]{} \[58\]. However, as follows from the analysis above, the electron jointly with its inerton cloud is an extended object. Because of this, it can interact with many photons simultaneously and the coupling function between the electron and the applied coherent electromagnetic field should be defined by the density of the photon flux. Therefore, contrary to the usual practice of using the approximation of single electron-photon coupling (13) in all cases, one can introduce the approximation of the strong electron-photon coupling $${\cal E}= \frac 1{2m} (\vec p - e {\vec A}_{\rm eff})^2 \label{14}$$ which should be correct in the case of simultaneous absorption/scattering of $\cal N$ photons by the electron. Thus in (14) $${\vec A}_{\rm eff} = \vec A {\cal N}, \ \ \ \ \ \ \ {\cal N}=\sigma n^{2/3}. \label{15}$$ In experiments involving noble gases discussed by Panarella \[17,18\] the laser pulse intensity had a triangular shape. We shall adopt the same profile. In other words, let the intensity vary over the duration $\Delta t$ of the laser pulse along the two equal sides of an isosceles triangle, that is, from $I=0$ at $t=0$ to the peak intensity $I=I_{\rm p}$ at $t=t_{\rm p}=\Delta t/2$ and then to $I=0$ at $t=\Delta t$. 
Thus ${\vec A}_{\rm eff}$ becomes time dependent; it can be presented in the form $${\vec A} _{\rm eff}(\vec r, t) = {\vec A}_{\rm p}{\kern 1pt} e^{i \vec k \vec r - i\omega t} {\cal N}(t) \label{16}$$ where ${\vec A}_{\rm p}$ is the vector potential of the electromagnetic field at the peak intensity of the pulse, $${\cal N}(t)= \sigma_{\rm th} n^{2/3}_{\rm th}\frac{t}{t_{\rm p}} \label{17}$$ is the number of photons absorbed by the electron, where $n^{2/3}_{\rm th}$ is the effective photon density per unit area at the threshold intensity of the laser pulse, when the energy of $\sigma_{\rm th} n^{2/3}_{\rm th}$ photons reaches the absolute value of the ionization potential of atoms or the work function of the metal, that is, $h\nu \sigma_{\rm th} n^{2/3}_{\rm th}= W$. As relation (17) indicates, the cross section $\sigma$ of the electron’s inerton cloud also depends on the threshold intensity; most probably $\sigma$ is not constant and depends on the velocity of the electron, the frequency of the incident light and the light intensity. The representation (16) is valid within the time interval $\Delta t/2$, that is, $t \in [0, t_{\rm p}]$. Hence passing on to the Hamiltonian operator of the electron in the intensive field one has $$\hat H =\frac{\hat {\vec p}^{\ 2}}{2m} - \frac{e}{m}{\vec A}_{\rm eff}(r,t) \ {\hat {\vec p}}; \label{18}$$ here we restrict ourselves to the linear field effect, much as is done in the theory of the simple photoelectric effect (see, e.g. Berestetskii [*et al.*]{} \[58\], Blokhintsev \[59\], and Davydov \[60\]). 
In the case of the simple photoelectric effect the Schrödinger equation for the electron $$i\hbar \frac{\partial \psi}{\partial t}=({\hat {\cal H}}+ {\hat {\cal W}} (\vec r,t) ) \psi \label{19}$$ contains the Hamiltonian operator ${\hat {\cal H}}$ of the electron in an atom (or the metal) and the interaction operator $${\hat {\cal W}}(\vec r, t) =-\frac{e}{m}\vec A (\vec r, t) \ {\hat {\vec p}} \label{20}$$ whose matrix elements are much smaller than those of the operator $\hat{\cal H}$. However in our case the matrix elements of the operator $${\hat {\cal W}}_{\rm eff}(\vec r, t)= - \frac {e}{m}{\vec A}_{\rm eff} (\vec r, t) \ {\hat {\vec p}} \label{21}$$ do not seem to be small due to the great value of ${\vec A}_{\rm p}$. Therefore, exploiting the perturbation theory, we should resort to a procedure that makes it feasible to extract a small parameter. Nonetheless, the necessary smallness is already built into the structure of the vector potential ${\vec A}_{\rm eff}(\vec r, t)$: the number of photons absorbed by the electron is a linear function of the duration of the growing intensity of the pulse \[see (17)\]. Consequently the interaction operator (21) can be safely used for $t\ll t_{\rm p}$. **Anomalous photoelectric effect** ================================== In the absence of the external field the Schrödinger equation $$i\hbar \frac {\partial \psi_0}{\partial t}={ \hat{\cal H}} \psi_0 \label{22}$$ which describes the electron (in an atom or metal) has the solution $$\psi_0 (\vec r, t) = e^{-i \frac{{\hat{\cal H}}}{\hbar}t} \psi_0 (\vec r, 0). \label{23}$$ Eq. (22) is transformed in the presence of the field to the equation $$i\hbar \frac {\partial \psi}{\partial t}= ({\hat {\cal H}}+ {\hat {\cal W}} _{\rm eff}(t))\psi. \label{24}$$ The $\psi$ function from (24) can be represented in the form (see, e.g. 
Fermi \[19\]) $$\psi (\vec r, t) =e^{- i\frac {\hat {\cal H}}{\hbar} t}\sum_l a_l (t) \psi_l (\vec r); \label{25}$$ here $a_l (t)$ are the coefficients of the eigenfunctions $\psi_l (\vec r)$. By substituting function (25) into Eq. (24), multiplying the resulting equation by $\psi^*_f (\vec r)$ on the left and then integrating over $\vec r$ one obtains $$i\hbar \frac { \partial a_f (t)}{\partial t} = \sum_l a_l(t)<f|{\hat {\cal W}}_{\rm eff}|l> e^{i \omega_{fl}t} \label{26}$$ where $\hbar \omega_{fl}= E_f - E_l$, $E_{f(l)}$ is the eigenvalue of Eq. (22) and the matrix element $$<f|{\hat {\cal W}}_{\rm eff}|l> = - \frac{e}{m}\int \psi^*_f \ {\vec A}_ {\rm eff}(\vec r, t) \ {\hat {\vec p}} \ \psi_l \ d \vec r. \label{27}$$ In the first approximation the coefficient equals $$a^{(1)}_{f} \simeq \frac 1{i \hbar} \int\limits^t_0 <f| {\hat {\cal W}}_ {\rm eff}| l> e^{i \omega_{fl} \tau} d \tau. \label{28}$$ The probability of the transition from the atomic state $E_l$ to the state of the ionized atom $E_f$ (or the probability of the ejection of the electron out of the metal) is given by the expression $$P(t) \equiv P_f(t)= |a^{(1)}_f(t)|^2, \label{29}$$ or in the explicit form $$P(t) = \Big| \frac 1{i\hbar} \frac {e}{m}<f|{\vec A}_{\rm p} {\hat {\vec p}}|l> \Big|^2 \ \ \Big| \int\limits^t_0{\cal N}(\tau) e^{i(\omega_{fl} - \omega) \tau} d \tau \Big|^2. \label{30}$$ The first factor in (30) is well known from the simple photoelectric effect, because it defines the probability of the electron transition from the atomic state $|l>$ to the free state $<f|$. This factor can be designated as $|M|^2$ and extracted from (30) in the explicit form (see, e.g. 
Blokhintsev \[59\]): $$\begin{aligned} |M|^2 \equiv \Big| \frac 1{i \hbar}<f|{\hat {\vec A}}_{\rm p} {\hat {\vec p}} |l> \Big|^2 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\ =16 \pi \ \frac{e^2 \hbar^2}{m^2 V} \ {\vec A}^{\kern 1.5pt 2}_{\rm p} \ \bigl( \frac Z{a_{\rm Bohr}} \bigr)^5 \ \bigl( \frac \hbar {\vec p}_{\rm free} \bigr)^6 \ \frac{\sin^2 \theta \cos^2 \phi}{(1 - \frac{v}{c}\cos \theta)^4}; \label{31}\end{aligned}$$ here $V$ is a normalizing volume, $a_{\rm Bohr}$ is Bohr’s radius, $Z$ is the charge number, ${\vec p}_{\rm free}$ is the momentum of the stripped electron. The last factor in (31) shows that the momentum ${\vec p}_{\rm free}$ falls within the solid angle $d \Omega$ ($v$ is the velocity of the free electron and $|{\vec p}_{\rm free}|=mv$). Taking into account that the vector potential $\vec A$ of the electromagnetic field is connected with the intensity $I$ of the field through the formulas $$I=\varepsilon_0 c^2 |\vec E|^2, \ \ \ \ \ \ \ \ \vec E =- \frac{\partial \vec A}{\partial t} = i \omega {\vec A}_{\rm p} e^{i (\omega t - \vec k \vec r)} \label{32}$$ we gain the relation $${\vec A}^{\kern 1.5pt 2}_{\rm p} = \frac 1{\varepsilon_0 c^2 \omega^2 } \ I_{\rm p}. \label{33}$$ The intensity $I_{\rm p}$ can be separated out of the matrix element (31), i.e., we can write $$|M|^2 = |{\cal M}|^2 \ I_{\rm p} \label{34}$$ where $$|{\cal M}|^2 =16 \pi \ \frac {e^2 \hbar^2} {\varepsilon_0 c^2 \omega^2 m^2 V} \ \bigl( \frac Z{a_{\rm Bohr}} \bigr)^5 \ \bigl( \frac \hbar {\vec p}_{\rm free} \bigr)^6 \ \frac{\sin^2 \theta \cos^2 \phi}{(1 - \frac{v}{c}\cos \theta)^4}. \label{35}$$ Now, expression (30) can be rewritten as $$P(t)=|{\cal M}|^2 \ I_{\rm p} \ |{\cal I}(t)|^2 \label{36}$$ where $$|{\cal I}(t)|^2 = \int\limits^t_0 {\cal N}^* (\tau) e^{-i (\omega_{fl} -\omega)\tau } d \tau \int\limits^t_0 {\cal N}(\tau) e^{i(\omega_{fl} -\omega)\tau } d \tau. 
\label{37}$$ Let us calculate the integral ${\cal I}(t)$: $$\begin{aligned} {\cal I}(t) = \int\limits^t_0 {\cal N} (\tau) e^{i(\omega_{fl} -\omega)\tau} d \tau = \frac {\sigma_{\rm th} n^{2/3}_{\rm th}}{t_{\rm p}} \int\limits^t_0 \tau e^{i(\omega_{fl}- \omega) \tau} d \tau \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\ = \frac{\sigma_{\rm th} n^{2/3}_{\rm th}}{t_{\rm p}} {\kern 1pt}[\frac {t}{i(\omega_{fl} - \omega)} {\kern 1pt}e^{i(\omega_{fl} - \omega)t} + \frac 1{(\omega_{fl} -\omega)^2} {\kern 1pt} (e^{i(\omega_{fl} - \omega)t} - 1) ]. \ \label{38}\end{aligned}$$ Substituting ${\cal I}(t)$ and ${\cal I}^*(t)$ into (37) one obtains $$\begin{aligned} |{\cal I}(t)|^2 = \frac{(\sigma_{\rm th} n^{2/3}_{\rm th})^2}{t^2_{\rm p} (\omega_{fl} -\omega)^2} {\kern 2pt}\{ t^2 &-& \frac {2t}{(\omega_{fl} -\omega)} {\kern 1pt}\sin ((\omega_{fl} - \omega) t) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\ &+& \frac 2{(\omega_{fl} -\omega)^2} {\kern 1pt} [1 - \cos ((\omega_{fl} - \omega) t)] \}. \ \ \ \ \ \ \ \label{39}\end{aligned}$$ In our case $\omega_{fl} - \omega = (E_f-E_l) / \hbar - \omega$ where $\omega =2\pi \nu$ and $\nu$ is the frequency of the incident light. As $\omega_{fl} \gg \omega$, one can put $\omega_{fl} - \omega \simeq \omega_{fl}$. Besides, we consider the approximation when $t \ll t_{\rm p}= \Delta t/2 \approx 10^{-8} - 10^{-7}$ s. Hence for a wide range of times (i.e., $\omega_{fl}^{-1} \ll t \ll t_{\rm p}$) the inequality $\omega_{fl} t \gg 1$ holds and expression (39) can be replaced by $$|{ \cal I} (t)|^2 \simeq (\frac {\sigma_{\rm th}n^{2/3}_{\rm th}}{\omega_{fl} {\kern 1pt}t_{\rm p}})^2 \ t^2. \label{40}$$ The frequency $\omega_{fl}$ in (40) can be eliminated by substituting the absolute value of the ionization potential of atoms (or the work function of the metal) $W$, that is, $\omega_{fl} \rightarrow W / \hbar$. 
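The passage from the exact integral (38) to the asymptotic form (40) can be verified numerically, with arbitrary illustrative values standing in for $\sigma_{\rm th} n^{2/3}_{\rm th}$, $t_{\rm p}$ and $\omega_{fl}-\omega$.

```python
import cmath

# Check (38) against direct numerical integration, and (40) as its large
# omega_fl*t limit. K stands for sigma_th * n_th^(2/3); all values arbitrary.
K, t_p, delta = 5.0, 1.0, 200.0   # delta plays the role of omega_fl - omega

def I_closed(t):
    """Closed form (38) for the integral I(t)."""
    e = cmath.exp(1j * delta * t)
    return (K / t_p) * (t * e / (1j * delta) + (e - 1.0) / delta**2)

def I_numeric(t, steps=50000):
    """Midpoint rule for (K/t_p) * integral_0^t of tau * e^{i*delta*tau}."""
    h = t / steps
    s = 0.0 + 0.0j
    for k in range(steps):
        tau = (k + 0.5) * h
        s += tau * cmath.exp(1j * delta * tau) * h
    return (K / t_p) * s

t = 0.5                                  # chosen so that delta*t >> 1
closed, numeric = I_closed(t), I_numeric(t)
asym = (K / (delta * t_p))**2 * t**2     # asymptotic |I(t)|^2 from (40)
```

The numerical quadrature reproduces the closed form (38), and for $\delta t = 100$ the magnitude squared already agrees with (40) to about one percent, confirming that the oscillatory terms of (39) are subleading.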
If we substitute (40) into (36), we finally get $$\begin{aligned} P(t)&=&| { \cal M } |^2 \ \Big( \frac {\hbar {\kern 2pt} \sigma_{\rm th}{\kern 1pt} n^{2/3}_{\rm th}} {W t_{\rm p}}\Big)^2 \ I_{\rm p} {\kern 1pt} t^2 \nonumber \\ &\equiv& |{\cal M}|^2 (\hbar {\cal N}/Wt_{\rm p})^2 I_{\rm p}{\kern 1pt} t^2. \label{41}\end{aligned}$$ In the case when the incident laser pulse may be considered as a perturbation that is not time dependent, the interaction operator (21) can be regarded as the constant $$\begin{aligned} {\hat W}_{\rm eff}&=&-\frac{e}{m}{\kern 1pt}{\vec A}_{\rm eff}(\vec r) {\kern 1pt}{\hat {\vec p}} \nonumber \\ &\equiv& -\frac{e}{m}{\cal N}{\vec A}_0 {\kern 1pt} e^{i \vec k \vec r} {\hat {\vec p}} \label{42}\end{aligned}$$ between the moments of cut-in and cut-off, and ${\hat W}_{\rm eff}=0$ outside the time interval $\Delta t$ corresponding to the duration of the laser pulse. Now, having the interaction operator (42), we can directly use the Fermi golden rule and obtain the probability of the anomalous photoelectric effect (compare with the theory of the simple photoelectric effect, e.g. Refs. 59, 60) $$P_0= \frac {2\pi}{\hbar}|{\cal M}|^2 {\cal N}^{\kern 1pt 2} I V m |{\vec p}_{\rm {\kern 1pt} free}| \ \Delta t \ d \Omega; \label{43}$$ here $|{\cal M}|^2$ is the matrix element defined above (35), ${\cal N}= \sigma n^{2/3}$ is the number of photons absorbed by an atom (or the metal) simultaneously, $I$ is the typical intensity of the laser pulse, $Vm|{\vec p}_{\rm free}|$ is the density of states ($V$ is the normalizing volume, $m$ is the electron mass and $|{\vec p}_{\rm free}|$ is the momentum of the stripped electron). Thus it is easily seen that the interaction between the laser pulse and gas atoms (or the metal) is not nonlinear. This is why the results to be expected from this new approach should correlate with the results predicted by the effective photon concept (7). 
**Discussion** ============== Let us apply the results obtained above to the experimental data used by Panarella \[17,18\] for the verification of the effective photon. Other experimental results are taken into account as well. We shall restrict our consideration to qualitative evaluations, which note only the general tendency in the behaviour of the systems in question. *Laser-induced gas ionization* ------------------------------ Let probability (41) describe the transition from the stationary state of an atom to the ionized state of the same atom. Multiplying both sides of expression (41) by the concentration $N_a$ of gas atoms found in the focal volume investigated, one gains the formula for the concentration $N_i$ of ionized atoms $$N_i= N_a \ |{\cal M}|^2 \ \Bigl( \frac {\hbar {\cal N}_{\rm th}} {W t_{\rm p}} \Bigr)^2 \ I_{\rm p} \ t^2. \label{44}$$ So, it is readily seen that $$N_i \propto N_a I_{\rm p} t^2, \label{45}$$ that is, the concentration $N_i$ of ionized atoms is directly proportional to the peak laser pulse intensity $I_{\rm p}$ and to the time squared. The time dependence of ionization before breakdown was analysed by Panarella \[17,18\] in the framework of the same formula (45), obtained by him using the effective photon. The experiment by Chalmeton and Papoular \[40\] showed that the evolution of free electrons knocked out of gas atoms, that is $d \ln N_e(t)/ d t$, is only a function of time. As the electron density $N_e(t)=N_i(t)$, following Refs. 17,18 we obtain from (45) (or (44)): $d \ln N_e(t) / dt =2/t$, in agreement with the experiment. [**5.1.2.**]{} The temporal dependence of the breakdown threshold intensity was studied by Panarella \[18\] with the aid of the same expression (45). At breakdown $N_i$=const, $I_{\rm p}$ is replaced by the threshold intensity $I_{\rm th}$ and the time interval $t$ is equal to the breakdown time $t_{\rm b}$. 
Hence in this case expression (45) gives $I_{\rm th}$=const$\times t^{-2}_{\rm b}$ or, according to formulas (32), $E_{\rm th}\propto t^{-1}_{\rm b}$. This expression agrees with the experiment by Pheps \[52\]. [**5.1.3.**]{} The experimental results on the number of ions created by the laser pulse as a function of the pulse intensity can also be described in terms of the anomalous photoelectric effect. For this purpose we should concentrate upon expression (43), which yields, after multiplying both sides by the concentration $N_a$ of gas atoms, $$N_i = {\rm const} \times N_a {\cal N}^{\kern 1pt 2} I. \label{46}$$ However, before proceeding to the verification of the theory, we should call attention to a process which is the reverse of the photoelectric effect. The case in point is the radiation recombination of an electron with a fixed ion, Berestetskii [*et al.*]{} \[61\]. The intensity $I$ of the laser pulse characterizes the density of electromagnetic energy per unit of time, that is, one can deem that $I$ is in inverse proportion to time. This enables the construction of a possible model describing the occupancy of states of ions and atoms in the presence of the strong laser irradiation. The processes of ionization of atoms and recombination of ions may be represented by the following kinetic equations: $$\dot N_a = \alpha N_a - \beta N_i + D; \label{47}$$ $$\dot N_i = \gamma N_i - \alpha N_a \label{48}$$ where the dot over $N_{a(i)}$ denotes differentiation with respect to the “time” variable $\tilde t \equiv 1/I$. Here $\alpha N_a$ and $\beta N_i$ represent the rates of ionization and restoration of gas atoms respectively, $\gamma N_i$ represents the rate of recombination of ions in the gas, and $D$ is the rate of irreversible decay of the atoms (it specifies the fraction of electrons which leave the gas studied). As the first approximation we can put $D=0$ and therefore $\gamma = \beta$. Such an approximation allows the following solution of Eqs. 
(47) and (48): $$N_a =N_{a0} \bigl( 1-e^{-(\alpha + \beta)/I} \bigr); \label{49}$$ $$N_i = N_{a0} \Bigl( \frac{\alpha}{\beta} - \frac{\alpha}{\alpha + 2\beta} e^{-(\alpha + \beta)/I} \Bigr) \label{50}$$ where $N_{a0}$ is the initial concentration of atoms of gas in the focal volume. Denote the parameter $(\alpha + \beta)$ by $I_m$, which may correspond to an intensity supporting the balance between ionization and recombination in the gas system studied. Then, substituting $N_a$ from the solution (49) into relation (46), we get the resulting expression governing the total number of ions $N_i$ as a function of the laser intensity $I$ and the number of absorbed photons $\cal N$: $$N_i = {\rm const} \times N_{a0} {\cal N}^{\kern 1pt 2} I \bigl( 1-e^{-I_m/I} \bigr). \label{51}$$ Expression (51) agrees in outline with Panarella’s \[17,18\] expression which he utilized to explain the total number of ions produced by the laser pulse (the experimental results by Agostini [*et al.*]{} \[39\]). In fact, when $I<I_m$, the exponential term can be neglected in (51) and in a log-log plot the number of ions versus the pulse intensity is proportional to the number of absorbed photons, that is $$\log N_i / \log I \propto {\cal N} \label{52}$$ and, hence, $N_i$ against $I$ is a straight line whose slope is $\cal N$ (see, e.g. the experimental results by Lompre [*et al.*]{} \[22\]). When $I>I_m$, the exponential term cannot be neglected and, therefore, the curve of $N_i$ versus $I$ must show an inflection point (probably at $I \simeq I_m$), in accord with the experimental results by Agostini [*et al.*]{} \[39\]. [**5.1.4.**]{} Expression (51) is able to explain the breakdown intensity threshold measured as a function of pressure or gas density. 
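The two regimes of expression (51) can be made explicit numerically. In the sketch below (with an illustrative value of $I_m$, not one fitted to the experiments), the intensity-dependent factor $I(1-e^{-I_m/I})$ grows linearly for $I \ll I_m$ and saturates at $I_m$ for $I \gg I_m$, which is the bending of the $N_i$ versus $I$ curve attributed above to the exponential term:

```python
import numpy as np

# The intensity-dependent factor of expression (51); I_m is an illustrative value.
I_m = 1.0e8

def factor(I):
    return I * (1.0 - np.exp(-I_m / I))

ratio_low = factor(1.0e5) / 1.0e5   # I << I_m: factor ~ I (linear regime)
ratio_high = factor(1.0e12) / I_m   # I >> I_m: 1 - exp(-I_m/I) ~ I_m/I, factor -> I_m
print(ratio_low, ratio_high)        # both ~1.0
```

The prefactor const$\times N_{a0}{\cal N}^2$ multiplies both regimes equally and is omitted here.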
If expression (51) is written in the form $$I_{\rm th} \simeq {\rm const} \times N^{-1}_{a0} \bigl(1+ e^{-I_m/I} \bigr) \label{53}$$ where $I_{\rm th}$ is the breakdown threshold intensity, the function $I_{\rm th}$ versus $N_{a0}$ indicates that $I_{\rm th} \propto N_{a0}^{-1 + \delta}$, where the value $\delta$ satisfies the inequalities $0< \delta < 1/2$. Such a variation of the parameter $\delta$ agrees satisfactorily with the experimental results by Okuda [*et al.*]{} \[41-43\] and their analysis carried out by Panarella \[17,18\]. [**5.1.5.**]{} The appearance of electrons released from atoms of gas at high energies (more than 100 eV at the laser intensity of $5\times 10^{14}$ W/cm$^2$, Agostini and Petite \[16\]) follows immediately from the theory constructed. Two possibilities may be realized. First of all, expressions (41) and (43) allow the kinetic energy of the released electrons to be larger than $h\nu \sigma_{\rm th}n_{\rm th}^{2/3}$ because, as is evident from inequalities (12), an electron’s inerton cloud can in principle absorb more photons, ${\cal N} + \Delta {\cal N} = \sigma n^{2/3}$, than is required for overcoming the threshold value ${\cal N}=\sigma_{\rm th} n_{\rm th}^{2/3}$. This is no surprise, since the anomalous photoelectric effect is a generalization of the simple one. In the theory of the simple photoelectric effect one can recognize the approximations $h \nu \geq W$ and $h \nu \gg W$. The first inequality can be related to the anomalous photoelectric effect considered above. The second one corresponds to the Born (adiabatic) approximation, Berestetskii [*et al.*]{} \[61\], and in the case of the anomalous photoelectric effect the inequality changes merely to $({\cal N} +\Delta {\cal N})h \nu \gg W$. Notice that this inequality is in agreement with formula (6) utilized by the multiphoton theory to account for the energy spectrum of electrons ejected in the ionization of atoms. 
At the same time the absorption of radiation by an accelerated electron (called the above-threshold ionization in Ref. 16) must not be ruled out. Actually, if the final state of a released electron is the state of a free electron in an electromagnetic field (the so-called “Volkov state” \[16\]), one may assume that the electron was stripped having a very small kinetic energy. Let the initial velocity $v_0$ of the electron released from an atom be several times smaller than the velocity of the electron in the atom, which we set equal to the Fermi velocity $v_{\rm F} \simeq 2\times 10^6$ m/s in Section 3. In such a case, as follows from relation (1) and inequalities (11), the electron excites the surrounding space over a significantly wider region than the Fermi electron does, and this is why the cross section of the excited region of space around the low-speed electron should be at least ten times greater than the cross section evaluated in Section 3. This means that our low-speed electron will immediately be scattered by more than ${\cal N} + 10$ photons of the laser beam and therefore its kinetic energy may reach a value of several tens of eV. *Electron emission from a laser-irradiated metal* -------------------------------------------------- The investigation of the photoelectric emission from a laser-irradiated metal performed experimentally by Panarella \[50,51,18\] has shown that: 1) the photoelectric current $i_e$ is linear in the light intensity $I$, $$i_e \propto I; \label{54}$$ 2) the maximum energy $\varepsilon_{\rm max}$ of the emitted electrons is a function of the light intensity $I$, $$\varepsilon_{\rm max}\propto f(I) \label{55}$$ and $\varepsilon_{\rm max}$ increases with $I$. The same dependence of $i_e$ and $\varepsilon_{\rm max}$ on $I$ is predicted by the effective photon theory \[18\] (note that the multiphoton methodology predicted that $i_e$ depends on $I$ to the power $\cal N$ and that $\varepsilon_{\rm max}$ depends only on the frequency $\nu$ of the light). 
Let us compare the results of the anomalous photoelectric effect theory developed above with the experimental results by Panarella (formulas (54) and (55)). In his experiments the light intensity $I$ changed from $\sim 10^6$ W/cm$^2$ to $\sim 10^9$ W/cm$^2$ from experiment to experiment. This value of $I$ is not very great and we can take into consideration the total power transferred during one pulse. By this is meant that the light intensity is assumed to be constant during the pulse. Therefore expression (43), $$P_0 = {\rm const} \times (\sigma n^{2/3})^2 I, \label{56}$$ can be used to evaluate the electron emission from the metal. Expression (56) was obtained using perturbation theory. In other words, the interaction energy ${\cal W}_{\rm eff} \equiv e{\vec A}_0 \vec p \sigma n^{2/3}/m$ that forms the perturbation operator (42) should be smaller than the absolute value of the work function $W$. In Panarella’s experiments the value of $W$ was about $10^{-18}$ J (i.e., approximately 6 eV). At $I=10^6 - 10^9$ W/cm$^2$ (i.e., $10^{24} - 10^{27}$ photons/cm$^2$ per second) one has $${\cal W}_{\rm eff}= (1.8 \times 10^{-22} - 5.7 \times 10^{-21})\times (\sigma n^{2/3})^2 \ \ [{\rm J}]. \label{57}$$ If we formally try to estimate the additional number of photons $\sigma n^{2/3}-1$ which pass their energy on to the electron that absorbed a single photon, we will find, taking into account inequality (11): $$\ \ \ \ \ \ \sigma n^{2/3} \ll 1 \ \ \ \ \ \ \ \ {\rm at} \ \ \ \ n \approx 3\times 10^{15} \ \ {\rm cm}^{-3}; \eqno(58a)$$ $$\ \ \ \ 1 < \sigma n^{2/3} < 2 \ \ \ \ \ \ \ {\rm at}\ \ \ \ \ n \approx 10^{18}\ \ {\rm cm}^{-3}. \eqno(58b)$$ Substituting $\sigma n^{2/3}$ from (58$b$) in expression (57), it is easily seen that the inequality $W\gg {\cal W}_{\rm eff}$ is not violated, that is, formula (56) can be applied to the study of anomalous electron emission from the metal. 
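The bracket in (57) can be cross-checked: since ${\cal W}_{\rm eff}$ is linear in the vector potential amplitude $\vec A_0$ and the intensity scales as $I \propto A_0^2$, the two quoted values should differ by a factor $\sqrt{10^9/10^6} \approx 31.6$. A short sketch of this consistency check (the work function is the value quoted in the text):

```python
import math

# Consistency of the numbers in (57): W_eff ~ A_0 and I ~ A_0**2, so W_eff ~ sqrt(I).
W_low, W_high = 1.8e-22, 5.7e-21     # bracket values of (57), in J
I_low, I_high = 1.0e6, 1.0e9         # intensities, W/cm^2
print(W_high / W_low, math.sqrt(I_high / I_low))   # ~31.7 vs ~31.6

# With sigma * n**(2/3) < 2 from (58b), the perturbative condition W >> W_eff holds:
W = 1.0e-18                          # work function quoted in the text, in J
margin = W / (W_high * 2.0**2)
print(margin)                        # ~44: W exceeds W_eff by more than an order of magnitude
```

The agreement of the two ratios to within a fraction of a percent supports the $\sqrt{I}$ scaling of the bracket values.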
Nonetheless, inequality (58$a$) is inconsistent with the experiment \[51,18\], which pointed to the presence of photoelectrons at the light intensity $I=10^6$ W/cm$^2$ ($n \approx 3\times 10^{15}$ cm$^{-3}$). One way around this problem is to take into account the large concentration of electrons $n_{\rm elec}$ in a metal. Indeed, $n_{\rm elec}\sim 10^{21}$ cm$^{-3}$, and consequently the mean distance between electrons is $n^{-1/3}_{\rm elec}\sim 1$ nm. Bearing in mind that, owing to relationship (1), the electron’s inerton cloud in the metal is characterized by the amplitude $\Lambda /\pi \simeq 17$ nm, one should supplement the parameter $\sigma$ with a correlation function $F(\Lambda, n^{1/3}_{\rm elec})$. The function can be chosen in the form $$F= \Bigl[\frac {\Lambda}{n^{-1/3}_{\rm elec}}\Bigr]^{\gamma}, \ \ \ \ \gamma > 0. \eqno(59)$$ The function (59) corrects inequality (58$a$). Hence expression (56) takes the form $$P_0={\rm const}\times (\sigma n^{2/3}F)^2 \times I \eqno(60)$$ and it can be used as long as $W \gg {\cal W}_{\rm eff}$. For large $I_{\rm p}$, when ${\cal W}_{\rm eff} \sim W$, expression (60) is also suitable, but only at the initial stage of the laser pulse (in this case the factor $t/t_{\rm p}$ should again be introduced into the right hand side of (60)). Note that in the case of rarefied gases the overlapping of inerton clouds of neighboring atoms begins at concentrations $n_{\rm atom} \geq 10^{17}$ cm$^{-3}$, for which the mean distance between atoms is $n^{-1/3}_{\rm atom} \sim 20$ nm. Comparing expressions (54) and (60) we notice that they agree: expression (60) describes the probability of the appearance of free electrons, and hence their current $i_e$ under an applied potential difference, as a linear function of $I$. The behaviour of emitted electrons described by expression (55) is consistent with the prediction of the present theory as well. Panarella \[51\] pointed out that the incident laser beam did not heat the metal specimen. 
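For the numbers quoted above, the correlation factor (59) is indeed an enhancement: the cloud amplitude $\Lambda$ exceeds the inter-electron spacing by a factor of about fifty, so $F>1$ for any $\gamma>0$. A small sketch (the sample values of $\gamma$ are arbitrary, since the text only requires $\gamma>0$):

```python
import math

# Correlation factor (59): F = (Lambda / n_elec**(-1/3))**gamma, with the
# numbers quoted in the text; gamma is a free positive exponent.
Lam = math.pi * 17.0e-9               # inerton-cloud amplitude: Lambda/pi ~ 17 nm
n_elec = 1.0e21 * 1.0e6               # 10^21 cm^-3 converted to m^-3
spacing = n_elec ** (-1.0 / 3.0)      # ~1 nm mean distance between electrons
F_values = {g: (Lam / spacing) ** g for g in (0.5, 1.0, 2.0)}
print(spacing, F_values)              # spacing ~1e-9 m; every F > 1
```

Since $F>1$, multiplying $\sigma$ by $F$ raises $\sigma n^{2/3}F$ above unity, which is how (59) repairs (58$a$).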
This statement is correct for the background temperature, i.e. the phonon temperature of the small specimen. However, the electron temperature should increase with the intensity of light; this is the well-known phenomenon of hot electrons (see, e.g. Refs. 62-64). The greater the light flux intensity, the greater the kinetic energy of the hot electrons in small metal specimens \[63,64\]. As a result, the work function $W$ of the specimen becomes a function of the intensity of light $I$:  $W$ falls as $I$ increases. Thus, expression (55) should also follow from the theory based on the inerton concept; the theory gives the explicit form of expression (55): $$\varepsilon_{\rm max}= {\cal N} h\nu - W(I) \eqno(61)$$ where $h\nu$ is the photon energy of the incident light, ${\cal N}$ is the threshold number of photons scattered by the electron’s inerton cloud and $W(I)$ is the work function depending on the intensity of light $I$. **Conclusion** ============== The present theory of the anomalous photoelectric effect has been successfully applied to numerous experiments in which the photon energy of the incident light is essentially smaller than the ionization potential of gas atoms and the work function of the metal. This theory is based on the submicroscopic quantum mechanics developed in the previous papers by the author \[1-3\]. Note that the ideas on the microstructure of space set forth in the author’s research are in excellent agreement with the recent construction of a mathematical space carried out by Bounias and Bonaly \[65\] and Bounias \[66\]. Space reveals its properties through the engagement of the particle with it. As a result, a cloud of inertons, that is, of elementary excitations of space, is created around the particle, and it is precisely these clouds enclosing electrons that were detected, by a high-intensity luminous flux, in the experiments mentioned above. 
It is evident that the clouds of inertons accompanying electrons were also detected in another series of experiments carried out by a large group of physicists, Briner [*et al.*]{} \[67\]. Their article is entitled “Looking at Electronic Wave Functions on Metal Surfaces” and contains colored spherical and elliptical figures, which the authors called “the images of $\psi$ wave functions of electrons”. However, the wave $\psi$-function is only a mathematical function that sets connections between parameters of the system studied, so the wave $\psi$-function cannot be observed in principle. This means that the researchers actually registered perturbations of the space surrounding the electrons in the metal, i.e., the clouds of inertons accompanying moving electrons. It is believed that small mobile deformations of space (inertons), which constitute a substructure of the matter waves, promise new interesting effects and phenomena \[68,69\]. At the same time, for the description of a whole series of phenomenological aspects of effects caused by highly intense laser radiation, in the case when the adiabatic approximation may be used, Panarella’s effective photon theory \[17,18\] is also suitable (the theory is similar to the phenomenological theory of propagation of electromagnetic waves in nonlinear media, see, e.g. Ref. 70). As follows from the analysis above, the effective photon methodology indeed specifies the effective photon density, or the number of photons absorbed by the electron’s inerton cloud (see expression (15)); therefore, the methodology allows the correct calculation of the photon energy absorbed by an atom of gas or an electron in the metal, and, as a rule, it is precisely this energy that is most significant for the majority of the problems studied. As for the nonlinear multiphoton concept, its basis should be altered to the linear one, that is, to the anomalous photoelectric concept developed herein. 
An important conclusion arising from the theory considered in the present work is that Ampère’s formula $\vec p - e \vec A$ is not universal. In the general case, when the intensity of the electromagnetic field is high, it should be replaced by the formula $\vec p - e {\cal N}\vec A$, where the vector potential $\vec A$ is normalized to one photon and $\cal N$ is the number of coherent photons simultaneously scattered/absorbed by the electron’s inerton cloud. In other words, for a highly intense electromagnetic field, one should use the approximation of strong electron-photon coupling (see expressions (14) and (15)). The submicroscopic approach is not only advantageous in the study of matter under strong laser irradiation. The approach provides a means of more sophisticated analysis of the nature of matter waves and the nature of light. Thereby such an analysis is able to give rise to radically new viewpoints on the structure of real space, the notions of particle and field, and their interaction. [**Acknowledgement**]{} I am very thankful to Dr. E. Panarella, who provided me with his reviews, which were used as a basis for the paper presented herein, and I would like to thank Prof. M. Bounias for the fruitful discussion concerning the background of the developed concept. And I am very thankful to Mrs. Gwendolin Wagner, who paid the page charge for the publication of the present work.  [**References**]{} 1. Krasnoholovets, V. and Ivanovsky, D. – Motion of a particle and the vacuum, [*Phys. Essays*]{} [**6**]{}, 554-563 (1993) (also arXiv.org e-print archive, http://arXiv.org/abs/quant-ph/9910023). 2. Krasnoholovets, V. – Motion of a relativistic particle and the vacuum, [*Phys. Essays*]{} [**10**]{}, 407-416 (1997) (also quant-ph/9903077). 3. Krasnoholovets, V. – On the nature of spin, inertia and gravity of a moving canonical particle, [*Ind. J. Theor. Phys.*]{} [**48**]{}, 97-132 (2000) (also quant-ph/0007027). 4. De Broglie, L. 
– Interpretation of quantum mechanics by the double solution theory, [*Ann. de la Fond. L. de Broglie*]{} [**12**]{}, 399-421 (1987). 5. Bohm, D. – A suggested interpretation of the quantum theory, in terms of “hidden” variables. I, [*Phys. Rev.*]{} [**85**]{}, 166-179 (1952); A suggested interpretation of the quantum theory, in terms of “hidden” variables. II, [*ibid.*]{} [**85**]{}, 180-193 (1952). 6. Rado, S. – [*Aethero-kinematics*]{}, CD-ROM (1994), Library of Congress Catalog Card, \# TXu 628-060 (also http://www.aethero-kinematics.com). 7. Aspden, H. – [*Aether science papers*]{}, Sabberton Publications, P. O. Box 35, Southampton SO16 7RB, England (1996). 8. Kohler, C. – Point particles in 2+1 dimensional gravity as defects in solid continua, [*Class. Quant. Gravity*]{} [**12**]{}, L11-L15 (1995). 9. Vegt, J. W. – A particle-free model of matter based on electromagnetic self-confinement (III), [*Ann. de la Fond. L. de Broglie*]{} [**21**]{}, 481-506 (1996). 10. Winterberg, F. – [*The Planck aether hypothesis. An attempt for a finistic theory of elementary particles*]{}, Verlag relativistischer Interpretationen – VRI, Karlsbad (2000). 11. Rothwarf, A. – An aether model of the universe, [*Phys. Essays*]{} [**11**]{}, 444-466 (1998). 12. Berezinskii, V. S. – Unified gauge theories and unstable proton, [*Priroda*]{} (Moscow), no. 11 (831), 24-38 (1984) (in Russian). 13. Meyerand, R. G., and Haught, A. F. – Gas breakdown at optical frequencies, [*Phys. Rev. Lett.*]{} [**11**]{}, 401-403 (1963). 14. Voronov, G. S., and Delone, N. B. – Ionization of xenon atom by electric field of ruby laser radiation, [*JETP Lett.*]{} [**1**]{}, no. 2, 42-45 (1965) (in Russian). 15. Smith, D. C., and Haught, A. F. – Energy-loss processes in optical-frequency gas breakdown, [*Phys. Rev. Lett.*]{} [**16**]{}, 1085-1088 (1966). 16. Agostini, P., and Petite, G. – Photoelectric effect under strong irradiation, [*Contemp. Phys.*]{} [**29**]{}, 57-77 (1988). 17. Panarella, E. 
– Theory of laser-induced gas ionization, [*Found. Phys.*]{} [**4**]{}, 227-259 (1974). 18. Panarella, E. – Effective photon hypothesis vs. quantum potential theory: theoretical predictions and experimental verification, in: [*Quantum uncertainties. Recent and future experiments and interpretations*]{}. NATO ASI. Series B 162, Physics, eds.: Honig, W. M., Kraft, D. W. and Panarella, E., Plenum Press, New York (1986), 237-269. 19. Fermi, E. – [*Notes on quantum mechanics*]{}, Mir, Moscow (1965), p. 211 (Russian translation). 20. Keldysh, L. V. – Ionization in the field of a strong electromagnetic wave, [*JETP*]{} [**47**]{}, 1945-1957 (1964) (in Russian). 21. Reiss, H. R. – Semiclassical electrodynamics of bound systems in intense fields, [*Phys. Rev. A*]{} [**1**]{}, 803-818 (1970). 22. Lompre, L. A., Mainfray, G., Manus, C., and Thebault, J. – Multiphoton ionization of rare gases by a tunable-wavelength 30-psec laser pulse at 1.06 $\mu$m, [*Phys. Rev. A*]{} [**15**]{}, 1604-1612 (1977). 23. Martin, E. A. and Mandel, L. – Electron energy spectrum in laser-induced multiphoton ionization of atoms, [*Appl. Opt.*]{} [**15**]{}, 2378-2380 (1976). 24. Boreham, B. W., and Hora, H. – Debye-length discrimination of nonlinear laser forces acting on electrons in tenuous plasmas, [*Phys. Rev. Lett.*]{} [**42**]{}, 776-779 (1979). 25. Petite, G., Fabre, F., Agostini, P., Crance, M., and Aymar, M. – Nonresonant multiphoton ionization of cesium in strong fields: angular distributions and above-threshold ionization, [*Phys. Rev. A*]{} [**29**]{}, 2677-2689 (1984). 26. Kruit, P., Kimman, J., and Van der Wiel, M. J. – Absorption of additional photons in the multiphoton ionization continuum of xenon at 1064, 532 and 440 nm, [*J. Phys. B*]{} [**14**]{}, L597-L602 (1981). 27. Fabre, F., Petite, G., Agostini, P., and Clement, M. – Multiphoton above-threshold ionization of xenon at 0.53 and 1.06 $\mu$m, [*J. Phys. B*]{} [**15**]{}, 1353-1369 (1982). 28. 
Agostini, P., Kupersztych, J., Lompre, L. A., Petite, G., and Yergeau, F. – Direct evidence of ponderomotive effect via laser pulse duration in above-threshold ionization, [*Phys. Rev. A*]{} [**36**]{}, 4111-4114 (1987). 29. Farkas, G. – in: [*Photons and continuum states of atoms and molecules*]{}, eds.: N. K. Rahman, C. Guidotti and M. Allegrini, Springer-Verlag, Berlin (1987), p. 36. 30. Fedorov, M. V. – [*An electron in strong light field*]{}, Nauka, Moscow (1991) (in Russian). 31. Mainfray, G., and Manus, C. – Multiphoton ionization of atoms, [*Rep. Prog. Phys.*]{} [**54**]{}, 1333-1372 (1991). 32. Mittleman, M. H. – [*Introduction to the theory of laser-atom interactions*]{}, Plenum, New York (1993). 33. Delone, N. B., and Krainov, V. P. – [*Multiphoton processes in atoms*]{}, Springer, Heidelberg (1994). 34. Delone, N. B., and Krainov, V. P. – Stabilization of an atom by the field of laser radiation, [*Usp. Fiz. Nauk*]{} [**165**]{}, 1295-1321 (1995) (in Russian). 35. Avetissian, H. K., Markossian, A. G., and Mkrtchian, G. F. – Relativistic theory of the above-threshold multiphoton ionization of hydrogen-like atoms in the ultrastrong laser fields, quant-ph/9911070. 36. Protopapas, M., Keitel, C. H., and Knight, P. L. – Atomic physics with super-high intensity lasers, [*Rep. Prog. Phys.*]{} [**60**]{}, 389 (1997). 37. Salamin, Y. I. – Strong-field multiphoton ionization of hydrogen: Nondipolar asymmetry and ponderomotive scattering, [*Phys. Rev. A*]{} [**56**]{}, 4910-4917 (1997). 38. Avetissian, H. K., Markossian, A. G., Mkrtchian, G. F., and Movsissian, S. V. – Generalized eikonal wave function of an electron in stimulated bremsstrahlung in the field of a strong electromagnetic wave, [*Phys. Rev. A*]{} [**56**]{}, 4905-4909 (1997). 39. Agostini, P., Barjot, G., Mainfray, G., Manus, C., and Thebault, J. – Multiphoton ionization of rare gases at 1.06 $\mu$m and 0.53 $\mu$m, [*IEEE J. Quant. Electr.*]{} [**QE-6**]{}, 782-788 (1970). 40. 
Chalmeton, V., and Papoular, R. – Emission of light by a gas under the effect of an intense laser radiation, [*Compt. Rend.*]{} [**264B**]{}, 213-216 (1967). 41. Okuda, T., Kishi, K., and Sawada, K. – Two-photon ionization process in optical breakdown of cesium vapor, [*Appl. Phys. Lett.*]{} [**15**]{}, 181-183 (1969). 42. Kishi, K., Sawada, K., Okuda, T., and Matsuoka, Y. – Two-photon ionization of cesium and sodium vapors, [*J. Phys. Soc. Jap.*]{} [**29**]{}, 1053-1061 (1970). 43. Kishi, K., and Okuda, T. – Two-photon ionization of alkali metal vapors by ruby laser, [*J. Phys. Soc. Japan*]{} [**31**]{}, 1289 (1971). 44. Zel’dovich, Ya. B., and Raizer, Yu. P. – Cascade ionization of a gas by a light pulse, [*JETP*]{} [**47**]{}, 1150-1161 (1964). 45. Allen, A. D. – A testable Noyes-like interpretation of Panarella’s effective-photon theory, [*Found. Phys.*]{} [**7**]{}, 609-615 (1977). 46. Dewdney, C., Garuccio, A., Kyprianidis, A., and Vigier, J. P. – The anomalous photoelectric effect: quantum potential theory versus effective photon hypothesis, [*Phys. Lett.*]{} [**105A**]{}, 15-18 (1984). 47. Dewdney, C., Kyprianidis, A., Vigier, J. P., and Dubois, A. – Causal stochastic prediction of the nonlinear photoelectric effects in coherent intersecting laser beams, [*Lett. Nuovo Cim.*]{} [**41**]{}, 177-185 (1984). 48. De Brito, A. L., and Jabs, A. – Line broadening by focusing, [*Can. J. Phys.*]{} [**62**]{}, 661-668 (1984). 49. De Brito, A. L. – Gas ionization by focused laser beams, [*Can. J. Phys.*]{} [**62**]{}, 1010-1013 (1984). 50. Panarella, E. – Experimental test of multiphoton theory, [*Lett. Nuovo Cim.*]{} [**3**]{}, Ser.2, 417-423 (1972). 51. Panarella, E. – Spectral purity of high-intensity laser beams, [*Phys. Rev. A*]{} [**16**]{}, 672-680 (1977). 52. Phelps, A. V. – Theory of growth of ionization during laser breakdown, in: [*Physics of quantum electronics*]{}, eds. P. L. Kelley, B. Lax and P. E. 
Tannenwald, McGraw-Hill Book Company, New York (1966), 538-547. 53. Raychaudhuri, P. – Effective photon hypothesis and the structure of the photon, [*Phys. Essays*]{} [**2**]{}, 339-345 (1989). 54. Berestetskii, V. B., Lifshitz, E. M., and Pitaevskii, L. P. – [*Quantum electrodynamics*]{}, Nauka, Moscow (1980), p. 28 (in Russian). 55. Davydov, A. S. – [*The theory of solids*]{}, Nauka, Moscow (1976), p. 350 (in Russian). 56. Davydov, A. S. – [*Quantum mechanics*]{}, Nauka, Moscow (1973), p. 374 (in Russian). 57. Ter Haar, D. – [*Elements of Hamiltonian mechanics*]{}, Nauka, Moscow (1973), p. 374 (Russian translation). 58. See Ref. 54, p. 231. 59. Blokhintsev, D. I. – [*Principles of quantum mechanics*]{}, Nauka, Moscow (1976), p. 407 (in Russian). 60. See Ref. 56, p. 472. 61. See Ref. 54, p. 242. 62. Anisimov, S. I., Imos, Ya. A., Romanov, G. S., and Khodyko, Yu. V. – [*Action of high-intensity radiation on metals*]{}, Nauka, Moscow (1970) (in Russian). 63. Tomchuk, P. M. – Electron emission from island metal films under the action of laser infrared radiation (theory), [*Izvestia Acad. Sci. USSR*]{}, Ser. Phys. [**52**]{}, 1434-1440 (1988) (in Russian). 64. Belotsky, E. D., and Tomchuk, P. M. – Electron-photon interaction and hot electrons in small metal islands, [*Surface Sci.*]{} [**239**]{}, 143-155 (1990). 65. Bounias, M. and Bonaly, A. – On mathematical links between physical existence, observability and information: towards a “theorem of something”, [*Ultra Scientist of Phys. Sci.*]{} [**6**]{}, 251-259 (1994);   Timeless space is provided by empty set, [*ibid.*]{} [**8**]{}, 66-71 (1996);  On metric and scaling: physical co-ordinates in topological spaces, [*Ind. J. Theor. Phys.*]{} [**44**]{}, 303-321 (1996);  Some theorems on the empty set as necessary and sufficient for the primary topological axioms of physical existence, [*Phys. Essays*]{} [**10**]{}, 633-643 (1997). 66. Bounias, M. 
– The theory of something: a theorem supporting the conditions for existence of a physical universe, from the empty set to the biological self, [*Int. J. Anticip. Syst.*]{} [**5-6**]{}, 1-14 (2000). 67. Briner, G., Hofmann, Ph., Doering, M., Rust, H. P., Bradshaw, A. M., Petersen, L., Sprunger, Ph., Laegsgaard, E., Besenbacher, F. and Plummer, E. W. – Looking at electronic wave functions on metal surfaces, [*Europhys. News*]{} [**28**]{}, 148-152 (1997). 68. Krasnoholovets, V., and Byckov, V. – Real inertons against hypothetical gravitons. Experimental proof of the existence of inertons, [*Ind. J. Theor. Phys.*]{} [**48**]{}, 1-23 (2000). 69. Krasnoholovets, V., and Lev, B. – Systems of particles with interaction and the cluster formation in condensed matter, [*Condens. Matt. Phys.*]{}, in press. 70. Vinogradova, M. V., Rudenko, O. V., and Sukhorukov, A. P. – [*The theory of waves*]{}, Nauka, Moscow (1979) (in Russian).
--- abstract: 'We study homogenization for a class of generalized Langevin equations (GLEs) with state-dependent coefficients and exhibiting multiple time scales. In addition to the small mass limit, we focus on homogenization limits, which involve taking to zero the inertial time scale and, possibly, some of the memory time scales and noise correlation time scales. The latter are meaningful limits for a class of GLEs modeling anomalous diffusion. We find that, in general, the limiting stochastic differential equations (SDEs) for the slow degrees of freedom contain non-trivial drift correction terms and are driven by non-Markov noise processes. These results follow from a general homogenization theorem stated and proven here. We illustrate them using stochastic models of particle diffusion.' address: - | Nordita, KTH Royal Institute of Technology and Stockholm University\ Roslagstullsbacken 23\ SE-106 91 Stockholm\ Sweden - 'Department of Mathematics and Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721-0089, USA' - 'ICFO - Institut de Ciències Fotòniques, The Barcelona Institute of Science and Technology, Av. Carl Friedrich Gauss 3, 08860 Castelldefels (Barcelona), Spain' - 'ICREA, Pg. Lluis Companys 23, 08010 Barcelona, Spain' author: - Soon Hoe Lim - Jan Wehr - Maciej Lewenstein bibliography: - 'ref1.bib' title: Homogenization for Generalized Langevin Equations with Applications to Anomalous Diffusion --- Introduction ============ Motivation ---------- Most of the mathematical models of diffusion phenomena use noise which is white (i.e. uncorrelated) or Markovian [@nelson1967dynamical]. The present paper is a step towards removing this limitation. The diffusion models studied here are driven by noises belonging to a wide class of non-Markov processes. A standard example of Markovian noise is a multidimensional Ornstein-Uhlenbeck process. 
An important class of Gaussian stochastic processes is obtained by linear transformations of multidimensional Ornstein-Uhlenbeck processes. The covariance (equal to the correlation in the case of zero mean) of such a process is a linear combination of exponentials, decaying and possibly oscillating on different time scales, and its spectral density (power spectrum) is a ratio of two positive semi-definite polynomials [@doob1953stochastic]. In cases when the polynomial in the denominator has degenerate zeros, the covariance contains products of exponentials and polynomials in time. This is a very general class of processes: every stationary Gaussian process whose covariance is a Bohl function (see Section 2) can be obtained as a linear transformation of an Ornstein-Uhlenbeck process in some (finite) dimension. In general, these processes are not Markov. Let us mention here the seminal result by L.A. Khalfin from 1957 [@khalfin1958contribution], who showed, quite generally, that in any system with energy spectrum bounded from below (which is a necessary condition for physical stability), correlations must decay no faster than according to a power law. To this day this result provides inspiration and motivation for further studies in the context of thermalization [@tavora2016inevitable], cooling of atoms in photon reservoirs [@lewenstein1993cooling], decay of metastable states as monitored by luminescence [@rothe2006violation], or the quantum anti-Zeno effect (cf. [@peres1980nonexponential; @lewenstein2000quantum]), to name a few examples. Khalfin’s result further motivates studying systems with non-Markovian noise, as most natural examples of strongly correlated processes do not satisfy the Markov property. While the noise processes studied here have exponentially decaying covariances, their class is very rich and they may be useful in approximating strongly correlated noises on the time intervals relevant for the studied phenomena [@siegle2011markovian]. 
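As a concrete (hypothetical, one-dimensional) illustration of this class, the sketch below simulates an exactly discretized Ornstein-Uhlenbeck process and compares its empirical autocorrelation with the exponential covariance $R(t)=e^{-\lambda|t|}$:

```python
import numpy as np

# Minimal sketch (not from the paper): exact discretization of the 1D OU process
# dx = -lam * x dt + sqrt(2*lam) dW, whose stationary covariance is exp(-lam*|t|).
rng = np.random.default_rng(0)
lam, dt, n = 1.0, 0.01, 400_000
a = np.exp(-lam * dt)                       # exact one-step decay factor
x = np.empty(n)
x[0] = rng.standard_normal()                # start in the stationary (unit-variance) law
kicks = rng.standard_normal(n - 1) * np.sqrt(1.0 - a * a)
for i in range(1, n):
    x[i] = a * x[i - 1] + kicks[i - 1]

acf = {k: float(np.mean(x[:-k] * x[k:]) / np.mean(x * x)) for k in (10, 100)}
for k, r in acf.items():
    print(k, r, np.exp(-lam * k * dt))      # empirical vs exponential covariance
```

Linear combinations of several such components (with different rates $\lambda$) produce the sums of decaying exponentials described above.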
In addition, as discussed in more detail later, a generalization of the method applied here may lead to a representation of a class of noises whose covariances decay as powers (see Remark \[rem\_inf\_dim\]). Also, the representation of the spectral density of the noise processes as a ratio of two polynomials is convenient in applications, in particular for solving the problem of predicting (in the least mean square sense) a colored noise process given observations on a finite segment of the past or on the full past [@doob1953stochastic]. Definitions and Models {#intro} ---------------------- We consider the following stochastic model for a particle (for instance, a Brownian particle or a tagged tracer particle) interacting with the environment (for instance, a heat bath or a viscous fluid). Let ${\boldsymbol}{x}_t \in {\mathbb{R}}^d$ denote the particle’s position, where $t \geq 0$ denotes time and $d$ is a positive integer. The evolution of the particle’s velocity, ${\boldsymbol}{v}_t := \dot{{\boldsymbol}{x}}_t \in {\mathbb{R}}^d$, is described by the following [*generalized Langevin equation (GLE)*]{}: $$m d{\boldsymbol}{v}_t = {\boldsymbol}{F}_0 \left(t,{\boldsymbol}{x}_t,{\boldsymbol}{v}_t,{\boldsymbol}{\eta}_t \right)dt + {\boldsymbol}{F}_1\left(t, \{{\boldsymbol}{x}_s,{\boldsymbol}{v}_s\}_{s \in [0,t]}, {\boldsymbol}{\xi}_t \right)dt + {\boldsymbol}{F}_e(t, {\boldsymbol}{x}_t)dt. \label{gle2}$$ In the above, $m>0$ is the particle’s mass, ${\boldsymbol}{\eta}_t$ is a $k$-dimensional Gaussian white noise satisfying $E[{\boldsymbol}{\eta}_t] = {\boldsymbol}{0}$ and $E[{\boldsymbol}{\eta}_t {\boldsymbol}{\eta}_s^*] = \delta(t-s){\boldsymbol}{I}$, and ${\boldsymbol}{\xi}_t$ is a colored noise process independent of ${\boldsymbol}{\eta}_t$. Here and throughout the paper, the superscript $^*$ denotes transposition of matrices or vectors, ${\boldsymbol}{I}$ denotes the identity matrix of appropriate dimension, $E$ denotes expectation, and ${\mathbb{R}}^+ := [0,\infty)$. 
The initial data are random variables, ${\boldsymbol}{x}_0 = {\boldsymbol}{x}$, ${\boldsymbol}{v}_0 = {\boldsymbol}{v}$, independent of $\{{\boldsymbol}{\xi}_t, t \in {\mathbb{R}}^+ \}$ and $\{{\boldsymbol}{\eta}_t, t \in {\mathbb{R}}^+ \}$. The three terms on the right hand side of the GLE model forces of different physical natures acting on the particle. - ${\boldsymbol}{F}_e$ is an external force field, which may be conservative (potential) or not. - ${\boldsymbol}{F}_0$ is a Markovian force of the form $${\boldsymbol}{F}_0\left(t, {\boldsymbol}{x}_t,{\boldsymbol}{v}_t, {\boldsymbol}{\eta}_t\right) dt = -{\boldsymbol}{\gamma}_0(t, {\boldsymbol}{x}_t){\boldsymbol}{v}_t dt + {\boldsymbol}{\sigma}_0(t, {\boldsymbol}{x}_t)d{\boldsymbol}{W}^{(k)}_t,$$ containing an instantaneous damping term and a multiplicative white noise term. The damping and noise coefficients, ${\boldsymbol}{\gamma}_0: {\mathbb{R}}^+ \times {\mathbb{R}}^d \to {\mathbb{R}}^{d \times d}$ and ${\boldsymbol}{\sigma}_0: {\mathbb{R}}^+ \times {\mathbb{R}}^d \to {\mathbb{R}}^{d \times k}$, may depend on the particle’s position and on time. ${\boldsymbol}{W}^{(k)}_t$ denotes a $k$-dimensional Wiener process, the time integral of the white noise ${\boldsymbol}{\eta}_t$. - ${\boldsymbol}{F}_1$ is a non-Markovian force of the form $$\label{nonM_force} {\boldsymbol}{F}_1\left(t, \{{\boldsymbol}{x}_s,{\boldsymbol}{v}_s\}_{s \in [0,t]},{\boldsymbol}{\xi}_t\right) = - {\boldsymbol}{g}(t, {\boldsymbol}{x}_t) \left( \int_{0}^{t} {\boldsymbol}{\kappa}(t-s) {\boldsymbol}{h}(s, {\boldsymbol}{x}_s) {\boldsymbol}{v}_s ds \right) + {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}_t) {\boldsymbol}{\xi}_t,$$ containing a non-instantaneous damping term, describing the delayed drag effects of the environment on the particle, and a multiplicative colored noise term. 
The coefficients, ${\boldsymbol}{g}: {\mathbb{R}}^+ \times {\mathbb{R}}^d \to {\mathbb{R}}^{d\times q}$, ${\boldsymbol}{h}: {\mathbb{R}}^+ \times {\mathbb{R}}^d \to {\mathbb{R}}^{q \times d}$ and ${\boldsymbol}{\sigma}: {\mathbb{R}}^+ \times {\mathbb{R}}^d \to {\mathbb{R}}^{d \times r} $, depend in general on the particle’s position and on time. In the above, $q$ and $r$ are positive integers, and the memory function ${\boldsymbol}{\kappa}: {\mathbb{R}}\to {\mathbb{R}}^{q \times q}$ is a real-valued function that decays sufficiently fast at infinity. ${\boldsymbol}{\xi}_t \in {\mathbb{R}}^{r}$ is a mean-zero stationary Gaussian vector process, to be defined in detail later. The statistical properties of the process ${\boldsymbol}{\xi}_t$ are completely determined by its (matrix-valued) [*covariance function*]{}, $${\boldsymbol}{R}(t):= E [{\boldsymbol}{\xi}_t {\boldsymbol}{\xi}^{*}_0] = {\boldsymbol}{R}^{*}(-t) \in {\mathbb{R}}^{r \times r},$$ or equivalently, by its [*spectral density*]{}, ${\boldsymbol}{\mathcal{S}}(\omega)$, i.e. the Fourier transform of ${\boldsymbol}{R}(t)$ defined as: $$\label{spec_form} {\boldsymbol}{\mathcal{S}}(\omega) = \int_{-\infty}^{\infty} {\boldsymbol}{R}(t) e^{-i\omega t} dt.$$ For simplicity, we have omitted other forces such as the Basset force [@grebenkov2013hydrodynamic] from Eqn. . Note that ${\boldsymbol}{F}_0$ and ${\boldsymbol}{F}_1$ describe two types of forces associated with different physical mechanisms. Of particular interest is the case when the noise terms in ${\boldsymbol}{F}_0$ and ${\boldsymbol}{F}_1$ model environments of different nature (a passive bath and an active bath, respectively [@dabelow2019irreversibility]) with which the particle interacts. As the name itself suggests, GLEs are generalized versions of the Markovian Langevin equations, frequently employed to model physical systems. 
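As an illustrative sanity check of this Fourier convention (a sketch, not from the paper), the scalar Ornstein–Uhlenbeck covariance $R(t) = e^{-\Gamma|t|}$ has spectral density $\mathcal{S}(\omega) = 2\Gamma/(\Gamma^2+\omega^2)$; the value $\Gamma = 1.5$ and the grid sizes below are arbitrary choices:

```python
import numpy as np

# Illustrative scalar check (not from the paper): for the Ornstein-Uhlenbeck
# covariance R(t) = exp(-Gamma*|t|), the spectral density
# S(w) = \int R(t) e^{-i w t} dt  equals  2*Gamma / (Gamma^2 + w^2).
Gamma = 1.5                                   # arbitrary decay rate
t = np.linspace(-60.0, 60.0, 400001)          # wide grid; R decays fast
dt = t[1] - t[0]
R = np.exp(-Gamma * np.abs(t))

def spectral_density(omega):
    # R is even, so the Fourier transform reduces to a cosine transform
    return np.sum(R * np.cos(omega * t)) * dt

for w in [0.0, 0.7, 2.3]:
    analytic = 2.0 * Gamma / (Gamma**2 + w**2)
    assert abs(spectral_density(w) - analytic) < 1e-4
```

The same quadrature applies componentwise to a matrix-valued $\boldsymbol{R}(t)$, since the transform acts entrywise.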
A basic form of the GLEs was first introduced by Mori in [@mori1965transport] and subsequently used in numerous statistical physics models [@Kubo_fd; @toda2012statistical; @Zwanzig1973]. The studies of GLEs have attracted increasing interest in recent years. We refer to, for instance, [@mckinley2009transient; @lysy2016model; @siegle2010markovian; @hartmann2011balanced; @goychuk2012viscoelastic; @maes2013langevin; @lei2016data; @Wei2018; @2018superstatistical] for various applications of GLEs and [@Ottobre; @mckinley2018anomalous; @glatt2018generalized; @leimkuhler2018ergodic] for their asymptotic analysis. The main merit of GLEs from a modeling point of view is that they take into account the effects of memory and the colored nature of noise on the dynamics of the system. \[fdt\_rem\] In general, there need not be any relation between ${\boldsymbol}{\kappa}(t)$ and ${\boldsymbol}{R}(t)$, or any relation between the damping coefficients and the noise coefficients appearing in the formulas for ${\boldsymbol}{F}_0$ and ${\boldsymbol}{F}_1$. A particular but important case that we will revisit often in this paper is the case when a [*fluctuation-dissipation relation*]{} holds. In this case, ${\boldsymbol}{\gamma}_0$ is proportional to ${\boldsymbol}{\sigma}_0 {\boldsymbol}{\sigma}_0^*$, ${\boldsymbol}{h} = {\boldsymbol}{g}^*$, ${\boldsymbol}{g}$ is proportional to ${\boldsymbol}{\sigma}$ and (without loss of generality[^1]) ${\boldsymbol}{R}(t) = {\boldsymbol}{\kappa}(t)$. Studies of microscopic Hamiltonian models for open classical systems lead to GLEs of the form satisfying the above fluctuation-dissipation relation (see, for instance, Appendix A of [@LimWehr_Homog_NonMarkovian] or [@Cui18]). 
On another note, GLEs of the form are extended versions of the ones studied in our previous work [@LimWehr_Homog_NonMarkovian] – here the GLEs are generalized to include a Markovian force, in addition to the non-Markovian one, as well as explicit time dependence in the coefficients. As a motivation, we now provide and elaborate on examples of systems that can be modeled by our GLEs. An important type of diffusion, which has been observed in many physical systems, from charge transport in amorphous materials to intracellular particle motion in cytoplasm of living cells [@reverey2015superdiffusion], is [*ballistic diffusion*]{}. It is a subclass of anomalous diffusions and is characterized by the property that the particle’s long-time mean-squared displacement grows quadratically in time – in contrast to linear growth in usual diffusion. There are many different theoretical models of anomalous diffusion with diverse properties, coming from different physical assumptions; see [@metzler2014anomalous] for a comprehensive survey. In the following, we provide two GLE models that are employed to study such phenomena. Their properties will be studied in Section \[sect\_appl\], as an application of the results proven here. 
[*Two GLE models for anomalous diffusion of a free Brownian particle in a heat bath.*]{} \[ex\_mot\] A large class of models for diffusive systems is described by the system of equations (for simplicity, we restrict to one dimension): $$\begin{aligned} dx_t &= v_t dt, \label{mot1} \\ m dv_t &= - \left(\int_0^t \kappa(t-s) v_s ds\right) dt + \xi_t dt, \label{mot2}\end{aligned}$$ where $x_t, \ v_t \in {\mathbb{R}}$ are the position and velocity of the particle, $\kappa(t)$ is called the memory function, and $\xi_t$ is a mean-zero stationary Gaussian process.\ Two particular GLE models are described by -, with: - (M1) memory function of the bi-exponential form: $$\label{kappaeg} \kappa(t) = \frac{ \Gamma_2^2(\Gamma_2 e^{-\Gamma_2 |t|} - \Gamma_1 e^{-\Gamma_1 |t|})}{2(\Gamma_2^2-\Gamma_1^2)},$$ where the parameters satisfy $\Gamma_2 > \Gamma_1 > 0$, and $\xi_t$ has the covariance function $R(t)= \kappa(t)$ and thus the spectral density, $$\mathcal{S}(\omega) = \frac{ \Gamma_2^2 \omega^2}{(\omega^2+\Gamma_1^2)(\omega^2+\Gamma_2^2)}.$$ This model is similar to the one first introduced and studied in [@bao2003ballistic]. The noise with the above covariance function can be realized by the difference between two Ornstein-Uhlenbeck processes, with different damping rates, driven by the same white noise. Various properties as well as applications of GLEs of the form - were studied in [@bao2003ballistic; @bao2005harmonic; @siegle2010markovian]. - (M2) memory function of the form: $$\kappa(t) = \frac{1}{2}(\delta(t)-\Gamma_1 e^{-\Gamma_1 |t|}),$$ where $\Gamma_1 > 0$, and $\xi_t$ has the covariance function $R(t)= \kappa(t)$ and thus the spectral density, $$\mathcal{S}(\omega) = \frac{ \omega^2}{\omega^2+\Gamma_1^2}.$$ This model can be obtained from the one in (M1) by sending $\Gamma_2 \to \infty$ in the formula for $\kappa(t)$ in . Observe that the spectral densities in both models share the same asymptotic behavior near $\omega = 0$, i.e. 
$\mathcal{S}(\omega) \sim \omega^2$ as $\omega \to 0$, contributing to the enhanced diffusion (super-diffusion) of the particle with mean-squared displacement growing as $t^2$ as $t \to \infty$ [@siegle2010markovian]. See Proposition \[asympbeh\] for a precise argument. Other examples of systems that can be modeled by our GLEs are multiparticle systems with hydrodynamic interaction [@ermak1978brownian], active matter systems [@sevilla2018non], among others. Although our main results are applicable to these systems, we will not pursue their study here. Goals, Organization and Summary of Results of the Paper {#goaletc} ------------------------------------------------------- [**Goals of the Paper.**]{} We aim to derive homogenized models for a general class of GLEs (see Section \[sect\_gles\]), containing the examples (M1) and (M2) as special cases (see Corollary \[w1case\] and Corollary \[w2case\]). This will allow us to gain insights into the stochastic dynamics of such systems, including many systems that exhibit anomalous diffusion (see discussion in the paragraph before Example \[ex\_mot\]) – this is, in fact, the main motivation of the present paper. To the best of our knowledge, this is the first work that studies homogenization for GLE models describing anomalous diffusion. Given a GLE system, it is often desirable to work with simpler, reduced models that capture the essential features of its dynamics. To obtain satisfactory and optimal models, one needs to take into account the trade-off between the simplicity and accuracy of the reduced models sought after. Indeed, one may find that a reduced model, while simpler, fails to describe a system of interest in a physically correct way [@safdari2017aging]. Two successful reductions were carried out in [@hottovy2015smoluchowski] for the case ${\boldsymbol}{F}_1={\boldsymbol}{0}$ and in [@LimWehr_Homog_NonMarkovian] for the case ${\boldsymbol}{F}_0 = {\boldsymbol}{0}$. 
One of our main goals in this paper is to devise and study new homogenization procedures that yield reduced models retaining essential features of a more general class of models. This program is of importance for identification, parameter inference and uncertainty quantification of stochastic systems [@Picci2011_nicereview; @hall2016uncertainty; @lysy2016model; @lei2016data] arising in the studies of anomalous diffusion [@mckinley2009transient; @morgado2002relation], climate modeling [@gottwald2015stochastic; @majda2001mathematical] and molecular systems [@cordoba2012elimination], among others. There is an increasing amount of effort striving to implement this or related programs, starting from microscopic models [@picci1992stochastic], using various techniques [@givon2004extracting; @Pavliotis; @bo2016multiple; @froyland2016trajectory; @hartmann2011balanced], for different systems of interest in the literature. The derived effective SDE models will be of particular interest for modelers of anomalous diffusion.\ [**Organization of the Paper.**]{} The paper is organized as follows. We first present the application of the results obtained in the later sections (Section \[sect\_newsmallmlimit\] and Section \[sect\_newhomogcase\]) to study homogenization of generalized versions of the one-dimensional models (M1) and (M2) from Example \[ex\_mot\] in Section \[sect\_appl\]. Since these results are easier to state and require minimal notation to understand, we have chosen to present them as early as possible to demonstrate the value of our study to application-oriented readers. The later sections study an extended, multi-dimensional version of the GLEs in Section \[sect\_appl\]. In Section \[sect\_gles\] we introduce the GLEs to be studied and revisit them from the perspective of input-output stochastic dynamical systems exhibiting multiple time scales. In Section \[sect\_homogofGLE\], we discuss various ways of homogenizing GLEs. 
Following this discussion, we study the small mass limit of the GLEs in Section \[sect\_newsmallmlimit\]. We introduce and study novel homogenization procedures for a class of GLEs in Section \[sect\_newhomogcase\]. We state conclusions and make final remarks in Section \[sect\_conclusions\]. Relevant technical details and supplementary materials are provided in the appendix. In particular, we state a homogenization theorem for a general class of SDEs with state-dependent coefficients in Appendix \[sect\_generalhomogthm\]. The proof of this theorem is given in Appendix \[proof\_ch2\].\ [**Summary of the Main Results.**]{} For reader’s convenience, below we list (not in exactly the same order as the results appear in the paper) and summarize the main results obtained in the paper. - The first main result is Theorem \[newsmallm\]. It studies the small mass limit of the GLE described by -. It states that the position process converges, in a strong pathwise sense, to a component of a higher dimensional process satisfying an Itô SDE. The SDE contains non-trivial drift correction terms. We stress that, while being a component of a Markov process, the limiting position process itself is not Markov. This is in contrast to the nature of limiting processes obtained in earlier works, a difference which holds interesting implications from a physical point of view (recall the discussion after Eqn. ). Therefore, Theorem \[newsmallm\] constitutes a novel result, both mathematically and physically. - The second main result is Theorem \[compl\]. It describes the homogenized behavior of a family of GLEs (Eqns. -), parametrized by $\epsilon > 0$, in the limit as $\epsilon \to 0$. This limit is equivalent to the limit in which the inertial time scale, some of the memory time scales and some of the noise correlation time scales in the pre-limit system, tend to zero at the same rate. 
As in Theorem \[newsmallm\], the result here states that the position process converges, in a strong pathwise sense, to a component of a higher dimensional process satisfying an Itô SDE which contains non-trivial drift correction terms. Again, the limiting position process is non-Markov. However, the structure of the SDE is rather different from the one obtained in Theorem \[newsmallm\]. As discussed later, this result holds interesting consequences for systems exhibiting anomalous diffusion. - The third and fourth main results are Corollary \[w1case\] and Corollary \[w2case\]. These results specialize the earlier ones to one-dimensional GLE models, which are generalizations of (M1) and (M2), and follow from the earlier theorems. They give explicit expressions for the drift correction terms present in the limiting SDEs and therefore may be used directly for modeling and simulation purposes. Furthermore, we show that, in the important case where the fluctuation-dissipation relation (see Remark \[fdt\_rem\]) holds, the two corollaries are intimately connected. Recall that these results are going to be presented first in Section \[sect\_appl\]. - The last main result is Theorem \[mainthm\], on homogenization of a family of parametrized SDEs whose coefficients are state-dependent. These SDEs are variants of the ones studied in earlier works [@hottovy2015smoluchowski; @birrell2017small; @2017BirrellLatest]. In comparison with all the earlier studies, the state-dependent coefficients of the pre-limit SDEs - may depend on the parameter $\epsilon > 0$ (to be taken to zero) explicitly. Therefore, this result is new and not simply a minor generalization of earlier results. Moreover, it is important in the context of the present paper and is needed here to study various homogenization limits of GLEs, whose importance is evident in the discussions above. 
Application to One-Dimensional GLE Models {#sect_appl} ========================================= We first study the small mass limit of a one-dimensional GLE, which is a generalized version of the GLE in model (M2) of Example \[ex\_mot\], modeling super-diffusion of a particle in a heat bath. Our models are generalized in that the coefficients of the GLEs are state-dependent. For simplicity, we are going to omit the explicit time dependence in the damping and noise coefficients—but not in the external force. For $t \in {\mathbb{R}}^+$, $m>0$, let $x_t, v_t \in {\mathbb{R}}$ be the solutions to the equations: $$\begin{aligned} dx_t &= v_t dt, \label{one-dim-pos0} \\ m dv_t &= -g(x_t)\left(\int_0^t \kappa(t-s) h(x_s) v_s ds\right) dt + \sigma(x_t) \xi_t dt + F_e(t,x_t)dt, \label{one-dim-vel0}\end{aligned}$$ where $$\label{w2} \kappa(t) = \frac{\beta^2}{2} (\delta(t) - \Gamma_1 e^{-\Gamma_1 |t|}),$$ where $\Gamma_1 > 0$, and $\xi_t$ is the mean-zero stationary Gaussian process with the covariance function $R(t)=\kappa(t)$ and spectral density, $$\mathcal{S}(\omega) = \frac{\beta^2 \omega^2}{\omega^2+\Gamma_1^2}.$$ The initial data $(x,v)$ are random variables independent of $m$ and have finite moments of all orders. The following corollary describes the limiting SDE for the particle’s position obtained in the small mass limit of -. \[w1case\] Assume that for every $y \in {\mathbb{R}}$, $g(y), g'(y), h(y), h'(y)$, $\sigma(y)$ are bounded continuous functions in $y$, $F_e(t,y)$ is bounded and continuous in $t$ and $y$, and all the listed functions have bounded $y$-derivatives. 
Then in the limit $m \to 0$, the particle’s position, $x_t \in {\mathbb{R}}$, satisfying -, converges to $X_t$, where $X_t$ solves the following Itô SDE: $$\begin{aligned} dX_t &= \frac{2}{\beta^2 g h} F_e(t,X_t) dt - \frac{2}{\beta h} Y_t dt + S_1(X_t) dt + \frac{2 \sigma}{\beta g h} (Z_t dt + dW_t), \label{e1} \\ dY_t &= -\frac{\Gamma_1}{\beta g} F_e(t,X_t) dt + S_2(X_t) dt - \frac{\Gamma_1 \sigma}{g} (dW_t + Z_t dt), \label{e2} \\ dZ_t &= -\Gamma_1 Z_t dt - \Gamma_1 dW_t, \label{e3}\end{aligned}$$ where $$\begin{aligned} S_1(X) &= \frac{2}{\beta^2} \frac{\partial}{\partial X}\left(\frac{1}{g h} \right) \frac{\sigma^2}{g h}, \ \ \ \ S_2(X) = -\frac{\Gamma_1}{\beta} \frac{\partial}{\partial X}\left(\frac{1}{g} \right) \frac{\sigma^2}{g h}.\end{aligned}$$ Moreover, if in addition $g := \phi \sigma$, where $\phi > 0$, then the number of limiting SDEs reduces from three to two: $$\begin{aligned} dX_t &= \frac{2}{\beta^2 \phi^2 } \frac{\partial}{\partial X}\left(\frac{1}{\sigma h}\right) \frac{\sigma}{h} dt + \frac{2}{ \phi \sigma h \beta^2} F_e(t,X_t) dt - \frac{2}{\beta \phi h}U_t^{\phi} dt + \frac{2}{\beta \phi h} dW_t, \label{r1} \\ dU_t^\phi &= -\frac{\Gamma_1}{\beta \phi^2} \frac{\partial}{\partial X}\left(\frac{1}{\sigma}\right) \frac{\sigma}{h} dt - \frac{\Gamma_1}{ \beta \sigma} F_e(t,X_t) dt, \label{r2}\end{aligned}$$ where $U_t^\phi = \phi Y_t-Z_t$. The convergence is in the sense that for every $T>0$, $\sup_{t \in [0,T]} |x_t - X_t| \to 0$ in probability as $m \to 0$. 
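The limiting $(X_t, Y_t, Z_t)$ system above can be integrated with a standard Euler–Maruyama scheme. The sketch below is a minimal illustration under simplifying, hypothetical choices not taken from the paper: constant coefficients $g = h = \sigma \equiv 1$ (so $S_1 = S_2 = 0$), a confining force $F_e(t,x) = -x$, and $\beta = \Gamma_1 = 1$; note that a single Wiener increment drives all three equations:

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the limiting (X, Y, Z) system, under
# simplifying hypothetical choices: g = h = sigma = 1 (constants, hence
# S1 = S2 = 0), F_e(t, x) = -x, and beta = Gamma1 = 1.
rng = np.random.default_rng(0)
beta, Gamma1 = 1.0, 1.0
dt, nsteps = 1e-3, 20000
X, Y, Z = 1.0, 0.0, 0.0
for _ in range(nsteps):
    dW = np.sqrt(dt) * rng.standard_normal()  # one Wiener increment, shared
    Fe = -X
    dX = (2/beta**2)*Fe*dt - (2/beta)*Y*dt + (2/beta)*(Z*dt + dW)
    dY = -(Gamma1/beta)*Fe*dt - Gamma1*(dW + Z*dt)
    dZ = -Gamma1*Z*dt - Gamma1*dW
    X, Y, Z = X + dX, Y + dY, Z + dZ
```

The shared increment `dW` reflects that the three equations are driven by the same one-dimensional Wiener process $W_t$, which is essential for the correlation structure of the limit.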
We apply Theorem \[newsmallm\] by setting $d=1, d_2 = d_4 = 2$, $\alpha_1 = \alpha_3 = 0$, $\alpha_2 = \alpha_4 = 1$, ${\boldsymbol}{\gamma}_0 = \beta^2 g h/2$, ${\boldsymbol}{\sigma}_0 = \beta \sigma$, ${\boldsymbol}{h} = h$, ${\boldsymbol}{g} = g$, ${\boldsymbol}{\sigma} = \sigma$, ${\boldsymbol}{C}_2 = {\boldsymbol}{C}_4 = \beta$, ${\boldsymbol}{\Gamma}_2 = \Gamma_1$, ${\boldsymbol}{M}_2 {\boldsymbol}{C}_2^* = -\Gamma_1 \beta/2$, ${\boldsymbol}{\Gamma}_4 = \Gamma_1$, ${\boldsymbol}{\Sigma}_4 = -\Gamma_1$, and ${\boldsymbol}{F}_e = F_e$. The assumptions of Theorem \[newsmallm\] can be verified in a straightforward way and so the results of the corollary follow. We next specialize the result of Theorem \[compl\] to study homogenization of one-dimensional GLEs which are generalizations of the model (M1) in Example \[ex\_mot\]: for $t \in {\mathbb{R}}^+$, $m>0$, let $x_t, v_t \in {\mathbb{R}}$ be the solutions to the equations: $$\begin{aligned} dx_t &= v_t dt, \label{one-dim-pos} \\ m dv_t &= -g(x_t)\left(\int_0^t \kappa(t-s) h(x_s) v_s ds\right) dt + \sigma(x_t) \xi_t dt + F_e(t,x_t)dt, \label{one-dim-vel}\end{aligned}$$ where $$\label{w3} \kappa(t) = \frac{\beta^2 \Gamma_2^2(\Gamma_2 e^{-\Gamma_2 |t|} - \Gamma_1 e^{-\Gamma_1 |t|})}{2(\Gamma_2^2-\Gamma_1^2)},$$ with $\Gamma_2 > \Gamma_1 > 0$, and $\xi_t$ is the mean-zero stationary Gaussian process with the covariance function $R(t)=\kappa(t)$ and spectral density, $$\mathcal{S}(\omega) = \frac{\beta^2 \Gamma_2^2 \omega^2}{(\omega^2+\Gamma_1^2)(\omega^2+\Gamma_2^2)}.$$ The initial data $(x,v)$ are random variables independent of $\epsilon$ and have finite moments of all orders. For $\epsilon > 0$, we set $m = m_0 \epsilon$ and $\Gamma_2 = \gamma_2/\epsilon$ in -, where $m_0$ and $\gamma_2$ are positive constants. 
This gives the family of equations: $$\begin{aligned} dx^\epsilon_t &= v^\epsilon_t dt, \label{res_one-dim-pos} \\ m_0 \epsilon dv^\epsilon_t &= -g(x^\epsilon_t)\left(\int_0^t \kappa^\epsilon(t-s) h(x^\epsilon_s) v^\epsilon_s ds\right) dt + \sigma(x^\epsilon_t) \xi^\epsilon_t dt + F_e(t,x^\epsilon_t)dt, \label{res_one-dim-vel}\end{aligned}$$ where $$\label{w3_rescaled} \kappa^\epsilon(t) = \frac{\beta^2 \gamma_2^2(\frac{\gamma_2}{\epsilon} e^{-\frac{\gamma_2 }{\epsilon}|t|} - \Gamma_1 e^{-\Gamma_1 |t|})}{2(\gamma_2^2- \epsilon^2 \Gamma_1^2)},$$ and $\xi^\epsilon_t$ is the family of mean-zero stationary Gaussian processes with the covariance functions, $R^\epsilon(t) = \kappa^\epsilon(t)$.\ [**Discussion.**]{} We discuss the physical meaning behind the above rescaling of parameters. Recall that in the first case of Example \[ex\_mot\] (i.e. the model (M1)), the mean-square displacement of the particle grows as $t^2$ as $t \to \infty$ and therefore the above model describes a particle exhibiting super-diffusion. As $\epsilon \to 0$, the environment allows for more and more negative correlation and in the limit the covariance function consists of a delta-type peak at $t=0$ and a negative long tail compensating for the positive peak when integrated (see Figure \[fig1\] and also page 105 of [@toda2012statistical]). Indeed, $$\kappa^\epsilon(t) \to \kappa(t) := \frac{\beta^2}{2} (\delta(t)-\Gamma_1 e^{-\Gamma_1|t|})$$ as $\epsilon \to 0$. This is the so-called [*vanishing effective friction case*]{} in [@bao2005non]. The noise with the covariance function $\kappa^\epsilon(t)$ is called harmonic velocity noise, whereas the noise with the covariance function $\kappa(t)$ is the derivative of an Ornstein-Uhlenbeck process. The following corollary provides the homogenized model in the limit $\epsilon \to 0$ of -. 
\[w2case\] Assume that for every $y \in {\mathbb{R}}$, $g(y), g'(y), h(y), h'(y)$, $\sigma(y)$ are bounded continuous functions in $y$, $F_e(t,y)$ is bounded and continuous in $t$ and $y$, and all the listed functions have bounded derivatives in $y$. Then in the limit $\epsilon \to 0$, the particle’s position, $x^\epsilon_t \in {\mathbb{R}}$, satisfying -, converges to $X_t$, where $X_t$ solves the following Itô SDE: $$\begin{aligned} dX_t &= \frac{2}{\beta^2 g h} F_e(t,X_t) dt - \frac{2}{\beta h}Y_t dt + S_1(X_t) dt + \frac{2\sigma}{\beta g h } (dW_t + Z_t dt), \\ dY_t &= -\frac{\Gamma_1}{\beta g} F_e(t,X_t) dt + S_2(X_t) dt - \frac{\Gamma_1 \sigma}{g}(dW_t + Z_t dt), \\ dZ_t &= -\Gamma_1 Z_t dt - \Gamma_1 dW_t,\end{aligned}$$ where $g=g(X_t)$, $h = h(X_t)$, $\sigma=\sigma(X_t)$, $W_t$ is a one-dimensional Wiener process, and $$\begin{aligned} S_1 &= \frac{2}{\beta^2} \frac{\partial}{\partial X}\left(\frac{1}{gh}\right)\frac{\sigma^2}{gh} - \frac{\partial}{\partial X}\left(\frac{1}{h}\right) \frac{4 \sigma^2}{g(gh \beta^2+4m_0\gamma_2)} \nonumber \\ &\ \ \ \ + \frac{\partial}{\partial X}\left(\frac{\sigma}{gh}\right)\frac{4 \sigma}{ \beta^2 g h +4m_0\gamma_2}, \\ S_2 &= -\frac{\Gamma_1}{\beta}\frac{\partial}{\partial X}\left(\frac{1}{g}\right)\frac{\sigma^2}{g h} - \frac{\partial}{\partial X}\left(\frac{\sigma}{g}\right)\frac{2 \Gamma_1 \beta \sigma}{\beta^2 g h +4m_0\gamma_2}.\end{aligned}$$ Moreover, if in addition $g := \phi \sigma$, where $\phi > 0$, then the number of limiting SDEs reduces from three to two: $$\begin{aligned} dX_t &= \frac{2}{\beta^2 \phi^2 } \frac{\partial}{\partial X}\left(\frac{1}{\sigma h}\right) \frac{\sigma}{h} dt + \frac{2}{ \phi \sigma h \beta^2} F_e(t,X_t) dt - \frac{2}{\beta \phi h}U_t^{\phi} dt + \frac{2}{\beta \phi h} dW_t, \label{r1} \\ dU_t^\phi &= -\frac{\Gamma_1}{\beta \phi^2} \frac{\partial}{\partial X}\left(\frac{1}{\sigma}\right) \frac{\sigma}{h} dt - \frac{\Gamma_1}{ \beta \sigma} F_e(t,X_t) dt, \label{r2}\end{aligned}$$ 
where $U_t^\phi = \phi Y_t-Z_t$. The convergence is in the sense that for every $T>0$, $\sup_{t \in [0,T]} |x^\epsilon_t - X_t| \to 0$ in probability as $\epsilon \to 0$. Let $d=1$, $d_2 = d_4 = 2$ and denote the one-dimensional version of the variables, coefficients and parameters in Theorem \[compl\] by non-bold letters (for instance, $x_t$, $B_2$, $\Gamma_{2,2}$ etc.). Furthermore, set $B_2 = B_4 = \beta > 0$, $\gamma_{2,2}=\gamma_{4,2}=\gamma_2 > 0$ and $\Gamma_{2,1}=\Gamma_{4,1}=\Gamma_1$. Then it can be verified that the assumptions of Theorem \[compl\] hold and the results follow upon solving a Lyapunov equation. A few remarks on the contents of Corollary \[w2case\] follow. - The homogenized position process is non-Markov, driven by a colored noise process which is the derivative of the Ornstein-Uhlenbeck process. This behavior is expected in view of the asymptotic behavior of the rescaled memory function and spectral density as $\epsilon \to 0$. - Similarly to the small mass limit case considered earlier, the limiting equation for the particle’s position not only contains noise-induced drift terms but is also coupled to equations for other slow variables. Moreover, the limiting equations for these other slow variables also contain non-trivial correction terms – the [*memory induced drift*]{}. [**Relation between Corollary \[w1case\] and Corollary \[w2case\].**]{} The limiting SDE systems in Corollaries \[w1case\] & \[w2case\] are generally different because of the different correction drift terms $S_1$ and $S_2$. In other words, sending $\Gamma_2 \to \infty$ first in - and then taking $m \to 0$ of the resulting GLE does not, in general, give the same limiting SDE as taking the joint limit of $m \to 0$ and $\Gamma_2 \to \infty$. However, if one further assumes that $g$ is proportional to $\sigma$, then the limiting SDE systems coincide. 
An important particular case is when $g = h = \sigma$, in which case a fluctuation-dissipation relation holds and the GLE can be derived from a microscopic Hamiltonian model (see Remark \[fdt\_rem\]). In this case, the homogenized model described in both corollaries reduces to: $$\begin{aligned} dX_t &= \frac{2}{\beta^2 \sigma^2} F_e(t,X_t) dt - \frac{2}{\beta \sigma} U_t dt + \frac{2}{\beta^2}\frac{\partial}{\partial X}\left(\frac{1}{\sigma^2} \right) dt + \frac{2}{\beta \sigma} dW_t, \label{fdt1} \\ dU_t &= -\frac{\Gamma_1}{\beta \sigma} F_e(t,X_t) dt - \frac{\Gamma_1}{\beta} \frac{\partial}{\partial X}\left(\frac{1}{\sigma} \right) dt. \label{fdt2}\end{aligned}$$ To end this section, we remark that one could in principle repeat the above analysis for the case where the spectral density varies as $\omega^{2l}$, for $l=2,4,\dots$ (i.e. the highly nonlinear case) as well as extending the studies done so far in various other directions. To illustrate how non-trivial the calculations and results could become, we work out another example in Appendix \[anothereg\]. ![Plot of the memory function $\kappa(t)$ in with $\Gamma_1 = 1$, $\beta = 1$ for different values of $\Gamma_2$ (left) and the memory function in with $\Gamma_1 = 1$, $\Gamma_2 = 2$, $\beta = 1$ for different values of $\Gamma_3$ (right)[]{data-label="fig1"}](w3 "fig:"){width="48.00000%"} ![Plot of the memory function $\kappa(t)$ in with $\Gamma_1 = 1$, $\beta = 1$ for different values of $\Gamma_2$ (left) and the memory function in with $\Gamma_1 = 1$, $\Gamma_2 = 2$, $\beta = 1$ for different values of $\Gamma_3$ (right)[]{data-label="fig1"}](w5 "fig:"){width="48.00000%"} GLEs in Finite Dimensions {#sect_gles} ========================= We call a system modeled by GLE of the form a [*generalized Langevin system*]{}. Its dynamics will be referred to as [*generalized Langevin dynamics*]{}. We assume that the memory function ${\boldsymbol}{\kappa}(t)$ in the GLE is a [*Bohl function*]{}, i.e. 
that each matrix element of ${\boldsymbol}{\kappa}(t)$ is a finite, real-valued linear combination of exponentials, possibly multiplied by polynomials and/or by trigonometric functions. The noise process, $\{{\boldsymbol}{\xi}(t), t \in {\mathbb{R}}^+ \}$, is a mean-zero, mean-square continuous stationary Gaussian process with Bohl covariance function and, therefore, its spectral density ${\boldsymbol}{\mathcal{S}}(\omega)$ is a rational function (see Theorem 2.20 in [@trentelman2002control]). In this case, the generalized Langevin dynamics can be realized by an SDE system in a finite-dimensional space (see the next subsection for details). The case in which an infinite-dimensional space is required is deferred to a future work (see also Remark \[rem\_inf\_dim\] and Section \[sect\_conclusions\]). We recall a useful fact: given a rational spectral density ${\boldsymbol}{\mathcal{S}}(\omega) \in {\mathbb{R}}^{r \times r}$, there exists a rational function ${\boldsymbol}{G}(z) \in {\mathbb{C}}^{r \times l}$, called a [*spectral factor*]{}, such that ${\boldsymbol}{\mathcal{S}}(\omega) = {\boldsymbol}{G}(i\omega){\boldsymbol}{G}^{*}(-i\omega)$. We emphasize that such a factorization is not unique [@lindquist2015linear]. Generalized Langevin Systems ---------------------------- Below we define the memory function and the noise process in the GLE (see Eqn. ), and along the way introduce our notation. They are defined in a manner ensuring simplicity as well as providing sufficient parameters for matching the memory function and the correlation function of the noise, thereby preserving the essential statistical properties of the GLE. This provides a systematic framework for our homogenization studies (see the discussion in Section \[sect\_homogofGLE\]). For $i=1,2,3,4$, let ${\boldsymbol}{\Gamma}_i \in {\mathbb{R}}^{d_i \times d_i}$, ${\boldsymbol}{M}_i \in {\mathbb{R}}^{d_i \times d_i}$, ${\boldsymbol}{\Sigma}_i \in {\mathbb{R}}^{d_i \times q_i}$ be constant matrices. 
Also, let ${\boldsymbol}{C}_i \in {\mathbb{R}}^{q \times d_i}$ (for $i=1,2$) and ${\boldsymbol}{C}_i \in {\mathbb{R}}^{r \times d_i}$ (for $i=3,4$) be constant matrices. Here, the $d_i$ and $q_i$ ($i=1,2,3,4$) are positive integers. Let $\alpha_i \in \{0,1\}$ be a “switch on or off” parameter. We define the memory function in terms of the sextuple $({\boldsymbol}{\Gamma}_1,{\boldsymbol}{M}_1,{\boldsymbol}{C}_1;{\boldsymbol}{\Gamma}_2,{\boldsymbol}{M}_2,{\boldsymbol}{C}_2)$ of matrices: $$\label{memory_realized} {\boldsymbol}{\kappa}(t)= \alpha_1 {\boldsymbol}{\kappa}_1(t) + \alpha_2{\boldsymbol}{\kappa}_2(t) = \sum_{i=1}^2 \alpha_i {\boldsymbol}{C}_i e^{-{\boldsymbol}{\Gamma_i}|t|}{\boldsymbol}{M}_i{\boldsymbol}{C}_i^*.$$ The noise process is defined as: $$\label{noise} {\boldsymbol}{\xi}_t = \alpha_3 {\boldsymbol}{C}_3 {\boldsymbol}{\beta}^3_t + \alpha_4 {\boldsymbol}{C}_4 {\boldsymbol}{\beta}^4_t,$$ where the ${\boldsymbol}{\beta}^{j}_t \in {\mathbb{R}}^{d_j}$ ($j=3,4$) are independent Ornstein-Uhlenbeck type processes, i.e. solutions of the SDEs: $$\label{realize} d{\boldsymbol}{\beta}^j_t = -{\boldsymbol}{\Gamma}_j {\boldsymbol}{\beta}^j_t dt + {\boldsymbol}{\Sigma}_j d{\boldsymbol}{W}^{(q_j)}_t,$$ with the initial conditions, ${\boldsymbol}{\beta}^j_0$, normally distributed with mean-zero and covariance ${\boldsymbol}{M}_j$. Here, ${\boldsymbol}{W}^{(q_j)}_t$ denotes a $q_j$-dimensional Wiener process, independent of ${\boldsymbol}{\beta}^j_0$. Also, the Wiener processes ${\boldsymbol}{W}_t^{(q_3)}$ and ${\boldsymbol}{W}_t^{(q_4)}$ are independent. For $i=1,2,3,4$, ${\boldsymbol}{\Gamma}_i$ is [*positive stable*]{}, i.e. 
all eigenvalues of ${\boldsymbol}{\Gamma}_i$ have positive real parts and ${\boldsymbol}{M}_i = {\boldsymbol}{M}_i^* > 0$ satisfies the following Lyapunov equation: $${\boldsymbol}{\Gamma}_i {\boldsymbol}{M}_i+{\boldsymbol}{M}_i {\boldsymbol}{\Gamma}_i^*={\boldsymbol}{\Sigma}_i {\boldsymbol}{\Sigma}_i^*.$$ The ${\boldsymbol}{M}_i$ are therefore the steady-state covariances of the systems, i.e. the resulting Ornstein-Uhlenbeck processes are stationary. In control theory, ${\boldsymbol}{M}_i$ is also known as the [*controllability Gramian*]{} for the pair $({\boldsymbol}{\Gamma}_i, {\boldsymbol}{\Sigma}_i)$ [@trentelman2002control]. The covariance matrix, ${\boldsymbol}{R}(t)$, of the mean-zero Gaussian noise process is expressed by the sextuple $({\boldsymbol}{\Gamma}_3,{\boldsymbol}{M}_3,{\boldsymbol}{C}_3; {\boldsymbol}{\Gamma}_4,{\boldsymbol}{M}_4,{\boldsymbol}{C}_4)$ of matrices as follows: $$\label{cov} {\boldsymbol}{R}(t)=\alpha_3 {\boldsymbol}{R}_3(t)+ \alpha_4 {\boldsymbol}{R}_4(t) = \sum_{i=3}^4 \alpha_i {\boldsymbol}{C}_i e^{-{\boldsymbol}{\Gamma_i}|t|}{\boldsymbol}{M}_i{\boldsymbol}{C}_i^*,$$ and so the sextuple $({\boldsymbol}{\Gamma}_3,{\boldsymbol}{M}_3,{\boldsymbol}{C}_3;{\boldsymbol}{\Gamma}_4,{\boldsymbol}{M}_4,{\boldsymbol}{C}_4)$, together with the parameters $\alpha_3, \alpha_4$, completely determine the probability distributions of ${\boldsymbol}{\xi}_t$. We denote the spectral density of the noise process by ${\boldsymbol}{\mathcal{S}}(\omega) = \sum_{i=3,4}\alpha_i {\boldsymbol}{\mathcal{S}}_i(\omega)$, where ${\boldsymbol}{\mathcal{S}}_i(\omega)$ is the Fourier transform of ${\boldsymbol}{R}_i(t)$ for $i=3,4$. We will view the system - (which is in a statistical steady state) as a representation of the noise process ${\boldsymbol}{\xi}_t$ and call such a representation a (finite-dimensional) [*stochastic realization*]{} of ${\boldsymbol}{\xi}_t$. 
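As a minimal numerical sketch (with hypothetical matrices, not ones from the paper), the Lyapunov equation above can be solved by vectorizing both sides, after which one can check that the resulting ${\boldsymbol}{M}$ is indeed a symmetric positive definite steady-state covariance:

```python
import numpy as np

def lyap(Gam, Sig):
    """Solve Gam @ M + M @ Gam.T = Sig @ Sig.T by vectorization:
    vec(Gam M + M Gam^T) = (I (x) Gam + Gam (x) I) vec(M), column-major."""
    n = Gam.shape[0]
    A = np.kron(np.eye(n), Gam) + np.kron(Gam, np.eye(n))
    m = np.linalg.solve(A, (Sig @ Sig.T).flatten(order="F"))
    return m.reshape((n, n), order="F")

# Hypothetical 2x2 positive-stable Gamma and noise matrix Sigma
Gam = np.array([[2.0, 1.0], [0.0, 3.0]])
Sig = np.array([[1.0, 0.0], [0.5, 1.0]])
M = lyap(Gam, Sig)
assert np.allclose(Gam @ M + M @ Gam.T, Sig @ Sig.T)  # Lyapunov equation
assert np.allclose(M, M.T)                            # symmetric
assert np.all(np.linalg.eigvalsh(M) > 0)              # positive definite
```

In practice one could equally use `scipy.linalg.solve_continuous_lyapunov`; the vectorization above is shown only to keep the sketch dependency-free.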
Similarly, we view \[memory\_realized\] as a representation of the memory function ${\boldsymbol}{\kappa}(t)$ and call such a representation a (finite-dimensional, deterministic) [*memory realization*]{} of ${\boldsymbol}{\kappa}(t)$. We call the Fourier transforms of ${\boldsymbol}{\kappa}(t)$ and ${\boldsymbol}{R}(t)$ the [*spectral density of the memory function*]{} and the [*spectral density of the noise process*]{}, respectively. An important message from stochastic realization theory is that the system \[noise\]-\[realize\] is more than a representation of ${\boldsymbol}{\xi}_t$ in terms of a white noise, in that it also contains state variables ${\boldsymbol}{\beta}^j$ ($j=3,4$) which serve as a “dynamical memory”. In contrast to standard treatments, this dynamical memory comes not from one, but from two independent systems of type \[realize\]. This will be used to include two distinct types of dynamical memory that can be switched on or off using the parameters $\alpha_i$ – see Proposition \[asympbeh\]. This consideration motivates us to define the memory function (and noise) explicitly using two independent systems, with constraints on their parameters that are easier to state than if a single higher-dimensional system were used. The sextuples that define the memory function in \[memory\_realized\] and the noise process in \[noise\] are unique only up to the following transformations: $$\label{transf_realize} ({\boldsymbol}{\Gamma}'_i={\boldsymbol}{T}_i {\boldsymbol}{\Gamma}_i {\boldsymbol}{T}^{-1}_i, {\boldsymbol}{M}_i' = {\boldsymbol}{T}_i {\boldsymbol}{M}_i {\boldsymbol}{T}_i^{*}, {\boldsymbol}{C}'_i = {\boldsymbol}{C}_i {\boldsymbol}{T}_i^{-1}),$$ where $i=1,2,3,4$ and ${\boldsymbol}{T}_i$ are any invertible matrices of appropriate dimensions [@lindquist2015linear]. Different choices of ${\boldsymbol}{T}_i$ correspond to different coordinate systems.
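The invariance of the realized kernel under \[transf\_realize\] is easy to confirm numerically. The sketch below (one exponential component with arbitrary illustrative matrices) checks that a random change of coordinates ${\boldsymbol}{T}$ leaves ${\boldsymbol}{C} e^{-{\boldsymbol}{\Gamma}|t|}{\boldsymbol}{M}{\boldsymbol}{C}^*$ unchanged:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d, q = 3, 2

A = rng.standard_normal((d, d))
Gamma = A @ A.T + np.eye(d)          # positive stable
B = rng.standard_normal((d, d))
M = B @ B.T + np.eye(d)              # M = M^* > 0
C = rng.standard_normal((q, d))

def kappa(t, G, Mm, Cc):
    # one exponential component of the memory function: C e^{-G|t|} M C^*
    return Cc @ expm(-G * abs(t)) @ Mm @ Cc.T

# change of coordinates (Gamma', M', C') = (T Gamma T^-1, T M T^*, C T^-1)
T = rng.standard_normal((d, d)) + 3.0 * np.eye(d)   # generically invertible
Tinv = np.linalg.inv(T)
Gamma_p, M_p, C_p = T @ Gamma @ Tinv, T @ M @ T.T, C @ Tinv

for t in (0.0, 0.3, 1.7):
    assert np.allclose(kappa(t, Gamma, M, C), kappa(t, Gamma_p, M_p, C_p))
```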
Realization of the memory function and noise process in terms of the matrix sextuples, as defined above, covers all GLEs driven by Gaussian processes that can be realized in a finite dimension (see the propositions and theorems on pages 303-308 of [@willems1980stochastic]). See also the remarks on the subject in [@LimWehr_Homog_NonMarkovian]. A summary of the above discussion is included in the following: \[ass\_bohl\] The memory function ${\boldsymbol}{\kappa}(t)$ in the GLE is a real-valued Bohl function defined by \[memory\_realized\], and the noise process, $\{{\boldsymbol}{\xi}_t, t \in {\mathbb{R}}^+ \}$, is a mean-zero, mean-square continuous, stationary Gaussian process with Bohl covariance function (hence, with rational spectral density), admitting a stochastic realization given by \[noise\]-\[realize\]. Furthermore, we assume that any spectral factors ${\boldsymbol}{\Phi}_i(z)$ ($i=1,2,3,4$) of the spectral densities ${\boldsymbol}{\mathcal{S}}_i(\omega)$ are [*minimal*]{} (see Chapter 10 in [@lindquist2015linear]). We introduce a generalized version of the effective damping constant and effective diffusion constant used in [@LimWehr_Homog_NonMarkovian], which will be useful to study the asymptotic behavior of spectral densities. \[defn\_effconstnats\] For $n \in {\mathbb{Z}}$, the [*$n$th order effective damping constant*]{} is defined as the constant matrix, parametrized by $\alpha_1, \alpha_2 \in \{0,1\}$: $$\label{eff_damping} {\boldsymbol}{K}^{(n)}(\alpha_1,\alpha_2) := \alpha_1 {\boldsymbol}{K}_1^{(n)} + \alpha_2 {\boldsymbol}{K}_2^{(n)} \in {\mathbb{R}}^{q \times q},$$ where ${\boldsymbol}{K}_i^{(n)} = {\boldsymbol}{C}_i {\boldsymbol}{\Gamma}_i^{-n} {\boldsymbol}{M}_i {\boldsymbol}{C}_i^*$ (for $i=1,2$).
Likewise, the [*$n$th order effective diffusion constant*]{} is defined as $$\label{eff_diff} {\boldsymbol}{L}^{(n)}(\alpha_3,\alpha_4) := \alpha_3 {\boldsymbol}{L}_3^{(n)} + \alpha_4 {\boldsymbol}{L}_4^{(n)} \in {\mathbb{R}}^{r \times r},$$ where ${\boldsymbol}{L}_j^{(n)} = {\boldsymbol}{C}_j {\boldsymbol}{\Gamma}_j^{-n} {\boldsymbol}{M}_j {\boldsymbol}{C}_j^*$ (for $j=3,4$). Note that the first order effective damping constant ${\boldsymbol}{K}^{(1)}(\alpha_1,\alpha_2) = \int_0^{\infty} {\boldsymbol}{\kappa}(t) dt$ and the first order effective diffusion constant ${\boldsymbol}{L}^{(1)}(\alpha_3,\alpha_4) = \int_0^{\infty} {\boldsymbol}{R}(t) dt$ are simply the effective damping constant and effective diffusion constant introduced in [@LimWehr_Homog_NonMarkovian]. The memory function and the covariance function of the noise process can be expressed in terms of these constants: $${\boldsymbol}{\kappa}(t) = \sum_{i=1,2} \sum_{n=0}^{\infty} \alpha_i \frac{(-|t|)^n}{n!} {\boldsymbol}{K}^{(-n)}_i, \ \ \ {\boldsymbol}{R}(t) = \sum_{j=3,4} \sum_{n=0}^{\infty} \alpha_j \frac{(-|t|)^n}{n!} {\boldsymbol}{L}^{(-n)}_j.$$\ \[ass\_vanishingornot\] The matrix ${\boldsymbol}{K}_1^{(1)}$ in the expression for the first order effective damping constant is invertible and the matrix ${\boldsymbol}{K}_2^{(1)}$ equals zero. Similarly, in the expression for the first order effective diffusion constant, ${\boldsymbol}{L}_3^{(1)}$ is invertible and ${\boldsymbol}{L}_4^{(1)} = {\boldsymbol}{0}$. In order to develop intuition about general GLEs, it will be helpful to study the following exactly solvable special case. \[ass\_exactsolve\] (An exactly solvable case) In the GLE, set ${\boldsymbol}{F}_e = {\boldsymbol}{0}$.
Let ${\boldsymbol}{\gamma}_0(t,{\boldsymbol}{x}) = {\boldsymbol}{\gamma}_0$, ${\boldsymbol}{\sigma}_0(t,{\boldsymbol}{x}) = {\boldsymbol}{\sigma}_0$, ${\boldsymbol}{h}(t,{\boldsymbol}{x}) = {\boldsymbol}{h}$, ${\boldsymbol}{g}(t,{\boldsymbol}{x}) = {\boldsymbol}{g}$ and ${\boldsymbol}{\sigma}(t,{\boldsymbol}{x}) = {\boldsymbol}{\sigma}$ be constant matrices. The initial data are the random variables ${\boldsymbol}{x}(0) = {\boldsymbol}{x}$, ${\boldsymbol}{v}(0) = {\boldsymbol}{v}$, independent of $\{{\boldsymbol}{\xi}(t), t \in {\mathbb{R}}^+ \}$ and of $\{{\boldsymbol}{W}^{(k)}(t), t \in {\mathbb{R}}^+\}$. The resulting GLE is: $$\label{gle_es} m d{\boldsymbol}{v}(t) = -{\boldsymbol}{\gamma}_0 {\boldsymbol}{v}(t) dt -{\boldsymbol}{g} \left( \int_0^t {\boldsymbol}{\kappa}(t-s) {\boldsymbol}{h} {\boldsymbol}{v}(s) ds \right) dt + {\boldsymbol}{\sigma}_0 d{\boldsymbol}{W}^{(k)}(t) + {\boldsymbol}{\sigma} {\boldsymbol}{\xi}(t) dt.$$ Of particular interest is the GLE with ${\boldsymbol}{\gamma}_0 = {\boldsymbol}{\sigma}_0 {\boldsymbol}{\sigma}_0^*/2 \geq 0$, ${\boldsymbol}{g} = {\boldsymbol}{h}^* = {\boldsymbol}{\sigma} > 0$, and ${\boldsymbol}{R}(t) = {\boldsymbol}{\kappa}(t) = {\boldsymbol}{\kappa}^*(t)$, so that the fluctuation-dissipation relations hold (see Remark \[fdt\_rem\] and also Remark \[msd\_general\]). The resulting GLE gives a simple model describing the motion of a free particle interacting with a heat bath. Note that the process ${\boldsymbol}{v}(t)$ is generally not assumed to be stationary; in particular, ${\boldsymbol}{v}(0)$ may be an arbitrarily distributed random variable. The following proposition gives the asymptotic behavior of the spectral densities (equivalently, covariance functions or memory functions), the regularity[^2] (in the mean-square sense) of the noise process, and, in the exactly solvable case of Example \[ass\_exactsolve\], the long-time mean-squared displacement of the particle.
\[asympbeh\] Suppose that the Assumptions \[ass\_bohl\] and \[ass\_vanishingornot\] are satisfied. Let ${\boldsymbol}{x}(t) = \int_0^t {\boldsymbol}{v}(s) ds \in {\mathbb{R}}^d$, where ${\boldsymbol}{v}(t)$ solves the GLE . - We have ${\boldsymbol}{\mathcal{S}}_3(\omega) = O(1)$ as $\omega \to 0$. Also, let $k \geq 3$ be a positive odd integer and assume that ${\boldsymbol}{L}_4^{(n)} = 0$ for $0 < n < k$, where $n$ is odd, and ${\boldsymbol}{L}_4^{(k)} \neq 0$. Then ${\boldsymbol}{\mathcal{S}}_{4}(\omega) = O(\omega^{k-1})$ as $\omega \to 0$. If there exists $h > 0$ such that the noise spectral density, ${\boldsymbol}{\mathcal{S}}(\omega) = O\left(\frac{1}{\omega^{2h+1}}\right)$ as $\omega \to \infty$, then ${\boldsymbol}{\xi}_t$ is $n$-times mean-square differentiable[^3] for $n < h$. - Let $\hat{{\boldsymbol}{\kappa}}(z)$ denote the Laplace transform of ${\boldsymbol}{\kappa}(t)$, i.e. $\hat{{\boldsymbol}{\kappa}}(z) :=\int_0^\infty {\boldsymbol}{\kappa}(t) e^{-zt} dt$, and $\mathcal{E} = \frac{1}{2} m E[{\boldsymbol}{v}{\boldsymbol}{v}^*]$ be the particle’s initial average kinetic energy. Assume for simplicity that ${\boldsymbol}{R}(t) = {\boldsymbol}{\kappa}(t)$ and ${\boldsymbol}{\sigma}{\boldsymbol}{\kappa}(t) {\boldsymbol}{\sigma}^* = {\boldsymbol}{h}^* {\boldsymbol}{\kappa}^*(t) {\boldsymbol}{g}^*$. 
Then we have the following formula for the particle’s mean-squared displacement (MSD): $$\begin{aligned} \label{msd_formula} E[{\boldsymbol}{x}(t){\boldsymbol}{x}^*(t)] &= 2 \int_0^t {\boldsymbol}{H}(s) ds + 2m \left({\boldsymbol}{H}(t) \mathcal{E} {\boldsymbol}{H}^*(t) - \int_0^t {\boldsymbol}{H}(u) \dot{{\boldsymbol}{H}^*}(u) du \right) \nonumber \\ & \ \ \ \ + \int_0^t {\boldsymbol}{H}(u) ({\boldsymbol}{\sigma}_0 {\boldsymbol}{\sigma}_0^* - 2 {\boldsymbol}{\gamma}^*_0) {\boldsymbol}{H}^*(u) du, \end{aligned}$$ where the Laplace transform of ${\boldsymbol}{H}(t)$ is given by $\hat{{\boldsymbol}{H}}(z) = z \hat{{\boldsymbol}{F}}(z)$, with $$\hat{{\boldsymbol}{F}}(z) = (z^2(mz{\boldsymbol}{I}+{\boldsymbol}{\gamma}_0+ {\boldsymbol}{g}\hat{{\boldsymbol}{\kappa}}(z){\boldsymbol}{h}))^{-1}.$$\ For (iii) and (iv) below, we consider the process ${\boldsymbol}{x}_t$ solving the GLE with ${\boldsymbol}{\gamma}_0 = {\boldsymbol}{\sigma}_0 {\boldsymbol}{\sigma}_0^*/2 \geq 0$, ${\boldsymbol}{g} = {\boldsymbol}{h}^* = {\boldsymbol}{\sigma} > 0$, and ${\boldsymbol}{R}(t) = {\boldsymbol}{\kappa}(t) = {\boldsymbol}{\kappa}^*(t)$. - Let $\alpha_1 = \alpha_3 = 1$ ($\alpha_i$, for $i=2,4$, can be 0 or 1 and ${\boldsymbol}{F}_0$ can be zero or nonzero). Then $E[{\boldsymbol}{x}(t){\boldsymbol}{x}^*(t)] = O(t)$ as $t \to \infty$, in which case we say that the particle diffuses normally. - Let $\alpha_1 = 0$, $\alpha_2 = 1$ and ${\boldsymbol}{F}_0 = {\boldsymbol}{0}$ (the vanishing effective damping constant case). Then $E[{\boldsymbol}{x}(t){\boldsymbol}{x}^*(t)] = O(t^{2})$ as $t \to \infty$, in which case we say that the particle exhibits a ballistic (super-diffusive) behavior. 
- For $i=3,4$, it is easy to compute that $$\begin{aligned} {\boldsymbol}{\mathcal{S}}_i(\omega) &= {\boldsymbol}{C}_i[(i\omega{\boldsymbol}{I}+{\boldsymbol}{\Gamma}_i)^{-1} + (-i\omega {\boldsymbol}{I}+{\boldsymbol}{\Gamma}_i)^{-1}]{\boldsymbol}{M}_i {\boldsymbol}{C}_i^* \\ &= 2{\boldsymbol}{C}_i [(i\omega{\boldsymbol}{I}+{\boldsymbol}{\Gamma}_i)^{-1} {\boldsymbol}{\Gamma}_i (-i\omega {\boldsymbol}{I}+{\boldsymbol}{\Gamma}_i)^{-1}]{\boldsymbol}{M}_i {\boldsymbol}{C}_i^* \\ &= 2{\boldsymbol}{C}_i{\boldsymbol}{\Gamma}_i^{-1}(\omega^2 {\boldsymbol}{\Gamma}_i^{-2} + {\boldsymbol}{I})^{-1} {\boldsymbol}{M}_i {\boldsymbol}{C}_i^*,\end{aligned}$$ and so one has: $${\boldsymbol}{\mathcal{S}}_i(\omega) = 2{\boldsymbol}{C}_i{\boldsymbol}{\Gamma}_i^{-1}{\boldsymbol}{M}_i {\boldsymbol}{C}_i^* - 2{\boldsymbol}{C}_i{\boldsymbol}{\Gamma}_i^{-3}{\boldsymbol}{M}_i {\boldsymbol}{C}_i^* \omega^2 + 2{\boldsymbol}{C}_i{\boldsymbol}{\Gamma}_i^{-5}{\boldsymbol}{M}_i {\boldsymbol}{C}_i^* \omega^4 + \dots,$$ as $\omega \to 0$. The first two statements in (i) then follow by Assumption \[ass\_vanishingornot\]. The last statement follows from Lemma 6.11 in [@lord2014introduction].\ - Note that $\dot{{\boldsymbol}{x}}(t) = {\boldsymbol}{v}(t)$, with ${\boldsymbol}{x}(0) = {\boldsymbol}{0}$ and ${\boldsymbol}{v}(t)$ solving the GLE, rewritten as: $$\label{ggg} m \dot{{\boldsymbol}{v}}(t)=-{\boldsymbol}{\gamma}_0 {\boldsymbol}{v}(t)+{\boldsymbol}{\sigma}_0 {\boldsymbol}{\eta}(t) -{\boldsymbol}{g}\int_0^t {\boldsymbol}{\kappa}(t-s) {\boldsymbol}{h}{\boldsymbol}{v}(s) ds + {\boldsymbol}{\sigma} {\boldsymbol}{\xi}(t),$$ where ${\boldsymbol}{\eta}(t) dt = d {\boldsymbol}{W}^{(k)}(t)$, and ${\boldsymbol}{v}_0 = {\boldsymbol}{v}$ is a random variable that is independent of $\{{\boldsymbol}{\xi}(t), t \in {\mathbb{R}}^+\}$ and of $\{{\boldsymbol}{\eta}(t), t \in {\mathbb{R}}^+\}$. These equations can be solved analytically by means of the Laplace transform.
Applying the Laplace transform to the equations for ${\boldsymbol}{x}_t$ and ${\boldsymbol}{v}_t$ gives: $$\begin{aligned} z \hat{{\boldsymbol}{x}}(z) &= \hat{{\boldsymbol}{v}}(z), \\ m (z \hat{{\boldsymbol}{v}}(z) - {\boldsymbol}{v}(0)) &= -{\boldsymbol}{g} \hat{{\boldsymbol}{\kappa}}(z) {\boldsymbol}{h} \hat{{\boldsymbol}{v}}(z) - {\boldsymbol}{\gamma}_0 \hat{{\boldsymbol}{v}}(z) + {\boldsymbol}{\sigma}_0 \hat{{\boldsymbol}{\eta}}(z) + {\boldsymbol}{\sigma} \hat{{\boldsymbol}{\xi}}(z),\end{aligned}$$ and thus $$\hat{{\boldsymbol}{x}}(z) = \hat{{\boldsymbol}{H}}(z) (m {\boldsymbol}{v}(0) + {\boldsymbol}{\sigma}_0 \hat{{\boldsymbol}{\eta}}(z) + {\boldsymbol}{\sigma} \hat{{\boldsymbol}{\xi}}(z)),$$ where $\hat{{\boldsymbol}{H}}(z) = (mz^2{\boldsymbol}{I}+z{\boldsymbol}{\gamma}_0+ z{\boldsymbol}{g}\hat{{\boldsymbol}{\kappa}}(z){\boldsymbol}{h})^{-1}$. Taking the inverse transform gives the following formula for ${\boldsymbol}{x}(t)$: $${\boldsymbol}{x}(t) = {\boldsymbol}{H}(t) m {\boldsymbol}{v} + \int_0^t {\boldsymbol}{H}(t-s) ({\boldsymbol}{\sigma}_0 {\boldsymbol}{\eta}(s) + {\boldsymbol}{\sigma} {\boldsymbol}{\xi}(s)) ds,$$ where ${\boldsymbol}{H}(0) = {\boldsymbol}{0}$.
Therefore, using the mutual independence of ${\boldsymbol}{v}$, $\{{\boldsymbol}{\xi}(t), t \in {\mathbb{R}}^+\}$ and $\{{\boldsymbol}{\eta}(t), t \in {\mathbb{R}}^+\}$, the Itô isometry, and the assumption that ${\boldsymbol}{R}(t) = {\boldsymbol}{\kappa}(t)$, we obtain: $$\begin{aligned} E[{\boldsymbol}{x}(t) {\boldsymbol}{x}^*(t)] &= 2m {\boldsymbol}{H}(t) \mathcal{E} {\boldsymbol}{H}^*(t) + \int_0^t {\boldsymbol}{H}(t-s) {\boldsymbol}{\sigma}_0 {\boldsymbol}{\sigma}_0^* {\boldsymbol}{H}^*(t-s) ds + {\boldsymbol}{L}(t), \label{mssd}\end{aligned}$$ where $$\begin{aligned} {\boldsymbol}{L}(t) &= \int_0^t ds \int_0^t du \ {\boldsymbol}{H}(t-s) {\boldsymbol}{\sigma} {\boldsymbol}{\kappa}(|s-u|) {\boldsymbol}{\sigma}^* {\boldsymbol}{H}^*(t-u).\end{aligned}$$ To compute the double integral ${\boldsymbol}{L}(t)$, we first rewrite it as ${\boldsymbol}{L}(t) = {\boldsymbol}{L}_1(t) + {\boldsymbol}{L}_2(t)$, with $$\begin{aligned} {\boldsymbol}{L}_1(t) &= \int_0^t ds \ {\boldsymbol}{H}(t-s) \int_s^t du \ {\boldsymbol}{\sigma} {\boldsymbol}{\kappa}(u-s) {\boldsymbol}{\sigma}^* {\boldsymbol}{H}^*(t-u), \\ {\boldsymbol}{L}_2(t) &= \int_0^t ds \ {\boldsymbol}{H}(t-s) \int_0^s du \ {\boldsymbol}{\sigma} {\boldsymbol}{\kappa}(s-u) {\boldsymbol}{\sigma}^* {\boldsymbol}{H}^*(t-u).\end{aligned}$$ We then compute: $$\begin{aligned} {\boldsymbol}{L}_1(t) &= \int_0^t ds \ {\boldsymbol}{H}(t-s) \int_s^t d(t-u) \ {\boldsymbol}{\sigma} {\boldsymbol}{\kappa}(t-s-(t-u)) \cdot (-1) {\boldsymbol}{\sigma}^* {\boldsymbol}{H}^*(t-u), \\ &= \int_0^t ds \ {\boldsymbol}{H}(t-s) \int_0^{t-s} d\tau \ {\boldsymbol}{\sigma} {\boldsymbol}{\kappa}(t-s-\tau) {\boldsymbol}{\sigma}^* {\boldsymbol}{H}^*(\tau), \\ &= \int_0^t ds \ {\boldsymbol}{H}(t-s) ({\boldsymbol}{\sigma} {\boldsymbol}{\kappa} {\boldsymbol}{\sigma}^* \star {\boldsymbol}{H}^*)(t-s), \\ &= \int_0^t du \ {\boldsymbol}{H}(u) ({\boldsymbol}{\sigma} {\boldsymbol}{\kappa} {\boldsymbol}{\sigma}^* \star {\boldsymbol}{H}^*)(u),\end{aligned}$$
where $\star$ denotes convolution. Now note that, by the convolution theorem, $({\boldsymbol}{\sigma} {\boldsymbol}{\kappa} {\boldsymbol}{\sigma}^* \star {\boldsymbol}{H}^*)(u)$ is the inverse Laplace transform of ${\boldsymbol}{\sigma}\hat{{\boldsymbol}{\kappa}}(z) {\boldsymbol}{\sigma}^* \hat{{\boldsymbol}{H}^*}(z)$, which can be written as ${\boldsymbol}{I}/z-(mz{\boldsymbol}{I} + {\boldsymbol}{\gamma}_0^*) \hat{{\boldsymbol}{H}^*}(z)$ by using the assumption that ${\boldsymbol}{\sigma} {\boldsymbol}{\kappa}(t) {\boldsymbol}{\sigma}^* = {\boldsymbol}{h}^* {\boldsymbol}{\kappa}^*(t) {\boldsymbol}{g}^*$. Computing the inverse transform gives us: $${\boldsymbol}{L}_1(t) = \int_0^t du \ {\boldsymbol}{H}(u) ({\boldsymbol}{I} - m \dot{{\boldsymbol}{H}^*}(u) - {\boldsymbol}{\gamma}_0^* {\boldsymbol}{H}^*(u)). \label{L1t}$$ Similarly, we obtain ${\boldsymbol}{L}_2(t) = {\boldsymbol}{L}_1(t)$, and so ${\boldsymbol}{L}(t) = 2 {\boldsymbol}{L}_1(t)$. Therefore, combining \[mssd\] and \[L1t\] gives us the desired formula for the MSD.\ - (iii) $\&$ (iv) The assumptions that ${\boldsymbol}{g} = {\boldsymbol}{h}^* = {\boldsymbol}{\sigma}$ and ${\boldsymbol}{R}(t) = {\boldsymbol}{\kappa}(t) = {\boldsymbol}{\kappa}^*(t)$ ensure that we can apply the MSD formula in (ii).
The additional assumption that ${\boldsymbol}{\gamma}_0 = {\boldsymbol}{\sigma}_0 {\boldsymbol}{\sigma}_0^*/2$ (fluctuation-dissipation relation of the first kind) implies that $\hat{{\boldsymbol}{H}}(z) = \hat{{\boldsymbol}{H}}^*(z)$ and simplifies the formula to: $$\begin{aligned} \label{msd_formula2} E[{\boldsymbol}{x}(t){\boldsymbol}{x}^*(t)] &= 2 \int_0^t {\boldsymbol}{H}(s) ds + 2m \left({\boldsymbol}{H}(t) \mathcal{E} {\boldsymbol}{H}(t) - \int_0^t {\boldsymbol}{H}(u) \dot{{\boldsymbol}{H}}(u) du \right).\end{aligned}$$ To determine the behavior of $E[{\boldsymbol}{x}(t){\boldsymbol}{x}^*(t)]$ as $t \to \infty$, it suffices to investigate the asymptotic behavior of $\hat{{\boldsymbol}{H}}(z)$, whose formula is given in (ii), as $z \to 0$. Noting that $$\hat{{\boldsymbol}{H}}(z) = \frac{1}{z}\left[mz{\boldsymbol}{I} + {\boldsymbol}{\gamma}_0 + {\boldsymbol}{g}\sum_{i=1,2}\alpha_i {\boldsymbol}{C}_i(z{\boldsymbol}{I}+{\boldsymbol}{\Gamma}_i)^{-1}{\boldsymbol}{M}_i{\boldsymbol}{C}_i^* {\boldsymbol}{h} \right]^{-1}$$ and using Assumption \[ass\_vanishingornot\], we find that, as $z \to 0$, $$\begin{aligned} &\hat{{\boldsymbol}{H}}(z) \sim \frac{1}{z}\bigg[{\boldsymbol}{\gamma}_0 + \alpha_1 {\boldsymbol}{g} {\boldsymbol}{K}_1^{(1)}{\boldsymbol}{h} + \left(m{\boldsymbol}{I}-\sum_{j=1,2}\alpha_j {\boldsymbol}{g}{\boldsymbol}{K}_j^{(2)}{\boldsymbol}{h}\right)z \nonumber \\ &\hspace{2cm} + \alpha_2 {\boldsymbol}{g} {\boldsymbol}{K}_2^{(3)}{\boldsymbol}{h} z^2 - \alpha_2 {\boldsymbol}{g} {\boldsymbol}{K}_2^{(4)}{\boldsymbol}{h} z^3 + \dots \bigg]^{-1}.\end{aligned}$$ Therefore, if ${\boldsymbol}{\gamma}_0 = {\boldsymbol}{\sigma}_0 {\boldsymbol}{\sigma}_0^*/2$ is non-zero, then $\hat{{\boldsymbol}{H}}(z) \sim 1/z$ as $z \to 0$. If instead ${\boldsymbol}{\gamma}_0 = {\boldsymbol}{0}$ and $\alpha_1 = 1$, then again $\hat{{\boldsymbol}{H}}(z) \sim 1/z$ as $z \to 0$, whereas if ${\boldsymbol}{\gamma}_0 = {\boldsymbol}{0}$, $\alpha_1=0$ and $\alpha_2=1$, then $\hat{{\boldsymbol}{H}}(z) \sim 1/z^2$ as $z \to 0$.
The results in (iii) and (iv) then follow by applying the Tauberian theorems [@feller-vol-2], which say, in particular, that if $\hat{{\boldsymbol}{H}}(z) \sim 1/z^\beta$ as $z \to 0$, then ${\boldsymbol}{H}(t) \sim t^{\beta-1}$ as $t \to \infty$, for $\beta = 1, 2$ here. \[msd\_general\] We emphasize that superdiffusion with $E[{\boldsymbol}{x}(t) {\boldsymbol}{x}^*(t)]$ behaving as $t^\alpha$ as $t \to \infty$, where $\alpha > 2$, cannot take place when the velocity process converges to a stationary state. For a system to behave this way, the velocity itself has to grow with time. Moreover, we remark that one could obtain a richer class of asymptotic behaviors for the MSD by relaxing the assumption of fluctuation-dissipation relations. To summarize, (i) says that in the case where ${\boldsymbol}{F}_0 = {\boldsymbol}{0}$, $\alpha_1 = \alpha_3 = 0$, the $n$th order effective constants characterize the asymptotic behavior of the spectral densities at low frequencies; (ii) provides a formula for the particle’s mean-squared displacement, and (iii)-(iv) classify the types of diffusive behavior of the GLE model, in the exactly solvable case of Example \[ass\_exactsolve\], satisfying the fluctuation-dissipation relations. We emphasize that in the sequel we go beyond the above exactly solvable case; in particular the coefficients ${\boldsymbol}{g}$, ${\boldsymbol}{h}$, ${\boldsymbol}{\sigma}$, ${\boldsymbol}{\gamma}_0$, ${\boldsymbol}{\sigma}_0$ will depend in general on the particle’s position. However, the GLE in the exactly solvable case can be viewed as a linear approximation to the general GLE (by expanding these coefficients in a Taylor series about a fixed position ${\boldsymbol}{x}' \in {\mathbb{R}}^d$). In view of Proposition \[asympbeh\], the parameters $\alpha_i \in \{0,1\}$ allow us to control the diffusive behavior of the generalized Langevin dynamics. Our GLE models are very general and need not satisfy a fluctuation-dissipation relation.
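The low-frequency claims in Proposition \[asympbeh\](i) rest on the closed form of ${\boldsymbol}{\mathcal{S}}_i(\omega)$ derived at the start of the proof. The following sketch checks that closed form, and the value ${\boldsymbol}{\mathcal{S}}_i(0) = 2{\boldsymbol}{L}_i^{(1)}$, numerically (all matrices are arbitrary illustrative choices, not part of the model):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 3, 2
A = rng.standard_normal((d, d))
Gamma = A @ A.T + np.eye(d)          # positive stable
B = rng.standard_normal((d, d))
M = B @ B.T + np.eye(d)              # M = M^* > 0
C = rng.standard_normal((r, d))

I = np.eye(d)
inv = np.linalg.inv

def S(w):
    # spectral density of one noise component: C[(iw I + G)^-1 + (-iw I + G)^-1] M C^*
    return C @ (inv(1j * w * I + Gamma) + inv(-1j * w * I + Gamma)) @ M @ C.T

def S_closed(w):
    # equivalent real closed form: 2 C G^-1 (w^2 G^-2 + I)^-1 M C^*
    Gi = inv(Gamma)
    return 2 * C @ Gi @ inv(w**2 * Gi @ Gi + I) @ M @ C.T

L1 = C @ inv(Gamma) @ M @ C.T        # first order effective diffusion constant
for w in (0.0, 0.5, 2.0):
    assert np.allclose(S(w), S_closed(w))
assert np.allclose(S(0.0).real, 2 * L1)   # S_i(0) = 2 L_i^(1)
```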
As we will see, these different behaviors motivate our introduction and study of various homogenization schemes for the GLE. Depending on the physical systems under consideration, one scheme might be more realistic than the others. It is one of the goals of this paper to explore homogenization schemes for different GLE classes. \[rem\_inf\_dim\] In finite dimension, it is not possible to realize generalized Langevin dynamics with a noise and/or memory function whose spectral density varies as $1/\omega^p$, $p \in (0,1)$, near $\omega = 0$ (i.e. the so-called $1/f$-type noise [@Kupferman2004]), and whose noise covariance function and/or memory function, consequently, decays as a power $1/t^\alpha$, $\alpha \in (0,1)$, as $t \to \infty$. In this case one can use the formula in (ii) of Proposition \[asympbeh\] to show, at least for the exactly solvable case in Example \[ass\_exactsolve\] where the fluctuation-dissipation relations hold, that the asymptotic behavior of the particle is sub-diffusive, i.e. $E[{\boldsymbol}{x}(t) {\boldsymbol}{x}^*(t)] = O(t^\beta)$, where $\beta \in (0,1)$, as $t \to \infty$ (see also the related works [@mckinley2018anomalous; @didier2019asymptotic]). Sub-diffusive behavior has been discovered in a wide range of statistical and biological systems [@kou2008stochastic], which makes the study of this case relevant. One could, following the ideas in [@glatt2018generalized; @2018arXiv180409682N], extend the state space of the GLEs to an infinite-dimensional one, in order to study the sub-diffusive case.
Homogenization studies for this case, where more technicalities are expected due to the infinite-dimensional nature of the systems, will be explored in a future work.\ Generalized Langevin Systems as Input-Output Stochastic Dynamical Systems with Multiple Time Scales --------------------------------------------------------------------------------------------------- In this subsection, we discuss GLEs of the above form, under Assumptions \[ass\_bohl\]-\[ass\_vanishingornot\], from the input-output system-theoretic and multiple time scale points of view. First, we introduce the notion of stochastic dynamical systems. \[stochdynsystem\] A [*stochastic dynamical system*]{} is a pair $({\boldsymbol}{Z},{\boldsymbol}{\mathcal{F}})$ of vector-valued stochastic processes satisfying equations of the form: $$\begin{aligned} d{\boldsymbol}{Z}(t) &= {\boldsymbol}{A}(t, {\boldsymbol}{Z}(t)) dt + {\boldsymbol}{B}(t, {\boldsymbol}{Z}(t)){\boldsymbol}{\eta}(t)dt, \\ {\boldsymbol}{\mathcal{F}}(t) &= {\boldsymbol}{C}(t, {\boldsymbol}{Z}(t)),\end{aligned}$$ where ${\boldsymbol}{A}$, ${\boldsymbol}{B}$, ${\boldsymbol}{C}$ are measurable (jointly in $t$ and ${\boldsymbol}{Z}$) mappings, and ${\boldsymbol}{\eta}(t)$ is a random process (the [*input*]{}). ${\boldsymbol}{Z}(t)$ is called the [*state process*]{} and ${\boldsymbol}{\mathcal{F}}(t)$ the [*output process*]{} (observation process). The system is [*linear*]{} if all the mappings are at most linear in ${\boldsymbol}{Z}$; otherwise the system is [*nonlinear*]{}. The system is [*time-invariant*]{} if all the mappings are independent of $t$.
The equation for the particle’s position, together with the GLE , can be cast as the system of SDEs for the Markov process ${\boldsymbol}{z}_t := ({\boldsymbol}{x}_{t}, {\boldsymbol}{v}_{t}, {\boldsymbol}{y}^1_{t}, {\boldsymbol}{y}^2_t, {\boldsymbol}{\beta}^3_{t}, {\boldsymbol}{\beta}^4_{t}) \in {\mathbb{R}}^{d}\times {\mathbb{R}}^d \times {\mathbb{R}}^{d_1} \times {\mathbb{R}}^{d_2} \times {\mathbb{R}}^{d_3} \times {\mathbb{R}}^{d_4}$: $$\begin{aligned} d{\boldsymbol}{x}_{t} &= {\boldsymbol}{v}_{t} dt, \label{sd1} \\ m d{\boldsymbol}{v}_{t} &= -{\boldsymbol}{\gamma}_0(t, {\boldsymbol}{x}_t) {\boldsymbol}{v}_t dt + {\boldsymbol}{\sigma}_0(t, {\boldsymbol}{x}_t) d{\boldsymbol}{W}_t^{(k)} - {\boldsymbol}{g}(t, {\boldsymbol}{x}_{t}) \sum_{i=1,2} \alpha_i {\boldsymbol}{C}_i {\boldsymbol}{y}^i_{t} dt \nonumber \\ &\ \ \ \ + {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}_{t}) \sum_{j=3,4} \alpha_j {\boldsymbol}{C}_j {\boldsymbol}{\beta}^j_{t} dt + {\boldsymbol}{F}_e(t, {\boldsymbol}{x}_{t})dt, \\ d{\boldsymbol}{y}^i_{t} &= -{\boldsymbol}{\Gamma}_i {\boldsymbol}{y}^i_{t} dt + {\boldsymbol}{M}_i {\boldsymbol}{C}_i^* {\boldsymbol}{h}(t,{\boldsymbol}{x}_{t}) {\boldsymbol}{v}_{t} dt, \ \ i=1,2,\\ d{\boldsymbol}{\beta}^j_{t} &= -{\boldsymbol}{\Gamma}_j {\boldsymbol}{\beta}^j_{t} dt + {\boldsymbol}{\Sigma}_j d{\boldsymbol}{W}^{(q_j)}_{t}, \ \ j=3,4, \label{sd6}\end{aligned}$$ where we have defined the auxiliary [*memory processes*]{}: $${\boldsymbol}{y}^i_{t} := \int_{0}^{t} e^{-{\boldsymbol}{\Gamma}_i(t-s)} {\boldsymbol}{M}_i {\boldsymbol}{C}_i^* {\boldsymbol}{h}(s,{\boldsymbol}{x}_{s}) {\boldsymbol}{v}_{s} ds \in {\mathbb{R}}^{d_i}, \ \ i=1,2.$$ It is easy to see that the pairs $({\boldsymbol}{\beta}^i,{\boldsymbol}{\xi}^i)$, $i=3,4$, defined in the previous subsection, are linear time-invariant Gaussian stochastic dynamical systems with a white noise input (and therefore the state processes ${\boldsymbol}{\beta}^j(t)$ are Markov) in the sense of Definition \[stochdynsystem\]. 
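For readers who want to experiment, the extended Markov system \[sd1\]-\[sd6\] can be integrated directly. The following Euler-Maruyama sketch uses scalar coefficients ($d = d_i = q_j = 1$), constant ${\boldsymbol}{h}$, ${\boldsymbol}{g}$, ${\boldsymbol}{\sigma}$ and all switches $\alpha_i = 1$; every numerical value is an arbitrary illustrative choice, not a parameter from the text:

```python
import numpy as np

rng = np.random.default_rng(3)

m, gamma0, sigma0, g, h, sigma = 1.0, 0.5, 1.0, 1.0, 1.0, 1.0
Gamma = {1: 1.0, 2: 2.0, 3: 1.5, 4: 3.0}
Mcov  = {1: 0.8, 2: 0.6, 3: 0.7, 4: 0.5}
Cmat  = {1: 1.0, 2: 0.5, 3: 1.0, 4: 0.5}
Sig   = {j: np.sqrt(2 * Gamma[j] * Mcov[j]) for j in (3, 4)}  # scalar Lyapunov: 2 G M = S^2

dt, n_steps = 1e-3, 5000
x, v = 0.0, 1.0
y = {1: 0.0, 2: 0.0}                                         # memory processes start at 0
b = {j: rng.normal(0.0, np.sqrt(Mcov[j])) for j in (3, 4)}   # stationary initial law

for _ in range(n_steps):                                     # one Euler-Maruyama step
    dW = rng.normal(0.0, np.sqrt(dt), size=3)                # W^(k), W^(q_3), W^(q_4)
    drift_v = (-gamma0 * v
               - g * sum(Cmat[i] * y[i] for i in (1, 2))     # memory force
               + sigma * sum(Cmat[j] * b[j] for j in (3, 4)))  # colored noise
    x += v * dt
    v += (drift_v * dt + sigma0 * dW[0]) / m
    for i in (1, 2):
        y[i] += (-Gamma[i] * y[i] + Mcov[i] * Cmat[i] * h * v) * dt
    for idx, j in enumerate((3, 4)):
        b[j] += -Gamma[j] * b[j] * dt + Sig[j] * dW[1 + idx]

assert np.isfinite(x) and np.isfinite(v)
```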
Also, the pairs $({\boldsymbol}{y}_t^i,{\boldsymbol}{C}_i{\boldsymbol}{y}_t^i )$ ($i=1,2$) are linear stochastic dynamical systems driven by the random processes ${\boldsymbol}{M}_i{\boldsymbol}{C}_i^*{\boldsymbol}{h}(t, {\boldsymbol}{x}_t){\boldsymbol}{v}_t$, which depend on the particle’s position and velocity variables. The generalized Langevin system can be viewed as a nonlinear stochastic dynamical system $({\boldsymbol}{z},{\boldsymbol}{\mathcal{F}})$, where the components of ${\boldsymbol}{z}$ satisfy the SDEs - and ${\boldsymbol}{\mathcal{F}}$ is a measurable mapping describing an output process or a quantity of interest, for instance, $${\boldsymbol}{\mathcal{F}} = E \left[ \sup_{t \in [0,T]} |{\boldsymbol}{x}_t|^p \right]$$ for $p>0$ and $T>0$. In the exactly solvable case of Example \[ass\_exactsolve\], the generalized Langevin system reduces to a linear time-invariant stochastic dynamical system and can be viewed as a network of input-output systems consisting of components modeling the memory and noise. One of the goals of homogenization of GLEs is to reduce the number of the components needed to describe the effective dynamics in the considered limit. It is a natural question which class of GLEs should be taken as the starting point for homogenization. For feasible treatment, the GLEs should be in some sense minimal. In the network interpretation, the original system should be completely described by a minimal number of components, with no redundancies. We will discuss this on the basis of a time scale analysis in the following.
The (discrete) spectrum of the ${\boldsymbol}{\Gamma}_i$ ($i=1,2$) and of the ${\boldsymbol}{\Gamma}_j$ ($j=3,4$) (or equivalently, the [*spectrum of the Bohl memory function*]{} ${\boldsymbol}{\kappa}(t)$ and that of the covariance function ${\boldsymbol}{R}(t)$; see Definition 2.5 in [@trentelman2002control]) encode information about the memory time scales and noise correlation time scales present in the generalized Langevin system, respectively. In realistic experiments, there may be many, possibly infinitely many, time scales (each corresponding to a mode of the environment), but typically they cannot all be observed and/or controlled. When modeling a system, it is important to focus on those time scales that are controllable and observable. This motivates the following definition, closely related to the notions of controllable and observable eigenvalues from systems theory [@trentelman2002control]. \[defn\_timescales\] Consider a linear stochastic dynamical system $({\boldsymbol}{Z},{\boldsymbol}{\mathcal{F}})$, as in Definition \[stochdynsystem\], where ${\boldsymbol}{A}\in {\mathbb{R}}^{n \times n}$, ${\boldsymbol}{B} \in {\mathbb{R}}^{n \times k}$, ${\boldsymbol}{C} \in {\mathbb{R}}^{m \times n}$ are constant matrices. The time scale $\tau := 1/\lambda$, where $\lambda$ is an eigenvalue of ${\boldsymbol}{A}$, is called [*$({\boldsymbol}{A}, {\boldsymbol}{B})$-controllable*]{} (or simply controllable) if $rank[{\boldsymbol}{A}-\lambda {\boldsymbol}{I} \ \ {\boldsymbol}{B} ] = n$ and [*$({\boldsymbol}{C}, {\boldsymbol}{A})$-observable*]{} (or simply observable) if $rank[({\boldsymbol}{A}-\lambda {\boldsymbol}{I})^* \ \ {\boldsymbol}{C}^* ] = n$. The following proposition, which follows from Theorem 3.13 in [@trentelman2002control], states well-known results regarding the above notions. Consider the linear dynamical system defined in Definition \[defn\_timescales\].
Then - the system is controllable (more precisely, $({\boldsymbol}{A},{\boldsymbol}{B})$-controllable, i.e. $[{\boldsymbol}{B} \ \ {\boldsymbol}{A}{\boldsymbol}{B} \ \ \cdots \ \ {\boldsymbol}{A}^{n-1}{\boldsymbol}{B}]$ is full rank) if and only if every time scale of the system is controllable. - the system is observable (more precisely, $({\boldsymbol}{C},{\boldsymbol}{A})$-observable, i.e. $[{\boldsymbol}{C} \ \ {\boldsymbol}{C}{\boldsymbol}{A} \ \ \cdots \ \ {\boldsymbol}{C}{\boldsymbol}{A}^{n-1}]^{*}$ is full rank) if and only if every time scale of the system is observable. For $i=1,2,3,4$ we define the time scales, $\tau_{i,k_i} := 1/\lambda_{i,k_i}$, where $\lambda_{i,k_i}$ ($k_i=1,\dots,d_i$) are eigenvalues of ${\boldsymbol}{\Gamma}_i$. We refer to the $\tau_{1,k_1}, \tau_{2,k_2}$ as [*memory time scales*]{} and the $\tau_{3,k_3}, \tau_{4,k_4}$ as [*noise correlation time scales*]{}. Our consideration of GLEs will be based on the following assumption. \[minimal\] All the memory time scales and the noise correlation time scales in the generalized Langevin systems described by are controllable and observable. From the mathematical point of view, our consideration minimizes the dimension of the state space on which the GLE is realized and therefore minimizes the complexity of the model which will be taken as the starting point for our homogenization studies. Indeed, recall that a stochastic realization is [*minimal*]{} if the realized process has no other stochastic realization of smaller dimension. It follows from our assumptions that all the realizations of the memory function and noise process are minimal, since a sufficient condition for a linear stochastic dynamical system to be minimal is that it is controllable (or reachable in the language of [@lindquist2015linear]), observable and the spectral factor of its spectral density is minimal [@lindquist2015linear]. 
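The Kalman rank criteria in the proposition are straightforward to implement. A minimal sketch (the test systems are generic illustrations, unrelated to any specific GLE):

```python
import numpy as np

def ctrb(A, B):
    # Kalman controllability matrix [B  AB  ...  A^{n-1}B]
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_controllable(A, B):
    return np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0]

def is_observable(C, A):
    # by duality, (C, A) is observable iff (A^*, C^*) is controllable
    return is_controllable(A.T, C.T)

# a controllable and observable pair ...
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
assert is_controllable(A, B)
assert is_observable(C, A)

# ... and an uncontrollable one (B lies in a proper invariant subspace of A)
A2 = np.diag([1.0, 2.0])
B2 = np.array([[1.0], [0.0]])
assert not is_controllable(A2, B2)
```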
On the Homogenization of Generalized Langevin Dynamics {#sect_homogofGLE} ====================================================== In this section, we discuss some new directions for homogenization of GLEs. In the case of non-vanishing (first order) effective damping constant and effective diffusion constant, homogenization of a version of the GLE was studied in [@LimWehr_Homog_NonMarkovian], where a limiting SDE for the position process was obtained in the limit in which all the characteristic time scales of the system (i.e. the inertial time scale, the memory time scale and the noise correlation time scale) tend to zero at the same rate. Extending this result, we are going to focus on the following two cases. - [*The case where an instantaneous damping term is present in the GLE, i.e. ${\boldsymbol}{F}_0\neq {\boldsymbol}{0}$, or the non-vanishing effective damping constant case, i.e. $\alpha_1 = 1$.*]{} Together with the conditions in Example \[ass\_exactsolve\], this gives a model for normally diffusing systems; see Proposition \[asympbeh\] (iii). One can study the limit in which the inertial time scale and a subset (possibly all or none) of the other characteristic time scales of the system tend to zero; in particular, the small mass limit of the generalized Langevin dynamics in the case ${\boldsymbol}{F}_0 \neq {\boldsymbol}{0}$. We remark that the small mass limit is not well-defined in the case ${\boldsymbol}{F}_0 = {\boldsymbol}{0}$ and $\alpha_1=\alpha_3=1$ – this was first observed in [@mckinley2009transient], where it was pointed out that the limit leads to the phenomenon of an anomalous gap in the particle’s mean-squared displacement (see also [@cordoba2012elimination; @indei2012treating]).\ - [*The vanishing effective damping constant and effective diffusion constant case, i.e.
${\boldsymbol}{F}_0={\boldsymbol}{0}$, $\alpha_1=\alpha_3=0$, $\alpha_2=\alpha_4=1$.*]{} Together with the conditions in Example \[ass\_exactsolve\], this gives a model for systems with super-diffusive behavior; see Proposition \[asympbeh\] (iv). One can study the limit in which the inertial time scale, a subset of the memory time scales and a subset of the noise correlation time scales tend to zero at the same rate. Such effective models are physically relevant when they preserve the asymptotic behavior of the spectral densities at low and/or high frequencies in the limit. Situations are also possible where some of the eigenmodes of the memory and noise spectrum are damped much more strongly than others, for example due to the injection of monochromatic laser light into a system originally in thermal equilibrium. This justifies studying homogenization limits that selectively target part of the frequency spectrum of the memory and noise. We will study homogenization of the GLE in the limits described in the above scenarios. In all cases, the inertial time scale is taken to zero – this gives rise to the singular nature of the limit problems. We remark that one could also consider the more interesting scenarios in which the time scales tend to zero at different rates, but we choose not to pursue this in this already lengthy paper.\ [**Notation.**]{} Throughout the paper, we denote the variables in the pre-limit equations by small letters (for instance, ${\boldsymbol}{x}^\epsilon(t)$), and those of the limiting equations by capital letters (for instance, ${\boldsymbol}{X}(t)$). We use Einstein’s summation convention on repeated indices. The Euclidean norm of an arbitrary vector ${\boldsymbol}{w}$ is denoted by $| {\boldsymbol}{w} |$ and the (induced operator) norm of a matrix ${\boldsymbol}{A}$ by $\| {\boldsymbol}{A} \|$. 
For an ${\mathbb{R}}^{n_2 \times n_3}$-valued function ${\boldsymbol}{f}({\boldsymbol}{y}):=([f]_{jk}({\boldsymbol}{y}))_{j=1,\dots,n_2; k=1,\dots, n_3}$, ${\boldsymbol}{y} := ([y]_1, \dots, [y]_{n_1}) \in {\mathbb{R}}^{n_1}$, we denote by $({\boldsymbol}{f})_{{\boldsymbol}{y}}({\boldsymbol}{y})$ the $n_1 n_2 \times n_3$ matrix: $$({\boldsymbol}{f})_{{\boldsymbol}{y}}({\boldsymbol}{y}) = ({\boldsymbol}{\nabla}_{{\boldsymbol}{y}}[f]_{jk}({\boldsymbol}{y}))_{j=1,\dots, n_2; k=1,\dots,n_3},$$ where ${\boldsymbol}{\nabla}_{{\boldsymbol}{y}}[f]_{jk}({\boldsymbol}{y})$ stands for the gradient vector $\left(\frac{\partial [f]_{jk}({\boldsymbol}{y})}{\partial [y]_1}, \dots, \frac{\partial [f]_{jk}({\boldsymbol}{y})}{\partial [y]_{n_1}}\right) \in {\mathbb{R}}^{n_1}$ for every $j,k$. We denote by ${\boldsymbol}{\nabla} \cdot$ the divergence operator which contracts a matrix-valued function to a vector-valued function, i.e. for the matrix-valued function ${\boldsymbol}{A}({\boldsymbol}{X})$, the $i$th component of its divergence is given by $({\boldsymbol}{\nabla} \cdot {\boldsymbol}{A})^i = \sum_j \frac{\partial A^{ij}}{\partial X^j}$. Lastly, the symbol $\mathbb{E}$ denotes expectation with respect to the probability measure $\mathbb{P}$. 
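As a concrete instance of this divergence convention, the contraction $({\boldsymbol}{\nabla} \cdot {\boldsymbol}{A})^i = \sum_j \partial A^{ij}/\partial X^j$ can be approximated by central differences and compared with a hand computation (the matrix field below is hypothetical):

```python
import numpy as np

def A_field(X):
    """Hypothetical matrix-valued function A(X), X in R^2."""
    x1, x2 = X
    return np.array([[x1**2, x1 * x2],
                     [x2,    x1     ]])

def divergence(F, X, h=1e-6):
    """(div F)^i = sum_j dF^{ij}/dX^j, via central differences in each X^j."""
    n = len(X)
    out = np.zeros(n)
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        out += (F(X + e)[:, j] - F(X - e)[:, j]) / (2 * h)
    return out

# Analytically: (div A)^1 = 2*x1 + x1 = 3*x1 and (div A)^2 = 0.
X = np.array([1.0, 2.0])
print(divergence(A_field, X))  # close to [3., 0.]
```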
Small Mass Limit of Generalized Langevin Dynamics {#sect_newsmallmlimit} ================================================= Consider the following family of equations for the processes $({\boldsymbol}{x}_t^m, {\boldsymbol}{v}_t^m) \in {\mathbb{R}}^{d} \times {\mathbb{R}}^{d}$, $t \in [0,T]$, $m>0$: $$\begin{aligned} d{\boldsymbol}{x}_t^m &= {\boldsymbol}{v}_t^m dt, \label{res_gle_smallmass1} \\ m d{\boldsymbol}{v}^m_t &= -{\boldsymbol}{\gamma}_0(t, {\boldsymbol}{x}_t^m) {\boldsymbol}{v}_t^m dt - {\boldsymbol}{g}(t, {\boldsymbol}{x}_t^m) \left(\int_0^t {\boldsymbol}{\kappa}(t-s) {\boldsymbol}{h}(s, {\boldsymbol}{x}_s^m) {\boldsymbol}{v}_s^m ds \right) dt \nonumber \\ &\ \ \ \ + {\boldsymbol}{\sigma}_0(t, {\boldsymbol}{x}_t^m) d{\boldsymbol}{W}_t^{(k)} + {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}_t^m) {\boldsymbol}{\xi}_t dt + {\boldsymbol}{F}_e(t, {\boldsymbol}{x}_t^m) dt, \label{res_gle_smallmass}\end{aligned}$$ where ${\boldsymbol}{\kappa}(t)$ and ${\boldsymbol}{\xi}_t$ are the memory function and noise process defined in and respectively, with each of the $\alpha_i$ ($i=1,2,3,4$) equal to zero or to one. 
The equations - are equivalent to the following system of SDEs for the Markov process ${\boldsymbol}{z}^m_t := ({\boldsymbol}{x}^m_{t}, {\boldsymbol}{v}^m_{t}, {\boldsymbol}{y}^{1,m}_{t}, {\boldsymbol}{y}^{2,m}_t, {\boldsymbol}{\beta}^{3,m}_{t}, {\boldsymbol}{\beta}^{4,m}_{t}) \in {\mathbb{R}}^{d}\times {\mathbb{R}}^d \times {\mathbb{R}}^{d_1} \times {\mathbb{R}}^{d_2} \times {\mathbb{R}}^{d_3} \times {\mathbb{R}}^{d_4}$: $$\begin{aligned} d{\boldsymbol}{x}^m_{t} &= {\boldsymbol}{v}^m_{t} dt, \label{res_sd1} \\ m d{\boldsymbol}{v}^m_{t} &= -{\boldsymbol}{\gamma}_0(t, {\boldsymbol}{x}^m_t) {\boldsymbol}{v}^m_t dt + {\boldsymbol}{\sigma}_0(t, {\boldsymbol}{x}^m_t) d{\boldsymbol}{W}_t^{(k)} - {\boldsymbol}{g}(t, {\boldsymbol}{x}^m_{t}) \sum_{i=1,2} \alpha_i {\boldsymbol}{C}_i {\boldsymbol}{y}^{i,m}_{t} dt \nonumber \\ &\ \ \ \ + {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}^m_{t}) \sum_{j=3,4} \alpha_j {\boldsymbol}{C}_j {\boldsymbol}{\beta}^{j,m}_{t} dt + {\boldsymbol}{F}_e(t, {\boldsymbol}{x}^m_{t})dt, \\ d{\boldsymbol}{y}^{i,m}_{t} &= -{\boldsymbol}{\Gamma}_i {\boldsymbol}{y}^{i,m}_{t} dt + {\boldsymbol}{M}_i {\boldsymbol}{C}_i^* {\boldsymbol}{h}(t, {\boldsymbol}{x}^m_{t}) {\boldsymbol}{v}^m_{t} dt, \ \ i=1,2,\\ d{\boldsymbol}{\beta}^{j,m}_{t} &= -{\boldsymbol}{\Gamma}_j {\boldsymbol}{\beta}^{j,m}_{t} dt + {\boldsymbol}{\Sigma}_j d{\boldsymbol}{W}^{(q_j)}_{t}, \ \ j=3,4, \label{res_sd6}\end{aligned}$$ where we have defined the auxiliary memory processes: $${\boldsymbol}{y}^{i,m}_{t} := \int_{0}^{t} e^{-{\boldsymbol}{\Gamma}_i(t-s)} {\boldsymbol}{M}_i {\boldsymbol}{C}_i^* {\boldsymbol}{h}(s, {\boldsymbol}{x}^m_{s}) {\boldsymbol}{v}^m_{s} ds \in {\mathbb{R}}^{d_i}, \ \ i=1,2.$$ Note that the processes ${\boldsymbol}{\beta}_t^{3,m}$ and ${\boldsymbol}{\beta}_t^{4,m}$ do not actually depend on $m$, but we are adding the superscript $m$ for a more homogeneous notation. We make the following simplifying assumptions concerning -. 
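The equivalence between the convolution form of the auxiliary memory processes and the linear equations they satisfy is the variation-of-constants formula; a scalar sketch (all values hypothetical, with a deterministic stand-in for ${\boldsymbol}{h}(s, {\boldsymbol}{x}^m_s){\boldsymbol}{v}^m_s$) compares the two representations:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

Gamma, MCh = 2.0, 1.5  # hypothetical scalar Gamma_i and M_i C_i^* h
f = np.cos             # stands in for the driving term h(s, x_s) v_s
T = 3.0

# Convolution representation: y(T) = int_0^T exp(-Gamma (T - s)) * MCh * f(s) ds
y_int, _ = quad(lambda s: np.exp(-Gamma * (T - s)) * MCh * f(s), 0.0, T)

# ODE representation: dy/dt = -Gamma y + MCh f(t), y(0) = 0
sol = solve_ivp(lambda t, y: -Gamma * y + MCh * f(t), (0.0, T), [0.0],
                rtol=1e-10, atol=1e-12)
y_ode = sol.y[0, -1]

print(abs(y_int - y_ode))  # ~ 0: the two representations agree
```

The vector-valued case works the same way, with $e^{-{\boldsymbol}{\Gamma}_i(t-s)}$ in place of the scalar exponential.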
Let ${\boldsymbol}{W}^{(q_j)}$ ($j=3,4$) be independent Wiener processes on a filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_t,\mathbb{P})$ satisfying the usual conditions and let $\mathbb{E}$ denote expectation with respect to $\mathbb{P}$. \[exis\_gle\] There are no explosions, i.e. almost surely, for every $m > 0$ there exists a unique global solution to the pre-limit SDE - and also to the limiting SDEs - on the time interval $[0,T]$. \[bounded\] For $t \in {\mathbb{R}}^+$, ${\boldsymbol}{y} \in {\mathbb{R}}^{d}$, the functions ${\boldsymbol}{F}_e(t, {\boldsymbol}{y})$, ${\boldsymbol}{\sigma}_0(t,{\boldsymbol}{y})$ and ${\boldsymbol}{\sigma}(t,{\boldsymbol}{y})$ are continuous and bounded (in $t$ and ${\boldsymbol}{y}$) as well as Lipschitz in ${\boldsymbol}{y}$, whereas the functions ${\boldsymbol}{\gamma}_0(t, {\boldsymbol}{y})$, ${\boldsymbol}{g}(t, {\boldsymbol}{y})$, ${\boldsymbol}{h}(t, {\boldsymbol}{y})$, $({\boldsymbol}{\gamma}_0)_{{\boldsymbol}{y}}(t, {\boldsymbol}{y})$, $({\boldsymbol}{g})_{{\boldsymbol}{y}}(t, {\boldsymbol}{y})$ and $({\boldsymbol}{h})_{{\boldsymbol}{y}}(t, {\boldsymbol}{y})$ are continuously differentiable and Lipschitz in ${\boldsymbol}{y}$ as well as bounded (in $t$ and ${\boldsymbol}{y}$). Moreover, the functions $({\boldsymbol}{\gamma}_0)_{{\boldsymbol}{y}{\boldsymbol}{y}}(t, {\boldsymbol}{y})$, $({\boldsymbol}{g})_{{\boldsymbol}{y}{\boldsymbol}{y}}(t, {\boldsymbol}{y})$ and $({\boldsymbol}{h})_{{\boldsymbol}{y}{\boldsymbol}{y}}(t, {\boldsymbol}{y})$ are bounded for every $t \in {\mathbb{R}}^+$, ${\boldsymbol}{y} \in {\mathbb{R}}^{d}$. \[initialdata\] The initial data ${\boldsymbol}{x}, {\boldsymbol}{v} \in {\mathbb{R}}^d$ are $\mathcal{F}_0$-measurable random variables independent of the $\sigma$-algebra generated by the Wiener processes ${\boldsymbol}{W}^{(q_j)}$ ($j=3,4$). They are independent of $m$ and have finite moments of all orders. 
The following theorem describes the homogenized behavior of the particle’s position modeled by the family of the equations -—or, equivalently, by the SDE systems -—in the limit as the particle’s mass tends to zero. \[newsmallm\] Let ${\boldsymbol}{z}_t^m := ({\boldsymbol}{x}_t^m, {\boldsymbol}{v}_t^m, {\boldsymbol}{y}_t^{1,m}, {\boldsymbol}{y}_t^{2,m}, {\boldsymbol}{\beta}_t^{3,m}, {\boldsymbol}{\beta}_t^{4,m}) $ be a family of processes solving the SDE system -. Suppose that Assumptions \[ass\_bohl\]-\[minimal\] and Assumptions \[exis\_gle\]-\[initialdata\] hold. In addition, suppose that for every $m > 0$, ${\boldsymbol}{x} \in {\mathbb{R}}^d$, the family of matrices ${\boldsymbol}{\gamma}_0(t, {\boldsymbol}{x})$ is positive stable, uniformly in $t$ and ${\boldsymbol}{x}$. Then as $m \to 0$, the position process ${\boldsymbol}{x}^m_t$ converges to ${\boldsymbol}{X}_t$, where ${\boldsymbol}{X}_t$ is the first component of the process $({\boldsymbol}{X}_t, {\boldsymbol}{Y}_t^1, {\boldsymbol}{Y}_t^2, {\boldsymbol}{\beta}_t^3, {\boldsymbol}{\beta}_t^4)$ satisfying the Itô SDE system: $$\begin{aligned} d{\boldsymbol}{X}_t &= {\boldsymbol}{\gamma}_0^{-1}(t, {\boldsymbol}{X}_t)\bigg[ - {\boldsymbol}{g}(t, {\boldsymbol}{X}_t) \sum_{i=1}^2 \alpha_i {\boldsymbol}{C}_i {\boldsymbol}{Y}_t^i + {\boldsymbol}{\sigma}(t, {\boldsymbol}{X}_t) \sum_{j=3}^4 \alpha_j {\boldsymbol}{C}_j {\boldsymbol}{\beta}_t^j \nonumber \\ &\ \ \ \ + {\boldsymbol}{F}_e(t, {\boldsymbol}{X}_t) \bigg] dt + {\boldsymbol}{\gamma}_0^{-1}(t, {\boldsymbol}{X}_t){\boldsymbol}{\sigma}_0(t, {\boldsymbol}{X}_t)d{\boldsymbol}{W}_t^{(k)} +{\boldsymbol}{S}^{(0)}(t, {\boldsymbol}{X}_t)dt, \label{sm1} \\ d{\boldsymbol}{Y}_t^k &= -{\boldsymbol}{\Gamma}_k {\boldsymbol}{Y}_t^k dt + {\boldsymbol}{M}_k {\boldsymbol}{C}_k^* {\boldsymbol}{h}(t, {\boldsymbol}{X}_t){\boldsymbol}{\gamma}_0^{-1}(t, {\boldsymbol}{X}_t) \bigg[ - {\boldsymbol}{g}(t, {\boldsymbol}{X}_t) \sum_{i=1}^2 \alpha_i {\boldsymbol}{C}_i {\boldsymbol}{Y}_t^i 
\nonumber \\ &\ \ \ \ + {\boldsymbol}{\sigma}(t, {\boldsymbol}{X}_t) \sum_{j=3}^4 \alpha_j {\boldsymbol}{C}_j {\boldsymbol}{\beta}_t^j + {\boldsymbol}{F}_e(t,{\boldsymbol}{X}_t) \bigg] dt + {\boldsymbol}{S}^{(k)}(t, {\boldsymbol}{X}_t) dt \nonumber \\ &\ \ \ \ + {\boldsymbol}{M}_k {\boldsymbol}{C}_k^* {\boldsymbol}{h}(t, {\boldsymbol}{X}_t){\boldsymbol}{\gamma}_0^{-1}(t, {\boldsymbol}{X}_t){\boldsymbol}{\sigma}_0(t, {\boldsymbol}{X}_t)d{\boldsymbol}{W}_t^{(k)}, \ \ \text{for } k=1,2, \label{sm2} \\ d{\boldsymbol}{\beta}_t^l &= -{\boldsymbol}{\Gamma}_l {\boldsymbol}{\beta}_t^l dt + {\boldsymbol}{\Sigma}_l d{\boldsymbol}{W}_t^{(q_l)}, \ \ \text{ for } l=3,4, \label{sm3}\end{aligned}$$ where the $i$th component of the ${\boldsymbol}{S}^{(k)}$ ($k=0,1,2$) is given by: $$\begin{aligned} S_i^{(0)}(t, {\boldsymbol}{X}) &= \frac{\partial}{\partial X_l}\left(({\boldsymbol}{\gamma}_0^{-1})_{ij}(t, {\boldsymbol}{X}) \right) J_{lj}, \ \ j,l=1,\dots,d, \end{aligned}$$ and for $k=1,2$, $$\begin{aligned} S_i^{(k)}(t, {\boldsymbol}{X}) &= \frac{\partial}{\partial X_l}\left(({\boldsymbol}{M}_k {\boldsymbol}{C}_k^* {\boldsymbol}{h}(t, {\boldsymbol}{X}) {\boldsymbol}{\gamma}_0^{-1}(t, {\boldsymbol}{X}))_{ij} \right) J_{lj}, \ \ j,l=1,\dots,d,\end{aligned}$$ with ${\boldsymbol}{J} \in {\mathbb{R}}^{d \times d}$ solving the Lyapunov equation, ${\boldsymbol}{\gamma}_0 {\boldsymbol}{J} + {\boldsymbol}{J} {\boldsymbol}{\gamma}_0^* = {\boldsymbol}{\sigma}_0 {\boldsymbol}{\sigma}_0^*$. The convergence is obtained in the following sense: for all finite $T>0$, $\sup_{t \in [0,T]} |{\boldsymbol}{x}^m_t - {\boldsymbol}{X}_t| \to 0$ in probability, as $m \to 0$. We prove the theorem by applying Theorem \[mainthm\]. 
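The matrix ${\boldsymbol}{J}$ in the drift corrections above solves a standard continuous-time Lyapunov equation and can be computed directly with SciPy; the coefficient matrices below are hypothetical:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical positive-stable damping matrix and noise coefficient:
gamma0 = np.array([[2.0, 0.5],
                   [0.0, 1.0]])
sigma0 = np.array([[1.0, 0.0],
                   [0.3, 0.8]])

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, which matches
# gamma0 J + J gamma0^* = sigma0 sigma0^* with a = gamma0, q = sigma0 sigma0^*.
J = solve_continuous_lyapunov(gamma0, sigma0 @ sigma0.T)

residual = gamma0 @ J + J @ gamma0.T - sigma0 @ sigma0.T
print(np.abs(residual).max())  # ~ 0
```

For position- and time-dependent coefficients, ${\boldsymbol}{J}$ would be recomputed at each $(t, {\boldsymbol}{X})$ before contracting with the spatial derivatives in ${\boldsymbol}{S}^{(k)}$.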
Using the notation in the statement of Theorem \[mainthm\], let $\epsilon = m$, $n_1 =d+d_1+d_2+d_3+d_4$, $n_2 = d$, $k_1 = q_3 + q_4$, $k_2 = k$, ${\boldsymbol}{x}^\epsilon(t) = ({\boldsymbol}{x}_t^m, {\boldsymbol}{y}_t^{1,m}, {\boldsymbol}{y}_t^{2,m}, {\boldsymbol}{\beta}_t^{3,m}, {\boldsymbol}{\beta}_t^{4,m})$, ${\boldsymbol}{v}^\epsilon(t) = {\boldsymbol}{v}_t^m$, $$\begin{aligned} {\boldsymbol}{a}_1 &= [{\boldsymbol}{I} \ \ {\boldsymbol}{M}_1 {\boldsymbol}{C}_1^* {\boldsymbol}{h}(t, {\boldsymbol}{x}_t^m) \ \ {\boldsymbol}{M}_2 {\boldsymbol}{C}_2^* {\boldsymbol}{h}(t, {\boldsymbol}{x}_t^m) \ \ {\boldsymbol}{0} \ \ {\boldsymbol}{0}], \\ {\boldsymbol}{a}_2 &= -{\boldsymbol}{\gamma}_0(t, {\boldsymbol}{x}_t^m), \\ {\boldsymbol}{b}_1 &= -({\boldsymbol}{0},{\boldsymbol}{\Gamma}_1 {\boldsymbol}{y}_t^{1,m}, {\boldsymbol}{\Gamma}_2 {\boldsymbol}{y}_t^{2,m}, {\boldsymbol}{\Gamma}_3 {\boldsymbol}{\beta}_t^{3,m}, {\boldsymbol}{\Gamma}_4 {\boldsymbol}{\beta}_t^{4,m}), \\ {\boldsymbol}{b}_2 &= {\boldsymbol}{F}_e(t, {\boldsymbol}{x}_t^m) - {\boldsymbol}{g}(t, {\boldsymbol}{x}_t^m) \sum_{i=1,2} \alpha_i {\boldsymbol}{C}_i {\boldsymbol}{y}_t^{i,m} + {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}_t^m) \sum_{j=3,4} \alpha_j {\boldsymbol}{C}_j {\boldsymbol}{\beta}_t^{j,m}, \\ {\boldsymbol}{\sigma}_1 &= \begin{bmatrix} {\boldsymbol}{0} & {\boldsymbol}{0} \\ {\boldsymbol}{0} & {\boldsymbol}{0} \\ {\boldsymbol}{0} & {\boldsymbol}{0} \\ {\boldsymbol}{\Sigma}_3 & {\boldsymbol}{0} \\ {\boldsymbol}{0} & {\boldsymbol}{\Sigma}_4 \end{bmatrix},\\ {\boldsymbol}{\sigma}_2 &= {\boldsymbol}{\sigma}_0(t, {\boldsymbol}{x}_t^m),\end{aligned}$$ ${\boldsymbol}{W}^{(k_1)}(t) = ({\boldsymbol}{W}_t^{(q_3)}, {\boldsymbol}{W}_t^{(q_4)})$ and ${\boldsymbol}{W}^{(k_2)}(t) = {\boldsymbol}{W}_t^{(k)}$. 
The initial conditions are ${\boldsymbol}{x}(0) = ({\boldsymbol}{x}, {\boldsymbol}{0}, {\boldsymbol}{0}, {\boldsymbol}{\beta}_0^3, {\boldsymbol}{\beta}_0^4)$ and ${\boldsymbol}{v}(0) = {\boldsymbol}{v}$, where ${\boldsymbol}{\beta}_0^j$ ($j=3,4$) are normally distributed with mean zero and covariance ${\boldsymbol}{M}_j$. They are independent of $m$. Observe that in the above formula, ${\boldsymbol}{a}_i$, ${\boldsymbol}{b}_i$, ${\boldsymbol}{\sigma}_i$ ($i=1,2$) do not depend explicitly on $\epsilon = m$, so by the convention adopted earlier, we denote them ${\boldsymbol}{A}_i$, ${\boldsymbol}{B}_i$, ${\boldsymbol}{\Sigma}_i$ respectively, and we put $a_i = b_i = c_i = d_i = \infty$, where $a_i, b_i, c_i, d_i$ are the rates in Assumption \[a5\_ch2\]. Next, we verify the assumptions of Theorem \[mainthm\]. Assumption \[aexis\] clearly follows from Assumption \[exis\_gle\]. Since the family of matrices ${\boldsymbol}{\gamma}_0(t, {\boldsymbol}{x})$ is positive stable (uniformly in $t$ and ${\boldsymbol}{x}$), Assumption \[a0\_ch2\] is satisfied. It is straightforward to see that our assumptions on the coefficients of the GLE imply Assumption \[a1\_ch2\]. As ${\boldsymbol}{x}(0)$ and ${\boldsymbol}{v}(0)$ are random variables independent of $m$, Assumption \[a2\_ch2\] holds by our assumptions on the initial conditions ${\boldsymbol}{x}_0$, ${\boldsymbol}{v}_0$ and ${\boldsymbol}{\beta}^j_0$ ($j=3,4$). Finally, as noted earlier, Assumption \[a5\_ch2\] holds with $a_i = b_i = c_i = d_i = \infty$. The assumptions of Theorem \[mainthm\] are thus satisfied. Applying it, we obtain the limiting SDE system -. We remark that the limiting SDE is unique up to transformation in , as already pointed out in [@LimWehr_Homog_NonMarkovian]. In the special case when $\alpha_i=0$ for $i=1,2,3,4$ and the coefficients do not depend on $t$ explicitly, Theorem \[newsmallm\] reduces to the result obtained in [@hottovy2015smoluchowski]. 
In general, by comparing the result with the one obtained in [@hottovy2015smoluchowski], we see that perturbing the original Markovian system by adding a memory and colored noise changes the behavior of the homogenized system obtained in the small mass limit. In particular, - the limiting equation for the particle’s position not only contains a correction drift term (${\boldsymbol}{S}^{(0)}$) – the [*noise-induced drift*]{}, but is also coupled to equations for other slow variables; - in the case when $\alpha_1$ and/or $\alpha_2$ equal $1$, the limiting equation for the (slow) auxiliary memory variables contains correction drift terms (${\boldsymbol}{S}^{(1)}$ and/or ${\boldsymbol}{S}^{(2)}$) – which could be called the [*memory-induced drifts*]{}. Interestingly, the memory-induced drifts disappear when ${\boldsymbol}{h}$ is proportional to ${\boldsymbol}{\gamma}_0$, a phenomenon that can be attributed to the interaction between the forces ${\boldsymbol}{F}_0$ and ${\boldsymbol}{F}_1$. Note that the highly coupled structure of the limiting SDEs is due to the fact that only one time scale (inertial time scale) was taken to zero in the limit. We expect the structure to simplify when all time scales present in the problem are taken to zero at the same rate. Homogenization for the Case of Vanishing Effective Damping Constant and Effective Diffusion Constant {#sect_newhomogcase} ==================================================================================================== In this section we consider the GLE , with ${\boldsymbol}{F}_0 = {\boldsymbol}{0}$, $\alpha_1=\alpha_3=0$, and $\alpha_2 = \alpha_4 = 1$. 
We explore a class of homogenization schemes, aiming to: - (P1) reduce the complexity of the generalized Langevin dynamics in a way that the homogenized dynamics can be realized on a state space with minimal dimension and are described by a minimal number of effective parameters; - (P2) retain non-trivial effects of the memory and the colored noise in the homogenized dynamics by matching the asymptotic behavior of the spectral density of the noise process and memory function in the original and the effective model. \[hmm\] Generally, the larger the number of time scales (the eigenvalues of the ${\boldsymbol}{\Gamma}_i$) present in the system, the higher the dimension of the state space needed to realize the generalized Langevin system. On the other hand, in addition to ${\boldsymbol}{\Gamma}_i$, information on ${\boldsymbol}{C}_i$ and ${\boldsymbol}{M}_i$ is needed to determine the asymptotic behavior of the spectral densities (see Proposition \[asympbeh\](i)). In other words, although analysis based solely on time scale considerations may reduce the dimension of the model, it does not in general allow one to achieve the model matching in (P2). It is desirable to have homogenization schemes that achieve both goals of dimension reduction (P1) and matching of models (P2). Such a scheme is considered below. The idea is to consider the limit when the inertial time scale, a proper subset of the memory time scales and a proper subset of the noise correlation time scales tend to zero at the same rate. The case of sending all the characteristic time scales to zero is excluded here as it is uninteresting when the effective damping and diffusion vanish in the limit. Recall that the notions of controllability and observability are invariant under the trivial equivalence relation of type . Therefore, one can, without loss of generality, assume that the ${\boldsymbol}{\Gamma}_i$ ($i=1,2,3,4$) are already in Jordan normal form and work in a Jordan basis. 
Such a form reveals the slow-fast time scale structure of the system and thus provides a rubric for developing homogenization schemes. \[jordan\] Let $i=2,4$. All the ${\boldsymbol}{\Gamma}_i$ are of the following Jordan normal form: $${\boldsymbol}{\Gamma}_i = diag({\boldsymbol}{\Gamma}_{i,1},\cdots,{\boldsymbol}{\Gamma}_{i,N_i}),$$ where $N_i < d_i$, ${\boldsymbol}{\Gamma}_{i,k} \in {\mathbb{R}}^{\nu(\lambda_{i,k}) \times \nu(\lambda_{i,k})}$ ($k=1, \dots, N_i$) is the Jordan block associated with the (controllable and observable) eigenvalue $\lambda_{i,k}$ (or time scale $\tau_{i,k}=1/\lambda_{i,k}$) and corresponds to the invariant subspace $\mathcal{X}_{i,k} = Ker(\lambda_{i,k}{\boldsymbol}{I}-{\boldsymbol}{\Gamma}_{i,k})^{\nu(\lambda_{i,k})}$, where $\nu(\lambda_{i,k})$ is the index of $\lambda_{i,k}$, i.e. the size of the largest Jordan block corresponding to the eigenvalue $\lambda_{i,k}$. Let $1 \leq M_i < N_i$ and the eigenvalues be ordered as $0 < \lambda_{i,1} \leq \dots \leq \lambda_{i,M_i} < \lambda_{i,M_{i}+1} \leq \dots \leq \lambda_{i,N_i}$, so that we have the invariant subspace decomposition, ${\mathbb{R}}^{d_i} = \bigoplus_{j=1}^{N_i} \mathcal{X}_{i,j}$, with $d_i = \sum_{k=1}^{N_i} \nu(\lambda_{i,k})$. Let $0 < l_i < d_i$. The following procedure studies generalized Langevin dynamics whose spectral densities of the memory and the noise process have the asymptotic behavior ${\boldsymbol}{\mathcal{S}}_i(\omega) \sim \omega^{2l_i}$ for small $\omega$ and ${\boldsymbol}{\mathcal{S}}_i(\omega) \sim 1/\omega^{2d_i}$ for large $\omega$, for $i=2,4$. We construct a homogenized version of the model in such a way that its memory and noise processes have spectral densities whose asymptotic behavior at low $\omega$ matches that of the original model (to achieve (P2)), while at high $\omega$ it varies as $1/\omega^{2l_i}$ (to achieve (P1)). 
\[alg\] [*Procedure to study a class of homogenization problems.*]{} 1. Let $\alpha_1=\alpha_3 =0$, $\alpha_2=\alpha_4 =1$ and ${\boldsymbol}{F}_0 = {\boldsymbol}{0}$ in the GLE . Suppose that Assumption \[jordan\] holds and there exists $M_i$ such that $l_i = \sum_{k=1}^{M_i} \nu(\lambda_{i,k})$. Take this $M_i$. 2. For $i=2,4$, set $m=m' \epsilon$ and $\lambda_{i,k} = \lambda'_{i,k}/\epsilon$, for $k=M_i+1,\dots,N_i$ (i.e. we scale the $(d_2-l_2)$ smallest memory time scales and the $(d_4-l_4)$ smallest noise correlation time scales with $\epsilon$), where $m'$ and the $\lambda'_{i,k}$ are positive constants. 3. Select the ${\boldsymbol}{C}_i$, ${\boldsymbol}{M}_i$, ${\boldsymbol}{\Sigma}_i$ such that the ${\boldsymbol}{C}_i$ are constant matrices independent of the $\lambda_{i,k}$ ($k=1,\dots,N_i$), ${\boldsymbol}{C}_i {\boldsymbol}{\Gamma}_i^{-n_i} {\boldsymbol}{M}_i {\boldsymbol}{C}_i^* = {\boldsymbol}{0}$ for $0 < n_i < 2l_i$, ${\boldsymbol}{C}_i {\boldsymbol}{\Gamma}_i^{-(2l_i+1)} {\boldsymbol}{M}_i {\boldsymbol}{C}_i^* \neq {\boldsymbol}{0}$, and upon a suitable rescaling involving the mass, memory time scales and noise correlation time scales the resulting family of GLEs can be cast in the form of the SDEs -. Note that the matrix entries of the ${\boldsymbol}{M}_i$ and/or ${\boldsymbol}{\Sigma}_i$ necessarily depend on the $\lambda_{i,k}$ due to the Lyapunov equations that relate them to the ${\boldsymbol}{\Gamma}_i$. 4. Apply Theorem \[mainthm\] to study the limit $\epsilon \to 0$ and obtain the homogenized model, under appropriate assumptions on the coefficients and parameters in the GLEs. We remark that while one has the above procedure to study homogenization schemes that achieve (P1) and (P2), the derivations and formulas for the limiting equations could become tedious and complicated as the $l_i$ and $d_i$ become large. To illustrate this, we consider a simple yet still sufficiently general instance of Algorithm \[alg\] in the following. 
\[special\] The spectral densities, ${\boldsymbol}{\mathcal{S}}_i(\omega) = {\boldsymbol}{\Phi}_i(i\omega){\boldsymbol}{\Phi}_i^*(-i\omega)$ ($i=2,4$), with the (minimal) spectral factor: $${\boldsymbol}{\Phi}_i(z) = {\boldsymbol}{Q}^{-1}_i(z) {\boldsymbol}{P}_i(z),$$ where the ${\boldsymbol}{P}_i(z) \in {\mathbb{R}}^{p_i \times m_i}$ are matrix-valued monomials of degree $l_i$: $${\boldsymbol}{P}_i(z) = {\boldsymbol}{B}_{l_i}z^{l_i}$$ and the ${\boldsymbol}{Q}_i(z) \in {\mathbb{R}}^{p_i \times p_i}$ are matrix-valued polynomials of degree $d_i$, i.e. $${\boldsymbol}{Q}_i(z) = \prod_{k=1}^{d_i} (z{\boldsymbol}{I}+{\boldsymbol}{\Gamma}_{i,k}).$$ Here $p_2=q$, $p_4=r$, the $m_i$ ($i=2,4$) are positive integers, the ${\boldsymbol}{B}_{l_i} \in {\mathbb{R}}^{p_i \times m_i}$ are constant matrices, ${\boldsymbol}{\Gamma}_{i,k}\in {\mathbb{R}}^{p_i \times p_i}$ are diagonal matrices with positive entries, and ${\boldsymbol}{I}$ denotes the identity matrix of appropriate dimension. Under Assumption \[special\], the spectral densities have the following asymptotic behavior: ${\boldsymbol}{\mathcal{S}}_i(\omega) \sim \omega^{2l_i}$ for small $\omega$, and ${\boldsymbol}{\mathcal{S}}_i(\omega) \sim 1/\omega^{2d_i}$ for large $\omega$. One can then implement Algorithm \[alg\] explicitly to study homogenization for a sufficiently large class of GLEs, where the rescaled spectral densities tend, in the limit, to spectral densities with the asymptotic behavior described in the paragraph just before Algorithm \[alg\]. We discuss one such implementation in Appendix \[implem\_alg\]. Since the calculations become more complicated as $l_i$ and $d_i$ become large, we will only study simpler cases and illustrate how things could get complicated in the following. 
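In the scalar case ($p_i = m_i = 1$), the factorization ${\boldsymbol}{\mathcal{S}}_i(\omega) = {\boldsymbol}{\Phi}_i(i\omega){\boldsymbol}{\Phi}_i^*(-i\omega)$ and the stated low-frequency behavior can be checked numerically; a sketch with hypothetical values for $B$ and the two roots of $Q$, corresponding to $l_i = 1$, $d_i = 2$:

```python
import numpy as np

B, a, b = 1.3, 0.7, 2.5  # hypothetical B_{l_i} and the two roots of Q_i

def Phi(z):
    """Scalar spectral factor Phi(z) = P(z)/Q(z), P(z) = B z, deg Q = 2."""
    return B * z / ((z + a) * (z + b))

def S(omega):
    """Spectral density S(omega) = Phi(i w) * conj(Phi(i w)) = |Phi(i w)|^2."""
    return abs(Phi(1j * omega)) ** 2

def S_closed(omega):
    """Closed form: B^2 w^2 / ((w^2 + a^2)(w^2 + b^2))."""
    return B**2 * omega**2 / ((omega**2 + a**2) * (omega**2 + b**2))

w = np.linspace(0.01, 50.0, 7)
print(max(abs(S(x) - S_closed(x)) for x in w))  # ~ 0
# Low-frequency behavior S ~ omega^{2 l_i} with prefactor B^2/(a^2 b^2):
print(S(1e-4) / 1e-8, B**2 / (a**2 * b**2))
```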
We assume $d_2$ and $d_4$ are even integers and consider in detail the case when $l_2 = l_4 = l = 1$, $d_2 = d_4 = h = 2$, $$\begin{aligned} {\boldsymbol}{\Gamma}_{2,1} &= diag(\lambda_{2,1}, \dots, \lambda_{2,d_2/2}), \ \ \ {\boldsymbol}{\Gamma}_{2,2}= diag(\lambda_{2,d_2/2+1},\dots,\lambda_{2,d_2}), \\ {\boldsymbol}{\Gamma}_{4,1}&=diag(\lambda_{4,1},\dots,\lambda_{4,d_4/2}), \ \ \ {\boldsymbol}{\Gamma}_{4,2}= diag(\lambda_{4,d_4/2+1},\dots,\lambda_{4,d_4}),\end{aligned}$$ with $\lambda_{2,d_2} \geq \dots \geq \lambda_{2,d_2/2+1}>\lambda_{2,d_2/2}\geq \dots \geq \lambda_{2,1}>0$ and $\lambda_{4,d_4} \geq \dots \geq \lambda_{4,d_4/2+1}>\lambda_{4,d_4/2}\geq \dots \geq \lambda_{4,1}>0$ in Assumption \[special\], so that for $i=2,4$, $$\label{gammai} {\boldsymbol}{\Gamma}_i = diag({\boldsymbol}{\Gamma}_{i,1},{\boldsymbol}{\Gamma}_{i,2}) \in {\mathbb{R}}^{d_i \times d_i}.$$ We consider: $$\begin{aligned} {\boldsymbol}{C}_i &= [{\boldsymbol}{B}_i \ \ {\boldsymbol}{B}_i] \in {\mathbb{R}}^{p_i \times d_i}, \label{Ci} \\ {\boldsymbol}{\Sigma}_i &= \left[ -{\boldsymbol}{\Gamma}_{i,1}{\boldsymbol}{\Gamma}_{i,2}({\boldsymbol}{\Gamma}_{i,2}-{\boldsymbol}{\Gamma}_{i,1})^{-1} \ \ \ {\boldsymbol}{\Gamma}_{i,2}^2({\boldsymbol}{\Gamma}_{i,2}-{\boldsymbol}{\Gamma}_{i,1})^{-1} \right]^* \in {\mathbb{R}}^{d_i \times d_i/2}, \label{Sigmai} \\ \text{ so that } \nonumber \\ {\boldsymbol}{M}_i &= \left[ \begin{array}{cc} \label{Mi} {\boldsymbol}{M}_i^{11} & {\boldsymbol}{M}_i^{12} \\ {\boldsymbol}{M}_i^{21} & {\boldsymbol}{M}_i^{22} \end{array} \right] \in {\mathbb{R}}^{d_i \times d_i}, \end{aligned}$$ where $$\begin{aligned} {\boldsymbol}{M}_i^{11} &= \frac{1}{2}{\boldsymbol}{\Gamma}_{i,1} {\boldsymbol}{\Gamma}_{i,2}^2 ({\boldsymbol}{\Gamma}_{i,1}-{\boldsymbol}{\Gamma}_{i,2})^{-2}, \\ {\boldsymbol}{M}_i^{12} &= {\boldsymbol}{M}_i^{21} = -{\boldsymbol}{\Gamma}_{i,1} {\boldsymbol}{\Gamma}^3_{i,2} ({\boldsymbol}{\Gamma}_{i,1}+{\boldsymbol}{\Gamma}_{i,2})^{-1} 
({\boldsymbol}{\Gamma}_{i,1}-{\boldsymbol}{\Gamma}_{i,2})^{-2}, \\ {\boldsymbol}{M}_i^{22} &= \frac{1}{2}{\boldsymbol}{\Gamma}^3_{i,2} ({\boldsymbol}{\Gamma}_{i,1}-{\boldsymbol}{\Gamma}_{i,2})^{-2},\end{aligned}$$ $p_2 = q$ and $p_4 = r$ as in Assumption \[special\]. One can verify that this is indeed the vanishing effective damping constant and effective diffusion constant case (i.e. ${\boldsymbol}{C}_i {\boldsymbol}{\Gamma}_i^{-1} {\boldsymbol}{M}_i {\boldsymbol}{C}_i^* = {\boldsymbol}{0}$ for $i=2,4$). Also, for $i=2,4$, the memory kernel, ${\boldsymbol}{\kappa}_2(t)$ and covariance function, ${\boldsymbol}{R}_4(t)$, are of the following bi-exponential form: $$\label{78} {\boldsymbol}{C}_ie^{-{\boldsymbol}{\Gamma}_i|t|}{\boldsymbol}{M}_i {\boldsymbol}{C}_i^* = \frac{1}{2}{\boldsymbol}{B}_i {\boldsymbol}{\Gamma}_{i,2}^2({\boldsymbol}{\Gamma}_{i,2}^2-{\boldsymbol}{\Gamma}_{i,1}^2)^{-1} \left( {\boldsymbol}{\Gamma}_{i,2} e^{-{\boldsymbol}{\Gamma}_{i,2} |t|} - {\boldsymbol}{\Gamma}_{i,1} e^{-{\boldsymbol}{\Gamma}_{i,1} |t|} \right) {\boldsymbol}{B}_i^*$$ and their Fourier transforms are: $$\label{79} {\boldsymbol}{\mathcal{S}}_i(\omega)={\boldsymbol}{B}_i {\boldsymbol}{\Gamma}_{i,2}^2 {\boldsymbol}{B}_i^* \omega^2 ((\omega^2{\boldsymbol}{I}+{\boldsymbol}{\Gamma}_{i,1}^2)(\omega^2{\boldsymbol}{I}+{\boldsymbol}{\Gamma}_{i,2}^2))^{-1},$$ which vary as $\omega^2$ near $\omega = 0$. Note that in the above the ${\boldsymbol}{B}_i$ do not necessarily commute with the ${\boldsymbol}{\Gamma}_{i,j}$. Following step (2) of Algorithm \[alg\], we set $m = m_0 \epsilon$, ${\boldsymbol}{\Gamma}_{i,2} = {\boldsymbol}{\gamma}_{i,2}/\epsilon$ for $i=2,4$, where $m_0 > 0$ is a constant and the ${\boldsymbol}{\gamma}_{i,2}$ are diagonal matrices with positive eigenvalues, in -. 
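Both structural claims above, the vanishing of ${\boldsymbol}{C}_i {\boldsymbol}{\Gamma}_i^{-1} {\boldsymbol}{M}_i {\boldsymbol}{C}_i^*$ and the bi-exponential form of ${\boldsymbol}{C}_ie^{-{\boldsymbol}{\Gamma}_i|t|}{\boldsymbol}{M}_i {\boldsymbol}{C}_i^*$, can be verified numerically from the block formulas for ${\boldsymbol}{M}_i$; a sketch with hypothetical diagonal blocks and ${\boldsymbol}{B}_i$:

```python
import numpy as np

# Hypothetical eigenvalue groups (slow Gamma_{i,1}, fast Gamma_{i,2}) and B_i:
lam1 = np.array([0.5, 0.9])
lam2 = np.array([2.0, 3.0])
Bi = np.array([[1.0, 0.3],
               [0.2, 0.8]])  # B_i in R^{p_i x (d_i/2)}, here d_i = 4

inv = np.linalg.inv
g1, g2 = np.diag(lam1), np.diag(lam2)
Gamma = np.diag(np.concatenate([lam1, lam2]))
C = np.hstack([Bi, Bi])      # C_i = [B_i  B_i]

# Blocks of M_i from the formulas in the text (all blocks diagonal here):
D2 = inv((g1 - g2) @ (g1 - g2))
M11 = 0.5 * g1 @ g2 @ g2 @ D2
M12 = -g1 @ g2 @ g2 @ g2 @ inv(g1 + g2) @ D2
M22 = 0.5 * g2 @ g2 @ g2 @ D2
M = np.block([[M11, M12], [M12, M22]])  # M21 = M12

# Vanishing effective damping: C_i Gamma_i^{-1} M_i C_i^* = 0.
print(np.abs(C @ inv(Gamma) @ M @ C.T).max())  # ~ 0

# Bi-exponential kernel identity, checked at one time point:
t = 0.7
lhs = C @ np.diag(np.exp(-np.concatenate([lam1, lam2]) * t)) @ M @ C.T
rhs = 0.5 * Bi @ (g2 @ g2 @ inv(g2 @ g2 - g1 @ g1)
                  @ (g2 @ np.diag(np.exp(-lam2 * t))
                     - g1 @ np.diag(np.exp(-lam1 * t)))) @ Bi.T
print(np.abs(lhs - rhs).max())                 # ~ 0
```

Note that ${\boldsymbol}{B}_i$ enters only on the outside of both expressions, so no commutation with the ${\boldsymbol}{\Gamma}_{i,j}$ is needed, consistent with the remark above.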
We consider the family of GLEs (parametrized by $\epsilon > 0$): $$\begin{aligned} m_0 \epsilon d{\boldsymbol}{v}_t^\epsilon &= -{\boldsymbol}{g}(t, {\boldsymbol}{x}_t^\epsilon) \left(\int_0^t {\boldsymbol}{\kappa}_2^\epsilon(t-s) {\boldsymbol}{h}(s, {\boldsymbol}{x}_s^\epsilon) {\boldsymbol}{v}_s^\epsilon ds \right) dt + {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}_t^\epsilon) {\boldsymbol}{C}_4 {\boldsymbol}{\beta}_t^{4,\epsilon} dt \nonumber \\ &\ \ \ \ + {\boldsymbol}{F}_e(t, {\boldsymbol}{x}_t^\epsilon) dt, \label{res_gle_1} \\ \epsilon d{\boldsymbol}{\beta}_t^{4,\epsilon} &= -{\boldsymbol}{\Gamma}_4 {\boldsymbol}{\beta}_t^{4,\epsilon} dt + {\boldsymbol}{\Sigma}_4 d{\boldsymbol}{W}_t^{(q_4)}, \label{res_gle_2}\end{aligned}$$ where $$\label{res_mkernel} {\boldsymbol}{\kappa}_2^\epsilon(t) = \frac{1}{2}{\boldsymbol}{B}_2 {\boldsymbol}{B}_2^* {\boldsymbol}{\gamma}_{2,2}^2({\boldsymbol}{\gamma}_{2,2}^2 - \epsilon^2 {\boldsymbol}{\Gamma}_{2,1}^2)^{-1} \left( \frac{{\boldsymbol}{\gamma}_{2,2}}{\epsilon} e^{-\frac{{\boldsymbol}{\gamma}_{2,2}}{\epsilon} |t|} - {\boldsymbol}{\Gamma}_{2,1} e^{-{\boldsymbol}{\Gamma}_{2,1} |t|} \right)$$ and the covariance function of the noise process ${\boldsymbol}{\xi}_t^\epsilon = {\boldsymbol}{C}_4 {\boldsymbol}{\beta}_t^{4,\epsilon}$ is given by $$\label{res_noiseee} {\boldsymbol}{R}_4^\epsilon(t) = \frac{1}{2}{\boldsymbol}{B}_4 {\boldsymbol}{B}_4^* {\boldsymbol}{\gamma}_{4,2}^2({\boldsymbol}{\gamma}_{4,2}^2 - \epsilon^2 {\boldsymbol}{\Gamma}_{4,1}^2)^{-1} \left( \frac{{\boldsymbol}{\gamma}_{4,2}}{\epsilon} e^{-\frac{{\boldsymbol}{\gamma}_{4,2}}{\epsilon} |t|} - {\boldsymbol}{\Gamma}_{4,1} e^{-{\boldsymbol}{\Gamma}_{4,1} |t|} \right).$$ Note that ${\boldsymbol}{\kappa}_2^\epsilon(t)$ and ${\boldsymbol}{R}_4^\epsilon(t)$ converge (in the sense of distribution), as $\epsilon \to 0$, to $$\frac{1}{2} {\boldsymbol}{B}_i {\boldsymbol}{B}_i^* (\delta(t){\boldsymbol}{I}-{\boldsymbol}{\Gamma}_{i,1} e^{-{\boldsymbol}{\Gamma}_{i,1} |t|}),$$ with 
$i=2$ and $i=4$ respectively. The corresponding spectral densities are $$\label{limitspec} {\boldsymbol}{\mathcal{S}}_i(\omega) = {\boldsymbol}{B}_i{\boldsymbol}{B}_i^*\omega^2 (\omega^2 {\boldsymbol}{I}+{\boldsymbol}{\Gamma}_{i,1}^2)^{-1},$$ with $i=2$ and $i=4$ respectively. Together with the equation for the particle’s position, the equations - form the SDE system: $$\begin{aligned} d{\boldsymbol}{x}^\epsilon_t &= {\boldsymbol}{v}^\epsilon_t dt, \label{res_s1} \\ \epsilon m_0 d{\boldsymbol}{v}^\epsilon_t &= -{\boldsymbol}{g}(t, {\boldsymbol}{x}^\epsilon_t){\boldsymbol}{B}_2({\boldsymbol}{y}_t^{2,1,\epsilon}+{\boldsymbol}{y}_t^{2,2,\epsilon}) dt + {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}^\epsilon_t) {\boldsymbol}{B}_4 ({\boldsymbol}{\beta}_t^{4,1,\epsilon}+{\boldsymbol}{\beta}_t^{4,2,\epsilon})dt \nonumber \\ &\ \ \ \ + {\boldsymbol}{F}_e(t, {\boldsymbol}{x}^\epsilon_t) dt, \\ d{\boldsymbol}{y}_t^{2,1,\epsilon} &= -{\boldsymbol}{\Gamma}_{2,1}{\boldsymbol}{y}_t^{2,1,\epsilon} dt + \mathcal{M}_1^\epsilon {\boldsymbol}{h}(t, {\boldsymbol}{x}^\epsilon_t){\boldsymbol}{v}^\epsilon_t dt, \\ \epsilon d{\boldsymbol}{y}_t^{2,2,\epsilon} &= -{\boldsymbol}{\gamma}_{2,2}{\boldsymbol}{y}_t^{2,2,\epsilon}dt+\mathcal{M}_2^\epsilon{\boldsymbol}{h}(t, {\boldsymbol}{x}^\epsilon_t) {\boldsymbol}{v}^\epsilon_t dt, \\ d{\boldsymbol}{\beta}_t^{4,1,\epsilon} &= -{\boldsymbol}{\Gamma}_{4,1} {\boldsymbol}{\beta}_t^{4,1,\epsilon} dt + {\boldsymbol}{\sigma}_1^\epsilon d{\boldsymbol}{W}_t^{(q_4/2)}, \\ \epsilon d{\boldsymbol}{\beta}_t^{4,2,\epsilon} &= -{\boldsymbol}{\gamma}_{4,2} {\boldsymbol}{\beta}_t^{4,2,\epsilon} dt + {\boldsymbol}{\sigma}_2^\epsilon d{\boldsymbol}{W}_t^{(q_4/2)}, \label{res_s6}\end{aligned}$$ where $$\begin{aligned} \mathcal{M}_1^\epsilon &= \bigg( (2(\epsilon {\boldsymbol}{\Gamma}_{2,1}-{\boldsymbol}{\gamma}_{2,2})^2)^{-1} {\boldsymbol}{\Gamma}_{2,1}{\boldsymbol}{\gamma}_{2,2}^2 \nonumber \\ &\hspace{1cm} -((\epsilon 
{\boldsymbol}{\Gamma}_{2,1}-{\boldsymbol}{\gamma}_{2,2})^2(\epsilon {\boldsymbol}{\Gamma}_{2,1} + {\boldsymbol}{\gamma}_{2,2} ) )^{-1} {\boldsymbol}{\Gamma}_{2,1} {\boldsymbol}{\gamma}_{2,2}^3 \bigg) {\boldsymbol}{B}_2^* , \\ \mathcal{M}_2^\epsilon &= \bigg((2(\epsilon {\boldsymbol}{\Gamma}_{2,1}-{\boldsymbol}{\gamma}_{2,2})^2)^{-1} {\boldsymbol}{\gamma}_{2,2}^3 \nonumber \\ &\hspace{1cm} - \epsilon ((\epsilon {\boldsymbol}{\Gamma}_{2,1}-{\boldsymbol}{\gamma}_{2,2})^2(\epsilon {\boldsymbol}{\Gamma}_{2,1} + {\boldsymbol}{\gamma}_{2,2} ) )^{-1} {\boldsymbol}{\Gamma}_{2,1} {\boldsymbol}{\gamma}_{2,2}^3 \bigg){\boldsymbol}{B}_2^*, \\ {\boldsymbol}{\sigma}_1^\epsilon &= -({\boldsymbol}{\gamma}_{4,2}-{\boldsymbol}{\Gamma}_{4,1}\epsilon)^{-1} {\boldsymbol}{\Gamma}_{4,1} {\boldsymbol}{\gamma}_{4,2}, \\ {\boldsymbol}{\sigma}_2^\epsilon &= ({\boldsymbol}{\gamma}_{4,2}-{\boldsymbol}{\Gamma}_{4,1} \epsilon)^{-1} {\boldsymbol}{\gamma}_{4,2}^2.\end{aligned}$$ In the following, we take $\epsilon \in \mathcal{E}$ to be small. We make the following assumptions, similar to those made in Theorem \[newsmallm\]. \[exis\_gle2\] There are no explosions, i.e. almost surely, for every $\epsilon \in \mathcal{E}$, there exist unique solutions on the time interval $[0,T]$ to the pre-limit SDEs - and to the limiting SDEs . \[initialdata2\] The initial data ${\boldsymbol}{x}, {\boldsymbol}{v} \in {\mathbb{R}}^d$ are $\mathcal{F}_0$-measurable random variables independent of the $\sigma$-algebra generated by the Wiener processes ${\boldsymbol}{W}^{(q_j)}$ ($j=3,4$). They are independent of $\epsilon$ and have finite moments of all orders. The following theorem describes the homogenized dynamics of the family of the GLEs - (or equivalently, of the SDEs -) in the limit $\epsilon \to 0$, i.e. when the inertial time scale, one half of the memory time scales and one half of the noise correlation time scales in the original generalized Langevin system tend to zero at the same rate. 
\[compl\] Consider the family of the GLEs - (or equivalently, of the SDEs -). Suppose that Assumption \[bounded\] and Assumptions \[special\]-\[initialdata2\] hold, with the ${\boldsymbol}{C}_i$, ${\boldsymbol}{\Sigma}_i$, ${\boldsymbol}{M}_i$ and ${\boldsymbol}{\Gamma}_i$ ($i=2,4$) given in -. Assume that for every $t \in {\mathbb{R}}^+$, $\epsilon > 0$, ${\boldsymbol}{x} \in {\mathbb{R}}^d$, $${\boldsymbol}{I} + {\boldsymbol}{g}(t, {\boldsymbol}{x}) \tilde{{\boldsymbol}{\kappa}}_\epsilon(\lambda) {\boldsymbol}{h}(t, {\boldsymbol}{x})/\lambda m_0 \ \text{ and } \ {\boldsymbol}{I} + {\boldsymbol}{g}(t, {\boldsymbol}{x}) \tilde{{\boldsymbol}{\kappa}}(\lambda) {\boldsymbol}{h}(t, {\boldsymbol}{x})/\lambda m_0$$ are invertible for all $\lambda$ in the right half plane $\{\lambda \in {\mathbb{C}}: Re(\lambda) > 0\}$, where $$\tilde{{\boldsymbol}{\kappa}}_\epsilon(z) = {\boldsymbol}{B}_2(z {\boldsymbol}{I} + {\boldsymbol}{\gamma}_{2,2})^{-1} \mathcal{M}_2^\epsilon \ \text{ and } \ \tilde{{\boldsymbol}{\kappa}}(z) = \frac{1}{2} {\boldsymbol}{B}_2 (z{\boldsymbol}{I} + {\boldsymbol}{\gamma}_{2,2})^{-1} {\boldsymbol}{\gamma}_{2,2} {\boldsymbol}{B}_2^*.$$ Also, assume that ${\boldsymbol}{\nu}(t, {\boldsymbol}{x}) := \frac{1}{2} {\boldsymbol}{g}(t, {\boldsymbol}{x}) {\boldsymbol}{B}_2 {\boldsymbol}{B}_2^* {\boldsymbol}{h}(t, {\boldsymbol}{x}) $ is invertible for every $t \in {\mathbb{R}}^+$, ${\boldsymbol}{x} \in {\mathbb{R}}^d$. 
Then the particle’s position, ${\boldsymbol}{x}^\epsilon_t \in {\mathbb{R}}^d$, solving the family of GLEs, converges as $\epsilon \to 0$, to ${\boldsymbol}{X}_t \in {\mathbb{R}}^d$, where ${\boldsymbol}{X}_t$ is the first component of the process ${\boldsymbol}{\theta}_t := ({\boldsymbol}{X}_t, {\boldsymbol}{Y}_t, {\boldsymbol}{Z}_t) \in {\mathbb{R}}^{d+d_2/2+d_4/2}$, satisfying the Itô SDE: $$\begin{aligned} \label{lim_2} d{\boldsymbol}{\theta}_t &= {\boldsymbol}{P}(t,{\boldsymbol}{\theta}_t) dt + {\boldsymbol}{Q}(t, {\boldsymbol}{\theta}_t) dt + {\boldsymbol}{R}(t, {\boldsymbol}{\theta}_t) d{\boldsymbol}{W}_t^{(d_4/2)},\end{aligned}$$ where $$\label{theta} {\boldsymbol}{P}(t,{\boldsymbol}{\theta}) = \begin{bmatrix} {\boldsymbol}{\nu}^{-1}({\boldsymbol}{F}_e-{\boldsymbol}{g}{\boldsymbol}{B}_2{\boldsymbol}{Y}_t+{\boldsymbol}{\sigma}{\boldsymbol}{B}_4{\boldsymbol}{Z}_t) \\ -\frac{1}{2} {\boldsymbol}{\Gamma}_{2,1} {\boldsymbol}{B}_2^* {\boldsymbol}{h} {\boldsymbol}{\nu}^{-1}({\boldsymbol}{F}_e-{\boldsymbol}{g}{\boldsymbol}{B}_2{\boldsymbol}{Y}_t+{\boldsymbol}{\sigma}{\boldsymbol}{B}_4{\boldsymbol}{Z}_t) - {\boldsymbol}{\Gamma}_{2,1} {\boldsymbol}{Y}_t \\ -{\boldsymbol}{\Gamma}_{4,1} {\boldsymbol}{Z}_t \end{bmatrix},$$ $${\boldsymbol}{R}(t, \theta) = \begin{bmatrix} {\boldsymbol}{\nu}^{-1} {\boldsymbol}{\sigma} {\boldsymbol}{B}_4 \\ -\frac{1}{2} {\boldsymbol}{\Gamma}_{2,1} {\boldsymbol}{B}_2^* {\boldsymbol}{h} {\boldsymbol}{\nu}^{-1} {\boldsymbol}{\sigma} {\boldsymbol}{B}_4 \\ -{\boldsymbol}{\Gamma}_{4,1} \end{bmatrix},$$ and the $i$th component of ${\boldsymbol}{Q}$, $i=1,\dots,d+d_2/2+d_4/2$, is given by: $$Q_i = \frac{\partial}{\partial X_l}\left[ H_{i,j}(t, {\boldsymbol}{X}) \right] J_{j,l}, \ \ l=1,\dots,d; \ j=1,\dots,d+d_2/2+d_4/2,$$ with ${\boldsymbol}{H}(t, {\boldsymbol}{X}) = {\boldsymbol}{T}(t, {\boldsymbol}{X}){\boldsymbol}{U}^{-1}(t, {\boldsymbol}{X}) \in {\mathbb{R}}^{(d+d_2/2+d_4/2) \times (d+d_2/2+d_4/2)}$ and ${\boldsymbol}{J} \in 
{\mathbb{R}}^{(d+d_2/2+d_4/2) \times (d+d_2/2+d_4/2)}$ is the solution to the Lyapunov equation ${\boldsymbol}{U}{\boldsymbol}{J}+{\boldsymbol}{J}{\boldsymbol}{U}^* = diag({\boldsymbol}{0},{\boldsymbol}{0},{\boldsymbol}{\gamma}_{4,2}^2)$, where $$\label{TU} {\boldsymbol}{T} = \begin{bmatrix} {\boldsymbol}{I} & {\boldsymbol}{0} & {\boldsymbol}{0} \\ -\frac{1}{2}{\boldsymbol}{\Gamma}_{2,1} {\boldsymbol}{B}_2^* {\boldsymbol}{h} & {\boldsymbol}{0} & {\boldsymbol}{0} \\ {\boldsymbol}{0} & {\boldsymbol}{0} & {\boldsymbol}{0} \end{bmatrix}, \ \ \ {\boldsymbol}{U} = \begin{bmatrix} {\boldsymbol}{0} & {\boldsymbol}{g} {\boldsymbol}{B}_2/m_0 & -{\boldsymbol}{\sigma} {\boldsymbol}{B}_4/m_0 \\ -\frac{1}{2}{\boldsymbol}{\gamma}_{2,2}{\boldsymbol}{B}_2^* {\boldsymbol}{h} & {\boldsymbol}{\gamma}_{2,2} & {\boldsymbol}{0} \\ {\boldsymbol}{0} & {\boldsymbol}{0} & {\boldsymbol}{\gamma}_{4,2} \end{bmatrix}.$$ The convergence holds in the same sense as in Theorem \[newsmallm\], i.e. for all finite $T>0$, $\sup_{t \in [0,T]} |{\boldsymbol}{x}^\epsilon_t - {\boldsymbol}{X}_t| \to 0$ in probability, as $\epsilon \to 0$. We apply Theorem \[mainthm\] to the SDEs -. 
To this end, we set, in Theorem \[mainthm\], $n_1 = n_2 = d+d_2/2+d_4/2$, $k_1 = k_2 = d_4/2$ and $${\boldsymbol}{x}^\epsilon(t) = ({\boldsymbol}{x}^\epsilon_t, {\boldsymbol}{y}_t^{2,1,\epsilon},{\boldsymbol}{\beta}_t^{4,1,\epsilon}), \ {\boldsymbol}{v}^\epsilon(t) = ({\boldsymbol}{v}^\epsilon_t, {\boldsymbol}{y}_t^{2,2,\epsilon}, {\boldsymbol}{\beta}_t^{4,2,\epsilon}) \in {\mathbb{R}}^{d+d_2/2+d_4/2},$$ $$\begin{aligned} {\boldsymbol}{a}_1(t, {\boldsymbol}{x}^\epsilon(t),\epsilon) &= \begin{bmatrix} {\boldsymbol}{I} & {\boldsymbol}{0} & {\boldsymbol}{0} \\ \mathcal{M}_1^\epsilon {\boldsymbol}{h}(t, {\boldsymbol}{x}^\epsilon_t) & {\boldsymbol}{0} & {\boldsymbol}{0} \\ {\boldsymbol}{0} & {\boldsymbol}{0} & {\boldsymbol}{0} \end{bmatrix} \in {\mathbb{R}}^{(d+d_2/2+d_4/2) \times (d+d_2/2+d_4/2)}, \\ {\boldsymbol}{a}_2(t, {\boldsymbol}{x}^\epsilon(t),\epsilon) &= \begin{bmatrix} {\boldsymbol}{0} & -{\boldsymbol}{g}(t, {\boldsymbol}{x}^\epsilon_t) {\boldsymbol}{B}_2/m_0 & {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}^\epsilon_t){\boldsymbol}{B}_4/m_0 \\ \mathcal{M}_2^\epsilon {\boldsymbol}{h}(t, {\boldsymbol}{x}^\epsilon_t) & -{\boldsymbol}{\gamma}_{2,2} & {\boldsymbol}{0} \\ {\boldsymbol}{0} & {\boldsymbol}{0} & -{\boldsymbol}{\gamma}_{4,2} \\ \end{bmatrix} \\ &\ \ \ \ \in {\mathbb{R}}^{(d+d_2/2+d_4/2) \times (d+d_2/2+d_4/2)}, \nonumber \\ {\boldsymbol}{b}_1(t, {\boldsymbol}{x}^\epsilon(t),\epsilon) &= ({\boldsymbol}{0}, -{\boldsymbol}{\Gamma}_{2,1} {\boldsymbol}{y}_t^{2,1,\epsilon}, -{\boldsymbol}{\Gamma}_{4,1} {\boldsymbol}{\beta}_t^{4,1,\epsilon}) \in {\mathbb{R}}^{d+d_2/2+d_4/2},\\ {\boldsymbol}{b}_2(t,{\boldsymbol}{x}^\epsilon(t),\epsilon) &= ((-{\boldsymbol}{g}(t, {\boldsymbol}{x}^\epsilon_t){\boldsymbol}{B}_2{\boldsymbol}{y}_t^{2,1,\epsilon} + {\boldsymbol}{\sigma}(t, {\boldsymbol}{x}^\epsilon_t) {\boldsymbol}{B}_4 {\boldsymbol}{\beta}_t^{4,1,\epsilon}+{\boldsymbol}{F}_e(t,{\boldsymbol}{x}^\epsilon_t))/m_0, \nonumber \\ &\ \ \ \ \ \ 
{\boldsymbol}{0},{\boldsymbol}{0}) \in {\mathbb{R}}^{d+d_2/2+d_4/2}, \\ {\boldsymbol}{\sigma}_1(t,{\boldsymbol}{x}^\epsilon(t),\epsilon) &= [{\boldsymbol}{0} \ \ {\boldsymbol}{0} \ \ {\boldsymbol}{\sigma}_1^\epsilon ]^* \in {\mathbb{R}}^{(d+d_2/2+d_4/2)\times d_4/2}, \\ {\boldsymbol}{\sigma}_2(t, {\boldsymbol}{x}^\epsilon(t),\epsilon) &= [{\boldsymbol}{0} \ \ {\boldsymbol}{0} \ \ {\boldsymbol}{\sigma}_2^\epsilon ]^* \in {\mathbb{R}}^{(d+d_2/2+d_4/2)\times d_4/2}.\end{aligned}$$ The initial conditions are ${\boldsymbol}{x}^\epsilon(0) = ({\boldsymbol}{x}, {\boldsymbol}{0}, {\boldsymbol}{\beta}_0^{4,1,\epsilon})$ and ${\boldsymbol}{v}^\epsilon(0) = ({\boldsymbol}{v}, {\boldsymbol}{0}, {\boldsymbol}{\beta}_0^{4,2,\epsilon})$; both depend on $\epsilon$. We now verify each of the assumptions of Theorem \[mainthm\]. Assumption \[aexis\] clearly holds by our assumptions on the GLE. The assumptions on the coefficients in the SDEs follow easily from the Assumptions \[bounded\]-\[initialdata\] and therefore Assumption \[a1\_ch2\] holds. 
Next, note that ${\boldsymbol}{\beta}_0^{4,\epsilon} = ({\boldsymbol}{\beta}_0^{4,1,\epsilon}, {\boldsymbol}{\beta}_0^{4,2,\epsilon})$ is a random variable normally distributed with mean-zero and covariance: $${\boldsymbol}{M}_4^\epsilon = \begin{bmatrix} \mathbb{E}[|{\boldsymbol}{\beta}_0^{4,1,\epsilon}|^2] & \mathbb{E}[ {\boldsymbol}{\beta}_0^{4,1,\epsilon} ({\boldsymbol}{\beta}_0^{4,2,\epsilon})^*] \\ \mathbb{E}[{\boldsymbol}{\beta}_0^{4,2,\epsilon} ({\boldsymbol}{\beta}_0^{4,1,\epsilon})^*] & \mathbb{E}[|{\boldsymbol}{\beta}_0^{4,2,\epsilon}|^2] \end{bmatrix},$$ where $$\begin{aligned} \mathbb{E}[|{\boldsymbol}{\beta}_0^{4,1,\epsilon}|^2] &= \frac{1}{2}{\boldsymbol}{\Gamma}_{4,1} {\boldsymbol}{\gamma}_{4,2}^2 (\epsilon{\boldsymbol}{\Gamma}_{4,1}-{\boldsymbol}{\gamma}_{4,2})^{-2} = O(1), \\ \mathbb{E}[{\boldsymbol}{\beta}_0^{4,1,\epsilon} ({\boldsymbol}{\beta}_0^{4,2,\epsilon})^*] &= \mathbb{E}[ {\boldsymbol}{\beta}_0^{4,2,\epsilon} ({\boldsymbol}{\beta}_0^{4,1,\epsilon})^*] \nonumber \\ &= -{\boldsymbol}{\Gamma}_{4,1} {\boldsymbol}{\gamma}^3_{4,2} (\epsilon {\boldsymbol}{\Gamma}_{4,1}+{\boldsymbol}{\gamma}_{4,2})^{-1} (\epsilon {\boldsymbol}{\Gamma}_{4,1}-{\boldsymbol}{\gamma}_{4,2})^{-2} = O(1), \\ \mathbb{E}[|{\boldsymbol}{\beta}_0^{4,2,\epsilon}|^2] &= \frac{1}{2\epsilon}{\boldsymbol}{\gamma}^3_{4,2} (\epsilon {\boldsymbol}{\Gamma}_{4,1}-{\boldsymbol}{\gamma}_{4,2})^{-2} = O\left(\frac{1}{\epsilon}\right)\end{aligned}$$ as $\epsilon \to 0$. Using the bound $\mathbb{E}[ |{\boldsymbol}{z}|^p ]\leq C_p (\mathbb{E}[|{\boldsymbol}{z}|^2])^{p/2}$, where ${\boldsymbol}{z}$ is a mean-zero Gaussian random variable, $C_p>0$ is a constant and $p>0$, it is straightforward to see that Assumption \[a2\_ch2\] is satisfied. Note that ${\boldsymbol}{B}_i = {\boldsymbol}{b}_i$ (for $i=1,2$) by our convention, as the ${\boldsymbol}{b}_i$ do not depend explicitly on $\epsilon$. 
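The $\epsilon \to 0$ limits of the coefficients $\mathcal{M}_1^\epsilon$, $\mathcal{M}_2^\epsilon$, ${\boldsymbol}{\sigma}_1^\epsilon$, ${\boldsymbol}{\sigma}_2^\epsilon$ can also be verified symbolically. Below is a sketch in the scalar case, with a single pair of symbols $\gamma$, $\Gamma$ standing in for both the $i=2$ and $i=4$ parameter families (scalar stand-ins for the matrix quantities, so this checks only the algebraic form of the limits):

```python
import sympy as sp

eps, B, gam, Gam = sp.symbols('epsilon B gamma Gamma', positive=True)

# Scalar stand-ins for B_2, gamma_{2,2}, Gamma_{2,1} (and gamma_{4,2}, Gamma_{4,1}).
M1 = (Gam * gam**2 / (2 * (eps * Gam - gam)**2)
      - Gam * gam**3 / ((eps * Gam - gam)**2 * (eps * Gam + gam))) * B
M2 = (gam**3 / (2 * (eps * Gam - gam)**2)
      - eps * Gam * gam**3 / ((eps * Gam - gam)**2 * (eps * Gam + gam))) * B
s1 = -Gam * gam / (gam - Gam * eps)
s2 = gam**2 / (gam - Gam * eps)

# eps -> 0 limits; they should match the corresponding entries of T, U,
# Sigma_1 and Sigma_2: -B*Gamma/2, B*gamma/2, -Gamma and gamma.
print(sp.limit(M1, eps, 0))
print(sp.limit(M2, eps, 0))
print(sp.limit(s1, eps, 0))
print(sp.limit(s2, eps, 0))
```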
The uniform convergence of ${\boldsymbol}{a}_i(t, {\boldsymbol}{x},\epsilon)$, $({\boldsymbol}{a}_i)_{{\boldsymbol}{x}}(t, {\boldsymbol}{x},\epsilon)$ and ${\boldsymbol}{\sigma}_i(t, {\boldsymbol}{x},\epsilon)$ (in ${\boldsymbol}{x}$) to ${\boldsymbol}{A}_i(t, {\boldsymbol}{x})$, $({\boldsymbol}{A}_i)_{{\boldsymbol}{x}}(t, {\boldsymbol}{x})$ and ${\boldsymbol}{\Sigma}_i(t, {\boldsymbol}{x})$ respectively in the limit $\epsilon \to 0$ can be shown easily and, in fact, we see that ${\boldsymbol}{A}_1 = {\boldsymbol}{T}$, ${\boldsymbol}{A}_2 = -{\boldsymbol}{U}$, where ${\boldsymbol}{T}$ and ${\boldsymbol}{U}$ are given in the theorem, $$\begin{aligned} {\boldsymbol}{\Sigma}_1 &= [{\boldsymbol}{0} \ \ {\boldsymbol}{0} \ \ -{\boldsymbol}{\Gamma}_{4,1} ]^*, \\ {\boldsymbol}{\Sigma}_2 &= [{\boldsymbol}{0} \ \ {\boldsymbol}{0} \ \ {\boldsymbol}{\gamma}_{4,2} ]^*,\end{aligned}$$ and $a_1 = a_2 = c_1 = c_2 = d_1 = d_2 = 1$, $b_1=b_2 = \infty$, where the $a_i$, $b_i$, $c_i$ and $d_i$ are from Assumption \[a5\_ch2\] of Theorem \[mainthm\]. Therefore, the first part of Assumption \[a5\_ch2\] is satisfied. It remains to verify the (uniform) Hurwitz stability of ${\boldsymbol}{a}_2$ and ${\boldsymbol}{A}_2$ (i.e. Assumption \[a0\_ch2\] and the last part of Assumption \[a5\_ch2\]). This can be done using the methods of the proof of Theorem 2 in [@LimWehr_Homog_NonMarkovian] and we omit the details here. The result then follows by applying Theorem \[mainthm\]; the formulas \eqref{lim_2}-\eqref{TU} follow from matrix algebraic calculations. It is clear from Theorem \[compl\] that the homogenized position process is a component of the (slow) Markov process ${\boldsymbol}{\theta}_t$. In general, it is not a Markov process itself. Also, the components of ${\boldsymbol}{\theta}_t$ are coupled in a non-trivial way. We emphasize that one could use Theorem \[mainthm\] to study cases in which the different time scales are taken to zero in a different manner.
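For concreteness, the Lyapunov equation ${\boldsymbol}{U}{\boldsymbol}{J}+{\boldsymbol}{J}{\boldsymbol}{U}^* = diag({\boldsymbol}{0},{\boldsymbol}{0},{\boldsymbol}{\gamma}_{4,2}^2)$ appearing in Theorem \[compl\] can be solved numerically with a Bartels–Stewart solver. A sketch for a scalar toy choice of the coefficients (all numerical values hypothetical; $d = d_2/2 = d_4/2 = 1$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical scalar data: m_0 = g = h = sigma = B_2 = B_4 = 1,
# gamma_{2,2} = 2.0, gamma_{4,2} = 1.5.
m0, g, h, sig, B2, B4 = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
g22, g42 = 2.0, 1.5

# The matrix U of the theorem, in this scalar toy case.
U = np.array([[0.0,                 g * B2 / m0, -sig * B4 / m0],
              [-0.5 * g22 * B2 * h, g22,          0.0          ],
              [0.0,                 0.0,          g42          ]])
Q = np.diag([0.0, 0.0, g42**2])

# solve_continuous_lyapunov(a, q) solves a x + x a^H = q,
# which is exactly U J + J U^* = diag(0, 0, gamma_{4,2}^2) here.
J = solve_continuous_lyapunov(U, Q)

residual = U @ J + J @ U.T - Q
assert np.allclose(residual, 0.0, atol=1e-10)
print(J)
```

The solution is unique here because no two eigenvalues of $U$ sum to zero for this choice of parameters.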
The limiting SDE for the position process may simplify under additional assumptions. In particular, in the one-dimensional case, i.e. with $d=1$ (or when all the matrix-valued coefficients and parameters are diagonal in the multi-dimensional case), the formula for the limiting SDEs becomes more explicit. This special case has been studied in an earlier section in the context of the models (M1) and (M2) from Example \[ex\_mot\]. Conclusions and Final Remarks {#sect_conclusions} ============================= We have explored various homogenization schemes for a wide class of generalized Langevin equations and discussed the relevance of the studied limit problems in the context of usual and anomalous diffusion of a particle in a heat bath. Our explorations here open up a wide range of possibilities and provide insights into model reduction and effective drifts in generalized Langevin systems. The following summarizes the main conclusions of the paper: - (stochastic modeling point of view) Homogenization schemes producing effective SDEs driven by white noise should be the exception rather than the rule. This is particularly important if one seeks to reduce the original model while retaining its non-trivial features; - (complexity reduction point of view) There is a trade-off in simplifying GLE models with state-dependent coefficients: the greater the level of model reduction, the more complicated the correction drift terms entering the homogenized model; - (statistical physics point of view) The homogenized equation could be further simplified, i.e. the number of effective equations could be reduced and the drift terms simplified, when certain special conditions, such as a fluctuation-dissipation theorem, hold. We conclude this paper by mentioning a very interesting future direction.
As mentioned in Remark \[rem\_inf\_dim\], one could extend the current GLE studies to the infinite-dimensional setting so that a larger class of memory functions and covariance functions can be covered. To this end, one can define the noise process as an appropriate linear functional of a Hilbert-space-valued process solving a stochastic evolution equation [@da2014stochastic; @da1996ergodicity]. This way, one can approach a class of GLEs driven by noises having a completely monotone covariance function. This large class of functions contains covariances with power decay and thus the method outlined above can be viewed as an extension of those considered in [@glatt2018generalized; @2018arXiv180409682N], where the memory function and covariance of the driving noise are represented as suitable infinite series with a power-law tail (these works are, to our knowledge, among the few that rigorously study GLEs with a power-law memory). This approach to systems driven by strongly correlated noise, which is our future project, is expected to involve substantial technical difficulties. More importantly, one can expect that power decay of correlations leads to new phenomena, altering the nature of the noise-induced drift. Homogenization for a Class of SDEs with State-Dependent Coefficients {#sect_generalhomogthm} ==================================================================== In this section, we study homogenization for a general class of perturbed SDEs with state-dependent coefficients. Homogenization of differential equations has been extensively studied, from the seminal works of Kurtz [@kurtz1973limit], Papanicolaou [@papanicolaou1976some] and Khasminsky [@has1966stochastic] to the more recent works [@g2005analysis; @Pavliotis; @hottovy2015smoluchowski; @herzog2016small; @birrell2017; @birrell2017small; @chevyrev2016multiscale].
Here we are going to present yet another variant of homogenization result that will be needed for studying homogenization for our GLEs (see the last paragraph in Section \[goaletc\] for comments on novelty of this result). Let $n_1$, $n_2$, $k_1$, $k_2$ be positive integers. Let $\epsilon \in (0,\epsilon_0] =: \mathcal{E}$ be a small parameter and ${\boldsymbol}{x}^{\epsilon}(t) \in {\mathbb{R}}^{n_1}$, ${\boldsymbol}{v}^{\epsilon}(t) \in {\mathbb{R}}^{n_2}$ for $t \in [0,T]$, where $\epsilon_0>0$ and $T>0$ are finite constants. Let ${\boldsymbol}{W}^{(k_1)}$ and ${\boldsymbol}{W}^{(k_2)}$ denote independent Wiener processes, which are ${\mathbb{R}}^{k_1}$-valued and ${\mathbb{R}}^{k_2}$-valued respectively, on a filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, \mathbb{P})$ satisfying the usual conditions [@karatzas2012Brownian]. With respect to the standard bases of ${\mathbb{R}}^{n_1}$ and ${\mathbb{R}}^{n_2}$ respectively, we write: $$\begin{aligned} {\boldsymbol}{x}^{\epsilon}(t) &= ([x^{\epsilon}]_1(t),[x^{\epsilon}]_2(t),\dots, [x^{\epsilon}]_{n_1}(t)), \\ {\boldsymbol}{v}^{\epsilon}(t) &= ([v^{\epsilon}]_1(t),[v^{\epsilon}]_2(t),\dots, [v^{\epsilon}]_{n_2}(t)).\end{aligned}$$ We consider the following family of perturbed SDE systems[^4] for $({\boldsymbol}{x}^\epsilon(t), {\boldsymbol}{v}^\epsilon(t)) \in {\mathbb{R}}^{n_1+n_2}$: $$\begin{aligned} d{\boldsymbol}{x}^{\epsilon}(t) &= {\boldsymbol}{a}_{1}(t,{\boldsymbol}{x}^{\epsilon}(t), \epsilon) {\boldsymbol}{v}^{\epsilon}(t) dt + {\boldsymbol}{b}_{1}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt + {\boldsymbol}{\sigma}_{1}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) d{\boldsymbol}{W}^{(k_1)}(t), \label{sde1} \\ \epsilon d{\boldsymbol}{v}^{\epsilon}(t) &= {\boldsymbol}{a}_{2}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) {\boldsymbol}{v}^{\epsilon}(t) dt + {\boldsymbol}{b}_{2}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt + {\boldsymbol}{\sigma}_2(t,{\boldsymbol}{x}^{\epsilon}(t), \epsilon) 
d{\boldsymbol}{W}^{(k_2)}(t), \label{sde2}\end{aligned}$$ with the initial conditions ${\boldsymbol}{x}^{\epsilon}(0) = {\boldsymbol}{x}^\epsilon$ and ${\boldsymbol}{v}^{\epsilon}(0) = {\boldsymbol}{v}^\epsilon$, where ${\boldsymbol}{x}^\epsilon$ and ${\boldsymbol}{v}^\epsilon$ are random variables that possibly depend on $\epsilon$. In the SDEs \eqref{sde1}-\eqref{sde2}, the coefficients ${\boldsymbol}{a}_1: {\mathbb{R}}^+ \times {\mathbb{R}}^{n_1} \times \mathcal{E} \to {\mathbb{R}}^{n_1 \times n_2}$, ${\boldsymbol}{a}_2 : {\mathbb{R}}^+ \times {\mathbb{R}}^{n_1} \times \mathcal{E} \to {\mathbb{R}}^{n_2 \times n_2}$, ${\boldsymbol}{\sigma}_2 : {\mathbb{R}}^+ \times {\mathbb{R}}^{n_1} \times \mathcal{E} \to {\mathbb{R}}^{n_2 \times k_2}$ are non-zero matrix-valued functions, whereas ${\boldsymbol}{b}_1 : {\mathbb{R}}^+ \times {\mathbb{R}}^{n_1} \times \mathcal{E} \to {\mathbb{R}}^{n_1}$, ${\boldsymbol}{b}_2 : {\mathbb{R}}^+ \times {\mathbb{R}}^{n_1} \times \mathcal{E} \to {\mathbb{R}}^{n_2}$, ${\boldsymbol}{\sigma}_1 : {\mathbb{R}}^+ \times {\mathbb{R}}^{n_1} \times \mathcal{E} \to {\mathbb{R}}^{n_1 \times k_1}$ are matrix-valued or vector-valued functions, which may depend on ${\boldsymbol}{x}^{\epsilon}$, as well as on $t$ and $\epsilon$ explicitly, as indicated by the arguments $(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon)$. In the case where the coefficients do not depend on $\epsilon$ explicitly, we will denote them by the corresponding capital letters (for instance, if ${\boldsymbol}{a}_i(t,{\boldsymbol}{x},\epsilon)={\boldsymbol}{a}_i(t,{\boldsymbol}{x})$, then ${\boldsymbol}{a}_i(t,{\boldsymbol}{x}) := {\boldsymbol}{A}_i(t,{\boldsymbol}{x})$ etc.). We are interested in the limit as $\epsilon \to 0$ of the SDEs \eqref{sde1}-\eqref{sde2}, in particular the limiting behavior of the process ${\boldsymbol}{x}^{\epsilon}(t)$, under appropriate assumptions[^5] on the coefficients.
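As a minimal concrete instance of this family (a toy sketch, not the general setting): take $n_1 = n_2 = 1$, ${\boldsymbol}{a}_1 = 1$, ${\boldsymbol}{a}_2 = -1$, ${\boldsymbol}{\sigma}_2 = 1$ and ${\boldsymbol}{b}_1 = {\boldsymbol}{b}_2 = {\boldsymbol}{\sigma}_1 = 0$. Integrating the fast equation gives $x^\epsilon_t = x^\epsilon_0 + W_t - \epsilon(v^\epsilon_t - v^\epsilon_0)$, so the limiting process is $X_t = x_0 + W_t$ with pathwise error of order $\sqrt{\epsilon}$. An Euler–Maruyama check of this scalar case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar instance:  dx = v dt,  eps dv = -v dt + dW  (limit: X_t = x(0) + W_t).
eps, T, dt = 1e-3, 1.0, 1e-5        # dt << eps keeps the explicit scheme stable
n = int(T / dt)
dW = np.sqrt(dt) * rng.standard_normal(n)

x = np.empty(n + 1)
x[0], v = 0.0, 0.0
for k in range(n):                   # Euler-Maruyama for the pair (x, v)
    x[k + 1] = x[k] + v * dt
    v += (-v * dt + dW[k]) / eps

W = np.concatenate(([0.0], np.cumsum(dW)))
sup_err = np.max(np.abs(x - W))      # sup_{t <= T} |x^eps_t - X_t|
print(sup_err)                       # small, of order sqrt(eps)
```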
In this section, we present a homogenization theorem that studies this limit and delay its proof and applications to later sections.\ We make the following assumptions concerning the SDEs - and . \[aexis\] The global solutions, defined on $[0,T]$, to the pre-limit SDEs - and to the limiting SDE a.s. exist and are unique for all $\epsilon \in \mathcal{E} $ (i.e. there are no explosions). \[a0\_ch2\] The matrix-valued functions $$\{ -{\boldsymbol}{a}_2(t,{\boldsymbol}{y}, \epsilon); t \in [0,T], {\boldsymbol}{y} \in {\mathbb{R}}^{n_1}, \epsilon \in \mathcal{E} \}$$ are [*uniformly positive stable*]{}, i.e. all real parts of the eigenvalues of $-{\boldsymbol}{a}_2(t, {\boldsymbol}{y},\epsilon)$ are bounded from below, uniformly in $t$, ${\boldsymbol}{y}$ and $\epsilon$, by a positive constant (or, equivalently, the matrix-valued functions $\{{\boldsymbol}{a}_2(t, {\boldsymbol}{y},\epsilon); t \in [0,T], {\boldsymbol}{y} \in {\mathbb{R}}^{n_1}, \epsilon \in \mathcal{E} \}$ are [*uniformly Hurwitz stable*]{}). They are $O(1)$ as $\epsilon \to 0$ (see Assumption \[a5\_ch2\]). \[a1\_ch2\] For $t \in [0,T]$, ${\boldsymbol}{y} \in {\mathbb{R}}^{n_1}$, $\epsilon \in \mathcal{E}$, and $i=1,2$, the functions ${\boldsymbol}{b}_i(t,{\boldsymbol}{y},\epsilon)$ and ${\boldsymbol}{\sigma}_i(t,{\boldsymbol}{y},\epsilon)$ are continuous and bounded in $t$ and ${\boldsymbol}{y}$, and Lipschitz in ${\boldsymbol}{y}$, whereas the functions ${\boldsymbol}{a}_i(t,{\boldsymbol}{y},\epsilon)$ and $({\boldsymbol}{a}_i)_{{\boldsymbol}{y}}(t,{\boldsymbol}{y},\epsilon)$ are continuous in $t$, continuously differentiable in ${\boldsymbol}{y}$, bounded in $t$ and ${\boldsymbol}{y}$, and Lipschitz in ${\boldsymbol}{y}$. Moreover, the functions $({\boldsymbol}{a}_i)_{{\boldsymbol}{y} {\boldsymbol}{y}}(t,{\boldsymbol}{y},\epsilon)$ ($i=1,2$) are bounded for every $t \in [0,T]$, ${\boldsymbol}{y} \in {\mathbb{R}}^{n_1}$ and $\epsilon \in \mathcal{E}$. 
We assume that the (global) Lipschitz constants are bounded by $L(\epsilon)$, where $L(\epsilon)=O(1)$ as $\epsilon \to 0$, i.e. for every $t \in [0,T]$, ${\boldsymbol}{x}$, ${\boldsymbol}{y} \in {\mathbb{R}}^{n_1}$, $$\begin{aligned} &\max\bigg\{\|{\boldsymbol}{a}_i(t, {\boldsymbol}{x},\epsilon)-{\boldsymbol}{a}_i(t,{\boldsymbol}{y},\epsilon)\|,\|({\boldsymbol}{a}_i)_{{\boldsymbol}{x}}(t,{\boldsymbol}{x},\epsilon)-({\boldsymbol}{a}_i)_{{\boldsymbol}{x}}(t,{\boldsymbol}{y},\epsilon)\|, \nonumber \\ &\hspace{1.1cm} |{\boldsymbol}{b}_i(t,{\boldsymbol}{x},\epsilon)-{\boldsymbol}{b}_i(t,{\boldsymbol}{y},\epsilon)|, \|{\boldsymbol}{\sigma}_i(t,{\boldsymbol}{x},\epsilon)-{\boldsymbol}{\sigma}_i(t,{\boldsymbol}{y},\epsilon)\|; \ i=1,2\bigg\} \nonumber \\ &\leq L(\epsilon)|{\boldsymbol}{x}-{\boldsymbol}{y}|.\end{aligned}$$ \[a2\_ch2\] The initial condition ${\boldsymbol}{x}^\epsilon_0 = {\boldsymbol}{x}^\epsilon \in {\mathbb{R}}^{n_1}$ is an $\mathcal{F}_0$-measurable random variable that may depend on $\epsilon$, and we assume that $\mathbb{E}[|{\boldsymbol}{x}^\epsilon|^p] = O(1)$ as $\epsilon \to 0$ for all $p>0$. Also, ${\boldsymbol}{x}^\epsilon$ converges, in the limit as $\epsilon \to 0$, to a random variable ${\boldsymbol}{x}$ as follows: $\mathbb{E}\left[|{\boldsymbol}{x}^\epsilon - {\boldsymbol}{x}|^p \right] = O(\epsilon^{p r_0})$, where $r_0 > 1/2$ is a constant, as $\epsilon \to 0$. The initial condition ${\boldsymbol}{v}^\epsilon_0 = {\boldsymbol}{v}^\epsilon \in {\mathbb{R}}^{n_2}$ is an $\mathcal{F}_0$-measurable random variable that may depend on $\epsilon$, and we assume that for every $p>0$, $\mathbb{E}[ |\epsilon {\boldsymbol}{v}^\epsilon|^p] = O(\epsilon^\alpha)$ as $\epsilon \to 0$, for some $\alpha \geq p/2$. 
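To illustrate the moment scaling in the assumption above: in the GLE application the fast initial data is Gaussian with variance of order $1/\epsilon$, so $\mathbb{E}[|\epsilon {\boldsymbol}{v}^\epsilon|^2] = O(\epsilon)$, i.e. the assumption holds for $p=2$ with $\alpha = 1 \geq p/2$. A quick numerical check in the scalar case (the variance constant $c$ is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
c = 2.0                              # hypothetical variance constant

# v^eps ~ N(0, c/eps), so E|eps * v^eps|^2 = eps^2 * (c/eps) = c * eps = O(eps).
moments = {}
for eps in (1e-1, 1e-2, 1e-3):
    v = rng.normal(0.0, np.sqrt(c / eps), size=200_000)
    moments[eps] = np.mean((eps * v) ** 2)

for eps, m in moments.items():
    print(eps, m, m / eps)           # the ratio m/eps stays close to c
```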
\[a5\_ch2\] For $i=1,2$, $t \in [0,T]$, and every ${\boldsymbol}{x} \in {\mathbb{R}}^{n_1}$, each of the matrix or vector entries of the (non-zero) functions ${\boldsymbol}{a}_i(t,{\boldsymbol}{x},\epsilon)$, $({\boldsymbol}{a}_i)_{{\boldsymbol}{x}}(t,{\boldsymbol}{x},\epsilon)$, ${\boldsymbol}{b}_i(t,{\boldsymbol}{x},\epsilon)$ and ${\boldsymbol}{\sigma}_i(t,{\boldsymbol}{x},\epsilon)$, converges, uniformly in ${\boldsymbol}{x}$, to a unique non-zero element, in the limit as $\epsilon \to 0$. Their limits are denoted by ${\boldsymbol}{A}_i(t,{\boldsymbol}{x})$, $({\boldsymbol}{A}_i)_{{\boldsymbol}{x}}(t,{\boldsymbol}{x})$, ${\boldsymbol}{B}_i(t,{\boldsymbol}{x})$ and ${\boldsymbol}{\Sigma}_i(t,{\boldsymbol}{x})$ respectively. Their rate of convergence is assumed to satisfy the following power law bounds: for every $t \in [0,T]$, ${\boldsymbol}{x} \in {\mathbb{R}}^{n_1}$ and $i=1,2$, $$\begin{aligned} \|{\boldsymbol}{a}_i(t,{\boldsymbol}{x},\epsilon)-{\boldsymbol}{A}_i(t,{\boldsymbol}{x}) \| &\leq \alpha_i(\epsilon), \\ |{\boldsymbol}{b}_i(t,{\boldsymbol}{x},\epsilon)-{\boldsymbol}{B}_i(t,{\boldsymbol}{x}) | &\leq \beta_i(\epsilon), \\ \|{\boldsymbol}{\sigma}_i(t,{\boldsymbol}{x},\epsilon)-{\boldsymbol}{\Sigma}_i(t,{\boldsymbol}{x}) \| &\leq \gamma_i(\epsilon),\\ \| ({\boldsymbol}{a}_i)_{{\boldsymbol}{x}}(t,{\boldsymbol}{x},\epsilon)-({\boldsymbol}{A}_i )_{{\boldsymbol}{x}}(t,{\boldsymbol}{x}) \| &\leq \theta_i(\epsilon)\end{aligned}$$ where $\alpha_i(\epsilon) = O(\epsilon^{a_i})$, $\beta_i(\epsilon) = O(\epsilon^{b_i})$, $\gamma_i(\epsilon) = O(\epsilon^{c_i})$ and $\theta_i(\epsilon) = O(\epsilon^{d_i})$, as $\epsilon \to 0$, for some positive exponents $a_i$, $b_i$, $c_i$ and $d_i$. 
Moreover, we assume that ${\boldsymbol}{A}_2(t,{\boldsymbol}{x})$ is Hurwitz stable for every $t$ and ${\boldsymbol}{x}$.\ [**Convention.**]{} In the case where the coefficients do not show explicit dependence on $\epsilon$, or when any of the coefficients ${\boldsymbol}{b}_1$, ${\boldsymbol}{b}_2$ and ${\boldsymbol}{\sigma}_1$ is zero, we set the exponent, describing the corresponding rate of convergence, to infinity. For instance, if ${\boldsymbol}{a}_i(t,{\boldsymbol}{x},\epsilon) = {\boldsymbol}{A}_i(t,{\boldsymbol}{x})$, we set $a_i = \infty$. Meanwhile, if ${\boldsymbol}{\sigma}_1 = {\boldsymbol}{0}$, we set $c_1 = \infty$, etc. We now state our homogenization theorem. \[mainthm\] Suppose that the family of SDE systems $\eqref{sde1}$-$\eqref{sde2}$ satisfies Assumptions \[aexis\]-\[a5\_ch2\]. Let $({\boldsymbol}{x}^{\epsilon}(t), {\boldsymbol}{v}^{\epsilon}(t)) \in {\mathbb{R}}^{n_1} \times {\mathbb{R}}^{n_2}$ be their solutions, with the initial conditions $({\boldsymbol}{x}^\epsilon, {\boldsymbol}{v}^\epsilon)$.
Let ${\boldsymbol}{X}(t) \in {\mathbb{R}}^{n_1}$ be the solution to the following Itô SDE with the initial position ${\boldsymbol}{X}(0) = {\boldsymbol}{x}$: $$\begin{aligned} d{\boldsymbol}{X}(t) &= [{\boldsymbol}{B}_1(t,{\boldsymbol}{X}(t))-{\boldsymbol}{A}_1(t,{\boldsymbol}{X}(t)){\boldsymbol}{A}_2^{-1}(t,{\boldsymbol}{X}(t)){\boldsymbol}{B}_2(t,{\boldsymbol}{X}(t))] dt \nonumber \\ &\ \ \ \ + {\boldsymbol}{S}(t,{\boldsymbol}{X}(t)) dt + {\boldsymbol}{\Sigma}_1(t,{\boldsymbol}{X}(t)) d{\boldsymbol}{W}^{(k_1)}(t) \nonumber \\ &\ \ \ \ - {\boldsymbol}{A}_1(t,{\boldsymbol}{X}(t)) {\boldsymbol}{A}_2^{-1}(t,{\boldsymbol}{X}(t)){\boldsymbol}{\Sigma}_2(t,{\boldsymbol}{X}(t)) d{\boldsymbol}{W}^{(k_2)}(t), \label{mainlimitingeqn}\end{aligned}$$ where ${\boldsymbol}{S}(t,{\boldsymbol}{X}(t))$ is the [*noise-induced drift vector*]{} whose $i$th component is given by $$[S]_{i}(t,{\boldsymbol}{X}) = -\frac{\partial}{\partial X_{l}} \bigg([A_1 A_2^{-1}]_{i,j}(t,{\boldsymbol}{X}) \bigg) \cdot [A_1]_{l,k}(t,{\boldsymbol}{X}) \cdot [J]_{j,k}(t,{\boldsymbol}{X}),$$ where $i,l=1,\dots,n_1, \ j,k=1,\dots,n_2$, or in index-free notation, $$\label{indexfree} {\boldsymbol}{S} = {\boldsymbol}{A}_1 {\boldsymbol}{A}_2^{-1} {\boldsymbol}{\nabla}\cdot ({\boldsymbol}{J}{\boldsymbol}{A}_1^*) -{\boldsymbol}{\nabla} \cdot ({\boldsymbol}{A}_1 {\boldsymbol}{A}_2^{-1} {\boldsymbol}{J} {\boldsymbol}{A}_1^*) ,$$ and ${\boldsymbol}{J} \in {\mathbb{R}}^{n_2 \times n_2}$ is the unique solution to the Lyapunov equation: $$\label{lyp} {\boldsymbol}{J} {\boldsymbol}{A}_2^{*} + {\boldsymbol}{A}_2 {\boldsymbol}{J} = -{\boldsymbol}{\Sigma}_2 {\boldsymbol}{\Sigma}_2^{*}.$$ Then the process ${\boldsymbol}{x}^{\epsilon}(t)$ converges, as $\epsilon \to 0$, to the solution ${\boldsymbol}{X}(t)$, of the Itô SDE , in the following sense: for all finite $T > 0$, $p > 0$, there exists a positive random variable $\epsilon_1$ such that $$\label{mainconv} \mathbb{E}\left[\sup_{t \in [0,T]} 
|{\boldsymbol}{x}^{\epsilon}(t) - {\boldsymbol}{X}(t)|^p; \epsilon \leq \epsilon_1 \right] = O(\epsilon^{r}),$$ in the limit as $\epsilon \to 0$, where the rate $r>0$ is given by: $$\label{rate_mainresult} r= \begin{cases} \beta \ \text{ for all } 0 < \beta < \frac{p}{2}, & \text{ if}\ a_i, b_i, c_i, d_i \geq \frac{1}{2} \text{ for } i=1,2, \\ p \cdot \min(a_i, b_i, c_i, d_i; i=1,2) , & \text{ otherwise}, \end{cases}$$ where the $a_i$, $b_i$, $c_i$, $d_i$ ($i=1,2$) are the positive constants from Assumption \[a5\_ch2\]. In particular, for all finite $T>0$, $$\sup_{t \in [0,T]} |{\boldsymbol}{x}^\epsilon(t) - {\boldsymbol}{X}(t)| \to 0,$$ in probability, in the limit as $\epsilon \to 0$. \[warn\] With more work and additional assumptions, one could prove the statements in Assumption \[aexis\] from Assumptions \[a0\_ch2\]-\[a5\_ch2\]. However, we choose to incorporate such existence and uniqueness results into our assumptions and work with the assumptions as stated above. Moreover, as we have forewarned the readers, our assumptions can be relaxed in various directions at the cost of more technicalities. For instance, the boundedness assumption on the coefficients of the SDEs may be removed to still obtain a pathwise convergence result by adapting the techniques in [@herzog2016small] – see also analogous remarks in Remark 5 in [@LimWehr_Homog_NonMarkovian]. However, we choose not to pursue the above technical details in this already lengthy paper. Proof of Theorem \[mainthm\] {#proof_ch2} ============================ The proof of Theorem \[mainthm\] uses techniques developed in earlier works [@hottovy2015smoluchowski; @2017BirrellLatest; @LimWehr_Homog_NonMarkovian], but here one needs to additionally take into account the $\epsilon$-dependence of the coefficients in the SDEs \eqref{sde1}-\eqref{sde2}. As a preparation for the proof, we need a few lemmas and propositions. We start from an elementary calculus result.
\[lipzlemma\] For $i=1,\dots,N$, let ${\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon): {\mathbb{R}}^n \times (0,\infty) \to {\mathbb{R}}^{m_i \times n}$ be bounded and globally Lipschitz in ${\boldsymbol}{y}$ for every $\epsilon > 0$, with a Lipschitz constant that is bounded as $\epsilon \to 0$, i.e. for every ${\boldsymbol}{y}, {\boldsymbol}{z} \in {\mathbb{R}}^{n}$, there exists a constant $M_i(\epsilon)>0$ such that $$\|{\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon)-{\boldsymbol}{f}_i({\boldsymbol}{z},\epsilon)\| \leq M_i(\epsilon)|{\boldsymbol}{y}-{\boldsymbol}{z}|,$$ where $M_i(\epsilon)=O(1)$ as $\epsilon \to 0$. - Suppose that for each $i$ and ${\boldsymbol}{y} \in {\mathbb{R}}^{n}$, there exists a unique bounded ${\boldsymbol}{F}_i({\boldsymbol}{y}):{\mathbb{R}}^n \to {\mathbb{R}}^{m_i \times n}$ and a constant $C_i>0$ such that $\|{\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon) - {\boldsymbol}{F}_i({\boldsymbol}{y})\| \leq C_i \epsilon^{r_i}$, for some positive constant $r_i$, as $\epsilon \to 0$ (i.e. the left-hand side is of order $O(\epsilon^{r_i})$ as $\epsilon \to 0$). Then there exist constants $D$, $K_1, \dots, K_N >0$, such that $$\begin{aligned} \label{boundlip} \bigg\|\prod_{i=1}^{N} {\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon)-\prod_{i=1}^N {\boldsymbol}{F}_i({\boldsymbol}{y})\bigg\| &\leq K_1 \epsilon^{r_1} + \dots + K_N \epsilon^{r_N} \leq D \epsilon^{\min(r_1, \dots, r_N)} \\ &= O(\epsilon^{\min(r_1, \dots, r_N)}),\end{aligned}$$ as $\epsilon \to 0$. If, in addition, $n=m_1$, ${\boldsymbol}{f}_1({\boldsymbol}{y},\epsilon)$ and ${\boldsymbol}{F}_1({\boldsymbol}{y})$ are invertible for every ${\boldsymbol}{y} \in {\mathbb{R}}^n$ and $\epsilon > 0$, then $\|{\boldsymbol}{f}_1^{-1}({\boldsymbol}{y},\epsilon)-{\boldsymbol}{F}_1^{-1}({\boldsymbol}{y})\| = O(\epsilon^{r_1})$ as $\epsilon \to 0$. - Let $c_i \in {\mathbb{R}}$, $i=1,\dots,N$.
For every $\epsilon > 0$ and ${\boldsymbol}{y} \in {\mathbb{R}}^n$, $\sum_{i=1}^{N} c_i {\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon)$ and $\prod_{i=1}^N c_i {\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon)$ are globally Lipschitz with a Lipschitz constant that is $O(1)$ as $\epsilon \to 0$. Moreover, if $m_1=n$ and for every $\epsilon>0$, ${\boldsymbol}{y} \in {\mathbb{R}}^n$, ${\boldsymbol}{f}_1({\boldsymbol}{y},\epsilon)$ is invertible, then for every $\epsilon > 0$, ${\boldsymbol}{y} \in {\mathbb{R}}^n$, ${\boldsymbol}{f}^{-1}_1({\boldsymbol}{y},\epsilon)$ is globally Lipschitz in ${\boldsymbol}{y}$ with a Lipschitz constant that is $O(1)$ as $\epsilon \to 0$. <!-- --> - We prove this inductively. The base case of $N=1$ clearly holds with $D = C_1$. Let $k \in \{1,\dots,N-1\}$. Assume that holds with $N:=k$ and $D := D_k$. Then $$\begin{aligned} &\bigg\|\prod_{i=1}^{k+1} {\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon)-\prod_{i=1}^{k+1} {\boldsymbol}{F}_i({\boldsymbol}{y})\bigg\| \nonumber \\ &= \bigg\|{\boldsymbol}{f}_{k+1}({\boldsymbol}{y},\epsilon)\cdot\prod_{i=1}^{k} {\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon)-{\boldsymbol}{F}_{k+1}({\boldsymbol}{y})\cdot \prod_{i=1}^{k} {\boldsymbol}{F}_i({\boldsymbol}{y})\bigg\| \\ &\leq \|{\boldsymbol}{f}_{k+1}({\boldsymbol}{y},\epsilon)\| \cdot \left\| \prod_{i=1}^{k} {\boldsymbol}{f}_i({\boldsymbol}{y},\epsilon)-\prod_{i=1}^{k} {\boldsymbol}{F}_i({\boldsymbol}{y})\right\| \nonumber \\ &\ \ \ \ \ + \|{\boldsymbol}{f}_{k+1}({\boldsymbol}{y},\epsilon)- {\boldsymbol}{F}_{k+1}({\boldsymbol}{y})\| \cdot \left\| \prod_{i=1}^{k} {\boldsymbol}{F}_i({\boldsymbol}{y})\right\| \\ &\leq C ( D_k \epsilon^{\min(r_1,\dots,r_k)} + C_{k+1}\epsilon^{r_{k+1}} ) \\ &\leq C \max\{D_k, C_{k+1} \}(\epsilon^{\min(r_1,\dots,r_k)} + \epsilon^{r_{k+1}}) \leq D_{k+1} \epsilon^{\min(r_1,\dots,r_{k+1})},\end{aligned}$$ as $\epsilon \to 0$, where $C$, $D_{k+1}$ are positive constants and we have used the inductive hypothesis and assumptions of the lemma 
in the last two lines above. The last statement follows from: $$\begin{aligned} \|{\boldsymbol}{f}_1^{-1}({\boldsymbol}{y},\epsilon)-{\boldsymbol}{F}_1^{-1}({\boldsymbol}{y})\| &= \|{\boldsymbol}{f}_1^{-1}({\boldsymbol}{y},\epsilon) ({\boldsymbol}{F}_1({\boldsymbol}{y})-{\boldsymbol}{f}_1({\boldsymbol}{y},\epsilon)) {\boldsymbol}{F}_1^{-1}({\boldsymbol}{y}) \| \\ &\leq \|{\boldsymbol}{f}_1^{-1}({\boldsymbol}{y},\epsilon)\|\cdot \| {\boldsymbol}{F}_1({\boldsymbol}{y})-{\boldsymbol}{f}_1({\boldsymbol}{y},\epsilon) \| \cdot \|{\boldsymbol}{F}_1^{-1}({\boldsymbol}{y}) \| \\ &\leq C \epsilon^{r_1},\end{aligned}$$ as $\epsilon \to 0$, where $C$ is a positive constant. - The statements can be proven using the same techniques used for (i) and so we omit the proof. Let ${\boldsymbol}{x}^\epsilon(t) \in {\mathbb{R}}^{n_1}$, ${\boldsymbol}{v}^\epsilon(t) \in {\mathbb{R}}^{n_2}$ and $T>0$. For $t \in [0,T]$, let ${\boldsymbol}{p}^{\epsilon}(t) := \epsilon {\boldsymbol}{v}^{\epsilon}(t)$ denote a solution of the SDE: $$\begin{aligned} d{\boldsymbol}{p}^{\epsilon}(t) &= \frac{{\boldsymbol}{a}_{2}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon)}{\epsilon} {\boldsymbol}{p}^{\epsilon}(t) dt + {\boldsymbol}{b}_{2}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt + {\boldsymbol}{\sigma}_2(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) d{\boldsymbol}{W}^{(k_2)}(t). \label{sdeforp}\end{aligned}$$ We provide estimates for the moments of the process ${\boldsymbol}{p}^\epsilon(t)$, under appropriate assumptions on the coefficients and the initial conditions, in the limit as $\epsilon \to 0$. We need the following lemma, adapted from Proposition A.2.3 of [@kabanov2013two], to obtain an exponential bound on a certain fundamental matrix solution. \[expb\] Fix a filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, \mathbb{P})$.
For each $\epsilon > 0$, let ${\boldsymbol}{B}^\epsilon: [0,T] \times \Omega \to {\mathbb{R}}^{n \times n}$ be a bounded (uniformly in $\epsilon$, $\omega \in \Omega$ and $t \in [0,T]$), pathwise continuous process. Assume that the real parts of all eigenvalues of ${\boldsymbol}{B}^\epsilon$ are bounded from above by $-2\kappa$, uniformly in $\epsilon$, $\omega \in \Omega$ and $t \in [0,T]$, where $\kappa$ is a positive constant. Let ${\boldsymbol}{\Phi}^\epsilon(t,s,\omega)$ be the fundamental matrix that solves the initial value problem (IVP): $$\label{ivp} \frac{\partial {\boldsymbol}{\Phi}^\epsilon(t,s,\omega)}{\partial t} = \frac{{\boldsymbol}{B}^\epsilon(t,\omega)}{\epsilon} {\boldsymbol}{\Phi}^\epsilon(t,s,\omega), \ \ {\boldsymbol}{\Phi}^\epsilon(s,s,\omega)={\boldsymbol}{I}, \ \ 0 \leq s \leq t \leq T.$$ Then there exists a constant $C > 0$ and an (in general random[^6]) $\epsilon_{1}=\epsilon_1(\omega)$ such that $$\label{888} \|{\boldsymbol}{\Phi}^\epsilon(t,s,\omega)\| \leq C e^{-\kappa(t-s)/\epsilon}$$ for all $\epsilon \leq \epsilon_{1}$ and for all $s,t \in [0,T]$. Let $u \in [s,t]$. We rewrite for $\omega \in \Omega$, $s, t \in [0,T]$: $$\frac{\partial {\boldsymbol}{\Phi}^\epsilon(t,s,\omega)}{\partial t} = \frac{{\boldsymbol}{B}^\epsilon(u,\omega)}{\epsilon} {\boldsymbol}{\Phi}^\epsilon(t,s,\omega) + \frac{{\boldsymbol}{B}^\epsilon(t,\omega)-{\boldsymbol}{B}^\epsilon(u,\omega)}{\epsilon} {\boldsymbol}{\Phi}^\epsilon(t,s,\omega),$$ and represent the solution to the IVP as: $${\boldsymbol}{\Phi}^\epsilon(t,s,\omega) = e^{(t-s)\frac{{\boldsymbol}{B}^\epsilon(u,\omega)}{\epsilon}} + \frac{1}{\epsilon} \int_s^t e^{(t-r) \frac{{\boldsymbol}{B}^\epsilon(u,\omega)}{\epsilon}} ({\boldsymbol}{B}^\epsilon(r,\omega)-{\boldsymbol}{B}^\epsilon(u,\omega)) {\boldsymbol}{\Phi}^\epsilon(r,s,\omega) dr.$$ Denote ${\boldsymbol}{W}^\epsilon(t,s,\omega) := e^{\kappa(t-s)/\epsilon} {\boldsymbol}{\Phi}^\epsilon(t,s,\omega)$.
Setting $u = t$ in the above representation and multiplying both sides by $e^{\kappa(t-s)/\epsilon}$, we obtain: $$\begin{aligned} &{\boldsymbol}{W}^\epsilon(t,s,\omega) \nonumber \\ &= e^{\kappa(t-s)/\epsilon} e^{(t-s){\boldsymbol}{B}^\epsilon(t,\omega)/\epsilon} + \frac{1}{\epsilon} \int_s^t e^{\kappa(t-s)/\epsilon} e^{(t-r) {\boldsymbol}{B}^\epsilon(t,\omega)/\epsilon} ({\boldsymbol}{B}^\epsilon(r,\omega)-{\boldsymbol}{B}^\epsilon(t,\omega)) \nonumber \\ &\ \hspace{5cm} \cdot {\boldsymbol}{\Phi}^\epsilon(r,s,\omega) dr \\ &= e^{\kappa(t-s)/\epsilon} e^{(t-s){\boldsymbol}{B}^\epsilon(t,\omega)/\epsilon} + \frac{1}{\epsilon} \int_s^t e^{\kappa(t-s)/\epsilon} e^{(t-r) {\boldsymbol}{B}^\epsilon(t,\omega)/\epsilon} e^{-\kappa(r-s)/\epsilon} \nonumber \\ &\ \hspace{5cm} \cdot ({\boldsymbol}{B}^\epsilon(r,\omega)-{\boldsymbol}{B}^\epsilon(t,\omega)) {\boldsymbol}{W}^\epsilon(r,s,\omega) dr.\end{aligned}$$ Since ${\boldsymbol}{B}^\epsilon$ is bounded (uniformly in $\omega$, $t$ and $\epsilon$), by assumption on the spectrum of ${\boldsymbol}{B}^\epsilon$, there exists a constant $C > 0$, such that for all $s,t \in [0,T]$ we have $$\|e^{s {\boldsymbol}{B}^\epsilon(t,\omega)/\epsilon}\| \leq C e^{-2\kappa s/\epsilon}$$ Using this, we obtain: $$\begin{aligned} &\| {\boldsymbol}{W}^\epsilon(t,s,\omega)\| \nonumber \\ &\leq C e^{-\kappa(t-s)/\epsilon} \nonumber \\ &\ \hspace{0.02cm} + \frac{C}{\epsilon} \int_s^t e^{-2 \kappa(t-r)/\epsilon} e^{-\kappa(r-s)/\epsilon} e^{\kappa(t-s)/\epsilon} \| {\boldsymbol}{W}^\epsilon(r,s,\omega)\| \cdot \|{\boldsymbol}{B}^\epsilon(r,\omega)-{\boldsymbol}{B}^\epsilon(t,\omega)\| dr.\end{aligned}$$ This leads to the estimate: $$\begin{aligned} &\sup_{s,t \in [0,T]} \| {\boldsymbol}{W}^\epsilon(t,s,\omega)\| \leq C + \sup_{r,s \in [0,T]}\| {\boldsymbol}{W}^\epsilon(r,s,\omega)\| \cdot A_\epsilon(\omega),\end{aligned}$$ where $$A_\epsilon(\omega) = \frac{C}{\epsilon} \sup_{t \in [0,T]} \int_0^t e^{- \frac{\kappa(t-r)}{\epsilon}} 
\left\|{\boldsymbol}{B}^\epsilon(r,\omega)-{\boldsymbol}{B}^\epsilon(t,\omega)\right\| dr.$$ For a fixed $\omega \in \Omega$, $A_\epsilon(\omega)$ can be made arbitrarily small as $\epsilon \to 0$. Therefore, there exists an $\epsilon_1 = \epsilon_1(\omega) > 0$ (generally dependent on $\omega$) such that $$\sup_{s,t \in [0,T]} \| {\boldsymbol}{W}^\epsilon(t,s,\omega)\| \leq C + \frac{1}{2} \sup_{s,t \in [0,T]} \| {\boldsymbol}{W}^\epsilon(t,s,\omega)\|$$ for all $\epsilon \leq \epsilon_1$. This implies that $\sup_{s,t \in [0,T]} \| {\boldsymbol}{W}^\epsilon(t,s,\omega)\| \leq 2C$, which is the claimed bound. We now prove a lemma that gives a bound on a class of stochastic integrals. It is a modification of Lemma 5.1 in [@birrell2017small]. In both cases, the main idea is to rewrite some of the stochastic integrals in terms of ordinary ones. \[sib\] Let ${\boldsymbol}{H}_{t} := {\boldsymbol}{H}_{0} + {\boldsymbol}{M}_{t} + {\boldsymbol}{A}_{t}$ be the Doob-Meyer decomposition of a continuous ${\mathbb{R}}^{k}$-valued semimartingale on $(\Omega, \mathcal{F}, \mathcal{F}_{t}, P)$ with a local martingale ${\boldsymbol}{M}_{t}$ and a process of locally bounded variation ${\boldsymbol}{A}_{t}$. Let ${\boldsymbol}{V} \in L_{loc}^{1}(A) \cap L_{loc}^2(M)$ be ${\mathbb{R}}^{n \times k}$-valued and let ${\boldsymbol}{B}^\epsilon(t)$ be an adapted process whose values are $n \times n$ matrices, satisfying the assumptions of Lemma \[expb\]. Let ${\boldsymbol}{\Phi}^\epsilon(t) := {\boldsymbol}{\Phi}^\epsilon(t,0)$ be the adapted $C^{1}$ process that pathwise solves the IVP .
Then for every $T \geq \delta > 0$ and for every $\epsilon \leq \epsilon_{1}$, we have the $\mathbb{P}$-a.s. bound: $$\begin{aligned} &\sup_{t \in [0,T]} \left|{\boldsymbol}{\Phi}^\epsilon(t) \int_{0}^{t} ({\boldsymbol}{\Phi}^\epsilon)^{-1}(s) {\boldsymbol}{V}_{s} d{\boldsymbol}{H}_{s} \right| \nonumber \\ &\leq C\left(1+\frac{4}{\kappa} \sup_{s \in [0,T]} \|{\boldsymbol}{B}^\epsilon(s)\| \right) \bigg(e^{-\kappa \delta/\epsilon} \sup_{t \in [0,T]} \left| \int_{0}^{t} {\boldsymbol}{V}_{r} d{\boldsymbol}{H}_{r} \right| \nonumber \\ &\ \ \ \ \ \ \ + \max_{k=0,1,\dots,N-1} \sup_{t \in [k\delta, (k+2)\delta]} \left| \int_{k\delta}^{t} {\boldsymbol}{V}_{r} d{\boldsymbol}{H}_{r} \right| \bigg) \label{8888},\end{aligned}$$ where $N = \max\{k \in {\mathbb{Z}}: k \delta < T\}$, $\epsilon_{1}$, $\kappa$ and $C$ are from Lemma \[expb\], and the $l_{2}$-norm is used on every ${\mathbb{R}}^{k}$. The proof is identical to that of Lemma 5.1 in [@birrell2017small] up to line (5.10), with the constant $\alpha$ there replaced by $\kappa$, etc. We let $\epsilon \leq \epsilon_{1}$ and replace the bound in line (5.11) there by the following bound, which follows from the semigroup property of the fundamental matrix process and Lemma \[expb\]: $$\|{\boldsymbol}{\Phi}^\epsilon(t) ({\boldsymbol}{\Phi}^\epsilon)^{-1}(s)\|= \|{\boldsymbol}{\Phi}^\epsilon(t,0) {\boldsymbol}{\Phi}^\epsilon(0,s)\| = \|{\boldsymbol}{\Phi}^\epsilon(t,s)\| \leq C e^{-\kappa(t-s)/\epsilon}.$$ Then we proceed as in the proof of Lemma 5.1 in [@birrell2017small] to get the desired bound. In particular, and hold for ${\boldsymbol}{B}^\epsilon = {\boldsymbol}{a}_2(t, {\boldsymbol}{x}^\epsilon(t), \epsilon)$. \[mom\_bound\] Suppose that Assumptions \[aexis\]-\[a5\_ch2\] hold.
For all $p \geq 1$, $T>0$, $0<\beta<p/2$, there exists a positive random variable $\epsilon_1$ such that: $$\mathbb{E}\left[\sup_{t \in [0,T]}|{\boldsymbol}{p}^\epsilon(t)|^p; \epsilon \leq \epsilon_1 \right] = O(\epsilon^\beta),$$ as $\epsilon \to 0$, where ${\boldsymbol}{p}^\epsilon(t)$ solves the SDE . Therefore, for any $p \geq 1$, $T>0$, $\beta > 0$, we have $$\mathbb{E}\left[ \sup_{t \in [0,T]} \|\epsilon {\boldsymbol}{v}^\epsilon(t){\boldsymbol}{v}^\epsilon(t)^*\|_{F}^p; \epsilon \leq \epsilon_1 \right] = O(\epsilon^{-\beta}),$$ as $\epsilon \to 0$, where $\|\cdot\|_F$ denotes the Frobenius norm. Let ${\boldsymbol}{\Phi}_\epsilon(t)$ be the matrix-valued process solving the IVP: $$\frac{\partial {\boldsymbol}{\Phi}_\epsilon(t)}{\partial t} = \frac{{\boldsymbol}{a}_2(t, {\boldsymbol}{x}^\epsilon(t), \epsilon)}{\epsilon} {\boldsymbol}{\Phi}_\epsilon(t), \ \ {\boldsymbol}{\Phi}_\epsilon(0) = {\boldsymbol}{I}.$$ Then, $$\begin{aligned} {\boldsymbol}{p}^\epsilon(t) &= {\boldsymbol}{\Phi}_\epsilon(t)\epsilon{\boldsymbol}{v}^\epsilon + {\boldsymbol}{\Phi}_\epsilon(t) \int_0^t {\boldsymbol}{\Phi}^{-1}_\epsilon(s) {\boldsymbol}{b}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)ds \nonumber \\ &\ \ \ \ + {\boldsymbol}{\Phi}_\epsilon(t) \int_0^t {\boldsymbol}{\Phi}^{-1}_\epsilon(s) {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)d{\boldsymbol}{W}^{(k_2)}(s) \\ &= {\boldsymbol}{\Phi}_\epsilon(t)\epsilon{\boldsymbol}{v}^\epsilon + {\boldsymbol}{\Phi}_\epsilon(t) \int_0^t {\boldsymbol}{\Phi}^{-1}_\epsilon(s) {\boldsymbol}{B}_2(s,{\boldsymbol}{x}^\epsilon(s))ds \nonumber \\ &\ \ \ \ +{\boldsymbol}{\Phi}_\epsilon(t) \int_0^t {\boldsymbol}{\Phi}^{-1}_\epsilon(s) \left[ {\boldsymbol}{b}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) -{\boldsymbol}{B}_2(s,{\boldsymbol}{x}^\epsilon(s)) \right]ds \nonumber \\ &\ \ \ \ + {\boldsymbol}{\Phi}_\epsilon(t) \int_0^t {\boldsymbol}{\Phi}^{-1}_\epsilon(s) {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)
d{\boldsymbol}{W}^{(k_2)}(s).\end{aligned}$$ Therefore, for $T>0$ and $p \geq 1$, using the bound $$\label{algebra} \left|\sum_{i=1}^N a_i \right|^p \leq N^{p-1} \sum_{i=1}^N |a_i|^p$$ for $p \geq 1$ (here the $a_i \in {\mathbb{R}}$ and $N$ is a positive integer), taking supremum on both sides, and applying Lemma \[expb\] (with ${\boldsymbol}{B}^\epsilon = {\boldsymbol}{a}_2(t, {\boldsymbol}{x}^\epsilon(t), \epsilon)$), we estimate: $$\begin{aligned} &\sup_{t \in [0,T]}|{\boldsymbol}{p}^\epsilon(t)|^p \nonumber \\ &\leq 4^{p-1} \sup_{t \in [0,T]} \bigg[C^p e^{-\frac{\kappa p}{\epsilon}t} \epsilon^p |{\boldsymbol}{v}^\epsilon|^p + C^p \left( \int_0^t e^{-\frac{\kappa}{\epsilon}(t-s)} |{\boldsymbol}{B}_2(s,{\boldsymbol}{x}^\epsilon(s))| ds \right)^p \nonumber \\ &\ \ \ \ \ \ + C^p \left( \int_0^t e^{-\frac{\kappa}{\epsilon}(t-s)} \bigg|[{\boldsymbol}{b}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) - {\boldsymbol}{B}_2(s,{\boldsymbol}{x}^\epsilon(s))] \bigg| ds \right)^p \nonumber \\ &\ \ \ \ \ \ + \bigg| {\boldsymbol}{\Phi}_\epsilon(t) \int_0^t {\boldsymbol}{\Phi}_\epsilon^{-1}(s) {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \bigg|^p \bigg] \\ &\leq 4^{p-1} \bigg(C^p \epsilon^p |{\boldsymbol}{v}^\epsilon|^p + \frac{C^p \epsilon^p}{\kappa^p} \bigg(\sup_{s \in [0,T]}|{\boldsymbol}{B}_2 (s,{\boldsymbol}{x}^\epsilon(s))|^p \nonumber \\ &\ \ \ \ \ \ \ + \sup_{s \in [0,T]}|{\boldsymbol}{b}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)-{\boldsymbol}{B}_2(s,{\boldsymbol}{x}^\epsilon(s))|^p\bigg) \nonumber \\ &\ \ \ \ \ \ \ + \sup_{t \in [0,T]} \bigg| {\boldsymbol}{\Phi}_\epsilon(t) \int_0^t {\boldsymbol}{\Phi}_\epsilon^{-1}(s) {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \bigg|^p \bigg), \label{lastline!}\end{aligned}$$ for $\epsilon \leq \epsilon_1$, where $C>0$, $\kappa >0$, and $\epsilon_1>0$ is the random variable whose existence was proven in Lemma \[expb\]. 
Note that $\sup_{s \in [0,T]}|{\boldsymbol}{B}_2 (s,{\boldsymbol}{x}^\epsilon(s))|^p < \infty$ and Assumption \[a5\_ch2\] implies that $$\sup_{s \in [0,T]}|{\boldsymbol}{b}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)-{\boldsymbol}{B}_2(s,{\boldsymbol}{x}^\epsilon(s))|^p \leq |\beta_2(\epsilon)|^p,$$ where $\beta_2(\epsilon) \leq K \epsilon^{b_2}$. Denote $\mathbb{E}_1[\cdot] = \mathbb{E}[\cdot; \epsilon \leq \epsilon_1]$, i.e. the expectation is taken on $\{ \omega : \epsilon \leq \epsilon_1(\omega)\}$. We are going to estimate $\mathbb{E}_1\left[\sup_{t \in [0,T]} |{\boldsymbol}{p}^\epsilon(t)|^p \right]$. By Assumption \[a2\_ch2\], we have $\mathbb{E}_1 [\sup_{t \in [0,T]} |\epsilon{\boldsymbol}{v}^\epsilon|^p] = O(\epsilon^{\alpha})$ as $\epsilon \to 0$, for some $\alpha \geq p/2$. Therefore, combining the above estimates, we obtain: $$\begin{aligned} &\mathbb{E}_1 \left[ \sup_{t \in [0,T]} |{\boldsymbol}{p}^\epsilon(t)|^p\right] \nonumber \\ &\leq C_1(p)(\epsilon^\alpha + \epsilon^{b_2 p} + \epsilon^p) \nonumber \\ &\ \ \ + C_2(p) \mathbb{E}_1 \left[ \sup_{t \in [0,T]} \bigg| {\boldsymbol}{\Phi}_\epsilon(t) \int_0^t {\boldsymbol}{\Phi}_\epsilon^{-1}(s) {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \bigg|^p \right],\end{aligned}$$ where $C_1(p), C_2(p) > 0$ are constants. Next, the idea is to use Lemma \[sib\] and the Burkholder-Davis-Gundy inequality (see Theorem 3.28 in [@karatzas2012Brownian]) to estimate the last term on the right hand side above. This is analogous to the technique used in the proof of Proposition 5.1 in [@birrell2017small]. Let $\delta$ be a constant such that $0<\delta < T$. 
Applying Lemma \[sib\], we estimate, using : $$\begin{aligned} &\mathbb{E}_1\left[ \sup_{t \in [0,T]} \left| {\boldsymbol}{\Phi}_{\epsilon}(t) \int_0^t {\boldsymbol}{\Phi}_{\epsilon}^{-1}(s) {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \right|^p \right] \nonumber \\ &\leq 2^{p-1} C^p \mathbb{E}_1 \left[ \left( 1 + \frac{4}{\kappa} \sup_{s \in [0,T]} \| {\boldsymbol}{a}_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)\| \right)^p \cdot \Pi \right], \\ &\leq 2^{p-1} C^p \left( 1 + \frac{4}{\kappa} \| {\boldsymbol}{a}_2(t,{\boldsymbol}{x}^\epsilon(t),\epsilon)\|_{\infty} \right)^p \cdot \mathbb{E}_1 [\Pi],\end{aligned}$$ where $\| {\boldsymbol}{a}_2(t,{\boldsymbol}{x}^\epsilon(t),\epsilon)\|_{\infty} := \sup_{ t \in [0,T], {\boldsymbol}{y} \in {\mathbb{R}}^{n_1},\epsilon \in \mathcal{E}} \|{\boldsymbol}{a}_2(t,{\boldsymbol}{y},\epsilon)\|$ and $$\begin{aligned} \Pi &= e^{-p \delta \kappa/\epsilon} \sup_{t \in [0,T]} \bigg| \int_0^t {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \bigg|^p \nonumber \\ &\ \ \ + \max_{k=0,\dots,N-1} \sup_{t \in [k \delta, (k+2)\delta]} \bigg| \int_{k \delta}^t {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \bigg|^p.\end{aligned}$$ We estimate: $$\begin{aligned} \mathbb{E}_1 [\Pi] &= e^{-p \delta \kappa/\epsilon} \mathbb{E}_1 \left[ \sup_{t \in [0,T]} \bigg| \int_0^t {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \bigg|^p\right] \nonumber \\ &\ \ \ \ + \mathbb{E}_1 \left[ \max_{k=0,\dots,N-1} \sup_{t \in [k \delta, (k+2)\delta]} \bigg| \int_{k \delta}^t {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \bigg|^p \right] \\ &\leq e^{-p \delta \kappa/\epsilon} \mathbb{E}_1 \left[ \sup_{t \in [0,T]} \bigg| \int_0^t {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) 
\bigg|^p \right] \nonumber \\ &\ \ \ \ + \mathbb{E}_1 \left[ \left( \sum_{k=0}^{N-1} \sup_{t \in [k \delta, (k+2)\delta]} \left|\int_{k \delta}^t {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \right|^{pq} \right)^{1/q} \right] \\ &\leq e^{-p \delta \kappa/\epsilon} \mathbb{E}_1 \left[ \sup_{t \in [0,T]} \bigg| \int_0^t {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \bigg|^p \right] \nonumber \\ &\ \ \ \ + \left( \sum_{k=0}^{N-1} \mathbb{E}_1 \left[ \sup_{t \in [k \delta, (k+2)\delta]} \left|\int_{k \delta}^t {\boldsymbol}{\sigma}_2(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \right|^{pq} \right] \right)^{1/q},\end{aligned}$$ with $N := \max\{k \in {\mathbb{Z}}: k \delta < T\}$, where we have used the fact that the $l^\infty$-norm on ${\mathbb{R}}^{N}$ is bounded by the $l^q$-norm for every $q \geq 1$ and then applied Hölder’s inequality to get the last two lines above.
Now, letting $\delta = \epsilon^{1-h}$ for $0 < h < 1$, and using the Burkholder-Davis-Gundy inequality, $$\begin{aligned} \mathbb{E}_1[\Pi] &\leq C_{p,q} \bigg[ e^{-p \kappa/\epsilon^h} \mathbb{E}_1 \bigg[ \bigg( \int_0^T \|{\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon)\|_{F}^2 ds \bigg)^{\frac{pq}{2}} \bigg]^{1/q} \nonumber \\ &\ \ \ \ + \left( \sum_{k=0}^{N-1} \mathbb{E}_1 \left(\int_{k \delta}^{(k+2)\delta} \|{\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) \|_F^2 ds \right)^{\frac{pq}{2}} \right)^{1/q} \bigg] \\ &\leq C_{p,q} \|{\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)\|^p_{F,\infty} (e^{-p \kappa/\epsilon^h} T^{p/2} + 2^{p/2} (N \delta^{\frac{pq}{2}})^{1/q}),\end{aligned}$$ where $C_{p,q}$ is some constant and $$\|{\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)\|_{F,\infty} := \sup_{t \in [0,T], {\boldsymbol}{y} \in {\mathbb{R}}^{n_{1}}, \epsilon \in \mathcal{E}} \|{\boldsymbol}{\sigma}_2(t, {\boldsymbol}{y},\epsilon)\|_F < \infty.$$ Since $N \delta < T$, we have $N \delta^{pq/2} < T \delta^{pq/2-1} = T \epsilon^{(1-h)(pq/2 - 1)}$. Therefore, $\mathbb{E}_1 [\Pi ] = O(\epsilon^{(1-h)(p/2-1/q)})$. For all $0 < \beta < p/2$, one can choose $0 < h < 1$ and $q > 1$ such that $(1-h)(p/2-1/q) = \beta$. Therefore, we have $$\mathbb{E}_1 \left[ \sup_{t \in [0,T]} \left| {\boldsymbol}{\Phi}_{\epsilon}(t) \int_0^t {\boldsymbol}{\Phi}_{\epsilon}^{-1}(s) {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \right|^p \right]= O(\epsilon^\beta)$$ as $\epsilon \to 0$, for all $0 < \beta < p/2$. Combining all the estimates obtained, one has: $$\mathbb{E}_1\left[\sup_{t \in [0,T]} |{\boldsymbol}{p}^\epsilon(t)|^p \right] \leq C_1 \epsilon^\alpha + C_2 \epsilon^{p} + C_3 \epsilon^{p b_2} + C_4 \epsilon^\beta$$ where the $C_i$ are positive constants, $\alpha \geq p/2$ is some constant, and $b_2 > 0$ is the constant from Assumption \[a5\_ch2\]. 
The statement of the proposition follows. We also need the following estimate on a class of integrals with respect to products of the coordinates of the process ${\boldsymbol}{p}^{\epsilon}(t)$. \[bound\_on\_integ\_wrt\_p\_square\] Suppose that Assumptions \[aexis\]-\[a5\_ch2\] hold and $\epsilon \in \mathcal{E}$. Let $h^\epsilon: {\mathbb{R}}^+ \times {\mathbb{R}}^{n_1} \to {\mathbb{R}}$ be a family of functions, continuously differentiable in ${\boldsymbol}{y} \in {\mathbb{R}}^{n_1}$ and bounded (in $s \in {\mathbb{R}}^+$ and ${\boldsymbol}{y} \in {\mathbb{R}}^{n_1}$), with bounded first derivatives ${\boldsymbol}{\nabla}_{{\boldsymbol}{y}} h^\epsilon(s,{\boldsymbol}{y})$ for ${\boldsymbol}{y} \in {\mathbb{R}}^{n_1}$. Assume that $h^\epsilon$ and ${\boldsymbol}{\nabla}_{{\boldsymbol}{y}} h^\epsilon(s,{\boldsymbol}{y})$ are $O(1)$ as $\epsilon \to 0$. Moreover, assume that $\frac{\partial}{\partial s}h^\epsilon$ is bounded (in all variables) and is $O(1)$ as $\epsilon \to 0$. Then for any $p \geq 1$, $T>0$, $0< \beta < p/2$, $i,j = 1, \dots,n_2$, in the limit as $\epsilon \to 0$ we have $$\mathbb{E}\left[\sup_{t \in [0,T]} \left| \int_{0}^{t} h^\epsilon(s,{\boldsymbol}{x}^{\epsilon}(s)) d( [{\boldsymbol}{p}^{\epsilon}]_{i}(s)\cdot [{\boldsymbol}{p}^{\epsilon}]_{j}(s)) \right|^{p}; \epsilon \leq \epsilon_1 \right] = O(\epsilon^\beta),$$ where ${\boldsymbol}{x}^\epsilon(t)$ and ${\boldsymbol}{p}^\epsilon(t)$ solve the SDEs - and the SDE respectively, and $\epsilon_1$ is from Proposition \[mom\_bound\]. Let $\epsilon \in \mathcal{E}$, $t \in [0,T]$, and $i, j =1,\dots, n_2$.
An integration by parts gives: $$\begin{aligned} &\int_{0}^{t} h^\epsilon(s,{\boldsymbol}{x}^{\epsilon}(s)) d( [{\boldsymbol}{p}^{\epsilon}]_{i}(s)\cdot [{\boldsymbol}{p}^{\epsilon}]_{j}(s)) \nonumber \\ &= h^\epsilon(t, {\boldsymbol}{x}^\epsilon(t)) [{\boldsymbol}{p}^{\epsilon}]_{i}(t) [{\boldsymbol}{p}^{\epsilon}]_{j}(t) - h^\epsilon(0, {\boldsymbol}{x}^\epsilon) [{\boldsymbol}{p}^{\epsilon}]_{i} [{\boldsymbol}{p}^{\epsilon}]_{j} \nonumber \\ &\ \ \ \ - \int_0^t [{\boldsymbol}{p}^{\epsilon}]_{i}(s) [{\boldsymbol}{p}^{\epsilon}]_{j}(s) \left( {\boldsymbol}{\nabla}_{{\boldsymbol}{x}^\epsilon} h^\epsilon(s, {\boldsymbol}{x}^\epsilon(s)) \cdot \frac{{\boldsymbol}{p}^\epsilon(s)}{\epsilon} + \frac{\partial}{\partial s} h^\epsilon(s, {\boldsymbol}{x}^\epsilon(s)) \right) ds. \end{aligned}$$ Using the notation $\mathbb{E}_1[\cdot] = \mathbb{E}[\cdot; \epsilon \leq \epsilon_1]$, we estimate, for $p \geq 1$, $$\begin{aligned} &\mathbb{E}_1 \left[\sup_{t \in [0,T]} \left| \int_{0}^{t} h^\epsilon(s,{\boldsymbol}{x}^{\epsilon}(s)) d( [{\boldsymbol}{p}^{\epsilon}]_{i}(s)\cdot [{\boldsymbol}{p}^{\epsilon}]_{j}(s)) \right|^{p}\right] \nonumber \\ &\leq 4^{p-1}\bigg( \mathbb{E}_1 \sup_{t \in [0,T]} \left| h^\epsilon(t, {\boldsymbol}{x}^\epsilon(t)) [{\boldsymbol}{p}^{\epsilon}]_{i}(t) [{\boldsymbol}{p}^{\epsilon}]_{j}(t) \right|^p \nonumber \\ &\ \ \ \ \ \ \ \ + \mathbb{E}_1 \left| h^\epsilon(0, {\boldsymbol}{x}^\epsilon) [{\boldsymbol}{p}^{\epsilon}]_{i} [{\boldsymbol}{p}^{\epsilon}]_{j} \right|^p \nonumber \\ &\ \ \ \ \ \ \ \ + \mathbb{E}_1 \sup_{t \in [0,T]} \left| \int_0^t [{\boldsymbol}{p}^{\epsilon}]_{i}(s) [{\boldsymbol}{p}^{\epsilon}]_{j}(s) {\boldsymbol}{\nabla}_{{\boldsymbol}{x}^\epsilon} h^\epsilon(s, {\boldsymbol}{x}^\epsilon(s)) \cdot \frac{{\boldsymbol}{p}^\epsilon(s)}{\epsilon} ds \right|^p \nonumber \\ &\ \ \ \ \ \ \ \ + \mathbb{E}_1 \sup_{t \in [0,T]} \left| \int_0^t [{\boldsymbol}{p}^{\epsilon}]_{i}(s) [{\boldsymbol}{p}^{\epsilon}]_{j}(s)
\frac{\partial}{\partial s} h^\epsilon(s, {\boldsymbol}{x}^\epsilon(s)) ds \right|^p \bigg) \\ &\leq C(p,T) \bigg[ \|h^\epsilon\|^p_{\infty} \left(\mathbb{E}_1 \sup_{t \in [0,T]} |{\boldsymbol}{p}^\epsilon(t)|^{2p} + \mathbb{E}_1 |{\boldsymbol}{p}^\epsilon|^{2p} \right) \nonumber \\ &\ \ \ \ \ \ \ \ + \frac{1}{\epsilon^p} \mathbb{E}_1 \sup_{t \in [0,T]} \left| \int_0^t [{\boldsymbol}{p}^{\epsilon}]_{i}(s) [{\boldsymbol}{p}^{\epsilon}]_{j}(s) [{\boldsymbol}{\nabla}_{{\boldsymbol}{x}^\epsilon} h^\epsilon]_k (s, {\boldsymbol}{x}^\epsilon(s)) [{\boldsymbol}{p}^\epsilon]_k(s) ds \right|^p \nonumber \\ &\ \ \ \ \ \ \ \ + \left\|\frac{\partial}{\partial s} h^\epsilon\right\|_{\infty}^p \cdot \mathbb{E}_1 \sup_{t \in [0,T]} |{\boldsymbol}{p}^\epsilon(t)|^{2p} \bigg],\end{aligned}$$ where $C(p,T) > 0$ is a constant, $\| g^\epsilon \|_{\infty} := \sup_{s \in [0,T], {\boldsymbol}{y} \in {\mathbb{R}}^{n_{1}}} |g^\epsilon(s, {\boldsymbol}{y})|$, and we have used the Einstein summation convention over repeated indices. Now, estimating as before, we obtain: $$\begin{aligned} &\mathbb{E}_1 \sup_{t \in [0,T]} \left| \int_0^t [{\boldsymbol}{p}^{\epsilon}]_{i}(s) [{\boldsymbol}{p}^{\epsilon}]_{j}(s) [{\boldsymbol}{\nabla}_{{\boldsymbol}{x}^\epsilon} h^\epsilon]_k (s, {\boldsymbol}{x}^\epsilon(s)) [{\boldsymbol}{p}^\epsilon]_k(s) ds \right|^p \nonumber \\ &\leq D(p,T) \| {\boldsymbol}{\nabla}_{{\boldsymbol}{x}^\epsilon} h^\epsilon\|_{\infty}^p \cdot \mathbb{E}_1 \sup_{t \in [0,T]} |{\boldsymbol}{p}^\epsilon(t)|^{3p},\end{aligned}$$ where $D(p,T)>0$ is a constant. By our assumptions, all the quantities of the form $\| \cdot \|_{\infty}$ are bounded and are $O(1)$ as $\epsilon \to 0$.
Therefore, collecting the above estimates, using Assumption \[a2\_ch2\], and applying Proposition \[mom\_bound\], we have, for $p \geq 1$, $T>0$, $i,j=1,\dots,n_2$, $$\begin{aligned} &\mathbb{E}_1 \left[\sup_{t \in [0,T]} \left| \int_{0}^{t} h^\epsilon(s,{\boldsymbol}{x}^{\epsilon}(s)) d( [{\boldsymbol}{p}^{\epsilon}]_{i}(s)\cdot [{\boldsymbol}{p}^{\epsilon}]_{j}(s)) \right|^{p}\right] = O(\epsilon^\beta),\end{aligned}$$ for every $0 < \beta < p/2$. Now we proceed to prove Theorem \[mainthm\]. Using the above moment estimates and the proof techniques in [@birrell2017small; @2017BirrellLatest], we first obtain the convergence of ${\boldsymbol}{x}^\epsilon(t)$ to ${\boldsymbol}{X}(t)$ in the limit as $\epsilon \to 0$ in the following sense: for all finite $T>0$, $p \geq 1$, $$\label{nonst} \mathbb{E}\left[ \sup_{t \in [0,T]} |{\boldsymbol}{x}^\epsilon(t) - {\boldsymbol}{X}(t)|^p; \epsilon \leq \epsilon_1 \right] \to 0,$$ as $\epsilon \to 0$, where $\epsilon_1$ is from Proposition \[mom\_bound\]. The main tools are well-known ordinary and stochastic integral inequalities, as well as a Gronwall-type argument. This result then implies that for all finite $T>0$, $\sup_{t \in [0,T]} |{\boldsymbol}{x}^\epsilon(t) - {\boldsymbol}{X}(t)| \to 0$ in probability, in the limit as $\epsilon \to 0$ (see Lemma 1 in [@LimWehr_Homog_NonMarkovian]). (Proof of Theorem \[mainthm\]) Let $T>0$ and recall that $[{\boldsymbol}{B}]_{i,j}$ denotes the $(i,j)$-entry of a matrix ${\boldsymbol}{B}$. First, we assume that $p>2$.
From , we have, for every $\epsilon > 0$, $t \in [0,T]$, $$\begin{aligned} {\boldsymbol}{v}^{\epsilon}(t) dt &= \epsilon {\boldsymbol}{a}_{2}^{-1}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) d{\boldsymbol}{v}^{\epsilon}(t) - {\boldsymbol}{a}_{2}^{-1}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) {\boldsymbol}{b}_{2}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt \nonumber \\ &\ \ \ \ - {\boldsymbol}{a}_{2}^{-1}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) {\boldsymbol}{\sigma}_2(t,{\boldsymbol}{x}^{\epsilon}(t), \epsilon) d{\boldsymbol}{W}^{(k_2)}(t).\end{aligned}$$ Substituting this into , we obtain: $$\begin{aligned} d{\boldsymbol}{x}^{\epsilon}(t) &= \epsilon {\boldsymbol}{a}_{1}(t,{\boldsymbol}{x}^{\epsilon}(t), \epsilon) {\boldsymbol}{a}_{2}^{-1}(t,{\boldsymbol}{x}^{\epsilon}(t),\epsilon) d{\boldsymbol}{v}^{\epsilon}(t) \nonumber \\ &\ \ \ - {\boldsymbol}{a}_{1}(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon) {\boldsymbol}{a}_{2}^{-1}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) {\boldsymbol}{b}_{2}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt \nonumber \\ &\ \ \ - {\boldsymbol}{a}_{1}(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon) {\boldsymbol}{a}_{2}^{-1}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) {\boldsymbol}{\sigma}_2(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon) d{\boldsymbol}{W}^{(k_2)}(t) \nonumber \\ &\ \ \ + {\boldsymbol}{b}_{1}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt + {\boldsymbol}{\sigma}_{1}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) d{\boldsymbol}{W}^{(k_1)}(t).\end{aligned}$$ In integral form, we have: $$\begin{aligned} {\boldsymbol}{x}^{\epsilon}(t) &= {\boldsymbol}{x}^\epsilon + \epsilon \int_0^t {\boldsymbol}{a}_{1}(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) {\boldsymbol}{a}_{2}^{-1}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{v}^{\epsilon}(s) \nonumber \\ &\ \ \ \ + \int_0^t \{ {\boldsymbol}{b}_{1}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) -{\boldsymbol}{a}_{1}(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) 
{\boldsymbol}{a}_{2}^{-1}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) {\boldsymbol}{b}_{2}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \} ds \nonumber \\ &\ \ \ \ - \int_0^t {\boldsymbol}{a}_{1}(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) {\boldsymbol}{a}_{2}^{-1}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) {\boldsymbol}{\sigma}_2(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) d{\boldsymbol}{W}^{(k_2)}(s) \nonumber \\ &\ \ \ \ + \int_0^t {\boldsymbol}{\sigma}_{1}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) d{\boldsymbol}{W}^{(k_1)}(s).\end{aligned}$$ Its $i$th component, $[{\boldsymbol}{x}^{\epsilon}]_{i}(t)$ ($i=1,2,\dots,n_1$) is (recall that we are employing Einstein’s summation convention): $$\begin{aligned} [{\boldsymbol}{x}^{\epsilon}]_i(t) &= [{\boldsymbol}{x}^\epsilon]_i + \epsilon \int_0^t [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) \cdot d[{\boldsymbol}{v}^{\epsilon}]_j(s) \nonumber \\ &\ \ \ \ + \int_0^t \{ [{\boldsymbol}{b}_{1}]_i(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) -[{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}{\boldsymbol}{b}_{2}]_{i}(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) \} ds \nonumber \\ &\ \ \ \ - \int_0^t [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}{\boldsymbol}{\sigma}_2]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) \cdot d[{\boldsymbol}{W}^{(k_2)}]_j(s) \nonumber \\ &\ \ \ \ + \int_0^t [{\boldsymbol}{\sigma}_{1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \cdot d[{\boldsymbol}{W}^{(k_1)}]_j(s).\end{aligned}$$ Next, we perform integration by parts in the second term on the right hand side above: $$\begin{aligned} &\int_0^t [S^{\epsilon}]_i(s, {\boldsymbol}{x}^\epsilon(s),{\boldsymbol}{v}^\epsilon(s),\epsilon)ds :=\epsilon \int_0^t [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) \cdot d[{\boldsymbol}{v}^{\epsilon}]_j(s) \\ &= \epsilon [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(t, 
{\boldsymbol}{x}^{\epsilon}(t),\epsilon) \cdot [{\boldsymbol}{v}^{\epsilon}]_j(t) - \epsilon [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(0, {\boldsymbol}{x},\epsilon) \cdot [{\boldsymbol}{v}^\epsilon]_j \nonumber \\ &\ \ \ \ \ - \int_0^t \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot d[{\boldsymbol}{x}^\epsilon]_l(s) \cdot \epsilon [{\boldsymbol}{v}^{\epsilon}]_j(s) \nonumber \\ &\ \ \ \ \ - \int_0^t \frac{\partial}{\partial s}\left([{\boldsymbol}{a}_1 {\boldsymbol}{a}_2^{-1}]_{i,j}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \right) \cdot \epsilon [{\boldsymbol}{v}^\epsilon]_j(s) ds. \label{88}\end{aligned}$$ Substituting the following expression for $d[{\boldsymbol}{x}^\epsilon]_l(s)$: $$\begin{aligned} d[{\boldsymbol}{x}^\epsilon]_l(s) &= [{\boldsymbol}{a}_1]_{l,k}(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)[{\boldsymbol}{v}^\epsilon]_k(s) ds + [{\boldsymbol}{b}_1]_l(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)ds \nonumber \\ &\ \ \ \ + [{\boldsymbol}{\sigma}_1]_{l,k}(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)d[{\boldsymbol}{W}^{(k_1)}]_k(s)\end{aligned}$$ into , we obtain: $$\begin{aligned} &\int_0^t [S^{\epsilon}]_i(s, {\boldsymbol}{x}^\epsilon(s),{\boldsymbol}{v}^\epsilon(s),\epsilon)ds \nonumber \\ &= \epsilon [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) \cdot [{\boldsymbol}{v}^{\epsilon}]_j(t) - \epsilon [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(0, {\boldsymbol}{x},\epsilon) \cdot [{\boldsymbol}{v}^\epsilon]_j \nonumber \\ & - \int_0^t \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{b}_1]_l(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \cdot \epsilon [{\boldsymbol}{v}^{\epsilon}]_j(s) ds \nonumber \\ & - \int_0^t 
\frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) [{\boldsymbol}{\sigma}_1]_{l,k}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \epsilon [{\boldsymbol}{v}^{\epsilon}]_j(s) d[{\boldsymbol}{W}^{(k_1)}]_{k}(s) \nonumber \\ & - \int_0^t \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) [{\boldsymbol}{a}_1]_{l,k}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \epsilon [{\boldsymbol}{v}^\epsilon]_k(s) [{\boldsymbol}{v}^{\epsilon}]_j(s) ds \nonumber \\ & - \int_0^t \frac{\partial}{\partial s}\left([{\boldsymbol}{a}_1 {\boldsymbol}{a}_2^{-1}]_{i,j}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \right) \cdot \epsilon [{\boldsymbol}{v}^\epsilon]_j(s) ds. \end{aligned}$$ Next, we apply Itô formula to $ \epsilon {\boldsymbol}{v}^{\epsilon}(t) (\epsilon {\boldsymbol}{v}^{\epsilon}(t))^{*} \in {\mathbb{R}}^{n_2\times n_2}$: $$\begin{aligned} &d[\epsilon {\boldsymbol}{v}^{\epsilon}(t) (\epsilon {\boldsymbol}{v}^{\epsilon}(t))^{*}] \nonumber \\ &= \epsilon d{\boldsymbol}{v}^{\epsilon}(t) \cdot \epsilon ({\boldsymbol}{v}^{\epsilon}(t))^* + \epsilon {\boldsymbol}{v}^{\epsilon}(t) \cdot \epsilon d({\boldsymbol}{v}^{\epsilon}(t))^{*} + d[\epsilon {\boldsymbol}{v}^{\epsilon}(t)] \cdot d[ (\epsilon {\boldsymbol}{v}^{\epsilon}(t))^{*}] \\ &= \left[{\boldsymbol}{a}_{2}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) {\boldsymbol}{v}^{\epsilon}(t) dt + {\boldsymbol}{b}_{2}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt + {\boldsymbol}{\sigma}_2(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon) d{\boldsymbol}{W}^{(k_2)}(t) \right] \epsilon {\boldsymbol}{v}^\epsilon(t)^{*} \nonumber \\ &\ \ + \epsilon {\boldsymbol}{v}^\epsilon(t)\left[ {\boldsymbol}{a}_{2}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) {\boldsymbol}{v}^{\epsilon}(t) dt + {\boldsymbol}{b}_{2}(t, 
{\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt + {\boldsymbol}{\sigma}_2(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon) d{\boldsymbol}{W}^{(k_2)}(t) \right]^{*}\nonumber \\ &\ \ + {\boldsymbol}{\sigma}_2(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon){\boldsymbol}{\sigma}_2^*(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon) dt.\end{aligned}$$ Denoting ${\boldsymbol}{J}^\epsilon(t) := \epsilon {\boldsymbol}{v}^\epsilon(t) ({\boldsymbol}{v}^\epsilon(t))^{*}$, we can rewrite the above as: $$\label{lyap_prelimit} -{\boldsymbol}{a}_{2}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) {\boldsymbol}{J}^\epsilon(t)dt - {\boldsymbol}{J}^\epsilon(t) {\boldsymbol}{a}_{2}^*(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon)dt = {\boldsymbol}{F}^{\epsilon}_1(t) dt + {\boldsymbol}{F}^{\epsilon}_2(t) dt + {\boldsymbol}{F}^{\epsilon}_3(t) dt,$$ where $$\begin{aligned} {\boldsymbol}{F}^{\epsilon}_1(t) dt &= -d[\epsilon {\boldsymbol}{v}^{\epsilon}(t) (\epsilon {\boldsymbol}{v}^{\epsilon}(t))^{*}], \\ {\boldsymbol}{F}^{\epsilon}_2(t) dt &= ({\boldsymbol}{b}_{2}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt + {\boldsymbol}{\sigma}_2(t, {\boldsymbol}{x}^{\epsilon}(t), \epsilon)d{\boldsymbol}{W}^{(k_2)}(t) )\epsilon ({\boldsymbol}{v}^{\epsilon}(t))^{*} \nonumber \\ &\ \ \ \ \ + \epsilon {\boldsymbol}{v}^{\epsilon}(t)({\boldsymbol}{b}_{2}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) dt + {\boldsymbol}{\sigma}_2(t,{\boldsymbol}{x}^{\epsilon}(t), \epsilon) d{\boldsymbol}{W}^{(k_2)}(t))^{*},\\ {\boldsymbol}{F}^{\epsilon}_3(t) &= {\boldsymbol}{\sigma}_2(t,{\boldsymbol}{x}^{\epsilon}(t), \epsilon) {\boldsymbol}{\sigma}_2(t,{\boldsymbol}{x}^{\epsilon}(t), \epsilon)^{*}.\end{aligned}$$ Since $-{\boldsymbol}{a}_2(t, {\boldsymbol}{x}^\epsilon(t),\epsilon)$ is positive stable uniformly (in $t$, ${\boldsymbol}{x}^\epsilon$ and $\epsilon$) by Assumption \[a0\_ch2\], the solution of the Lyapunov equation can be represented as: $${\boldsymbol}{J}^\epsilon(t) = {\boldsymbol}{J}_1^\epsilon(t) + {\boldsymbol}{J}_2^\epsilon(t) + 
{\boldsymbol}{J}_3^\epsilon(t),$$ where $$\begin{aligned} {\boldsymbol}{J}_n^\epsilon(t) &= \int_0^\infty e^{{\boldsymbol}{a}_2(t, {\boldsymbol}{x}^\epsilon(t),\epsilon)y} {\boldsymbol}{F}^{\epsilon}_n(t) e^{{\boldsymbol}{a}^*_2(t, {\boldsymbol}{x}^\epsilon(t),\epsilon)y} dy\end{aligned}$$ for $n=1,2,3$. Therefore, for $s \in [0,T]$, $$\begin{aligned} &\epsilon [{\boldsymbol}{v}^\epsilon]_j(s) [{\boldsymbol}{v}^\epsilon]_k(s)ds \nonumber \\ &=-\int_0^\infty \bigg[e^{{\boldsymbol}{a}_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)y}\bigg]_{j,p_1} \cdot \bigg[ d[\epsilon {\boldsymbol}{v}^{\epsilon}(s) (\epsilon {\boldsymbol}{v}^{\epsilon}(s))^{*}] \bigg]_{p_1,p_2} \cdot \bigg[ e^{{\boldsymbol}{a}^*_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)y}\bigg]_{p_2,k} dy \nonumber \\ &\ \ \ + \int_0^\infty \bigg[ e^{{\boldsymbol}{a}_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)y} \bigg]_{j,p_1} \cdot\bigg[ ({\boldsymbol}{b}_{2}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) ds \nonumber \\ &\hspace{1cm} + {\boldsymbol}{\sigma}_2(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon)d{\boldsymbol}{W}^{(k_2)}(s) )\epsilon ({\boldsymbol}{v}^{\epsilon}(s))^{*} \bigg]_{p_1,p_2} \cdot \bigg[ e^{{\boldsymbol}{a}^*_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)y} \bigg]_{p_2,k} dy \nonumber \\ &\ \ \ + \int_0^\infty \bigg[ e^{{\boldsymbol}{a}_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)y} \bigg]_{j,p_1} \cdot \bigg[ \epsilon {\boldsymbol}{v}^{\epsilon}(s)({\boldsymbol}{b}_{2}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) ds \nonumber \\ &\hspace{1cm} + {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s), \epsilon) d{\boldsymbol}{W}^{(k_2)}(s))^{*} \bigg]_{p_1,p_2} \cdot \bigg[ e^{{\boldsymbol}{a}^*_2(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)y} \bigg]_{p_2,k} dy \nonumber \\ &\ \ \ + \int_0^\infty \bigg[ e^{{\boldsymbol}{a}_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)y} \bigg]_{j,p_1} \cdot \bigg[ {\boldsymbol}{\sigma}_2(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) {\boldsymbol}{\sigma}_2(s, 
{\boldsymbol}{x}^{\epsilon}(s), \epsilon)^{*} ds \bigg]_{p_1,p_2} \nonumber \\ &\hspace{1cm} \cdot \bigg[ e^{{\boldsymbol}{a}^*_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)y} \bigg]_{p_2,k} dy.\end{aligned}$$ On the other hand, by , $$\begin{aligned} {\boldsymbol}{X}(t) &= {\boldsymbol}{x} + \int_0^t [{\boldsymbol}{B}_1(s, {\boldsymbol}{X}(s))-{\boldsymbol}{A}_1(s, {\boldsymbol}{X}(s)){\boldsymbol}{A}_2^{-1}(s, {\boldsymbol}{X}(s)){\boldsymbol}{B}_2(s, {\boldsymbol}{X}(s))] ds\nonumber \\ &\ \ \ \ \ + \int_0^t {\boldsymbol}{S}(s, {\boldsymbol}{X}(s)) ds + \int_0^t {\boldsymbol}{\Sigma}_1(s, {\boldsymbol}{X}(s)) d{\boldsymbol}{W}^{(k_1)}(s) \nonumber \\ &\ \ \ \ \ - \int_0^t {\boldsymbol}{A}_1(s, {\boldsymbol}{X}(s)) {\boldsymbol}{A}_2^{-1}(s, {\boldsymbol}{X}(s)){\boldsymbol}{\Sigma}_2(s, {\boldsymbol}{X}(s)) d{\boldsymbol}{W}^{(k_2)}(s).\end{aligned}$$ We use again the notation $\mathbb{E}_1[ \cdot ] := \mathbb{E}[\cdot; \epsilon \leq \epsilon_1]$, where $\epsilon_1 > 0$ is the random variable from Proposition \[mom\_bound\]. 
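The integral representation of the Lyapunov solutions used above can be sanity-checked numerically in the simplest (scalar) case, where $e^{{\boldsymbol}{a}_2 y}$ reduces to an ordinary exponential; the values of $a$ and $f$ in the sketch below are arbitrary illustrative choices.

```python
import math

# Scalar Lyapunov equation a*J + J*a = -f with a < 0 (so -a is positive stable).
# Closed-form solution: J = -f / (2a).  We compare it with the integral
# representation J = \int_0^\infty e^{a y} f e^{a y} dy used in the text.
a = -1.7          # illustrative value; -a > 0
f = 0.9           # illustrative right-hand side

J_closed = -f / (2.0 * a)

# Left Riemann-sum approximation of the integral, truncated at y = 20
# (the integrand decays like e^{2 a y}, so the tail is negligible).
dy = 1e-4
J_integral = sum(math.exp(a * y) * f * math.exp(a * y) * dy
                 for y in (k * dy for k in range(200000)))

print(abs(J_closed - J_integral) < 1e-3)  # True
```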
For any $p > 2$, $T>0$, $i=1,\dots,n_1$ (recall that $[{\boldsymbol}{b}]_i$ denotes the $i$th component of vector ${\boldsymbol}{b}$), we estimate: $$\begin{aligned} &\mathbb{E}_1\left[ \sup_{t \in [0,T]} |[{\boldsymbol}{x}^{\epsilon}(t) - {\boldsymbol}{X}(t)]_i|^p \right] \nonumber \\ &\leq 6^{p-1}\bigg\{ \mathbb{E}_1\left[|{\boldsymbol}{x}^\epsilon - {\boldsymbol}{x}|^p\right] \nonumber \\ &\hspace{0.3cm} + \mathbb{E}_1\left[\sup_{t \in [0,T]} \bigg| \int_0^t \bigg[{\boldsymbol}{S}_\epsilon(s, {\boldsymbol}{x}^\epsilon(s),{\boldsymbol}{v}^\epsilon(s),\epsilon)- {\boldsymbol}{S}(s, {\boldsymbol}{X}(s))\bigg]_i ds \bigg|^p \right] \nonumber \\ &\hspace{0.3cm} + \mathbb{E}_1 \bigg[ \sup_{t \in [0,T]} \bigg( \int_0^t \bigg| \bigg[ {\boldsymbol}{a}_{1}(s, {\boldsymbol}{x}^{\epsilon}(s), \epsilon) {\boldsymbol}{a}_{2}^{-1}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) {\boldsymbol}{b}_{2}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \nonumber \\ &\hspace{2cm} - {\boldsymbol}{A}_1(s, {\boldsymbol}{X}(s)){\boldsymbol}{A}_2^{-1}(s,{\boldsymbol}{X}(s)){\boldsymbol}{B}_2(s,{\boldsymbol}{X}(s)) \bigg]_i \bigg| ds \bigg)^p \bigg] \nonumber \\ &\hspace{0.3cm} + \mathbb{E}_1 \left[ \sup_{t \in [0,T]} \bigg( \int_0^t \bigg| \bigg[ {\boldsymbol}{b}_{1}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) - {\boldsymbol}{B}_1(s,{\boldsymbol}{X}(s)) \bigg]_i \bigg| ds \bigg)^p \right] \nonumber \\ &\hspace{0.3cm} + \mathbb{E}_1 \bigg[ \sup_{t \in [0,T]} \bigg| \int_0^t \bigg[ {\boldsymbol}{a}_{1}(s,{\boldsymbol}{x}^{\epsilon}(s), \epsilon) {\boldsymbol}{a}_{2}^{-1}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) {\boldsymbol}{\sigma}_2(s,{\boldsymbol}{x}^{\epsilon}(s), \epsilon) \nonumber \\ &\hspace{1cm} - {\boldsymbol}{A}_1(s,{\boldsymbol}{X}(s)) {\boldsymbol}{A}_2^{-1}(s,{\boldsymbol}{X}(s)){\boldsymbol}{\Sigma}_2(s,{\boldsymbol}{X}(s)) \bigg]_{i,j} d[{\boldsymbol}{W}^{(k_2)}]_j(s) \bigg|^p \bigg] \nonumber \\ &\hspace{0.3cm} + \mathbb{E}_1 \left[\sup_{t \in [0,T]} \bigg| \int_0^t 
\bigg[{\boldsymbol}{\sigma}_{1}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) - {\boldsymbol}{\Sigma}_1(s,{\boldsymbol}{X}(s)) \bigg]_{i,j} d[{\boldsymbol}{W}^{(k_1)}]_j(s) \bigg|^p \right] \ \bigg\} \\ &=: 6^{p-1}\left(\sum_{k=0}^5 R_k \right).\end{aligned}$$ By Assumption \[a2\_ch2\], $R_0 = \mathbb{E}_1 \left[|{\boldsymbol}{x}^\epsilon - {\boldsymbol}{x}|^p\right] \leq \mathbb{E}\left[|{\boldsymbol}{x}^\epsilon - {\boldsymbol}{x}|^p \right] = O(\epsilon^{ p r_0})$ as $\epsilon \to 0$, where $r_0 > 1/2$ is a constant. We now estimate each of the $R_k$, $k=1,\dots,5$. We have: $$\begin{aligned} R_3 &\leq \mathbb{E}_1 \sup_{t \in [0,T]} \bigg( \int_0^t | {\boldsymbol}{b}_{1}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) - {\boldsymbol}{B}_1(s,{\boldsymbol}{X}(s)) | ds \bigg)^p \\ &= \mathbb{E}_1 \sup_{t \in [0,T]} \bigg( \int_0^t |{\boldsymbol}{b}_{1}(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) - {\boldsymbol}{b}_1(s,{\boldsymbol}{X}(s),\epsilon) + {\boldsymbol}{b}_1(s,{\boldsymbol}{X}(s),\epsilon) \nonumber \\ &\ \ \ \ \ \ \ \ \ \ \ - {\boldsymbol}{B}_1(s,{\boldsymbol}{X}(s)) | ds \bigg)^p \nonumber \\ &\leq 2^{p-1} \bigg[ \mathbb{E}_1 \sup_{t \in [0,T]} \left(\int_0^t |{\boldsymbol}{b}_{1}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) - {\boldsymbol}{b}_1(s,{\boldsymbol}{X}(s),\epsilon)| ds \right)^p \nonumber \\ &\ \ \ + \mathbb{E}_1 \sup_{t \in [0,T]} \left(\int_0^t |{\boldsymbol}{b}_1(s,{\boldsymbol}{X}(s),\epsilon) - {\boldsymbol}{B}_1(s,{\boldsymbol}{X}(s))| ds \right)^p \bigg] \\ &\leq 2^{p-1}\left[ L^p(\epsilon) \mathbb{E}_1 \sup_{t \in [0,T]} \int_0^t |{\boldsymbol}{x}^\epsilon(s)-{\boldsymbol}{X}(s) |^p ds + T^p \beta_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{b}_1 \neq {\boldsymbol}{B}_1\}} \right] \\ &\leq L_3(\epsilon,p,T) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u)-{\boldsymbol}{X}(u) |^p ds + C_3(p,T) \beta_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{b}_1 \neq {\boldsymbol}{B}_1\}},\end{aligned}$$ on the set $S_1 := \{\epsilon : \epsilon 
\leq \epsilon_1\}$, where $\mathbb{1}_{A}$ denotes the indicator function of a set $A$, $L_3(\epsilon,p,T) = O(1)$ as $\epsilon \to 0$ and $C_3(p,T)$ is a constant dependent on $p$ and $T$. In the last two lines of the above estimate, we have used Assumption \[a1\_ch2\], Assumption \[a5\_ch2\], and the inequality: $$\mathbb{E}_1 \sup_{t\in [0,T]} \left( \int_0^t |{\boldsymbol}{u}(s)| ds \right)^p \leq T^{p-1} \mathbb{E}_1 \int_0^T |{\boldsymbol}{u}(s)|^p ds,$$ where ${\boldsymbol}{u}(s) \in {\mathbb{R}}^{n_1}$ for $s \in [0,T]$ (recall that $L(\epsilon) = O(1)$ as $\epsilon \to 0$ by the Assumption \[a1\_ch2\]). Using again the above techniques, together with Lemma \[lipzlemma\], one obtains: $$\begin{aligned} R_2 &\leq L_2(\epsilon,p,T) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u)-{\boldsymbol}{X}(u) |^p ds \nonumber \\ &\ \ \ \ + C_2(p,T)\left[\alpha_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_1 \neq {\boldsymbol}{A}_1\}} + \alpha_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_2 \neq {\boldsymbol}{A}_2\}} + \beta_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{b}_2 \neq {\boldsymbol}{B}_2\}} \right],\end{aligned}$$ on $S_1$, where $\alpha_1(\epsilon)$, $\alpha_2(\epsilon)$, $\beta_2(\epsilon)$ are from Assumption \[a1\_ch2\], $L_2(\epsilon, p,T) = O(1)$ as $\epsilon \to 0$ and $C_2(p,T)$ is a constant. To estimate $R_5$, we use the Burkholder-Davis-Gundy inequality: $$\begin{aligned} R_5 &\leq C'_p \mathbb{E}_1 \bigg( \int_0^T \|{\boldsymbol}{\sigma}_{1}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) - {\boldsymbol}{\Sigma}_1(s,{\boldsymbol}{X}(s))\|_{F}^2 ds \bigg)^{p/2},\end{aligned}$$ where $C'_p$ is a positive constant and $\|\cdot\|_F$ denotes the Frobenius norm. 
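The displayed inequality is a pathwise instance of Jensen's/Hölder's inequality, so it can be checked on any discretized deterministic path; in the sketch below, $u$, $p$ and $T$ are arbitrary illustrative choices and the integrals are approximated by Riemann sums (the inequality holds exactly for the discrete sums as well).

```python
import math

# Check (\int_0^T |u(s)| ds)^p <= T^{p-1} \int_0^T |u(s)|^p ds, which follows
# from Jensen's inequality applied to the convex function x -> x^p.
p, T, n = 3.0, 2.0, 100000
ds = T / n
u = lambda s: math.sin(5.0 * s) + 0.3          # arbitrary test path

lhs = sum(abs(u(k * ds)) * ds for k in range(n)) ** p
rhs = T ** (p - 1) * sum(abs(u(k * ds)) ** p * ds for k in range(n))

print(lhs <= rhs)  # True
```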
Using Hölder’s inequality, Assumption \[a1\_ch2\], Assumption \[a5\_ch2\], and the above techniques, we obtain: $$\begin{aligned} R_5 &\leq C''_p \mathbb{E}_1 \bigg( \int_0^T \|{\boldsymbol}{\sigma}_1(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)-{\boldsymbol}{\sigma}_1(s,{\boldsymbol}{X}(s),\epsilon)\|_{F}^2 ds \bigg)^{p/2} \nonumber \\ &\ \ \ \ + C''_p \mathbb{E}_1 \bigg( \int_0^T \|{\boldsymbol}{\sigma}_1(s,{\boldsymbol}{X}(s),\epsilon) - {\boldsymbol}{\Sigma}_1(s,{\boldsymbol}{X}(s)) \|_{F}^2 ds \bigg)^{p/2} \\ &\leq C''_p T^{\frac{p}{2}-1} \int_0^T \mathbb{E}_1 \|{\boldsymbol}{\sigma}_1(s,{\boldsymbol}{x}^\epsilon(s),\epsilon)-{\boldsymbol}{\sigma}_1(s,{\boldsymbol}{X}(s),\epsilon)\|_{F}^p ds \nonumber \\ &\ \ \ \ + C'''_p |\gamma_1(\epsilon)|^p T^{\frac{p}{2}} \mathbb{1}_{\{{\boldsymbol}{\sigma}_1 \neq {\boldsymbol}{\Sigma}_1 \}} \\ &\leq L_5(\epsilon,p,T) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u)-{\boldsymbol}{X}(u) |^p ds + C_5(p,T)\gamma_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{\sigma}_1 \neq {\boldsymbol}{\Sigma}_1\}},\end{aligned}$$ on the set $S_1$, where $C_p''$ and $C_p'''$ are constants, $\gamma_1(\epsilon)$ is from Assumption \[a1\_ch2\], $L_5(\epsilon,p,T) = O(1)$ as $\epsilon \to 0$ and $C_5(p,T)$ is a constant. Similarly, using the above techniques and Lemma \[lipzlemma\], one can show: $$\begin{aligned} R_4 &\leq L_4(\epsilon,p,T) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u)-{\boldsymbol}{X}(u) |^p ds \nonumber \\ &\ \ \ \ + C_4(p,T)\left[\alpha_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_1 \neq {\boldsymbol}{A}_1\}} + \alpha_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_2 \neq {\boldsymbol}{A}_2\}} + \gamma_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{\sigma}_2 \neq {\boldsymbol}{\Sigma}_2\}} \right],\end{aligned}$$ on $S_1$, where $\gamma_2(\epsilon)$ is from Assumption \[a1\_ch2\], $L_4(\epsilon,p,T) = O(1)$ as $\epsilon \to 0$ and $C_4(p,T)$ is a constant. 
To obtain a bound for $R_1$, first we estimate: $$\begin{aligned} & \bigg| \int_0^t \bigg[{\boldsymbol}{S}^\epsilon(s,{\boldsymbol}{x}^\epsilon(s),{\boldsymbol}{v}^\epsilon(s),\epsilon)- {\boldsymbol}{S}(s,{\boldsymbol}{X}(s))\bigg]_i ds \bigg| \nonumber \\ &\leq \bigg|\epsilon [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(t, {\boldsymbol}{x}^{\epsilon}(t),\epsilon) \cdot [{\boldsymbol}{v}^{\epsilon}]_j(t) - \epsilon [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(0, {\boldsymbol}{x},\epsilon) \cdot [{\boldsymbol}{v}]_j \bigg| \nonumber \\ &\ \ \ + \left|\int_0^t \frac{\partial}{\partial s} \left( [{\boldsymbol}{a}_1 {\boldsymbol}{a}_2^{-1}]_{i,j}(s, {\boldsymbol}{x}^\epsilon(s), \epsilon) \right) \cdot \epsilon [{\boldsymbol}{v}^\epsilon]_j(s) ds \right| \nonumber \\ &\ \ \ + \int_0^t \bigg| \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{b}_1]_l(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \cdot \epsilon [{\boldsymbol}{v}^{\epsilon}]_j(s) \bigg| ds \nonumber \\ &\ \ \ + \bigg| \int_0^t \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{\sigma}_1]_{l,k}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \nonumber \\ &\ \ \ \ \ \ \ \ \ \cdot \epsilon [{\boldsymbol}{v}^{\epsilon}]_j(s) d[{\boldsymbol}{W}^{(k_1)}]_{k}(s) \bigg| \nonumber \\ &\ \ \ + \bigg| \int_0^t \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{a}_1]_{l,k}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \cdot [{\boldsymbol}{J}_1^\epsilon]_{j,k}(s) ds \bigg| \nonumber \\ &\ \ \ + \bigg| \int_0^t \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( 
[{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{a}_1]_{l,k}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \cdot [{\boldsymbol}{J}_2^\epsilon]_{j,k}(s) ds \bigg| \nonumber \\ &\ \ \ + \bigg| \int_0^t \frac{\partial}{\partial [{\boldsymbol}{X}]_l(s)}\left([{\boldsymbol}{A}_1{\boldsymbol}{A}_2^{-1}]_{i,j}(s, {\boldsymbol}{X}(s)) \right) \cdot [{\boldsymbol}{A}_1]_{l,k}(s, {\boldsymbol}{X}(s))\cdot [{\boldsymbol}{J}]_{j,k}(s) \nonumber \\ &\ \ \ \ \ \ \ \ \ - \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{a}_1]_{l,k}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \cdot [{\boldsymbol}{J}_3^\epsilon]_{j,k}(s) ds \bigg| \\ &=: \sum_{k=0}^6 \Pi_k,\end{aligned}$$ and so $R_1 \leq 7^{p-1} \sum_{k=0}^6 \left( \mathbb{E}_1 \sup_{t\in [0,T]} |\Pi_k|^p \right) =: 7^{p-1} \sum_{k=0}^6 M_k, $ since the sum contains seven terms $\Pi_0, \dots, \Pi_6$. It is straightforward to show, using the boundedness assumptions of the theorem, that for $k=0,1,2,3,5$: $$M_k \leq C_k(p,T) \cdot \mathbb{E}_1 \sup_{t\in [0,T]} |\epsilon {\boldsymbol}{v}^\epsilon(t)|^p,$$ where the $C_k$ are positive constants. Applying Proposition \[bound\_on\_integ\_wrt\_p\_square\], we obtain: $$\label{M4bound} M_4 := \mathbb{E}_1 \sup_{t \in [0,T]} |\Pi_4|^p \leq C_4(p,T) \epsilon^\beta,$$ on $S_1$, for all $0 < \beta < p/2$, as $\epsilon \to 0$, where $C_4(p,T)$ is a positive constant. 
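The splitting of $R_1$ into the terms $M_k$ relies on the elementary bound $|c_1 + \cdots + c_n|^p \leq n^{p-1}(|c_1|^p + \cdots + |c_n|^p)$ for $p \geq 1$, a consequence of the convexity of $x \mapsto |x|^p$; a quick numerical illustration (with arbitrary values of $p$ and the $c_k$):

```python
# Elementary bound used when splitting a sum inside |.|^p into separate terms:
# |c_1 + ... + c_n|^p <= n^{p-1} (|c_1|^p + ... + |c_n|^p).
p = 2.5
c = [0.4, -1.2, 3.0, 0.7, -0.1, 2.2, -0.9]   # arbitrary terms
n = len(c)

lhs = abs(sum(c)) ** p
rhs = n ** (p - 1) * sum(abs(x) ** p for x in c)

print(lhs <= rhs)  # True
```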
We now estimate $M_6$: $$\begin{aligned} &M_6 \nonumber \\ &\leq \mathbb{E}_1 \sup_{t \in [0,T]}\bigg( \int_0^t \bigg| \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{a}_1]_{l,k}(s,{\boldsymbol}{x}^\epsilon(s),\epsilon) \nonumber \\ &\ \ \ \ \ \ \ \cdot [{\boldsymbol}{J}_3^\epsilon]_{j,k}(s) - \frac{\partial}{\partial [{\boldsymbol}{X}]_l(s)}\left([{\boldsymbol}{A}_1{\boldsymbol}{A}_2^{-1}]_{i,j}(s,{\boldsymbol}{X}(s)) \right) \cdot [{\boldsymbol}{A}_1]_{l,k}(s,{\boldsymbol}{X}(s)) \nonumber \\ &\ \ \ \ \ \ \ \cdot [{\boldsymbol}{J}]_{j,k}(s) \bigg| ds \bigg)^p \\ &\leq C(p) \mathbb{E}_1 \sup_{t \in [0,T]}\bigg( \int_0^t \bigg| \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s,{\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{a}_1]_{l,k}(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \nonumber \\ &\ \ \ \ \ \ - \frac{\partial}{\partial [{\boldsymbol}{X}]_l(s)}\bigg([{\boldsymbol}{A}_1{\boldsymbol}{A}_2^{-1}]_{i,j}(s,{\boldsymbol}{X}(s)) \bigg) \cdot [{\boldsymbol}{A}_1]_{l,k}(s, {\boldsymbol}{X}(s)) \bigg|^p \cdot |[{\boldsymbol}{J}_3^\epsilon]_{j,k}(s)|^p ds \bigg) \nonumber \\ &\ \ + C(p) \mathbb{E}_1 \sup_{t\in [0,T]} \bigg(\int_0^t \bigg| \frac{\partial}{\partial [{\boldsymbol}{X}]_l(s)}\bigg([{\boldsymbol}{A}_1{\boldsymbol}{A}_2^{-1}]_{i,j}(s, {\boldsymbol}{X}(s)) \bigg) \cdot [{\boldsymbol}{A}_1]_{l,k}(s, {\boldsymbol}{X}(s)) \bigg|^p \nonumber \\ &\hspace{3.3cm} \cdot |[{\boldsymbol}{J}_3^\epsilon-{\boldsymbol}{J}]_{j,k}(s)|^p ds\bigg) \\ &\leq C(p) \mathbb{E}_1 \sup_{t \in [0,T]}\bigg( \int_0^t \bigg| \frac{\partial}{\partial [{\boldsymbol}{x}^{\epsilon}]_l(s)}\bigg( [{\boldsymbol}{a}_{1}{\boldsymbol}{a}_{2}^{-1}]_{i,j}(s, {\boldsymbol}{x}^{\epsilon}(s),\epsilon) \bigg) \cdot [{\boldsymbol}{a}_1]_{l,k}(s, 
{\boldsymbol}{x}^\epsilon(s),\epsilon) \nonumber \\ &\ \ \ \ \ \ - \frac{\partial}{\partial [{\boldsymbol}{X}]_l(s)}\bigg([{\boldsymbol}{A}_1{\boldsymbol}{A}_2^{-1}]_{i,j}(s, {\boldsymbol}{X}(s)) \bigg) \cdot [{\boldsymbol}{A}_1]_{l,k}(s, {\boldsymbol}{X}(s)) \bigg|^p \cdot |[{\boldsymbol}{J}_3^\epsilon]_{j,k}(s)|^p ds \bigg) \nonumber \\ &\ \ \ + C(p) \mathbb{E}_1 \sup_{t \in [0,T]} \int_0^t \|{\boldsymbol}{J}_3^\epsilon(s) - {\boldsymbol}{J}(s) \|_F^p ds, \label{intermed}\end{aligned}$$ where the $C(p)$ are constants that may vary from one expression to another. Note that in the above, ${\boldsymbol}{J}_3^\epsilon(s)$ and ${\boldsymbol}{J}(s)$ are solutions to the Lyapunov equations $${\boldsymbol}{a}_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon){\boldsymbol}{J}_3^\epsilon(s) +{\boldsymbol}{J}_3^\epsilon(s) {\boldsymbol}{a}_2^*(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) = -({\boldsymbol}{\sigma}_2 {\boldsymbol}{\sigma}^*_2)(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)$$ and $${\boldsymbol}{A}_2(s, {\boldsymbol}{X}(s)) {\boldsymbol}{J}(s) +{\boldsymbol}{J}(s) {\boldsymbol}{A}_2^{*}(s, {\boldsymbol}{X}(s)) = -({\boldsymbol}{\Sigma}_2 {\boldsymbol}{\Sigma}_2^{*})(s, {\boldsymbol}{X}(s)),$$ respectively. Let ${\boldsymbol}{H}^\epsilon(s) := {\boldsymbol}{J}_3^\epsilon(s) - {\boldsymbol}{J}(s) $ and ${\boldsymbol}{G}^\epsilon(s) := {\boldsymbol}{a}_2(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)-{\boldsymbol}{A}_2(s, {\boldsymbol}{X}(s))$. 
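The "algebraic manipulations" mentioned below can be spelled out (suppressing the arguments $(s, {\boldsymbol}{x}^\epsilon(s),\epsilon)$ and $(s, {\boldsymbol}{X}(s))$): writing ${\boldsymbol}{a}_2 = {\boldsymbol}{A}_2 + {\boldsymbol}{G}^\epsilon$ in the first Lyapunov equation gives $${\boldsymbol}{A}_2 {\boldsymbol}{J}_3^\epsilon + {\boldsymbol}{J}_3^\epsilon {\boldsymbol}{A}_2^* = -{\boldsymbol}{\sigma}_2 {\boldsymbol}{\sigma}_2^* - {\boldsymbol}{G}^\epsilon {\boldsymbol}{J}_3^\epsilon - {\boldsymbol}{J}_3^\epsilon ({\boldsymbol}{G}^\epsilon)^*,$$ and subtracting the second Lyapunov equation, ${\boldsymbol}{A}_2 {\boldsymbol}{J} + {\boldsymbol}{J} {\boldsymbol}{A}_2^* = -{\boldsymbol}{\Sigma}_2 {\boldsymbol}{\Sigma}_2^*$, leaves an equation involving only ${\boldsymbol}{H}^\epsilon = {\boldsymbol}{J}_3^\epsilon - {\boldsymbol}{J}$ on the left-hand side.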
After some algebraic manipulations with the above pair of Lyapunov equations, we obtain another Lyapunov equation: $$\begin{aligned} &{\boldsymbol}{A}_2(s, {\boldsymbol}{X}(s)) {\boldsymbol}{H}^\epsilon(s) + {\boldsymbol}{H}^\epsilon(s) {\boldsymbol}{A}_2^*(s, {\boldsymbol}{X}(s)) \nonumber \\ &= ({\boldsymbol}{\Sigma}_2 {\boldsymbol}{\Sigma}_2^*)(s, {\boldsymbol}{X}(s)) - ({\boldsymbol}{\sigma}_2 {\boldsymbol}{\sigma}_2^*)(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) - {\boldsymbol}{G}^\epsilon(s){\boldsymbol}{J}_3^\epsilon(s) - {\boldsymbol}{J}_3^\epsilon(s) ({\boldsymbol}{G}^\epsilon)^*(s).\end{aligned}$$ By the last statement in Assumption \[a5\_ch2\], ${\boldsymbol}{A}_2$ is positive stable uniformly (in ${\boldsymbol}{X}$ and $s$), therefore the above Lyapunov equation has a unique solution: $$\begin{aligned} &{\boldsymbol}{H}^\epsilon(s) = \int_0^\infty e^{{\boldsymbol}{A}_2(s, {\boldsymbol}{X}(s)) y} \bigg( -({\boldsymbol}{\Sigma}_2 {\boldsymbol}{\Sigma}_2^*)(s, {\boldsymbol}{X}(s)) + ({\boldsymbol}{\sigma}_2 {\boldsymbol}{\sigma}_2^*)(s, {\boldsymbol}{x}^\epsilon(s),\epsilon) \nonumber \\ &\hspace{3cm} + {\boldsymbol}{G}^\epsilon(s){\boldsymbol}{J}_3^\epsilon(s) + {\boldsymbol}{J}_3^\epsilon(s) ({\boldsymbol}{G}^\epsilon)^*(s)\bigg) e^{{\boldsymbol}{A}^*_2(s, {\boldsymbol}{X}(s))y} dy. 
\label{hdiff}\end{aligned}$$ Using , the assumptions of the theorem, and estimating as before, we obtain: $$\begin{aligned} \mathbb{E}_1 \sup_{t \in [0,T]} \int_0^t \|{\boldsymbol}{J}^\epsilon_3(s) - {\boldsymbol}{J}(s)\|^p_F ds &\leq C(\epsilon, p, T) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u) - {\boldsymbol}{X}(u)|^p ds \nonumber \\ &\ \ \ \ \ + D(p,T)[\alpha_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_2 \neq {\boldsymbol}{A}_2\}} + \gamma_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{\sigma}_2 \neq {\boldsymbol}{\Sigma}_2\}}]\end{aligned}$$ on the set $S_1$, where $C(\epsilon, p, T) = O(1)$ as $\epsilon \to 0$ and $D(p,T)$ is a positive constant; $\alpha_2(\epsilon)$ and $\gamma_2(\epsilon)$ are from Assumption \[a5\_ch2\]. Applying the above estimates, Lemma \[lipzlemma\] and techniques used earlier, one obtains from : $$\begin{aligned} M_6 &\leq L_6(\epsilon,p,T) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u)-{\boldsymbol}{X}(u) |^p ds \nonumber \\ &\ \ \ \ + C_6(p,T)\bigg[\alpha_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_1 \neq {\boldsymbol}{A}_1\}} + \alpha_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_2 \neq {\boldsymbol}{A}_2\}} + \gamma_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{\sigma}_2 \neq {\boldsymbol}{\Sigma}_2\}} \nonumber \\ &\ \ \ \ \ \ \ \ \ \ + \theta_1(\epsilon)^p \mathbb{1}_{\{({\boldsymbol}{a}_1)_{{\boldsymbol}{x}} \neq ({\boldsymbol}{A}_1)_{{\boldsymbol}{x}}\}} + \theta_2(\epsilon)^p \mathbb{1}_{\{({\boldsymbol}{a}_2)_{{\boldsymbol}{x}} \neq ({\boldsymbol}{A}_2)_{{\boldsymbol}{x}}\}} \bigg],\end{aligned}$$ on $S_1$, where $L_6(\epsilon,p,T)=O(1)$ as $\epsilon \to 0$, $C_6(p,T)$ is a positive constant, and $\alpha_i(\epsilon)$, $\theta_i(\epsilon)$ ($i=1,2$) and $\gamma_2(\epsilon)$ are from Assumption \[a5\_ch2\]. 
Collecting the above estimates for the $M_k$, we obtain: $$\begin{aligned} R_1 &\leq C_1(p,T) \bigg( \mathbb{E}_1 \sup_{t \in [0,T]} |\epsilon {\boldsymbol}{v}^\epsilon(t)|^p \nonumber \\ &\ \ \ \ \ \ + \alpha_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_1 \neq {\boldsymbol}{A}_1\}} + \alpha_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_2 \neq {\boldsymbol}{A}_2\}} + \gamma_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{\sigma}_2 \neq {\boldsymbol}{\Sigma}_2\}} \nonumber \\ &\ \ \ \ \ \ + \theta_1(\epsilon)^p \mathbb{1}_{\{({\boldsymbol}{a}_1)_{{\boldsymbol}{x}} \neq ({\boldsymbol}{A}_1)_{{\boldsymbol}{x}}\}} + \theta_2(\epsilon)^p \mathbb{1}_{\{({\boldsymbol}{a}_2)_{{\boldsymbol}{x}} \neq ({\boldsymbol}{A}_2)_{{\boldsymbol}{x}}\}} \bigg) \nonumber \\ &\ \ \ \ \ \ + C_2(\epsilon, p,T) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u) - {\boldsymbol}{X}(u)|^p ds + C_3(p,T) M_4 \end{aligned}$$ on $S_1$, where $C_1(p,T)$ and $C_3(p,T)$ are constants, $C_2(\epsilon, p, T) = O(1)$ as $\epsilon \to 0$, and $M_4$ satisfies the bound in . 
Using all the estimates for the $R_i$, we have: $$\begin{aligned} &\mathbb{E}_1 \left[\sup_{t \in [0,T]} |{\boldsymbol}{x}^{\epsilon}(t) - {\boldsymbol}{X}(t)|^p \right] = \mathbb{E}_1 \left[ \sup_{t \in [0,T]} \sum_{k=1}^{n_1} |[{\boldsymbol}{x}^{\epsilon}- {\boldsymbol}{X}]_k(t)|^p \right] \\ &\leq n_1 \max_{k=1,\dots,n_1} \left\{ \mathbb{E}_1 \sup_{t \in [0,T]} |[{\boldsymbol}{x}^{\epsilon}- {\boldsymbol}{X}]_k(t)|^p \right\} \\ &\leq L(\epsilon,p,T,n_1) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u)-{\boldsymbol}{X}(u) |^p ds \nonumber \\ &\ \ \ + C(p,T,n_1) \bigg( \epsilon^{p r_0} + \mathbb{E}_1 \sup_{t \in [0,T]} |\epsilon {\boldsymbol}{v}^\epsilon(t)|^p + M_4 \nonumber \\ &\ \ \ \ \ \ + \alpha_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_1 \neq {\boldsymbol}{A}_1\}} + \alpha_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{a}_2 \neq {\boldsymbol}{A}_2\}} + \gamma_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{\sigma}_1 \neq {\boldsymbol}{\Sigma}_1\}} \nonumber \\ &\ \ \ \ \ \ + \gamma_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{\sigma}_2 \neq {\boldsymbol}{\Sigma}_2\}} + \beta_1(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{b}_1 \neq {\boldsymbol}{B}_1\}} + \beta_2(\epsilon)^p \mathbb{1}_{\{{\boldsymbol}{b}_2 \neq {\boldsymbol}{B}_2\}} \nonumber \\ &\ \ \ \ \ \ +\theta_1(\epsilon)^p \mathbb{1}_{\{({\boldsymbol}{a}_1)_{{\boldsymbol}{x}} \neq ({\boldsymbol}{A}_1)_{{\boldsymbol}{x}}\}} + \theta_2(\epsilon)^p \mathbb{1}_{\{({\boldsymbol}{a}_2)_{{\boldsymbol}{x}} \neq ({\boldsymbol}{A}_2)_{{\boldsymbol}{x}}\}} \bigg) \\ &\leq L(\epsilon,p,T,n_1) \int_0^T \mathbb{E}_1 \sup_{u \in [0,s]} |{\boldsymbol}{x}^\epsilon(u)-{\boldsymbol}{X}(u) |^p ds \nonumber \\ &\ \ \ + C(p,T,n_1)\epsilon^{r},\end{aligned}$$ on $S_1$, where $L(\epsilon, p, T,n_1)=O(1)$ as $\epsilon \to 0$, $r$ is the rate of convergence in the statement of the theorem, $C(p,T,n_1)$ is a constant that changes from line to line, and we have applied Proposition \[mom\_bound\], Lemma \[lipzlemma\] and 
Assumption \[a5\_ch2\] to get the last expression in the above estimate. Finally, applying the Gronwall lemma gives: $$\begin{aligned} &\mathbb{E}_1 \left[\sup_{t \in [0,T]} |{\boldsymbol}{x}^{\epsilon}(t) - {\boldsymbol}{X}(t)|^p \right] \leq \epsilon^{r} \cdot C(p,T,n_1) e^{L(\epsilon, p, T,n_1) T}\end{aligned}$$ on $S_1$. then follows for the case $p > 2$. The result for $0<p\leq 2$ follows by an application of Hölder’s inequality: for $0<p\leq 2$, taking $q > 2$ so that $p/q < 1$, we have $$\begin{aligned} \mathbb{E}_1 \left[\sup_{t\in [0,T]} |{\boldsymbol}{x}^\epsilon(t)-{\boldsymbol}{X}(t)|^p \right] &\leq \bigg[ \mathbb{E}_1 \bigg( \sup_{t\in [0,T]} |{\boldsymbol}{x}^\epsilon(t)-{\boldsymbol}{X}(t)|^p \bigg)^{q/p} \bigg]^{p/q} \\ &= O(\epsilon^\beta),\end{aligned}$$ for all $0 < \beta < p'$, as $\epsilon \to 0$. The last statement on convergence in probability in the theorem follows from Lemma 1 in [@LimWehr_Homog_NonMarkovian]. An Implementation of Algorithm \[alg\] under Assumption \[special\] {#implem_alg} =================================================================== We describe how Algorithm \[alg\] can be applied to a large class of GLEs satisfying Assumption \[special\]. 
For $i=2,4$, one can write $${\boldsymbol}{Q}_i(z) = z^{d_i}{\boldsymbol}{I} + {\boldsymbol}{a}_{i,d_i-1}z^{d_i-1}+\dots+{\boldsymbol}{a}_{i,1} z + {\boldsymbol}{a}_{i,0},$$ where the ${\boldsymbol}{a}_{i,k}$ are related to the ${\boldsymbol}{\Gamma}_{i,k}$ as follows: $$\begin{aligned} {\boldsymbol}{a}_{i,0} &= \prod_{k=1}^{d_i} {\boldsymbol}{\Gamma}_{i,k}, \nonumber \\ {\boldsymbol}{a}_{i,1} &= \sum_{k_1, \dots, k_{d_i-1}=1,\dots,d_i: k_1 > \dots > k_{d_i-1}} {\boldsymbol}{\Gamma}_{i,k_{1}} {\boldsymbol}{\Gamma}_{i,k_2} \cdots {\boldsymbol}{\Gamma}_{i,k_{d_i-1}}, \nonumber \\ &\vdots \nonumber \\ {\boldsymbol}{a}_{i,d_i-2} &= \sum_{k_1,k_2=1,\dots,d_i: k_1>k_2} {\boldsymbol}{\Gamma}_{i,k_1} {\boldsymbol}{\Gamma}_{i,k_2}, \nonumber \\ {\boldsymbol}{a}_{i,d_i-1} &= \sum_{k=1}^{d_i} {\boldsymbol}{\Gamma}_{i,k}.\end{aligned}$$ Then it can be shown that ${\boldsymbol}{\Phi}_i(z)$ admits the following (controllable) realization [@brockett2015finite]: ${\boldsymbol}{\Phi}_i(z) = {\boldsymbol}{H}_i(z{\boldsymbol}{I} + {\boldsymbol}{F}_i)^{-1}{\boldsymbol}{G}_i$, with $${\boldsymbol}{H}_i = [{\boldsymbol}{0} \ \cdots \ {\boldsymbol}{0} \ \ {\boldsymbol}{B}_{l_i} \ \ {\boldsymbol}{0} \ \cdots \ {\boldsymbol}{0}] \in {\mathbb{R}}^{p_i \times p_i d_i},$$ where ${\boldsymbol}{B}_{l_i}$ is in the $l_i$th slot, $${\boldsymbol}{F}_i = \begin{bmatrix} {\boldsymbol}{0} & -{\boldsymbol}{I} & \\ & {\boldsymbol}{0} & -{\boldsymbol}{I} & \\ & & \ddots & \ddots \\ & & & {\boldsymbol}{0} & -{\boldsymbol}{I} \\ {\boldsymbol}{a}_{i,0} & {\boldsymbol}{a}_{i,1} & \dots & {\boldsymbol}{a}_{i,d_i-2} & {\boldsymbol}{a}_{i,d_i-1} \end{bmatrix} \in {\mathbb{R}}^{p_i d_i \times p_i d_i},$$ $${\boldsymbol}{G}_i = [{\boldsymbol}{0} \ \cdots \ {\boldsymbol}{0} \ \ {\boldsymbol}{I}]^* \in {\mathbb{R}}^{p_i d_i \times p_i}.$$ Then the realization of the memory function (for the case $i=2$) and noise process (for the case $i=4$) can be obtained by taking ${\boldsymbol}{\Gamma}_i = {\boldsymbol}{F}_i$, 
${\boldsymbol}{C}_i = {\boldsymbol}{H}_i$ and solving the following linear matrix inequality: $${\boldsymbol}{F}_i{\boldsymbol}{M}_i + {\boldsymbol}{M}_i {\boldsymbol}{F}_i^* =: {\boldsymbol}{\Sigma}_i {\boldsymbol}{\Sigma}_i^* \geq 0, \ \ {\boldsymbol}{M}_i {\boldsymbol}{H}_i^* = {\boldsymbol}{G}_i$$ for ${\boldsymbol}{M}_i = {\boldsymbol}{M}_i^*$ [@willems1980stochastic]. The above realization gives us the desired spectral densities. Indeed, let us use a transformation of the above type to diagonalize ${\boldsymbol}{M}_i$, i.e. ${\boldsymbol}{M}_i' = {\boldsymbol}{T}_i {\boldsymbol}{M}_i {\boldsymbol}{T}_i^{*} = {\boldsymbol}{I}$, ${\boldsymbol}{\Gamma}_{i}' = {\boldsymbol}{T}_i {\boldsymbol}{\Gamma}_{i} {\boldsymbol}{T}_i^{-1}$, ${\boldsymbol}{\Sigma}_i' = {\boldsymbol}{T}_i {\boldsymbol}{\Sigma}_i$, ${\boldsymbol}{C}'_i = {\boldsymbol}{C}_i {\boldsymbol}{T}_i^{-1}$. In this case, for $i=4$ we have: $({\boldsymbol}{\xi}^i)'_t = {\boldsymbol}{C}_i' ({\boldsymbol}{\beta}^i)'_t = {\boldsymbol}{C}_i {\boldsymbol}{\beta}^i_t = {\boldsymbol}{\xi}^i_t$, where $({\boldsymbol}{\beta}^i)'_t$ solves the SDE: $$d({\boldsymbol}{\beta}^i)'_t = -{\boldsymbol}{\Gamma}_i' ({\boldsymbol}{\beta}^i)'_t dt + {\boldsymbol}{\Sigma}_i' d{\boldsymbol}{W}_t^{(q_4)},$$ and one can compute the spectral density to be: $${\boldsymbol}{\mathcal{S}}_i(\omega) = {\boldsymbol}{\Phi}_i(-i\omega){\boldsymbol}{\Phi}^*_i(i\omega) ={\boldsymbol}{B}_{l_i} \omega^{2 l_i} \left((\omega^2{\boldsymbol}{I}+{\boldsymbol}{\Gamma}_{i,1}^2) \cdots (\omega^2{\boldsymbol}{I}+{\boldsymbol}{\Gamma}_{i,d_i}^2)\right)^{-1} {\boldsymbol}{B}_{l_i}^*.$$ A similar discussion applies to the realization of the memory function. For $i=2,4$, set $m=\epsilon m_0$, ${\boldsymbol}{\Gamma}_{i,k} = {\boldsymbol}{\gamma}_{i,k}/\epsilon$ for $k=l_i+1,\dots,d_i$ and rescale the ${\boldsymbol}{B}_{l_i}$ with $\epsilon$ accordingly, so that the limit as $\epsilon \to 0$ of the rescaled spectral densities gives us the desired asymptotic behavior. 
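The steps above can be exercised numerically in the scalar case $p_i = 1$. The Python sketch below (with illustrative parameter values, not values from the text) builds the companion matrices, checks the transfer function $H_i(zI+F_i)^{-1}G_i$ against $B z^{j-1}/Q(z)$ with the convention that the $j$th slot contributes $z^{j-1}$, and then verifies that the Lyapunov equality admits a symmetric positive-definite solution $M$ for a chosen $\Sigma$. Note that the last step is solved in the direction "given $\Sigma$, find $M$", rather than the joint "find $M$ and $\Sigma$" formulation above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def companion_realization(gammas, B, slot):
    # Scalar (p_i = 1) controllable companion form: -1 on the superdiagonal
    # and the coefficients a_0, ..., a_{d-1} of Q(z) = prod_k (z + Gamma_k)
    # in the last row of F.
    d = len(gammas)
    coeffs = np.poly([-g for g in gammas])   # [1, a_{d-1}, ..., a_0]
    a = coeffs[::-1][:-1]                    # [a_0, ..., a_{d-1}]
    F = np.zeros((d, d))
    F[np.arange(d - 1), np.arange(1, d)] = -1.0
    F[-1, :] = a
    G = np.zeros((d, 1)); G[-1, 0] = 1.0
    H = np.zeros((1, d)); H[0, slot - 1] = B  # B in the given slot
    return F, G, H

gammas, B, slot = [1.0, 2.0, 3.0], 0.7, 3
F, G, H = companion_realization(gammas, B, slot)

# transfer-function check: H (zI + F)^{-1} G = B z^(slot-1) / Q(z)
z = 0.5 + 0.25j
transfer = (H @ np.linalg.solve(z * np.eye(3) + F, G))[0, 0]
expected = B * z**(slot - 1) / np.prod([z + g for g in gammas])

# Lyapunov step: F has eigenvalues Gamma_k > 0, so for a given Sigma the
# equality F M + M F^T = Sigma Sigma^T has a unique solution M, which is
# symmetric positive definite, as the matrix inequality requires.
Sigma = np.diag([0.5, 1.0, 1.5])
M = solve_continuous_lyapunov(F, Sigma @ Sigma.T)
```

The same construction extends blockwise to $p_i > 1$ by replacing scalars with $p_i \times p_i$ blocks.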
The choice of which and how many of the ${\boldsymbol}{\Gamma}_{i,k}$ to rescale as well as the smallness of $\epsilon$ (i.e. what determines the wide separation of time scales and their magnitude) depends on the physical system under study. The resulting family of GLEs can then be cast in a form suitable for application of Theorem \[mainthm\] and the homogenized SDE for the particle’s position can be obtained, under appropriate assumptions on the coefficients of the GLE. Another Example for Section \[sect\_appl\] {#anothereg} ========================================== Consider the case of $l_2=l_4=l=2$, $d_2=d_4=d=3$ in Assumption \[special\] and specialize to one-dimensional models as before. In this case the covariance function has a stronger singularity near $t=0$ than in cases studied previously. The spectral density of the driving noise in the GLE is taken to be: $$\mathcal{S}(\omega) = \frac{\Gamma_3^2 \beta^2 \omega^4}{(\omega^2+\Gamma_1^2)(\omega^2+\Gamma_2^2)(\omega^2+\Gamma_3^2)},$$ in which case the memory kernel (and covariance function) are $$\begin{aligned} \label{w5} \kappa(t) &= \beta^2 (\Gamma_3 \Gamma_2 + \Gamma_3 \Gamma_1 + \Gamma_2 \Gamma_1) \bigg(\frac{\Gamma_3^4 e^{-\Gamma_3 |t|}}{2(\Gamma_3^2-\Gamma_2^2)(\Gamma_3^2-\Gamma_1^2)(\Gamma_2+\Gamma_1)} \nonumber \\ &\ \ \ \ \ - \frac{\Gamma_3^2 \Gamma_2^2 e^{-\Gamma_2 |t|}}{2(\Gamma_3^2-\Gamma_2^2)(\Gamma_2^2-\Gamma_1^2)(\Gamma_1+\Gamma_3)} + \frac{\Gamma_3^2 \Gamma_1^2 e^{-\Gamma_1 |t|}}{2(\Gamma_3^2-\Gamma_1^2)(\Gamma_2^2-\Gamma_1^2)(\Gamma_3+\Gamma_2)} \bigg),\end{aligned}$$ where $0< \Gamma_1 < \Gamma_2 < \Gamma_3$ (see Figure \[fig1\] for a plot of $\kappa(t)$). This gives a model for hyper-diffusion of a particle in a heat bath [@siegle2010origin]. Rescale the parameters by setting $m=m_0 \epsilon$, $\Gamma_3=\gamma_3/\epsilon$, where $m_0$ and $\gamma_3$ are positive constants, and study the limit $\epsilon \to 0$ of the resulting family of GLEs as before. 
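A quick numerical sanity check of this rescaling (a sketch with illustrative parameter values, not values taken from the text): substituting $\Gamma_3=\gamma_3/\epsilon$ into $\mathcal{S}(\omega)$ and letting $\epsilon\to 0$ should recover $\beta^2\omega^4/((\omega^2+\Gamma_1^2)(\omega^2+\Gamma_2^2))$ pointwise on any frequency grid.

```python
import numpy as np

beta, g1, g2, gamma3 = 1.0, 1.0, 2.0, 3.0

def S_eps(w, eps):
    # spectral density with Gamma_3 = gamma3 / eps substituted
    return gamma3**2 * beta**2 * w**4 / (
        (w**2 + g1**2) * (w**2 + g2**2) * (eps**2 * w**2 + gamma3**2))

def S_lim(w):
    # candidate epsilon -> 0 limit
    return beta**2 * w**4 / ((w**2 + g1**2) * (w**2 + g2**2))

w = np.linspace(0.1, 10.0, 200)
err_coarse = np.max(np.abs(S_eps(w, 1e-2) - S_lim(w)))
err_fine = np.max(np.abs(S_eps(w, 1e-4) - S_lim(w)))
```

The sup-norm error on the grid shrinks like $\epsilon^2$, consistent with the factor $\gamma_3^2/(\epsilon^2\omega^2+\gamma_3^2)$.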
The resulting rescaled versions of $\kappa(t)$ and $\mathcal{S}(\omega)$ have the following asymptotic behavior as $\epsilon \to 0$: $$\begin{aligned} \kappa^\epsilon(t) &\to \beta^2 \delta(t) + \frac{\beta^2}{2(\Gamma_1^2-\Gamma_2^2)}(\Gamma_2^3 e^{-\Gamma_2|t|} - \Gamma_1^3 e^{-\Gamma_1|t|}), \\ \mathcal{S}^\epsilon(\omega) &= \frac{\gamma_3^2 \beta^2 \omega^4}{(\omega^2 + \Gamma_1^2)(\omega^2+\Gamma_2^2)(\epsilon^2 \omega^2 + \gamma_3^2)} \to \frac{\beta^2 \omega^4}{(\omega^2+\Gamma_1^2)(\omega^2+\Gamma_2^2)}.\end{aligned}$$ We outline, omitting details, a convergence result similar to Corollary \[w2case\], focusing on the particular case $g = h$. In this case, the particle’s position, $x^\epsilon_t \in {\mathbb{R}}$, converges, as $\epsilon \to 0$, to $X_t$, satisfying the following Itô SDE system: $$\begin{aligned} dX_t &= \bigg[ \frac{2F_e}{\beta^2 g^2} + \frac{2}{\beta^2 g}\left(\Gamma_1 \Gamma_2 Z_t^0 + (\Gamma_1+\Gamma_2)Z_t^1 \right) \nonumber \\ &\ \ \ \ - \frac{2\sigma}{\beta^2 g^2}\left(\Gamma_1 \Gamma_2 Y_t^0 + (\Gamma_1+\Gamma_2)Y_t^1 \right) \bigg] dt \nonumber \\ &\ \ \ \ + \bigg[\frac{2}{\beta^2}\frac{\partial}{\partial X}\left(\frac{1}{g^2} \right) \frac{\sigma^2}{g^2} - \frac{\partial}{\partial X}\left(\frac{1}{g} \right)\frac{4 \sigma^2}{g(g^2\beta^2+4\gamma_3 m_0)} \nonumber \\ &\ \ \ \ \ \ \ \ + \frac{\partial}{\partial X}\left(\frac{\sigma}{g^2} \right)\frac{4\sigma}{g^2\beta^2+4\gamma_3 m_0} \bigg] dt + \frac{2\sigma}{\beta g^2} dW_t^{(1)}, \\ dZ_t^0 &= \left[ -\frac{F_e}{g(\Gamma_1+\Gamma_2)} - \frac{\Gamma_1 \Gamma_2}{\Gamma_1+\Gamma_2} Z_t^0 + \frac{\Gamma_1 \Gamma_2 \sigma}{g(\Gamma_1+\Gamma_2)} Y_t^0 + \frac{\sigma}{g} Y_t^1 \right] dt \nonumber \\ &\ \ \ \ - \frac{1}{\Gamma_1+\Gamma_2} \left[ \frac{\partial}{\partial X}\left(\frac{1}{g} \right)\frac{\sigma^2}{g^2} + \frac{\partial}{\partial X}\left(\frac{\sigma}{g} \right) \frac{2\beta^2 \sigma}{g^2\beta^2+4\gamma_3 m_0}\right]dt \nonumber \\ &\ \ \ \ \ - \frac{\beta 
\sigma}{g(\Gamma_1+\Gamma_2)} dW_t^{(1)}, \\ dZ_t^1 &= \left[\frac{F_e}{g}-\frac{\Gamma_1 \Gamma_2 \sigma}{g} Y_t^0 - \frac{\sigma}{g}(\Gamma_1+\Gamma_2) Y_t^1 \right]dt + \frac{\sigma \beta}{g} dW_t^{(1)} \nonumber \\ &\ \ \ \ + \left[\frac{\partial}{\partial X}\left(\frac{1}{g} \right)\frac{\sigma^2}{g^2} + \frac{\partial}{\partial X}\left(\frac{\sigma}{g} \right) \frac{2\beta^2 \sigma}{g^2\beta^2+4\gamma_3 m_0} \right]dt, \\ dY_t^0 &= Y_t^1 dt, \\ dY_t^1 &= -\Gamma_1 \Gamma_2 Y_t^0 dt - (\Gamma_1+\Gamma_2) Y_t^1 dt + \beta dW_t^{(1)}. \end{aligned}$$ Inspecting the above SDEs in detail, one can make similar remarks to those made in Section \[sect\_appl\]. In particular, taking $g$ to be proportional to $\sigma$ (so that a fluctuation-dissipation relation holds) again allows us to reduce the number of effective SDEs. Physically, this means that homogenized GLEs for models of hyper-diffusion of particles in a non-equilibrium bath may be highly non-trivial but they simplify when the fluctuation-dissipation relation is satisfied. Acknowledgment {#acknowledgment .unnumbered} -------------- S.H.Lim and J.Wehr were partially supported by the NSF grant DMS 1615045. S.H.Lim is grateful for the support provided by the Michael Tabor Fellowship from the Program in Applied Mathematics at the University of Arizona during the academic year 2017-2018. M.L. acknowledges the Spanish Ministry MINECO (National Plan 15 Grant: FISICATEAMO No. FIS2016-79508-P, SEVERO OCHOA No. SEV-2015-0522, FPI), European Social Fund, Fundació Cellex, Generalitat de Catalunya (AGAUR Grant No. 2017 SGR 1341 and CERCA/Program), ERC AdG OSYRIS, EU FETPRO QUIC, and the National Science Centre, Poland-Symfonia Grant No. 2016/20/W/ST4/00314. [^1]: The factor $k_BT$, where $T$ is the absolute temperature and $k_B$ denotes the Boltzmann constant, is here set to $1$. In general, it can be absorbed into either one of the coefficients ${\boldsymbol}{g}$, ${\boldsymbol}{h}$ or ${\boldsymbol}{\sigma}$. 
[^2]: Sample path continuity does not in general imply mean-square continuity. [^3]: A process $X(t)$ is mean-square differentiable on a time interval $\mathcal{\tau}$ if for every $t \in \mathcal{\tau}$, $$\left\| \frac{X(t+h)-X(t)}{h}-\frac{dX}{dt}\right\| \to 0,$$ as $h \to 0$. [^4]: Note that here the variables ${\boldsymbol}{x}^\epsilon(t)$ and ${\boldsymbol}{v}^\epsilon(t)$ are general and they do not necessarily represent position and velocity variables of a physical system. [^5]: We forewarn the readers that our assumptions can be relaxed in various directions (see later remarks) but we will not pursue these generalizations here. [^6]: See also Remark 14 in [@LimWehr_Homog_NonMarkovian].
--- abstract: 'We address the influence of the molecular orbital geometry and of the molecular alignment with respect to the laser-field polarization on laser-induced nonsequential double ionization of diatomic molecules for different molecular species, namely $\mathrm{N}_2$ and $\mathrm{Li}_2$. We focus on the recollision excitation with subsequent tunneling ionization (RESI) mechanism, in which the first electron, upon return, promotes the second electron to an excited state, from where it subsequently tunnels. We show that the electron-momentum distributions exhibit interference maxima and minima due to electron emission at spatially separated centers. We provide generalized analytical expressions for such maxima or minima, which take into account $s$-$p$ mixing and the orbital geometry. The patterns caused by the two-center interference are sharpest for vanishing alignment angle and get washed out as this parameter increases. Apart from that, there exist features due to the geometry of the lowest unoccupied molecular orbital (LUMO), which may be observed for a wide range of alignment angles. Such features manifest themselves as the suppression of probability density in specific momentum regions due to the shape of the LUMO wavefunction, or as an overall decrease in the RESI yield due to the presence of nodal planes.' author: - 'T. Shaaran, B.B. Augstein and C. Figueira de Morisson Faria' title: 'Excitation, two-center interference and the orbital geometry in laser-induced nonsequential double ionization of diatomic molecules' --- Introduction ------------ Strong-field phenomena such as high harmonic generation (HHG) or above-threshold ionization (ATI) have been used as tools for the attosecond imaging of molecular orbitals [@imaging], for probing structural changes in molecules with attosecond precision and for studying quantum interference effects due to photoelectron or high-harmonic emission at spatially separated centers [@Probing]. 
This has been made possible due to the fact that both phenomena are caused by the rescattering or recombination of an electron with its parent molecule, which, for typical intense lasers, occur within hundreds of attoseconds. The simplest targets for which this interference can be studied are diatomic molecules, which can be viewed as the microscopic counterpart of a double-slit experiment [@doubleslit]. Potentially, laser-induced nonsequential double ionization (NSDI) can also be employed for probing molecular orbitals since laser-induced recollision plays an important role in this case. In NSDI the returning electron rescatters inelastically with its parent ion, or molecule, giving part of its kinetic energy to a second electron. This electron can be released into the continuum either through electron-impact ionization [@A.Becker; @Carla1; @Carla2; @M.Lein; @X.Liu; @prauzner; @A.Staudte; @Emanouil; @Bondar] or recollision excitation with subsequent tunneling ionization (RESI) [@RESI1; @RESI2]. The former recollision mechanism happens when the first electron, upon return, gives enough energy to the second electron of the target so that it can overcome the second ionization potential and reach the continuum. The latter recollision mechanism happens when the first electron, upon return, gives just enough energy to the second electron so that it can be promoted to an excited bound state, from where it subsequently tunnels. In principle, NSDI exhibits several advantages with respect to ATI or HHG. First, it allows one to extract more dynamic information about the system, as the type of electron-electron interaction can be identified in the electron-momentum distributions [@Carla1; @Carla2; @Emanouil]. Furthermore, different rescattering mechanisms, such as electron-impact ionization or RESI, populate different regions in momentum space and hence can also be traced back from such distributions [@routes]. 
Apart from that, events happening at different half cycles of the driving field can be mapped into different momentum regions. Concrete examples are NSDI with few-cycle pulses [@fewcycle], which lead to asymmetric electron-momentum distributions, and individual processes in NSDI of diatomic molecules [@F2009]. Finally, electron-electron correlation is at the heart of this phenomenon and cannot be ignored. In contrast, for high-order harmonic generation one may, to first approximation, only consider a single active electron and the highest occupied molecular orbital. In fact, only very recently have multiple orbitals and electron-electron correlation been incorporated in the modeling of molecular high-order harmonic generation [@CarlaBrad; @Patchkovskii; @Lin; @Olga2009; @Haessler2010]. For the above-mentioned reasons, NSDI of molecules has been increasingly investigated over the past few years. In fact, there has been experimental evidence that the orbital symmetry [@NSDIsymm] and the alignment angle [@NSDIalign] affect the shapes of the electron-momentum distributions. Since then, many theoretical studies have also been performed for molecules, involving, for instance, classical-trajectory methods [@classical], the numerical solution of the time-dependent Schrödinger equation in reduced-dimensionality models [@TDSEmol], and semi-analytical approaches based on the strong-field approximation [@Smatrix_mol; @NSDIInterference; @F2009]. Semi-analytical models for NSDI in molecules, however, focus on the electron-impact ionization rescattering mechanism. For instance, in our previous paper [@NSDIInterference] we addressed the influence of the orbital symmetry and the molecular alignment with respect to the laser-field polarization on NSDI of diatomic molecules for the electron-impact ionization mechanism. We showed that the electron-momentum distributions exhibit interference maxima and minima due to electron emission at spatially separated centers. 
Such fringes were positioned at $p_{1||}+p_{2||}=\mathrm{const.}$, i.e., parallel to the antidiagonal of the plane spanned by the electron-momentum components $p_{n\parallel }$, $n=1,2$, parallel to the laser-field polarization. They were sharpest if the molecule was aligned along the direction of the field, i.e., for vanishing alignment angle. As this angle increased, the fringes got increasingly blurred until they were completely washed out for perpendicular alignment. Apart from that, several recent studies have found that the core dynamics, in particular excitation, is important for high-harmonic generation in molecules [@Olga2009; @Haessler2010], and in particular for attosecond imaging of matter. We expect this also to be the case for nonsequential double ionization. For that reason, in the past few years, we have focused on the RESI mechanism. We have shown that the shape of the electron-momentum distributions depends very strongly on the initial and excited bound states of the second electron [@Shaaran; @RESIM], in fact far more critically than for electron-impact ionization [@Carla2]. If this is the case already for single atoms, one expects this dependence to be even more critical for molecules. For RESI, we expect the electron-momentum distributions to be affected very strongly by the geometry of the bound-state wavefunctions, not only because the excitation process strongly depends on them, but also due to the fact that the second electron reaches the continuum by tunneling. It is by now well known that this ionization mechanism is strongly influenced by the presence of nodal planes or the directionality of a particular molecular orbital. For instance, for HHG the nodal plane of a $\pi $ state suppresses tunnel ionization when it coincides with the polarization axis (see, e.g., [@Olga2009; @CarlaBrad; @Haessler2010; @Dejan2009; @eliot; @BradCarla]). 
In the present paper, we perform a systematic analysis of quantum-interference effects in NSDI of diatomic molecules considering the RESI mechanism. We construct a semi-analytical model, based on the strong-field approximation (SFA), in which an electron tunnels from the HOMO of a neutral molecule and rescatters with the HOMO of its singly ionized counterpart. Thereby, we assume that the second electron is excited to the lowest unoccupied molecular orbital (LUMO). We investigate the influence of such orbitals and of the alignment angle on the NSDI electron-momentum distributions. Specifically, we choose species for which these orbitals have different geometries and parities. Furthermore, we address the question of whether well-defined interference patterns such as those observed in ATI or HHG computations may also be obtained for NSDI in the context of the RESI mechanism, and, if so, under which conditions. These studies are complementary to those performed in our recent work on RESI [@Shaaran; @RESIM], where we show that, for single atoms, the shapes of the electron-momentum distributions carry information about the bound state from which the second electron leaves and the state to which it is excited. This paper is organized as follows. In Sec. \[transampl\] we discuss the expression for the RESI transition amplitude, including its general expression (Sec. \[generalexpr\]), the saddle-point equations obtained from it (Sec. \[saddle\]) and the specific prefactors for a diatomic molecule using Gaussian orbital basis sets (Sec. \[prefactors\]). At the end of this section (Sec. \[InterferenceCondition\]), we derive a general two-center interference condition for the RESI mechanism. Subsequently, in Sec. \[results\], we compute electron-momentum distributions, with emphasis on the two-center interference (Sec. \[alignment\]) and the influence of different molecular orbitals (Sec. \[orbits\]). Finally, in Sec. 
\[conclusions\], we state the main conclusions to be drawn from this work. Strong-field approximation transition amplitude {#transampl} =============================================== General expressions {#generalexpr} ------------------- The SFA transition amplitude describing the RESI mechanism reads (for details on the derivation see [@Shaaran]) $$\begin{aligned} M(\mathbf{p}_{n},t,t^{\prime },t^{^{\prime \prime }}) &=&\int_{-\infty }^{\infty }\hspace*{-0.2cm}dt\hspace*{-0.1cm}\int_{-\infty }^{t}\hspace*{-0.3cm}dt^{\prime }\hspace*{-0.1cm}\int_{-\infty }^{t^{\prime }}\hspace*{-0.3cm}dt^{^{\prime \prime }}\hspace*{-0.1cm}\int d^{3}k \notag \\ &&V_{\mathbf{p}_{2}e}V_{\mathbf{p}_{1}e,\mathbf{k}g}V_{\mathbf{k}g}e^{iS(\mathbf{p}_{n},\mathbf{k},t,t^{\prime },t^{\prime \prime })}, \label{Mp}\end{aligned}$$ with the action $$\begin{aligned} S(\mathbf{p}_{n},\mathbf{k},t,t^{\prime },t^{^{\prime \prime }}) &=&-\int_{t}^{\infty }\hspace{-0.1cm}\frac{[\mathbf{p}_{2}+\mathbf{A}(\tau )]^{2}}{2}d\tau \notag \\ &&-\int_{t^{^{\prime }}}^{\infty }\hspace{-0.1cm}\frac{[\mathbf{p}_{1}+\mathbf{A}(\tau )]^{2}}{2}d\tau \notag \\ &&-\int_{t^{^{\prime \prime }}}^{t^{^{\prime }}}\hspace{-0.1cm}\frac{[\mathbf{k}+\mathbf{A}(\tau )]^{2}}{2}d\tau \notag \\ &&+E_{1g}t^{^{\prime \prime }}+E_{2g}t^{^{\prime }}+E_{2e}(t-t^{^{\prime }}) \label{singlecS}\end{aligned}$$ and the prefactors $$\begin{aligned} V_{\mathbf{k}g} &=&\left\langle \mathbf{\tilde{k}}(t^{\prime \prime })\right\vert V\left\vert \psi _{g}^{(1)}\right\rangle =\frac{1}{(2\pi )^{3/2}} \notag \\ &&\times \int d^{3}r_{1}V_{0}(\mathbf{r}_{1})\exp [-i\mathbf{\tilde{k}}(t^{\prime \prime })\cdot \mathbf{r}_{1}]\psi _{g}^{(1)}(\mathbf{r}_{1}) \label{Vkg}\end{aligned}$$ $$\begin{aligned} V_{\mathbf{p}_{1}e\mathbf{,k}g} &=&\left\langle \mathbf{\tilde{p}}_{1}\left( t^{\prime }\right) ,\psi _{e}^{(2)}\right\vert V_{12}\left\vert \mathbf{\tilde{k}}(t^{\prime }),\psi _{g}^{(2)}\right\rangle =\frac{1}{(2\pi )^{3}} \notag \\ 
&&\times \int \int d^{3}r_{2}d^{3}r_{1}\exp [-i(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{r}_{1}] \notag \\ &&\times V_{12}(\mathbf{r}_{1},\mathbf{r}_{2})[\psi _{e}^{(2)}(\mathbf{r}_{2})]^{\ast }\psi _{g}^{(2)}(\mathbf{r}_{2}) \label{Vp1e,kg}\end{aligned}$$ and $$\begin{aligned} V_{\mathbf{p}_{2}e} &=&\left\langle \mathbf{\tilde{p}}_{2}\left( t\right) \right\vert V_{\mathrm{ion}}\left\vert \psi _{e}^{(2)}\right\rangle =\frac{1}{(2\pi )^{3/2}} \notag \\ &&\times \int d^{3}r_{2}V_{\mathrm{ion}}(\mathbf{r}_{2})\exp [-i\mathbf{\tilde{p}}_{2}(t)\cdot \mathbf{r}_{2}]\psi _{e}^{(2)}(\mathbf{r}_{2}). \label{Vp2e}\end{aligned}$$ Eq. (\[Mp\]) describes the physical process in which, at a time $t^{\prime \prime },$ the first electron tunnels from a bound state $|\psi _{g}^{(1)}\rangle$ into a Volkov state $|\mathbf{\tilde{k}}(t^{\prime \prime })\rangle$. Then the released electron propagates in the continuum from $t^{\prime \prime }$ to $t^{\prime }$, and it is driven towards its parent molecule. Upon return, the electron scatters inelastically with the core at $t^{\prime }$ and, through the interaction $V_{12},$ promotes the second electron from the bound state $|\psi _{g}^{(2)}\rangle$ to the excited state $|\psi _{e}^{(2)}\rangle$. Finally, at a later time $t,$ the second electron, initially in the bound excited state $|\psi _{e}^{(2)}\rangle,$ is released by tunneling ionization into a Volkov state $|\mathbf{\tilde{p}}_{2}\left( t\right)\rangle$. In the above-stated equations, $E_{ng}$ $(n=1,2)$ are the ionization potentials of the ground state, $E_{ne}$ $(n=1,2)$ denote the absolute values of the excited-state energies and the potentials $V_{0}(\mathbf{r}_{1})$ and $V_{\mathrm{ion}}(\mathbf{r}_{2})$ correspond to the neutral molecule and the singly ionized molecular species, respectively. 
Here, the final electron momenta are denoted by $\mathbf{p}_{n}$ $(n=1,2)$. All the information about the binding potentials viewed by the first and second electrons and the electron-electron interaction is embedded in the form factors (\[Vkg\]), (\[Vp2e\]) and (\[Vp1e,kg\]), respectively. Assuming that the electron-electron interaction depends only on the difference between the two electron coordinates, i.e., if $V_{12}(\mathbf{r}_{1},\mathbf{r}_{2})=V_{12}(\mathbf{r}_{1}-\mathbf{r}_{2}),$ Eq. (\[Vp1e,kg\]) may be rewritten as $$\begin{aligned} V_{\mathbf{p}_{1}e\mathbf{,k}g} &=&\frac{V_{12}(\mathbf{p}_{1}-\mathbf{k})}{(2\pi )^{3/2}} \notag \\ &&\times \int d^{3}r_{2}e^{-i(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{r}_{2}}[\psi _{e}^{(2)}(\mathbf{r}_{2})]^{\ast }\psi _{g}^{(2)}(\mathbf{r}_{2}), \label{resc1st}\end{aligned}$$ with $$V_{12}(\mathbf{p}_{1}-\mathbf{k})=\frac{1}{(2\pi )^{3/2}}\int d^{3}re^{-i(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{r}}V_{12}(\mathbf{r})$$ and $\mathbf{r}=\mathbf{r}_{1}-\mathbf{r}_{2}.$ Within the framework of the SFA these prefactors are gauge dependent. Specifically, in the length gauge $\mathbf{\tilde{p}}_{n}\left( \tau \right) =\mathbf{p}_{n}+\mathbf{A}(\tau )$ and $\mathbf{\tilde{k}}(\tau )=\mathbf{k}+\mathbf{A}(\tau )$ $(\tau =t^{\prime },t^{\prime \prime }),$ while in the velocity gauge $\mathbf{\tilde{p}}_{n}\left( \tau \right) =\mathbf{p}_{n}$ and $\mathbf{\tilde{k}}(\tau )=\mathbf{k}.$ This is due to the fact that the gauge transformation cancels out with the minimal coupling in the latter case. In practice, however, for the specific situation addressed in this work, both gauges lead to very similar results. This happens as the above-stated phase differences will cancel out in $V_{\mathbf{p}_{1}e,\mathbf{k}g}$, and, in $V_{\mathbf{p}_{2}e}$, $\mathbf{A}(t)\simeq 0$ for the parameter range of interest (for more details see [@RESIM]). 
In the following, unless strictly necessary, we will drop the time dependence in $\mathbf{\tilde{p}}_{n}\left( \tau \right)$. Saddle-point analysis {#saddle} --------------------- Subsequently, the transition amplitude (\[Mp\]) is evaluated employing saddle-point methods. For that purpose, one must find the coordinates $(t_{s},t_{s}^{\prime },t_{s}^{\prime \prime },\mathbf{k}_{s})$ for which $S(\mathbf{p}_{n},\mathbf{k},t,t^{\prime },t^{\prime \prime })$ is stationary, i.e., for which the conditions $\partial _{t}S(\mathbf{p}_{n},\mathbf{k},t,t^{\prime },t^{\prime \prime })=\partial _{t^{\prime }}S(\mathbf{p}_{n},\mathbf{k},t,t^{\prime },t^{\prime \prime })=\partial _{t^{^{\prime \prime }}}S(\mathbf{p}_{n},\mathbf{k},t,t^{\prime },t^{\prime \prime })=0$ and $\partial _{\mathbf{k}}S(\mathbf{p}_{n},\mathbf{k},t,t^{\prime },t^{\prime \prime })=\mathbf{0}$ are satisfied. This leads to the equations $$\left[ \mathbf{k}+\mathbf{A}(t^{\prime \prime })\right] ^{2}=-2E_{1g}, \label{saddle1}$$ $$\mathbf{k}=-\frac{1}{t^{\prime }-t^{\prime \prime }}\int_{t^{\prime \prime }}^{t^{\prime }}d\tau \mathbf{A}(\tau ) \label{saddle2}$$ $$\lbrack \mathbf{p}_{1}+\mathbf{A}(t^{\prime })]^{2}=\left[ \mathbf{k}+\mathbf{A}(t^{\prime })\right] ^{2}-2(E_{2g}-E_{2e}), \label{saddle3}$$ and $$\lbrack \mathbf{p}_{2}+\mathbf{A}(t)]^{2}=-2E_{2e}, \label{saddle4}$$ which, as discussed below, provide additional physical insight into the problem. The saddle-point Eq. (\[saddle1\]) expresses the conservation of energy at the time $t^{\prime \prime }$ at which the first electron tunnel ionizes. Eq. (\[saddle2\]) constrains the intermediate momentum $\mathbf{k}$ of the first electron and ensures that the electron returns to the site of its release, which lies at the geometrical center of the molecule. Eq. 
(\[saddle3\]) expresses the conservation of energy at a time $t^{\prime }$, when the first electron rescatters inelastically with its parent ion and gives part of its kinetic energy $E_{\mathrm{ret}}(t^{\prime })=\left[ \mathbf{k}+\mathbf{A}(t^{\prime })\right] ^{2}/2$ to the core to excite the second electron from a state with energy $E_{2g}$ to a state with energy $E_{2e}$. Immediately after rescattering, the first electron reaches the detector with momentum $\mathbf{p}_{1}$. Finally, Eq. (\[saddle4\]) describes the fact that the second electron tunnels at a time $t$ from the excited state with energy $E_{2e}$ and reaches the detector with momentum $\mathbf{p}_{2}.$ As a consequence of the fact that tunneling has no classical counterpart, these equations possess no real solutions (for more details see [@Shaaran]). Molecular prefactors {#prefactors} -------------------- In this work, we consider that all molecular orbitals are frozen apart from the HOMO and the LUMO. We also assume frozen nuclei and take a linear combination of atomic orbitals (LCAO) to construct approximate wave functions for the active orbitals. This implies that the molecular bound-state wave function for each electron reads $$\psi ^{(n)}(\mathbf{r}_{n})=\sum_{\alpha }c_{\alpha }[\phi _{\alpha }^{(n)}(\mathbf{r}_{n}+\mathbf{R}/2)+(-1)^{l_{\alpha }+\lambda _{\alpha }}\phi _{\alpha }^{(n)}(\mathbf{r}_{n}-\mathbf{R}/2)] \label{Wave Function}$$ where $R$ and $l_{\alpha }$ denote the internuclear separation and the orbital quantum numbers, respectively. The index $n=1,2$ refers to the electron in question. The index $\lambda _{\alpha }=0$ applies to gerade symmetry and $\lambda _{\alpha }=1$ to ungerade symmetry. 
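The symmetry encoded in Eq. (\[Wave Function\]) can be illustrated numerically: for an $s$-type atomic orbital ($l_\alpha=0$), the two-center combination is even under $\mathbf{r}\to-\mathbf{r}$ for $\lambda_\alpha=0$ (gerade) and odd for $\lambda_\alpha=1$ (ungerade). The Python sketch below uses an illustrative internuclear vector and exponent, not values from the text.

```python
import numpy as np

R = np.array([0.0, 0.0, 2.068])  # internuclear vector (illustrative, a.u.)

def phi_s(r, zeta=1.0):
    # s-type Gaussian atomic orbital (l_alpha = 0), centered at the origin
    return np.exp(-zeta * np.dot(r, r))

def psi(r, lam):
    # LCAO two-center combination; lam = 0 (gerade), lam = 1 (ungerade)
    return phi_s(r + R / 2) + (-1)**lam * phi_s(r - R / 2)

r = np.array([0.3, -0.1, 0.7])
even_diff = psi(-r, 0) - psi(r, 0)   # vanishes for gerade symmetry
odd_sum = psi(-r, 1) + psi(r, 1)     # vanishes for ungerade symmetry
```

For $p$-type orbitals the extra factor $(-1)^{l_\alpha}$ in Eq. (\[Wave Function\]) compensates the odd parity of the atomic orbital itself, so the same gerade/ungerade classification holds.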
The binding potential of this molecule, as seen by each electron, is given by $$V_{\varkappa }(\mathbf{r}_{n})=\mathcal{V}_{\varkappa }(\mathbf{r}_{n}-\mathbf{R}/2)+\mathcal{V}_{\varkappa }(\mathbf{r}_{n}+\mathbf{R}/2) \label{binding potential}$$ where the subscript $\varkappa =0$ or $\varkappa =\mathrm{ion}$ refers either to the neutral molecule or to its ionic counterpart, respectively, and $\mathcal{V}_{\varkappa }(\mathbf{r}_{n})=Z_{\mathrm{eff}}/r_{n}$ is the potential at each center in the molecule. Thereby, $Z_{\mathrm{eff}}$ is the effective core charge as seen by each of the two active electrons. In this paper the wave function $\phi _{\alpha }^{(n)}$ is approximated by a Gaussian basis set, $$\phi _{\alpha }^{(n)}(\mathbf{r}_{n})=\sum_{j}b_{j}^{(n)}x^{l_{\alpha }}y^{l_{\alpha }}z^{l_{\alpha }}\exp [-\zeta _{j}r_{n}^{2}]. \label{Gaussian basis}$$ The coefficients $b_{j}$ and $c_{\alpha }$ and the exponents $\zeta _{j}$ can be extracted either from existing literature or from quantum chemistry codes. We compute these coefficients using GAMESS-UK [@GAMESS]. In our basis set, we took only $s$ and $p$ states. This means that, in all the expressions that follow, $l_{\alpha }$ and $l_{\beta }$ are either $0$ or $1$. The above-stated assumptions lead to the form factors $$\begin{aligned} V_{\mathbf{p}_{1}e\mathbf{,k}g} &=&\frac{V_{12}(\mathbf{p}_{1}-\mathbf{k})}{(2\pi )^{3/2}}\sum_{\alpha }\sum_{\beta }[e^{i(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{R}/2} \notag \\ &&+(-1)^{l_{\alpha }+l_{\beta }+\lambda _{\alpha }+\lambda _{\beta }}e^{-i(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{R}/2}]\mathcal{I}_{1}, \label{vp1ke1}\end{aligned}$$ where $$\mathcal{I}_{1}=\int d^{3}r_{2}e^{-i(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{r}_{2}}[\phi _{\alpha }^{(2)}(\mathbf{r}_{2})]^{\ast }\phi _{\beta }^{(2)}(\mathbf{r}_{2})$$ and $$V_{\mathbf{p}_{2}e}=\frac{4\pi }{(2\pi )^{3/2}}\sum_{\alpha }\left[ e^{i\mathbf{\tilde{p}}_{2}\cdot \mathbf{R}/2}+(-1)^{l_{\alpha }+\lambda _{\alpha }}e^{-i\mathbf{\tilde{p}}_{2}\cdot \mathbf{R}/2}\right] \mathcal{I}_{2}, \label{vp2e1}$$ where $$\mathcal{I}_{2}=\int d^{3}r_{2}\mathcal{V}_{\mathrm{ion}}(\mathbf{r}_{2})e^{-i\mathbf{\tilde{p}}_{2}\cdot \mathbf{r}_{2}}\phi _{\alpha }^{(2)}(\mathbf{r}_{2}).$$ In general, the form factor (\[Vkg\]) does not affect the shape of the electron-momentum distributions. This is particularly true when the first electron tunnels from an orbital with no nodal planes, such as a $\sigma _{g}$ orbital [@NSDIInterference]. However, one has to be careful when the electron tunnels from any orbital with at least one nodal plane, such as a $\pi $ orbital, as this would lead to a suppression of ionization for specific alignment angles. In the following, we will write the above-stated equations as functions of the electron-momentum components $p_{n\parallel }$ and $\mathbf{p}_{n\perp }$ parallel and perpendicular to the laser-field polarization. Physically, we are investigating a diatomic molecule whose main axis is rotated by an angle $\theta $ with respect to the direction of the laser-field polarization. 
Hence, we are dealing with two frames of reference, i.e., the molecular frame and the laser-field frame. The electron momenta in terms of their parallel and perpendicular components with regard to the laser-field polarization read $$\mathbf{p}_{n}=p_{n||}\hat{e}_{z^{\prime }}+p_{n\perp }\cos \varphi \hat{e}_{x^{\prime }}+p_{n\perp }\sin \varphi \hat{e}_{y^{\prime }},$$ where we assumed that the laser field is polarized along the $z^{\prime }$ axis, the coordinates $x^{\prime }$ and $y^{\prime }$ define the plane perpendicular to the laser-field polarization and $\varphi $ is the azimuthal angle. In order, however, to compute the momentum-space wavefunctions for this molecule, we need the momentum coordinates in the frame of reference of the molecule. The molecular coordinates $x,y$ and $z$ can be obtained by a coordinate rotation around the $x$ axis. In this case, the momenta of the electrons in terms of parallel and perpendicular components in this latter frame of reference will be ![Schematic representation of the molecule and laser-field frames of reference, represented by the black and red sets of axes $x,y,z$ and $x^{\prime},y^{\prime},z^{\prime}$, respectively. The two centers of the molecule are separated by $R$ along the $z$ axis of the molecule, and their positions are indicated by the blue circles in the figure. The field $\mathbf{A}(t)$ is polarized along the $z^{\prime}$ axis, and $\protect\theta$ shows the alignment angle of the molecule with respect to the laser field.[]{data-label="Moleculefigure"}](Fig1.eps){width="9cm"} $$\begin{aligned} \mathbf{p}_{n} &=&(p_{n||}\cos \theta +p_{n\perp }\sin \theta \sin \varphi )\hat{e}_{z}+p_{n\perp }\cos \varphi \hat{e}_{x} \notag \\ &&\text{ \ }+(p_{n\perp }\cos \theta \sin \varphi -p_{n||}\sin \theta )\hat{e}_{y}. \label{newpcoords}\end{aligned}$$ This implies that the momentum components $p_{nx},p_{ny}$ and $p_{nz}$ are defined by Eq. 
(\[newpcoords\]) and that $$\mathbf{p}_{n}\cdot \mathbf{R}/2=(p_{n||}\cos \theta +p_{n\perp }\sin \theta \sin \varphi )R/2.$$ A schematic representation of both the field and molecular sets of coordinates is presented in Fig. \[Moleculefigure\]. Below, we provide the explicit expressions for the integrals $\mathcal{I}_{n}(n=1,2)$ in the prefactors (\[vp1ke1\]) and (\[vp2e1\]), for the specific types of orbitals employed in this work.

### Excitation $\protect\sigma \rightarrow \protect\sigma $

If the second electron is excited from a $\sigma $ to a $\sigma$ orbital, both integrals will have the forms $$\begin{aligned}
\mathcal{I}_{1} &=&\sum_{j,j^{\prime }}\frac{b_{j}^{(1)}b_{j^{\prime }}^{(1)}\pi ^{3/2}(-i)^{l_{\alpha }+l_{\beta }}}{2^{l_{\alpha }+l_{\beta }}(\zeta _{j}+\zeta _{j^{\prime }})^{3/2+l_{\alpha }+l_{\beta }}} \notag \\
&&\times \exp [-\frac{(\mathbf{p}_{1}-\mathbf{k})^{2}}{4(\zeta _{j}+\zeta _{j^{\prime }})}]\,\Upsilon (l_{\alpha },l_{\beta }),\end{aligned}$$ where $$\Upsilon (l_{\alpha },l_{\beta })=\left\{
\begin{array}{c}
1,\text{ \ \ \ }l_{\alpha }+l_{\beta }=0 \\
(\mathbf{p}_{1}-\mathbf{k})_{z},\text{\ \ \ }l_{\alpha }+l_{\beta }=1 \\
2(\zeta _{j}+\zeta _{j^{\prime }})-(\mathbf{p}_{1}-\mathbf{k})^2_{z},\text{ \ \ }l_{\alpha }+l_{\beta }=2
\end{array}
\right. ,$$ and $$\mathcal{I}_{2}=\sum_{j^{\prime }}b_{j^{\prime }}^{(2)}(-i)^{l_{\beta }}G(l_{\beta }), \label{i2sigm}$$ where $$G(l_{\beta })=\left\{
\begin{array}{c}
2\sqrt{\pi }I_{r}^{(l_{\beta }=0)},\ l_{\beta }=0 \\
\left( \tilde{p}_{2z}/\tilde{p}_{2}\right) I_{r}^{(l_{\beta }=1)},\ l_{\beta }=1
\end{array}
\right. . \label{i2sigmsigm}$$ In Eq. (\[i2sigmsigm\]), $I_{r}^{(l_{\beta }=0)}$ and $I_{r}^{(l_{\beta }=1)}$ indicate the radial integrals $$I_{r}^{(l_{\beta })}=\int_{0}^{\infty }r^{l_{\beta }+1}j_{l_{\beta }}(\tilde{p}_{2}r)\exp [-\zeta _{j}r^{2}]dr,$$ where $j_{l_{\beta }}(\tilde{p}_{2}r)$ denotes spherical Bessel functions.
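The radial integrals above are one-dimensional and numerically benign, since the Gaussian factor cuts off the integrand. As a rough illustration (our own sketch with hypothetical function names, not the evaluation used in the actual computation, and restricted to $l_{\beta }\leq 1$), they can be evaluated with a simple Simpson rule:

```python
import math

def j_sph(l, x):
    """Spherical Bessel functions j_0 and j_1 with a small-x series guard."""
    if abs(x) < 1e-6:
        return 1.0 - x * x / 6.0 if l == 0 else x / 3.0
    if l == 0:
        return math.sin(x) / x
    return math.sin(x) / (x * x) - math.cos(x) / x

def radial_integral(l_beta, p2, zeta, rmax=12.0, n=4000):
    """I_r^(l_beta) = int_0^inf r^(l_beta+1) j_(l_beta)(p2 r) exp(-zeta r^2) dr,
    evaluated with a composite Simpson rule; the Gaussian factor makes the
    truncation at rmax harmless for exponents zeta of order one."""
    h = rmax / n
    def f(r):
        return r ** (l_beta + 1) * j_sph(l_beta, p2 * r) * math.exp(-zeta * r * r)
    s = f(0.0) + f(rmax)
    for i in range(1, n):
        s += f(i * h) * (4 if i % 2 else 2)
    return s * h / 3.0
```

As a check, for $\tilde{p}_{2}\to 0$ and $l_{\beta }=0$ the Bessel function tends to one and the integral reduces to $\int_{0}^{\infty }re^{-\zeta r^{2}}dr=1/(2\zeta )$.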
### Excitation $\protect\sigma \rightarrow \protect\pi $

We also consider that the second electron is excited from a $\sigma$ orbital to a $\pi$ orbital. In this case, these orbitals are degenerate. For that reason, we choose to consider a coherent superposition of the $\pi _{x}$ and $\pi _{y}$ orbitals carrying equal weights. This gives $$\begin{aligned}
\mathcal{I}_{1} &=&\sum_{j,j^{\prime }}b_{j}^{(1)}b_{j^{\prime }}^{(1)}\pi ^{3/2}\left[ (-i(\mathbf{p}_{1}-\mathbf{k})_{y})^{l_{\beta }}+(-i(\mathbf{p}_{1}-\mathbf{k})_{x})^{l_{\beta }}\right] \notag \\
&&\times \frac{(-i(\mathbf{p}_{1}-\mathbf{k})_{z})^{l_{\alpha }}}{2^{l_{\alpha }+l_{\beta }}(\zeta _{j}+\zeta _{j^{\prime }})^{3/2+l_{\alpha }+l_{\beta }}}\exp [-\frac{(\mathbf{p}_{1}-\mathbf{k})^{2}}{4(\zeta _{j}+\zeta _{j^{\prime }})}].\end{aligned}$$ One should note that, if the electron is excited from a $\pi $ to a $\sigma $ orbital, $\mathcal{I}_{1}$ will also have this form. In the second prefactor, $$\mathcal{I}_{2}=\sum_{j^{\prime }}b_{j^{\prime }}^{(2)}(-i)^{l_{\beta }}\left[ \frac{(\tilde{p}_{2y})^{l_{\beta }}+(\tilde{p}_{2x})^{l_{\beta }}}{\tilde{p}_{2}}\right] I_{r}^{(l_{\beta })},$$ with $l_{\beta }=1$. Throughout, $(\mathbf{p}_{1}-\mathbf{k})_{\varkappa }$ and $\tilde{p}_{2\varkappa }$, with $\varkappa =x,y,z$, are defined according to Eq. (\[newpcoords\]).

Interference Condition {#InterferenceCondition}
----------------------

Here we provide a general interference condition, which takes into account the structure of the orbitals. This includes $s$ $p$ mixing and the orbital parity. The expressions that follow are easily derived if the exponentials in Eqs. (\[vp1ke1\]) and (\[vp2e1\]) are expanded in terms of trigonometric functions.
In this case, the prefactor (\[vp1ke1\]) can be written as $$V_{\mathbf{p}_{1}e\mathbf{,k}g}=\frac{V_{12}(\mathbf{p}_{1}-\mathbf{k})}{(2\pi )^{3/2}}\sum_{\alpha }\sum_{\beta }\sqrt{C_{+}^{2}-C_{-}^{2}}\sin [\xi _{1}+(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{R}/2],$$ with $$\xi _{1}=\arctan [\frac{-iC_{+}}{C_{-}}]$$ and $$C_{\pm }=1\pm (-1)^{l_{\alpha }+l_{\beta }+\lambda _{\alpha }+\lambda _{\beta }}.$$ A similar procedure for high-order harmonic generation has been adopted in [@Dejan2009]. Interference minima are present if $$\xi _{1}+(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{R}/2=m\pi , \label{intrVp1}$$ where $m$ is an integer. Similarly, interference maxima are obtained for $$\xi _{1}+(\mathbf{p}_{1}-\mathbf{k})\cdot \mathbf{R}/2=(2m+1)\pi /2.$$ We will focus on the minima given by Eq. (\[intrVp1\]) as they are much easier to observe. If this equation is written in terms of the electron momentum component $(\mathbf{p}_{1}-\mathbf{k})_{z}$ parallel to the molecular axis, we find $$\left[ (p_{1||}-k)\cos \theta +p_{1\perp }\sin \theta \sin \varphi \right] R/2=m\pi -\xi _{1}.$$ The above-stated equation shows that the momentum component $p_{1||}$ parallel to the laser-field polarization will lead to well-defined interference fringes approximately at $$p_{1||}=\frac{2(m\pi -\xi _{1})}{R\cos \theta }+k. \label{intVp1par}$$ This means that, in the plane $p_{1||}p_{2||}$, these minima will be at $p_{1||}=const.$, i.e., along lines parallel to the $p_{2||}$ axis. The perpendicular component $p_{1\perp }$ will mainly cause a blurring in such fringes, when the azimuthal angle is integrated over. Extreme limits will be found for alignment angle $\theta =0$, with sharp two-center patterns, and $\theta =90^{\circ }$, when they get washed out.
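For concreteness, the fringe positions predicted by Eq. (\[intVp1par\]) are trivial to tabulate. The short sketch below is our own illustration (the function name is hypothetical), not code from the actual computation:

```python
import math

def fringe_minima(xi1, R, theta, k, m_values):
    """Parallel-momentum positions of the interference minima,
    p_1|| = 2(m*pi - xi1)/(R cos(theta)) + k, following Eq. (intVp1par)."""
    c = math.cos(theta)
    return [2.0 * (m * math.pi - xi1) / (R * c) + k for m in m_values]

# Li2-like internuclear distance R = 4.7697 a.u., parallel alignment,
# xi1 = 0 and k = 0: the minima sit at integer multiples of 2*pi/R.
print(fringe_minima(0.0, 4.7697, 0.0, 0.0, range(-2, 3)))
```

Note that for $\theta \to 90^{\circ }$ the factor $\cos \theta $ in the denominator pushes the fringe spacing to infinity, consistent with the washing out of the two-center patterns discussed above.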
Following the same line of argument, $$V_{\mathbf{p}_{2}e}=\frac{4\pi }{(2\pi )^{3/2}}\sum_{\alpha }\sqrt{D_{+}^{2}-D_{-}^{2}}\sin [\xi _{2}+\mathbf{\tilde{p}}_{2}\cdot \mathbf{R}/2]\mathcal{I}_{2}, \label{vp2trig}$$ with $$\xi _{2}=\arctan [\frac{-iD_{+}}{D_{-}}]$$ and $$D_{\pm }=1\pm (-1)^{l_{\alpha }+\lambda _{\alpha }}.$$ Interference minima are present for Eq. (\[vp2trig\]) if $$\xi _{2}+\mathbf{\tilde{p}}_{2}\cdot \mathbf{R}/2=m\pi . \label{intrVp2}$$ Likewise, there will be interference fringes for $$\tilde{p}_{2||}=\frac{2(m\pi -\xi _{2})}{R\cos \theta },$$ i.e., along lines parallel to the $p_{1||}$ axis in the plane spanned by the parallel momentum components $p_{1||},$ $p_{2||}$. In the velocity and the length gauges, $\tilde{p}_{2||}=p_{2||}$ and $p_{2||}+A(t),$ respectively. Since, however, $A(t)\simeq 0$ at the electron tunneling time, in practice there will be very little difference. The perpendicular momentum components will lead to a blurring in the fringes.

Electron momentum distributions {#results}
===============================

In this section, we compute electron momentum distributions, as functions of the momentum components $(p_{1\parallel },p_{2\parallel })$ parallel to the laser-field polarization. We assume the external laser field to be a monochromatic wave linearly polarized along the axis $z^{\prime }$. Explicitly, $$\mathbf{E}(t)=\varepsilon _{0}\sin \omega t\,\hat{e}_{z^{\prime }}.$$ This approximation is reasonable for laser pulses of the order of ten cycles or longer [@X.Liu].
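For reference, the momentum scales in the figures below are set by the ponderomotive energy $U_{p}=\varepsilon _{0}^{2}/(4\omega ^{2})$ of this field. The minimal helper below is our own sketch (not part of the actual computation); it assumes the standard conversion constant $3.509\times 10^{16}\ \mathrm{W/cm^{2}}$ for a peak field of one atomic unit:

```python
import math

I_ATOMIC = 3.509e16  # W/cm^2 corresponding to a peak field of 1 a.u.

def ponderomotive_energy(intensity_wcm2, omega):
    """U_p = eps0^2 / (4 omega^2) in atomic units, for a linearly
    polarized monochromatic field of given intensity and frequency."""
    eps0_sq = intensity_wcm2 / I_ATOMIC
    return eps0_sq / (4.0 * omega ** 2)

up = ponderomotive_energy(4.6e13, 0.057)   # Li2 parameters used below
print(up, 2.0 * math.sqrt(up))             # U_p and the scale 2*sqrt(U_p)
```

For the Li$_2$ parameters used below ($I=4.6\times 10^{13}\,\mathrm{W/cm^{2}}$, $\omega =0.057$ a.u.) this gives $U_{p}\approx 0.1$ a.u., i.e., $2\sqrt{U_{p}}\approx 0.64$ a.u.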
These distributions, when integrated over the transverse momentum components, read $$\begin{aligned}
F(p_{1\parallel },p_{2\parallel }) &=&\iint d^{2}p_{1\perp }d^{2}p_{2\perp }|M_{R}(\mathbf{p}_{1},\mathbf{p}_{2}) \label{distributions} \\
&+&M_{L}(\mathbf{p}_{1},\mathbf{p}_{2})+\mathbf{p}_{1}\leftrightarrow \mathbf{p}_{2}|^{2}, \notag\end{aligned}$$ where $M_{R}(\mathbf{p}_{1},\mathbf{p}_{2})$ and $M_{L}(\mathbf{p}_{1},\mathbf{p}_{2})$ refer to the right and left peak in the electron momentum distributions, respectively, the transition amplitude $M_{R}(\mathbf{p}_{1},\mathbf{p}_{2})$ is given by Eq. (\[Mp\]), and $d^{2}p_{n\perp }=p_{n\perp }dp_{n\perp }d\varphi _{p_{n}}.$ For a monochromatic field, we can use the symmetry $\mathbf{A}(t)=-\mathbf{A}(t\pm T/2),$ where $T=2\pi /\omega $ corresponds to a field cycle, in order to simplify the computation of the electron momentum distributions. This is explained in detail in our previous work [@RESIM]. We also symmetrize the above-stated distributions with respect to the particle exchange $\mathbf{p}_{1}\leftrightarrow \mathbf{p}_{2}.$ To a good approximation, it is sufficient to consider the incoherent sum in Eq. (\[distributions\]) as the interference terms between the right and left peaks practically get washed out upon the transverse momentum integration (see Appendix B in [@RESIM]). In the following, we will compute electron momentum distributions for $\mathrm{Li}_{2}$ and $\mathrm{N}_{2}.$ For all cases, we assume that the electron-electron interaction is of contact type, i.e., $V_{12}=\delta (\mathbf{r}_{1}-\mathbf{r}_{2}).$ This will avoid a further momentum bias in the electron-electron distributions as it leads to $V_{12}(\mathbf{p}_{1}-\mathbf{k})=const.$ and allow us to investigate the influence of the target structure alone.
For a long-range potential, $V_{12}(\mathbf{p}_{1}-\mathbf{k})$ would be momentum dependent, and hence mask the features we intend to investigate.

Interference effects and $s$ $p$ mixing {#alignment}
---------------------------------------

We will commence by investigating whether the interference conditions derived in Sec. \[InterferenceCondition\] hold. For that purpose, we must have non-negligible tunneling ionization for parallel-aligned molecules, as this is the situation for which the fringes are expected to be sharpest. Hence, one must consider a target for which neither the HOMO nor the LUMO exhibits nodal planes along the internuclear axis. Therefore, we assume that the first electron tunnels from the HOMO in $\mathrm{Li}_2$ and rescatters inelastically with $\mathrm{Li}_{2}^{+},$ exciting the second electron from its HOMO ($2\sigma _{g}$) to its LUMO ($2\sigma _{u}$). In order to get a clear picture of conditions (\[intrVp1\]) and (\[intrVp2\]), we must investigate the corresponding prefactors individually. ![Electron-momentum distributions for NSDI in $\mathrm{Li}_{2}$ (bound-state energies $E_{1g}=0.18092040$ a.u., $E_{2g}=0.43944428$ a.u. and $E_{2e}=0.12481836$ a.u. and equilibrium internuclear distance $R=4.7697$ a.u.) considering only the RESI mechanism, as functions of the momentum components parallel to the laser-field polarization, obtained considering $V_{\mathbf{p}_{2}e}$ according to Eq. (\[Vp2e\]) and $V_{\mathbf{p}_{1}e,\mathbf{k}g}=const$. We consider zero alignment angle, driving-field intensity $I=4.6\times 10^{13}\mathrm{W/cm}^{2}$ and $\protect\omega =0.057$ a.u. Panels (a) to (c) display only the contribution from the orbits starting in the first half cycle of the field, while in panels (d) to (f) the distributions have been symmetrized to account for the electron orbits starting in the other half cycle and for electron indistinguishability.
The left, middle and right panels correspond to the contributions of the $s$, $p$ and all states used in the construction of the $\protect\sigma _{u}$ LUMO, respectively. The solid, dashed and short-dashed lines show the position of minima due to the two-center interference, the node of the wavefunction and mixed cases, respectively. The contour plots have been normalized to the maximum probability in each panel. []{data-label="LUMO0deg"}](Fig2.eps){width="11.5cm"} In Fig. \[LUMO0deg\], we depict the above-mentioned electron-momentum distributions for alignment angle $\theta =0^{\circ}.$ We consider $V_{\mathbf{p}_{1}e\mathbf{,k}g}=const.$ and focus on the influence of $V_{\mathbf{p}_{2}e}$ alone. We take either the individual contributions of $s$ and $p$ states or the combination of both for $2\sigma _{u}$. For clarity, in the upper panels, we also exhibit the distributions obtained without symmetrizing with respect to the momentum exchange and electron start times. For all cases, the two-center fringes in Fig. \[LUMO0deg\] are parallel to $p_{2||}=const.$, in agreement with the second interference condition derived in Sec. \[InterferenceCondition\]. For pure $s$ or $p$ states and $\lambda _{\alpha }=1$, which is the case for a $\sigma_u $ orbital, this condition can be further simplified. It reduces to $$\sin [\mathbf{\tilde{p}}_{2}\cdot \mathbf{R}/2]=0,$$ for $s$ states, and $$\cos [\mathbf{\tilde{p}}_{2}\cdot \mathbf{R}/2]=0$$ for $p$ states. This implies that, for the former, we expect minima at $\mathbf{\tilde{p}}_{2}\cdot \mathbf{R}=2m\pi ,$ while for the latter they should occur at $\mathbf{\tilde{p}}_{2}\cdot \mathbf{R}=(2m+1)\pi .$ The position of such minima can also be determined analytically by considering that the second electron tunnels at the peak of the laser field, i.e., at $\omega t=\pi /2$. The dashed lines in the figure show that the positions of these minima exhibit very good agreement with this simple estimate.
Physically, this good agreement may be attributed to the fact that the second electron tunnels most probably at this time. For the $s$ states the two-center interference gives a sharp minimum at $p_{2\parallel }=0$ (Figs. \[LUMO0deg\](a) and (d)), while for the $p$ states these patterns are located near $p_{2\parallel }=\pm 3\sqrt{U_{p}}$ (Figs. \[LUMO0deg\](b) and (e)). In the $p$-state case the distribution has another minimum at $p_{2\parallel }=0,$ which comes from the fact that the $p$ wavefunctions vanish for $\mathbf{p}_n=0$. This causes a suppression in the transition amplitude. If the contributions of both $s$ and $p$ states are considered, the minima in the high-momentum region due to the two-center interference seen for the $p$ states vanish, but the minimum at $p_{2\parallel }=0$ survives. This is shown in Figs. \[LUMO0deg\](c) and (f) for unsymmetrized and symmetrized distributions, respectively. One should note, however, that for parallel-aligned molecules, both the two-center minimum for the $s$ states and the minimum caused by the node in the $p$ states occur at the same momentum, i.e., at $p_{2\parallel }=0$. Hence, when $s$ $p$ mixing is included both mechanisms contribute to the suppression at the axes $p_{n\parallel}=0$ seen in Figs. \[LUMO0deg\](c) and (f). We will now investigate the behavior of this node when the alignment angle is varied. Since for Li$_{2}$ both the LUMO and the HOMO exhibit distinct shapes and symmetries, one can expect significant changes in the electron-momentum distributions when this angle is modified. ![Electron-momentum distributions for RESI in $\mathrm{Li}_{2}$ as functions of the electron momentum components parallel to the laser-field polarization considering $V_{\mathbf{p}_{1}e,\mathbf{k}g}=const$ and $V_{\mathbf{p}_{2}e}$ according to Eq. (\[Vp2e\]), for alignment angles $\protect\theta =45^{\circ}$ (panels (a) to (c)) and $90^{\circ}$ (panels (d) to (f)).
The remaining parameters are the same as in the previous figures. The solid lines show the position of minima due to the node of the one-center wavefunction. From left to right, we considered the contributions of the $s$, $p$ and all states used in the construction of the LUMO. All panels have been symmetrized with regard to the electron orbits and indistinguishability. The contour plots have been normalized to the maximum probability in each panel. []{data-label="LUMO4590deg"}](Fig3_v2.eps){width="8.5cm"} Hence, in Fig. \[LUMO4590deg\], we consider the same prefactors as in the previous case, but alignment angles $\theta =45^{\circ}$ and $90^{\circ}$. The figure shows that the patterns caused by the electron emission at spatially separated centers get washed out for such angles. This is due to the momentum components perpendicular to the laser-field polarization, and can be seen very clearly in Fig. \[LUMO4590deg\](a), where the $s$ contributions are displayed for $\theta=45^{\circ}$. Already for this angle the interference minima at the axes $p_{n\parallel}=0$ are absent. In contrast, the suppression at the axes caused by the fact that the $p$ wavefunctions vanish in that momentum region is still present. This is shown in Fig. \[LUMO4590deg\](b), in which the contributions from the $p$ states are depicted. The blurring is caused by the fact that, in momentum space, these wavefunctions are proportional to $G(l_{\beta}=1)$ (see Eq. (\[i2sigmsigm\])). This function contains components of $\mathbf{p}_2$ both parallel and perpendicular to the laser-field polarization, and the contributions from the latter tend to wash out the minimum. When both $s$ and $p$ contributions are considered, there is a strong suppression of the yield near the $p_{n\parallel}=0$ axis (see Fig. \[LUMO4590deg\](c)). We have verified that this is due to the destructive interference between both types of contributions in this momentum region.
For $\theta=90^{\circ}$, only the components $p_{2\perp}$ contribute, and the electron momentum distributions are determined by the momentum-space integration alone. As a result, they reflect the momentum-space constraints for the RESI mechanism. These constraints lead to electron momentum distributions peaked at $(p_{i\parallel},p_{j\parallel})=(\pm 2\sqrt{U_p},0)$, with $i,j=1,2$ and $i\neq j$, and with widths $2\sqrt{U_p}$, and have been explicitly written in [@Shaaran; @RESIM]. This holds for the $s$, $p$ and mixed cases (Figs. \[LUMO4590deg\](d), (e) and (f), respectively). ![RESI electron-momentum distributions for $\mathrm{Li}_2$ considering $V_{\mathbf{p}_{2}e}=const.$ and $V_{\mathbf{p}_{1}e,\mathbf{k}g}$ according to Eq. (\[vp1ke1\]), for $\protect\theta =0$. The field and molecular parameters are the same as in the previous figure. The upper panels display only the contribution from the sets of orbits starting in the first half cycle of the laser field. In the lower panels the distributions have been symmetrized in order to account for the orbits starting in the other half cycle of the field, and for electron indistinguishability. The left, middle and right panels display the contributions from $s$, $p$ and all states composing the HOMO and the LUMO, respectively. The dashed line shows the position of the two-center interference minimum. The contour plots have been normalized to the maximum probability in each panel. []{data-label="HOMO0deg"}](Fig4.eps){width="10cm"} We will now focus on the interference condition determined by the excitation prefactor (\[Vp1e,kg\]). With this objective, we will keep $V_{\mathbf{p}_{2}e}=const.$ and investigate the influence of $V_{\mathbf{p}_{1}e,\mathbf{k}g}$ alone, starting from vanishing alignment angle. Once more, we will study the contributions of the $s$ and $p$ states, and the overall distributions.
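As an aside, the symmetrizations over momentum exchange and electron start times referred to in the figure captions can be sketched schematically. In the toy snippet below (our own illustration; `F` is an arbitrary model single-event yield, not the transition amplitude of Eq. (\[Mp\])), the exchange $\mathbf{p}_{1}\leftrightarrow \mathbf{p}_{2}$ and the half-cycle symmetry $\mathbf{A}(t)=-\mathbf{A}(t\pm T/2)$, which reverses both momenta, enter as an incoherent sum:

```python
import math

def symmetrized_yield(F, p1, p2):
    """Incoherent sum over particle exchange p1 <-> p2 and over events
    displaced by half a cycle, which map (p1, p2) -> (-p1, -p2); F is a
    model single-event yield on the parallel-momentum plane."""
    return F(p1, p2) + F(p2, p1) + F(-p1, -p2) + F(-p2, -p1)

def toy_yield(a, b):
    # Gaussian peak near (2*sqrt(U_p), 0) for an illustrative U_p = 0.1 a.u.
    return math.exp(-((a - 2.0 * math.sqrt(0.1)) ** 2 + b ** 2))

# The symmetrized distribution is invariant under both operations.
print(symmetrized_yield(toy_yield, 0.3, 0.1))
```

By construction the result is unchanged under $p_{1\parallel }\leftrightarrow p_{2\parallel }$ and under $(p_{1\parallel },p_{2\parallel })\to (-p_{1\parallel },-p_{2\parallel })$, which is why the symmetrized panels show four peaks where the unsymmetrized ones show one.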
The interference condition and also the wavefunctions in the excitation prefactor now incorporate the HOMO and the LUMO (see Eq. (\[Vp1e,kg\])). For $\mathrm{Li}_{2}^{+},$ the former and the latter are a gerade and an ungerade orbital, so that $\lambda _{\alpha }=0$ and $\lambda _{\beta }=1$ in Eq. (\[intrVp1\]). For pure $s$ states, $l_{\alpha }=l_{\beta }=0$ and for pure $p$ states, $l_{\alpha }=l_{\beta }=1.$ This will lead to the simplified interference condition $$\sin [\mathbf{(p}_{1}-\mathbf{k)}\cdot \mathbf{R}/2]=0$$ for both. Hence, one expects a minimum close to vanishing parallel momenta in the pure cases. When $s$ $p$ mixing is included, however, different angular momenta will also be coupled and the general interference condition must be considered. The electron momentum distributions obtained in this way are shown in Fig. \[HOMO0deg\], for both unsymmetrized and symmetrized distributions (upper and lower panels, respectively). For most distributions in the figure, we do not observe a clear suppression of the probability densities in any momentum region. This holds both for those caused by the two-center interference and by the geometry of the wavefunctions at the ions. We have only observed a two-center minimum if we consider the individual contributions of the $p$ states, and do not symmetrize the distributions (see Fig. \[HOMO0deg\](b)). This is due to the fact that, for the parameters considered in this work, the two-center minimum according to condition (\[intVp1par\]) lies at or beyond the boundary of the momentum region for which rescattering of the first electron has a classical counterpart. The center of this region is roughly at $p_{1||}\simeq 2\sqrt{U_{p}}$ and its extension is determined by the difference between the maximal electron kinetic energy upon return and the excitation energy $E_{2g}-E_{2e},$ as discussed in our previous article [@RESIM].
Apart from that, $s$ $p$ mixing will lead to a blurring of this minimum, as it couples states with different angular momenta. Symmetrization introduces other events, either due to the electron indistinguishability or displaced by half a cycle, and obfuscates this minimum further, as shown in the lower panels of the figure. If the alignment angle is varied, incorporating only the excitation prefactor $V_{\mathbf{p}_1e,\mathbf{k}g}$ will lead to ring-shaped distributions, regardless of whether only $p$, $s$ or all basis states employed in the construction of the HOMO and LUMO are taken. This is expected as, apart from the above-mentioned $s$ $p$ mixing, which blurs wavefunction-specific features, there will now be transverse momentum components in the prefactor $V_{\mathbf{p}_1e,\mathbf{k}g}$ which will wash out two-center interference patterns. We have verified that this is indeed the case (not shown).

Molecular orbital signature {#orbits}
---------------------------

In this section, we make an assessment of how the geometry of the HOMO and the LUMO affects the RESI electron momentum distributions. With this objective, we incorporate both prefactors $V_{\mathbf{p}_{2}e}$ and $V_{\mathbf{p}_{1}e,\mathbf{k}g}$ and vary the alignment angle. In order to discuss the influence of nodal planes, we also provide the overall yield obtained in our computation in the two figures that follow (see color maps on the right-hand side of each panel). From other strong-field phenomena, it is well known that the presence of nodal planes may suppress the yield considerably [@Dejan2009; @CarlaBrad]. ![Electron-momentum distributions for Li$_2$ as functions of the parallel momenta $(p_{1\parallel },p_{2\parallel })$ considering all prefactors, for different alignment angles. Panels (a), (b) and (c) correspond to alignment angles $\theta =0^{\circ}$, $45^{\circ}$ and $90^{\circ}$, respectively.
The field and molecular parameters are the same as in the previous figures.[]{data-label="Li2HOMOLUMOall"}](Fig5.eps){width="16cm"} We will commence by having a closer look at $\mathrm{Li}_2$. Such results are displayed in Fig. \[Li2HOMOLUMOall\]. The main conclusion to be drawn from the figure is that the prefactor $V_{\mathbf{p}_{2}e}$ plays the dominant role in determining the shapes of the electron momentum distributions. This can be observed by a direct comparison of Fig. \[Li2HOMOLUMOall\](a) with Fig. \[LUMO0deg\](f), for vanishing alignment angle. The distributions in both figures exhibit similar shapes and minima at the axes $p_{n||}=0$, and are very different from those obtained if only the recollision-excitation prefactor is included (see Fig. \[HOMO0deg\](f)). The main effect of the excitation prefactor $V_{\mathbf{p}_{1}e,\mathbf{k}g}$ is to alter the widths of the distributions. This situation persists for larger angles, such as $\theta =45^{\circ }$ and $\theta =90^{\circ },$ as a comparison of Figs. \[Li2HOMOLUMOall\](b) and (c) with Figs. \[LUMO4590deg\](c) and (f) shows. For $\theta=45^{\circ}$, there is a suppression of the yield near the axes $p_{n\parallel}=0$, while for $\theta=90^{\circ}$ the interference patterns are washed out. Another interesting feature is that the overall yield decreases with the alignment angle between the molecular axis and the field. This is due to the fact that the LUMO, from which the second electron tunnels, is a $\sigma$ orbital. Spatially, $\sigma$ orbitals are localized along the internuclear axis, and do not exhibit nodal planes for vanishing alignment angle. This implies that tunneling ionization is favored when the LUMO is parallel to the laser field, and decreases as the angle between the field and the LUMO increases.
A legitimate question is, however, how the shape of the molecular orbital to which the second electron is excited is imprinted on the electron momentum distribution, if there are nodal planes parallel or perpendicular to the molecular axis. For that reason, we now present electron momentum distributions under the assumption that the second electron is excited to a $\pi_g$ orbital. Specifically, we choose $\mathrm{N}_2$ and its singly ionized counterpart, i.e., $\mathrm{N}^+_2$ as the molecular species in our RESI computation. The first electron will be ripped off from the HOMO, which is a $3\sigma_g$ orbital. However, upon return, it will excite the second electron to the LUMO, which is a $1\pi_g$ orbital. A $1\pi_g$ orbital exhibits two nodal planes, which will be oriented along the laser-field polarization for parallel and perpendicular-aligned molecules, i.e., at alignment angles $\theta=0^{\circ}$ and $\theta=90^{\circ}$. This orbital also exhibits lobes at angles $\theta=(2n+1)\pi/4$ with regard to the internuclear axis. The results obtained for this molecular species are exhibited in Fig. \[N2HOMOLUMO\]. As an overall pattern, we observe that the NSDI signal no longer decreases monotonically with increasing alignment angle. In fact, the signal increases for alignment angle $0<\theta<45^{\circ}$, is strongest for $\theta=45^{\circ}$, and decreases once more for $45^{\circ}<\theta<90^{\circ}$. This may be easily understood as a consequence of the geometry of the $1\pi_g$ orbital. For $\theta=0^{\circ}$ (Fig. \[N2HOMOLUMO\](a)), the external field is parallel to one of the nodal planes. Hence, tunnel ionization is strongly suppressed. This reflects itself in the overall yield. As the alignment angle increases, the field-polarization direction gets further and further away from the direction of this nodal plane, and the yield increases until $\theta=45^{\circ}$ (Fig. \[N2HOMOLUMO\](b)). 
For this angle, the field is parallel to one of the lobes of the $1\pi_g$ orbital, so that tunnel ionization of the second electron is enhanced. As the alignment angle is further increased, the direction of the field approaches the nodal plane at $\theta=\pi/2$ and ionization is further suppressed (Fig. \[N2HOMOLUMO\](c)). Apart from the above-mentioned behavior, we also observe a suppression along the axes $p_{n\parallel}=0$, regardless of the alignment angle. This is due to the fact that, in position space, $\pi$ orbitals vanish at the origin of the coordinate system. Consequently, their Fourier transforms vanish for $p_{n\parallel}=0$. ![Electron-momentum distributions for $\mathrm{N}_2$ (bound-state energies $E_{1g}=0.63486$ a.u., $E_{2g}=1.12657$ a.u., and $E_{2e}=0.26871290$ a.u. and equilibrium internuclear distance $R=2.11$ a.u.) in a linearly polarized monochromatic field of intensity $I=1.25 \times 10^{14}\mathrm{W}/\mathrm{cm}^2$ as functions of the parallel momenta $(p_{1\parallel },p_{2\parallel })$ considering all prefactors, for alignment angles $\theta=0^{\circ}$, $45^{\circ}$ and $90^{\circ}$ (panels (a), (b) and (c), respectively).[]{data-label="N2HOMOLUMO"}](Fig6.eps){width="16cm"}

Conclusions
===========

The results presented in the previous sections illustrate the potential of laser-induced nonsequential double ionization for the attosecond imaging of molecules. This is particularly true if the recollision-excitation with subsequent tunneling ionization (RESI) pathway is dominant. The computations in this work show that the shapes of the RESI electron momentum distributions depend in a dramatic fashion on the geometry of the state to which the second electron has been excited by the first electron, and from which it tunnels. The state in which the second electron is initially bound, i.e., the highest occupied molecular orbital (HOMO) of the singly ionized species, plays only a secondary role.
Thereby, two main issues are important in determining the shapes of the electron momentum distributions: the quantum interference due to the photoelectron emission at spatially separated centers, and the geometry of the orbital from which the second electron tunnels. In order to investigate the first issue, generalized interference conditions for the first and second electron that take into account $s$ $p$ mixing along the lines of [@Dejan2009] have been derived, and led to fringes parallel to the $p_{n\parallel}=0$, $n=1,2$, axes in the plane spanned by the electron momentum components parallel to the laser-field polarization. These fringes agreed well with analytic estimates, but were washed out for relatively small alignment angles. In contrast, the features caused by the orbital geometry, such as the suppression of the probability density near $p_{n\parallel}=0$ observed for $p$ states, were present over a wide range of alignment angles. Furthermore, the presence or absence of nodal planes manifests itself as the suppression, or enhancement, of the overall yield with regard to the alignment angle. We have discussed the differences and similarities between $\sigma_u$ and $\pi_g$ orbitals in this context, exemplified by the LUMOs of $\mathrm{Li}_2$ and $\mathrm{N}_2$. These results agree with those reported in the literature for phenomena such as high-order harmonic generation [@CarlaBrad; @Olga2009; @Haessler2010] and above-threshold ionization [@Lin].

**Acknowledgements:** This work has been financed by the UK EPSRC (Grant no. EP/D07309X/1) and by the STFC. We are grateful to P. Sherwood, J. Tennyson and M. Ivanov for useful comments, and to M. T. Nygren for his collaboration in the early stages of this project. C.F.M.F. and B.B.A. would like to thank the Daresbury Laboratory for its kind hospitality. [99]{} J. Itatani, J. Levesque, D. Zeidler, H. Niikura, H. Pépin, J. C. Kieffer, P. B. Corkum and D. M. Villeneuve, Nature **432**, 867 (2004).
H. Niikura, F. Légaré, R. Hasbani, A. D. Bandrauk, M. Yu. Ivanov, D. M. Villeneuve and P. B. Corkum, Nature **417**, 917 (2002); H. Niikura, F. Légaré, R. Hasbani, M. Yu. Ivanov, D. M. Villeneuve and P. B. Corkum, Nature **421**, 826 (2003); S. Baker, J. S. Robinson, C. A. Haworth, H. Teng, R. A. Smith, C. C. Chirilă, M. Lein, J. W. G. Tisch, J. P. Marangos, Science **312**, 424 (2006). M. Lein, N. Hay, R. Velotta, J. P. Marangos, and P. L. Knight, Phys. Rev. Lett. **88**, 183903 (2002); Phys. Rev. A **66**, 023805 (2002); M. Spanner, O. Smirnova, P. B. Corkum and M. Y. Ivanov, J. Phys. B **37**, L243 (2004). A. Becker and F. H. M. Faisal, Phys. Rev. Lett. **84**, 3546 (2000); **89**, 193003 (2002); S. V. Popruzhenko and S. P. Goreslavskii, J. Phys. B **34**, L239 (2001); S. P. Goreslavski and S. V. Popruzhenko, Opt. Express **8**, 395 (2001); S. P. Goreslavskii, S. V. Popruzhenko, R. Kopold, and W. Becker, Phys. Rev. A **64**, 053402 (2001). C. Figueira de Morisson Faria, H. Schomerus, X. Liu, and W. Becker, Phys. Rev. A **69**, 043405 (2004). C. Figueira de Morisson Faria and M. Lewenstein, J. Phys. B **38**, 3251 (2005). M. Lein, E. K. U. Gross, and V. Engel, Phys. Rev. Lett. **85**, 4707 (2000); J. Phys. B **33**, 433 (2000); J. S. Parker, B. J. S. Doherty, K. T. Taylor, K. D. Schultz, C. I. Blaga, and L. F. DiMauro, Phys. Rev. Lett. **96**, 133001 (2006). X. Liu and C. Figueira de Morisson Faria, Phys. Rev. Lett. **92**, 133006 (2004); C. Figueira de Morisson Faria, X. Liu, A. Sanpera, and M. Lewenstein, Phys. Rev. A **70**, 043406 (2004). J. S. Prauzner-Bechcicki, K. Sacha, B. Eckhardt, and J. Zakrzewski, Phys. Rev. A **78**, 013419 (2008). A. Staudte, C. Ruiz, M. Schöffler, S. Schossler, D. Zeidler, Th. Weber, M. Meckel, D. M. Villeneuve, P. B. Corkum, A. Becker, and R. Dörner, Phys. Rev. Lett. **99**, 263002 (2007); A. Rudenko, V. L. B. de Jesus, Th. Ergler, K. Zrost, B. Feuerstein, C. D. Schröter, R. Moshammer, and J. Ullrich, ibid. **99**, 263003 (2007). A.
Emmanouilidou, Phys Rev A **78**, 023411 (2008); D. F. Ye, X. Liu, and J. Liu, Phys. Rev. Lett. **101**, 233003 (2008). D. I. Bondar, W.-K. Liu, and M. Yu. Ivanov, Phys. Rev. A **79**, 023417 (2009). E. Eremina, X. Liu, H. Rottke, W. Sandner, M.G. Schätzel, A. Dreischuch, G.G. Paulus, H. Walther, R. Moshammer and J. Ullrich, Phys. Rev. Lett. **92**, 173001 (2004); Yunquan Liu, S. Tschuch, A. Rudenko, M. Dürr, M. Siegel, U. Morgner, R. Moshammer, and J. Ullrich, ibid. **101**, 053001 (2008). E. Eremina, X. Liu, H. Rottke, W. Sandner, A. Dreischuch, F. Lindner, F. Grasbon, G.G. Paulus, H. Walther, R. Moshammer, B. Feuerstein, and J. Ullrich, J. Phys. B. **36**, 3269 (2003). R. Kopold, W. Becker, H. Rottke, and W. Sandner, Phys. Rev. Lett. **85**, 3781 (2000); B. Feuerstein, R. Moshammer, D. Fischer, A. Dorn, C. D. Schröter, J. Deipenwisch, J. R. Crespo Lopez-Urrutia, C. Höhr, P. Neumayer, J. Ullrich, H. Rottke, C. Trump, M. Wittmann, G. Korn, and W. Sandner, Phys. Rev. Lett. , 043003 (2001). X. Liu, C. Figueira de Morisson Faria, Phys. Rev. Lett. **92**, 133006 (2004); C. Figueira de Morisson Faria, X. Liu, A. Sanpera and M. Lewenstein, Phys. Rev. A **70**, 043406 (2004). C. Figueira de Morisson Faria, J. Phys. B **42**, 105602 (2009). C. Figueira de Morisson Faria and B. B. Augstein, Phys. Rev. A **81**, 043409 (2010). S. Patchkovskii, Z. Zhao, T. Brabec and D. M. Villeneuve, Phys. Rev. Lett. **97**, 123003 (2006); S. Patchkovskii, Z. Zhao, T. Brabec, and D.M. Villeneuve, J. Chem. Phys. **126**, 114306 (2007); O. Smirnova, S. Patchkovskii, Y. Mairesse, N. Dudovich, D. Villeneuve, P. Corkum, and M. Yu. Ivanov, Phys. Rev. Lett. **102**, 063601 (2009). O. Smirnova, Y. Mairesse, S. Patchkovskii, N. Dudovich, D. Villeneuve, P. Corkum, M. Y. Ivanov, Nature **460**, 972 (2009). X. Chu and Shih-I. Chu, Phys. Rev. A **64**, 063404 (2001); A. T. Le, R. R. Lucchese, and C. D. Lin, J. Phys. B **42**, 211001 (2009), A. T. Le, R. R. Lucchese, S. Tonzani, T. Morishita, and C. D. 
Lin, Phys. Rev. A **80**, 013401 (2009). S. Hässler, J. Caillat, W. Boutu, C. Giovanetti-Teixeira, T. Ruchon, T. Auguste, Z. Diveki, P. Breger, A. Maquet, B. Carré, R. Taïeb, and P. Salières, Nat. Phys. **6**, 200 (2010). E. Eremina, X. Liu, H. Rottke, W. Sandner, M. G. Schätzel, A. Dreischuch, G. G. Paulus, H. Walther, R. Moshammer, and J. Ullrich, Phys. Rev. Lett. **92**, 173001 (2004). D. Zeidler, A. Staudte, A. B. Bardon, D. M. Villeneuve,R. Dörner, P. B. Corkum, Phys. Rev. Lett. **95**, 203003 (2005). J. S. Prauzner-Bechicki, K. Sacha, B. Eckhardt, and J. Zakrzewski, Phys. Rev. A **71**, 033407 (2005); J. Liu, D. F. Ye, J. Chen and X. Liu, Phys. Rev. Lett. **99**, 013003 (2007); Y. Li, J. Chen, S. P. Yang and J. Liu, Phys. Rev. A **76**, 023401 (2007); D. F. Ye, J. Chen and J. Liu, Phys. Rev. A **77**, 013403 (2008); A. Emmanouilidou and A. Staudte, Phys. Rev. A **80**, 053415 (2009); Q. Liao, P. Lu, Opt. Express **17**, 15550 (2009). S. Baier, C. Ruiz, L. Plaja and A. Becker, Phys. Rev. A **74**, 033405 (2006); S. Baier, C. Ruiz, L. Plaja and A. Becker, Laser Phys. **17**, 358 (2007). X. Y. Jia, W. D. Li, J. Fan, J. Liu,and J. Chen, Phys. Rev. A **77**, 063407 (2008). C. Figueira de Morisson Faria, T. Shaaran, X.Liu and W.Yang, Phys. A **78**, 043407 (2008). T.Shaaran, M.T. Nygren and C. Figueira de Morisson Faria, Phys A **81**, 063413 (2010). T. Shaaran and C. Figueira de Morisson Faria, J. Mod. Opt. **57**,11 (2010). S. Odžak and D. B. Milošević, Phys. Rev. A **79**, 023414 (2009); J. Phys. B **42**, 071001 (2009). Eliot Hijano, Carles Serrat, George N. Gibson, and Jens Biegert, Phys. Rev. A **81**, 041401 (2010). B. B. Augstein and C. Figueira de Morisson Faria, J. Phys. B **44**, 055601 (2011). GAMESS-UK is a package of ab initio programs for more details see http://www.cfs.dl.ac.uk/gamess-uk/index.shtnl; M. F. Guest, I. J. Bush, H. J. J. van Dam, P. Sherwood, J. M. H. Thomas, J. H. van Lenthe, R. W. A. Havenith, J. Kendrick, Mol. Phys. 
**103**,719 (2005).
--- abstract: 'The quantum anomalous Hall effect can occur in single and few layer graphene systems that have both exchange fields and spin-orbit coupling. In this paper, we present a study of the quantum anomalous Hall effect in single-layer and gated bilayer graphene systems with Rashba spin-orbit coupling. We compute Berry curvatures at each valley point and find that for single-layer graphene the Hall conductivity is quantized at $\sigma_{xy} = 2e^2/h$, with each valley contributing a unit conductance and a corresponding chiral edge state. In bilayer graphene, we find that the quantized anomalous Hall conductivity is twice that of the single-layer case when the gate voltage $U$ is smaller than the exchange field $M$, and zero otherwise. Although the Chern number vanishes when $U > M$, the system still exhibits a quantized valley Hall effect, with the edge states in opposite valleys propagating in opposite directions. The possibility of tuning between different topological states with an external gate voltage suggests possible graphene-based spintronics applications.' author: - 'Wang-Kong Tse$^1$' - Zhenhua Qiao$^1$ - 'Yugui Yao$^{1,2}$' - 'A. H. MacDonald$^1$' - 'Qian Niu$^{1,3*}$' title: 'Quantum Anomalous Hall Effect in Single-layer and Bilayer Graphene' --- I. Introduction =============== Graphene is a two-dimensional material with a single-layer honeycomb lattice of carbon atoms. Its isolation in the past decade has generated substantial theoretical and experimental research activity [@Graphene_RMP]. Experimental fabrication methods continue to progress, motivating a recent focus on the possibility of utilizing graphene as a material for nanoelectronics. At the same time, spintronics has also progressed in recent years. The spin degrees of freedom can be manipulated to encode information, allowing fast information processing and immense storage capacity [@Spintronics_RMP].
This article is motivated by recent research progress which has advanced the prospects for spintronics in graphene. A key element of spintronics is the presence of spin-orbit coupling which allows the spin degrees of freedom to be controlled by electrical means. It was pointed out some time ago that the quantum spin Hall effect [@Kane_Mele] can occur in a single-layer graphene sheet because of its intrinsic spin-orbit (SO) coupling. Intrinsic SO coupling induces momentum-space Berry curvatures (which act like momentum-space magnetic fields) that have opposite sign for opposite spin. However, the intrinsic coupling strength was later found to be too weak ($\sim 10^{-2}\,-\,10^{-3}\,\mathrm{meV}$ [@ISO_MacDonald; @ISO_Fang; @ISO_Fabian]) to make applications of the effect practical. Luckily, another type of spin-orbit interaction known as Rashba SO coupling [@FirstRashbaSO] appears when inversion symmetry in the graphene plane is broken. This SO coupling mixes different spins, so the spin component perpendicular to the graphene plane is no longer conserved. When acting alone, Rashba coupling causes the resulting spin eigenstates to be chiral. One appealing feature of Rashba SO coupling is its tunability by an applied gate field $E_{\mathrm{G}}$. Unfortunately, the field-effect Rashba coupling strength is also weak ($\sim 10\,-\,100\,\mu\mathrm{eV}$ per $V/\mathrm{nm}$ [@ISO_MacDonald; @ISO_Fabian]) at practical field strengths. Recent experiments [@ARPES_PRL_Rader; @ARPES_PRL_Shikin] and ab initio calculations [@TBpaper] have however suggested that surface deposition of impurity adatoms can dramatically enhance the Rashba SO coupling strength in graphene to $\sim 1\,-\,10\,\mathrm{meV}$, raising the hope that spin transport effects induced by Rashba SO coupling might be realizable as experiment progresses.
Unlike the quantum Hall effect, which arises from Landau level quantization in a strong magnetic field, QAHE is induced by internal magnetization and SO coupling. Although there have been a number of theoretical studies of this unusual effect [@QAHEpapers1; @QAHEpapers2; @QAHEpapers3; @QAHEpapers4; @QAHEpapers5; @QAHEpapers6], no experimental observation has been reported so far. In a recent article [@TBpaper], we predicted on the basis of tight-binding lattice models and [*ab initio*]{} calculations that QAHE can occur in single-layer graphene in the presence of both an exchange field and Rashba SO coupling. In this paper, we complement our previous numerical work with a continuum model study which yields more analytical progress and provides clearer insight into our findings. We also present a more detailed and systematic investigation of the topological phases of both single-layer and gated bilayer graphene with strong Rashba SO interactions. In single-layer graphene the Hamiltonian is analytically diagonalizable and we obtain an analytic expression for the Berry curvature and use it to evaluate the Chern number. For bilayer graphene, the possibility of a gate field applied across the bilayer introduces a tunable parameter which we show can induce a topological phase transition. We find that when the bilayer potential difference $U$ is smaller than the exchange field $M$, the system behaves as a quantum anomalous Hall insulator, whereas for $U > M$, the system behaves as a quantum valley Hall insulator with zero Chern number. For each case, we also study the edge state properties of the corresponding finite system using a numerical tight-binding approach. The paper is organized as follows. We first discuss the bulk topological properties for the case of single-layer graphene in section II. In section III, we turn our attention to the case of gated bilayer graphene. 
We then discuss the edge state properties of both the single-layer and the gated bilayer cases in section IV. Our conclusions are presented in section V. An appendix follows that develops an envelope function analysis of the edge states in the single-layer system. II. Single-layer Graphene ========================= The Brillouin zone of graphene is hexagonal with two inequivalent K and K’ points located at the zone corners. The band structure has linear band crossings at both K and K’. At wavevectors near either of these valley points, the envelope-function wavefunctions satisfy a massless Dirac equation. We represent the graphene envelope-function Hamiltonian in the basis $\{\mathrm{A}\uparrow,\mathrm{B}\uparrow,\mathrm{A}\downarrow,\mathrm{B}\downarrow\}$ for both valleys K and K’. Rashba SO coupling in graphene was first discussed by Kane and Mele [@Kane_Mele] and subsequently by Rashba [@Rashba], and also in a number of recent papers [@Sandler; @Smith; @Wrinkler]. The Hamiltonian for valley K including Rashba SO coupling and exchange field is $$H = v{\boldsymbol}{\sigma}\cdot{\boldsymbol}{k}{\boldsymbol}{1}_s+\alpha\left({\boldsymbol}{\sigma}\times{\boldsymbol}{s}\right)_z+M {\boldsymbol}{1}_{\sigma}s_z, \label{eq1}$$ where ${\boldsymbol}{\sigma}$ and ${\boldsymbol}{s}$ are Pauli matrices that correspond respectively to the pseudospin (i.e., A-B sublattice) and spin degrees of freedom, ${\boldsymbol}{1}_{\sigma,s}$ denotes the identity matrix in the $\sigma$ and $s$ space, $\alpha$ is the Rashba SO coupling strength, and $M$ is the exchange magnetization. The Hamiltonian for valley K’ is obtained by the replacement ${\boldsymbol}{\sigma}\to -{\boldsymbol}{\sigma}^*$. We neglect intrinsic SO coupling since we are interested in the case when the Rashba SO coupling parameter $\alpha$ is much stronger than the intrinsic coupling parameter $\Delta_{\mathrm{intrinsic}}$.
We note, however, that the presence of a small but finite intrinsic SO coupling is not expected to qualitatively modify our results as long as $\Delta_{\mathrm{intrinsic}} \ll M$. ![(Color online) (a)-(c) Bulk states band structure. (a). $M = 0$, $\alpha = 0$, (b). $M/\varepsilon_{\mathrm{c}} = 0.1$, $\alpha = 0$, (c). $M/\varepsilon_{\mathrm{c}} = 0.1$, $\alpha/\varepsilon_{\mathrm{c}} = 0.05$. $k_{\mathrm{c}} = 2\pi/a$ ($a$ is the graphene lattice constant) and $\varepsilon_{\mathrm{c}}$ are the momentum and energy cut-offs of the Dirac model, beyond which the energy dispersion deviates from linearity due to trigonal warping. (d). Berry curvature $\Omega = 2(\Omega_{+-}+\Omega_{--})$ (the factor of two arises from the two valleys) for $\alpha/\varepsilon_{\mathrm{c}} = 0.05$, $M/\varepsilon_{\mathrm{c}} = 0.1$. $\Omega$ peaks at the $k$ value where the upper valence band has its maximum and where the degeneracy between opposite spin states occurs when SO coupling is absent \[see panel (b)\].[]{data-label="BandBerry"}](Band_Berry_a_5em2_M_1me1.pdf){width="8.5cm"} Upon diagonalization of the Hamiltonian, we obtain the energy dispersion for both valleys $$\varepsilon_{ks\mu} = \mu\sqrt{M^2+\epsilon_k^2+2\alpha^2+2s\sqrt{\alpha^4+\epsilon_k^2\left(M^2+\alpha^2\right)}}, \label{eq2}$$ where $\epsilon_k = v k$, $\mu = \pm$ stands for the conduction $(+)$ and valence $(-)$ bands. Because of spin-mixing due to Rashba SO coupling, spin is no longer a good quantum number, and the resulting angular momentum eigenstates are denoted by the spin chirality $s = \pm$. The band structure therefore consists of two spin chiral conduction bands and two spin chiral valence bands.
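The algebra above can be checked numerically. The following sketch (not part of the original calculation; NumPy is assumed and the parameter values are purely illustrative, with $v = 1$) builds the $4\times4$ Hamiltonian Eq. (\[eq1\]), compares its spectrum with Eq. (\[eq2\]), and locates the direct gap given in Eq. (\[kgap\]):

```python
import numpy as np

v, alpha, M = 1.0, 0.3, 0.5        # illustrative values, v = 1

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)

def H(kx, ky):
    """4x4 Hamiltonian of Eq. (1) in the basis {A up, B up, A down, B down}:
    sigma acts on the sublattice (inner) index and s on the spin (outer) one."""
    h = v * (kx * np.kron(I2, sx) + ky * np.kron(I2, sy))
    h += alpha * (np.kron(sy, sx) - np.kron(sx, sy))  # (sigma x s)_z
    h += M * np.kron(sz, I2)                          # exchange field
    return h

def eps_analytic(k):
    """The four band energies of Eq. (2), sorted in ascending order."""
    ek = v * k
    inner = np.sqrt(alpha**4 + ek**2 * (M**2 + alpha**2))
    return np.sort([mu * np.sqrt(M**2 + ek**2 + 2*alpha**2 + 2*s*inner)
                    for mu in (+1, -1) for s in (+1, -1)])

# numerical and analytic spectra agree at random momenta
rng = np.random.default_rng(0)
for _ in range(20):
    kx, ky = rng.uniform(-2, 2, size=2)
    assert np.allclose(np.linalg.eigvalsh(H(kx, ky)),
                       eps_analytic(np.hypot(kx, ky)), atol=1e-10)

# scan |k| for the direct gap; the spectrum is symmetric under
# energy reversal, so the gap is twice the lowest conduction energy
ks = np.linspace(1e-4, 3.0, 4001)
conduction = np.array([np.linalg.eigvalsh(H(k, 0.0))[2] for k in ks])
gap_num, kD_num = 2 * conduction.min(), ks[conduction.argmin()]
gap_ana = 2 * alpha * M / np.sqrt(M**2 + alpha**2)
kD_ana = M * np.sqrt(M**2 + 2*alpha**2) / (v * np.sqrt(M**2 + alpha**2))
```

Both the gap $\Delta$ and its location $k_{\Delta}$ should agree with Eq. (\[kgap\]) to the resolution of the scan.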
The corresponding eigenstates are $$\begin{aligned} &&u_{s\mu}\left(k\right) = \label{eq3} \\ &&N_{s\mu}\left[\begin{array}{cccc} \zeta i e^{-i2\phi_k}P_{s\mu}, & i e^{-i\phi_k}Q_{s\mu}, & \zeta e^{-i\phi_k} R_{s\mu}, & \alpha\epsilon_k^2 \end{array}\right]^{\mathrm{T}}, \nonumber\end{aligned}$$ where $\phi_k = \tan^{-1}(k_y/k_x)$, $N_{s\mu}$ is the normalization constant $$N_{s\mu}\left(k\right) =\left\{P_{s\mu}^2+Q_{s\mu}^2+R_{s\mu}^2+\left(\alpha\epsilon_k^2\right)^2\right\}^{-1/2}, \label{eq4}$$ and $P, Q, R$ are functions defined as follows: $$\begin{aligned} &&P_{s\mu}\left(k\right) = \nonumber \\ &&-M\epsilon_k^2+\left(\alpha^2-s\sqrt{\alpha^4+\epsilon_k^2\left(M^2+\alpha^2\right)}\right)\left(M+\varepsilon_{ks\mu}\right), \nonumber \\ &&Q_{s\mu}\left(k\right) = \epsilon_k\left[\epsilon_k^2-\left(M+\varepsilon_{ks\mu}\right)^2\right]/2, \nonumber \\ &&R_{s\mu}\left(k\right) = \alpha\epsilon_k\left(M+\varepsilon_{ks\mu}\right). \label{eq5}\end{aligned}$$ Fig. \[BandBerry\] illustrates the evolution of the electronic structure as the exchange field $M$ and Rashba SO coupling $\alpha$ are introduced to the system. As shown in Fig. \[BandBerry\](b), the exchange field splits the original spin-degenerate Dirac cone into two oppositely spin-polarized copies, and this in turn produces spin degeneracy circles in momentum space at energy $\varepsilon = 0$. Introducing the SO coupling $\alpha$ causes a gap to open up between the conduction and valence bands around this circle along which SO coupling mixes up and down spins and produces an avoided band crossing. The momentum magnitude $k = k_{\mathrm{\Delta}}$ at which the avoided crossing occurs and the gap $\Delta$ are given by $$k_{\mathrm{\Delta}} = \frac{M \sqrt{M^2 + 2 \alpha^2}}{v \sqrt{M^2 + \alpha^2}},\,\,\,\,\,\Delta = \frac{2 \alpha M}{\sqrt{M^2 + \alpha^2}}. 
\label{kgap}$$ In the insulating regime, when the Fermi level lies within the bulk gap, the Hall conductivity is $\sigma_{xy} = \mathcal{C}e^2/h$, where $\mathcal{C}$ is the Chern number, which can be evaluated using the Thouless-Kohmoto-Nightingale-den Nijs (TKNN) formula [@BerryPhaseRMP]: $$\mathcal{C} = \frac{1}{2\pi}\sum_n\int{\mathrm{d}^2k}\left({{\boldsymbol}{\Omega}_n}\right)_z, \label{ChernNumber}$$ where $n$ labels the bands below the Fermi level, and ${\boldsymbol}{\Omega}_n$ is the Berry curvature $${\boldsymbol}{\Omega}_n = i\langle\frac{\partial u_n}{\partial{\boldsymbol}{k}}\vert\times\vert \frac{\partial u_n}{\partial{\boldsymbol}{k}}\rangle, \label{BerryC}$$ with $u_n$ denoting the Bloch state for band $n$. Before calculating the Chern number, we briefly comment on and make connections with two other formulas in the literature that are also used to calculate the Hall conductivity in the insulating regime. For two-band Hamiltonians that can be written in the form $H = {\boldsymbol}{\sigma}\cdot{\boldsymbol}{d}$, the TKNN formula can be written in the form $$\mathcal{C} = \frac{1}{4\pi}\int\mathrm{d}^2k\left(\frac{\partial{\boldsymbol}{\hat{d}}}{\partial k_x}\times\frac{\partial {\boldsymbol}{\hat{d}}}{\partial k_y}\right)\cdot{\boldsymbol}{\hat{d}}, \label{winding1}$$ where ${\boldsymbol}{\hat{d}} = {\boldsymbol}{d}/\vert {\boldsymbol}{d}\vert$ is the unit vector which specifies the direction of ${\boldsymbol}{d}$. The right hand side of Eq. (\[winding1\]) can be identified as a Pontryagin index which is equal to the number of times the unit sphere of spinor directions is covered upon integrating over the Brillouin zone. For the present case, however, the graphene Hamiltonian contains both spin and pseudospin degrees of freedom, and Eq. (\[winding1\]) is not applicable.
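For a Hamiltonian with both spin and pseudospin degrees of freedom, Eq. (\[ChernNumber\]) can instead be evaluated directly. As an illustrative sketch (not the paper's code; the parameters are arbitrary and $v = 1$), the curvature of the two valence bands can be computed from the standard Kubo-type matrix-element form of Eq. (\[BerryC\]) and integrated radially over a single valley, using the fact that the continuum-model curvature depends only on $|k|$:

```python
import numpy as np

v, alpha, M = 1.0, 0.3, 0.5        # illustrative values, v = 1

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)
vx = v * np.kron(I2, sx)           # v_x = dH/dk_x for Eq. (1)
vy = v * np.kron(I2, sy)           # v_y = dH/dk_y
H0 = alpha * (np.kron(sy, sx) - np.kron(sx, sy)) + M * np.kron(sz, I2)

def omega_occ(k):
    """Berry curvature at radius |k| (k taken along x), summed over the
    two valence bands; terms between the two occupied bands cancel in
    the sum, so only transitions into the two empty bands are kept."""
    e, u = np.linalg.eigh(k * vx + H0)
    Vx = u.conj().T @ vx @ u
    Vy = u.conj().T @ vy @ u
    return sum(-2.0 * (Vx[n, m] * Vy[m, n]).imag / (e[n] - e[m]) ** 2
               for n in (0, 1) for m in (2, 3))

# radial integration over one valley: C = int_0^inf Omega(k) k dk; the
# grid is denser near the valley center where the curvature is peaked
ks = np.concatenate([np.linspace(1e-4, 5.0, 6000, endpoint=False),
                     np.linspace(5.0, 400.0, 6000)])
f = np.array([omega_occ(k) for k in ks]) * ks
chern_valley = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(ks)))
```

For these parameters the integral evaluates to a number of magnitude close to one, in line with the unit Chern number per valley derived in the text; the overall sign depends on the valley and on $\mathrm{sgn}(M)$.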
In this case, $\mathcal{C}$ is given by the following more general form of the Pontryagin index [@Volovik]: $$\mathcal{C} = \frac{1}{24\pi^2}\epsilon_{\mu\nu\lambda}\mathrm{tr}\int\mathrm{d}\omega\mathrm{d}^2k\;G\frac{\partial G^{-1}}{\partial k_{\mu}}G\frac{\partial G^{-1}}{\partial k_{\nu}}G\frac{\partial G^{-1}}{\partial k_{\lambda}}, \label{winding2}$$ where $k_{\mu} = (\omega,k_x,k_y)$, $\epsilon_{\mu\nu\lambda}$ is the anti-symmetric tensor, and $G = (i\omega-H)^{-1}$ is the Green function. In the non-interacting limit we consider in this work, Eq. (\[winding2\]) can be shown to be equivalent [@Yakovenko] to the TKNN formula Eq. (\[ChernNumber\]). We now evaluate the $z$ component of the Berry curvature $\Omega_{s\mu}$ from Eq. (\[BerryC\]) for the bands which are labeled by $s$ and $\mu$. For each valley, we find that the Berry curvature is analytically expressible in terms of an exact differential $${\Omega}_{s\mu} = \frac{1}{k}\frac{\partial}{\partial k}\left[N_{s\mu}^2\left(2P_{s\mu}^2+Q_{s\mu}^2+R_{s\mu}^2\right)\right], \label{Berry2}$$ and the Chern number per valley for each band is given by $$\mathcal{C}_{s\mu} = N_{s\mu}^2\left[2P_{s\mu}^2+Q_{s\mu}^2+R_{s\mu}^2\right]\big\vert_{k = 0}^{k = \infty}. \label{Chern2}$$ The upper limit in Eq. (\[Chern2\]) can be set to $\infty$ because the Berry curvature is appreciable only close to the valley centers. Computing Eq. (\[Chern2\]), we find that in this continuum model the Chern number $\mathcal{C}_{s-}$ for the individual valence band with $s = \pm$ is not quantized but instead depends numerically on the specific values of $\alpha$ and $M$. We find, however, that the two contributions always sum to $1$ and therefore each valley carries a unit Chern number. Taking into account both valleys, it follows that the quantized Hall conductivity is $$\sigma_{xy} = 2\frac{e^2}{h}\;\mathrm{sgn}(M). \label{Cond}$$ III.
Gated Bilayer Graphene =========================== \[width=8.5cm,angle=0\] [Band\_ConM\_4by4\_1.pdf]{} We extend our discussion to the case of bilayer graphene. In the vicinity of valley K, we can write the bilayer graphene Hamiltonian in the presence of Rashba SO coupling $\alpha$, exchange field $M$ and potential imbalance $U$ as (${\boldsymbol}{\tau}$ denotes Pauli matrices for the layer degrees of freedom): $$\begin{aligned} H &=& \left[v{\boldsymbol}{\sigma}\cdot{\boldsymbol}{k}1_s+M1_{\sigma}s_z+\left(\frac{\alpha_{\mathrm{T}}+\alpha_{\mathrm{B}}}{2}\right)\left({\boldsymbol}{\sigma}\times{\boldsymbol}{s}\right)_z\right]1_{\tau} \nonumber \\ &&+\left[\left(\frac{\alpha_{\mathrm{T}}-\alpha_{\mathrm{B}}}{2}\right)\left({\boldsymbol}{\sigma}\times{\boldsymbol}{s}\right)_z+U1_{\sigma}1_s\right]\tau_z \nonumber \\ &&+\frac{1}{2}t_{\perp}1_s\left(\sigma_x\tau_x+\sigma_y\tau_y\right), \label{H}\end{aligned}$$ where $t_{\perp} = 0.4\,\mathrm{eV}$ is the $\tilde{\mathrm{A}}\mathrm{B}$ interlayer hopping energy [@Mccann]. For the other valley K’, the Hamiltonian is given by the above with ${\boldsymbol}{\sigma}\to -{\boldsymbol}{\sigma}^*$. For generality, we have written Eq. (\[H\]) allowing for different Rashba SO coupling strengths $\alpha_{\mathrm{T}}$ and $\alpha_{\mathrm{B}}$ for the top and bottom layers. We shall now set $\alpha_{\mathrm{T}} = \alpha_{\mathrm{B}} = \alpha$ for simplicity as the specific values of $\alpha$ on the two layers do not alter the topology of the bands in our discussions below. ![(Color online) Phase diagram of the Chern number $\mathcal{C}$ as a function of $U$ and $M$. The valley Chern number occupies complementary regions of the phase space with $\mathcal{C}_v = 0$ for $U < M$ and $4\mathrm{sgn}(U)$ for $U > M$.[]{data-label="fig1"}](Phase_QAH.pdf){width="6.5cm"} The Hamiltonian Eq. (\[H\]) is not diagonalizable analytically [@Remark]. 
We therefore obtain the eigenenergies and eigenvectors numerically and use these to compute the Berry curvature. Fig. \[fig0\] shows the band structure evolution for increasing values of $U$: $U < M$ (panel a), $U = M$ (panel b), and $U > M$ (panel c). For $U < M$, an inverted-gap profile appears that is similar to the single-layer graphene case (Fig. \[BandBerry\]c). At $U = M$, the gap closes exactly at $k = 0$, and reopens when $U > M$. Fig. \[fig0\] shows the behavior of the gap $\Delta$ as a function of the potential difference $U$ for various values of $M$. We find that $\Delta$ initially increases with $U$ and then decreases toward zero when $U$ approaches the value of $M$, after which $\Delta$ increases again with $U$. The Berry curvature Eq. (\[BerryC\]) can be expressed in a form that is more convenient for numerical computation. For the $n^{\mathrm{th}}$ band, the Berry curvature per valley can be expressed as $$\Omega_{xy}^n = -2\sum_{n' \neq n} \frac{\mathrm{Im}\left\{\langle n \vert v_x \vert n' \rangle \langle n' \vert v_y \vert n \rangle\right\}}{\left(\varepsilon_n-\varepsilon_{n'}\right)^2}, \label{Berry}$$ where $v_{x,y} = \partial H/\partial k_{x,y}$. Numerically diagonalizing the Hamiltonian Eq. (\[H\]) and computing the Chern number, we find that $$\sigma_{xy} = \left\{\begin{array}{c} 4e^2/h\;\mathrm{sgn}(M),\,\,\,\,\,U<M \\ 0,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,U>M \end{array}\right.. \label{BilayerHall}$$ For $U < M$, the Chern number is twice that of the single-layer graphene case, corresponding to four edge modes. The bilayer graphene system behaves as a quantum anomalous Hall insulator when $U < M$, and exhibits a vanishing Hall effect when $U > M$. The gated bilayer graphene system therefore has a Hall current which is tunable by an external gate voltage. The potential difference $U$ breaks the bilayer’s top-bottom spatial inversion symmetry.
This produces a valley Hall effect in which valley-resolved electrons scatter to opposite sides of the sample. This can be characterized by the valley Hall conductivity which is defined as the difference between the valley-resolved Hall conductivities $\sigma_{xy}^{\mathrm{v}} = \sigma_{xy}^{\mathrm{K}}-\sigma_{xy}^{\mathrm{K'}}$. We find that $$\sigma_{xy}^{\mathrm{v}} =\left\{\begin{array}{c} 4e^2/h\;\mathrm{sgn}(U),\,\,\,\,\,U>M \\ 0,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,U<M \end{array}\right.. \label{BilayerVHall}$$ Therefore, despite having a vanishing Chern number when $U > M$, the system exhibits a finite valley Hall conductivity $4e^2/h$. The quantum anomalous Hall and quantum valley Hall effects thus occupy complementary regions in the $U-M$ phase space, as summarized in the phase diagram Fig. \[fig1\]. The gated bilayer graphene system therefore behaves, depending on whether $U$ or $M$ is larger, either as a quantum anomalous Hall insulator or a quantum valley Hall insulator. IV. Edge State Properties ========================= We have presented an analysis of bulk topological properties in single-layer and bilayer graphene cases using the low-energy Dirac Hamiltonian. In this section, we study the corresponding edge state properties on a finite single-layer and bilayer graphene sheet, and switch to a tight-binding representation from which we obtain the edge bands numerically. The finite-size single-layer and bilayer graphene sheets in our calculations are terminated with zigzag edges along one direction, and are infinite in the other direction. 
The Hamiltonian for the single-layer graphene case can be expressed as $$\begin{aligned} H_{\mathrm{SLG}} &=& t \sum_{\langle{ij}\rangle \alpha }{ c^\dagger_{i \alpha}c_{j\alpha}+ {i} t_{\mathrm{R}}\sum_{\langle{ij}\rangle \alpha \beta }({\boldsymbol}{s}_{\alpha \beta}{\times}{\mathbf{d}}_{ij}){\cdot}\hat{\mathbf{z}}\,c^\dagger_{i \alpha } c_{j \beta }} \nonumber \\ &+& M \sum_{i\alpha}{c^\dagger_{i\alpha}(s_{z})_{\alpha\alpha}c_{i\alpha}}, \label{SLGTB}\end{aligned}$$ where the first term describes hopping between nearest neighbors $i,j$ on the honeycomb lattice, the second term is the Rashba SO term with coupling strength $t_{\mathrm{R}}$ ($\mathbf{d}_{ij}$ is a unit vector pointing from site $j$ to site $i$), and the third term is the exchange field $M$. $\alpha,\beta$ denote spin indices. The bilayer graphene case is described by the Hamiltonian $$\begin{aligned} H_{\mathrm{BLG}} &=& H_{\mathrm{SLG}}^{\mathrm{T}}+H_{\mathrm{SLG}}^{\mathrm{B}} +t_\bot \sum_{i\in\mathrm{T},j\in\mathrm{B}, \alpha} c^\dagger_{i\alpha}c_{j\alpha} \nonumber \\ &&+ U \sum_{i\in\mathrm{T},\alpha}{ c^\dagger_{i\alpha}c_{i\alpha}}-U \sum_{j\in\mathrm{B},\alpha}{ c^\dagger_{j\alpha}c_{j\alpha}}, \label{BLGTB}\end{aligned}$$ where $H_{\mathrm{SLG}}^{\mathrm{T, B}}$ are the top (T) and bottom (B) layer Hamiltonians of Eq. (\[SLGTB\]), vertical hopping $t_\bot$ between the layers is represented by the third term and occurs only between Bernal-stacked neighbors, and the last two terms describe the potential difference $U$ applied across the bilayer. The parameters in the tight-binding Hamiltonians Eqs. (\[SLGTB\])-(\[BLGTB\]) above are related to those in the low-energy Hamiltonians Eq. (\[eq1\]) and Eq. (\[H\]) as $v = 3ta/2$ ($a = 1.42\,\AA$ is the graphene lattice constant), $\alpha = 2 t_{\mathrm{R}}/3$, and $M,U,t_\bot$ are the same in both equations.
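A minimal sketch of the zigzag-ribbon construction based on Eq. (\[SLGTB\]) is given below (this is not the authors' code; NumPy is assumed, the carbon-carbon distance is set to $1$ so that the period along the edge is $\sqrt{3}$, and the parameter values are illustrative). Moving across the ribbon, nearest-neighbor bonds alternate between one vertical bond and a pair of slanted bonds carrying the Bloch phases $e^{\pm i\sqrt{3}k/2}$:

```python
import numpy as np

t, tR, M = 1.0, 0.2, 0.3           # illustrative values
NR = 60                            # number of atomic rows across the ribbon

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)

def H_ribbon(k):
    """Bloch Hamiltonian (2 spin components per row) of a zigzag ribbon at
    momentum k along the edge. Moving up across the ribbon, the bonds
    alternate between the pair of slanted nearest-neighbor bonds, which
    carry the phases exp(+/- i sqrt(3) k / 2), and one vertical bond."""
    c, s = np.cos(np.sqrt(3) * k / 2), np.sin(np.sqrt(3) * k / 2)
    vert = t * I2 + 1j * tR * sx                 # bond direction d = (0, 1)
    slant = 2 * t * c * I2 + 1j * tR * c * sx + np.sqrt(3) * tR * s * sy
    h = np.zeros((2 * NR, 2 * NR), dtype=complex)
    for r in range(NR):
        h[2*r:2*r+2, 2*r:2*r+2] = M * sz         # exchange field on each site
        if r + 1 < NR:
            blk = slant if r % 2 == 0 else vert
            h[2*r+2:2*r+4, 2*r:2*r+2] = blk
            h[2*r:2*r+2, 2*r+2:2*r+4] = blk.conj().T
    return h

# in the quantum anomalous Hall phase, chiral edge modes must cross E = 0
ks = np.linspace(-np.pi / np.sqrt(3), np.pi / np.sqrt(3), 601)
emin = min(np.abs(np.linalg.eigvalsh(H_ribbon(k))).min() for k in ks)
```

With $\alpha = 2t_{\mathrm{R}}/3$ the continuum estimate of the bulk gap is $\Delta \approx 0.24\,t$ for these parameters, and the scan should find states essentially at $E = 0$ deep inside that gap, consistent with the chiral edge modes discussed in the next section.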
[Band-Single1.pdf]{} Single-layer graphene case -------------------------- Fig. \[Band-Single1\] shows the ribbon band structures calculated from Eq. (\[SLGTB\]). Inside the bulk gap, we find counter-propagating gapless edge channels at each valley that are localized on opposite edges of the graphene sheet. In Fig. \[WaveFunction1\], we plot the probability density profile of edge state wave functions $|\psi|^2$ as a function of the atom positions along the width of the graphene sheet for the four edge states labeled by A, B, C, D in Fig. \[Band-Single1\]. It can be seen that states A and C are localized along the left edge whereas B and D are localized along the right edge. The edge states labeled by A and C have the same velocity and propagate along the same direction along one edge \[inset of Fig. \[WaveFunction1\](a)\], whereas B and D have opposite velocity and propagate along the other edge. The two chiral edge modes each carry a unit conductance $e^2/h$, yielding a quantized Hall conductivity $\sigma_{xy} = 2e^2/h$. In the appendix we also present an envelope function analysis of the edge state properties. The edge state band structure obtained with this approach is found to be in excellent agreement with the tight-binding results. [WaveFunction1.pdf]{} Gated bilayer graphene case --------------------------- [Bilayer\_Band\_MU\_UM\_Comb.pdf]{} To study the edge state properties corresponding to the quantum anomalous Hall phase and the quantum valley Hall phase, we show the edge state band structure at a fixed Rashba SO coupling for the two cases $M > U$ and $U > M$ in Fig. \[band-bilayerMLB\]. In contrast to the band structure in the single-layer case, Fig. \[Band-Single1\], we find that the bilayer graphene band structure becomes asymmetric at K and K’. Within the bulk gap, there exist two edge bands associated with each valley. In Fig.
\[Wave\_Bilayer\_MLB\] we show the probability density of the edge state wave function $|\psi|^2$ for the edge states labeled from A to H in Fig. \[band-bilayerMLB\] for both cases. The left panel shows the case $M > U$, and we find that the edge states labeled by A, C, E, G are localized on one edge whereas B, D, F, H are localized on the other edge. This corresponds to the quantum anomalous Hall phase \[Fig. \[Schematic\_comparison\](a)\], where there exist four parallel chiral edge modes, yielding a quantized Hall conductance $\sigma_{xy}=4e^2/h$. [Bilayer\_Wave\_MU\_UM\_Comb.pdf]{} For the case $U > M$, the right panel of the wave function plot Fig. \[Wave\_Bilayer\_MLB\] reveals that the four edge modes A, C, E, G which propagate along the same direction now become split between the two edges: A and C travel along one edge whereas E and G travel along the opposite edge. Similarly, for the edge modes that travel in the opposite direction, B and D propagate along one edge whereas F and H propagate along the other edge. This is illustrated in the schematic of Fig. \[Schematic\_comparison\](b). Since the total current along one edge now adds up to zero, the Hall conductivity vanishes. Nevertheless, two sets of counter-propagating edge modes that belong to different valleys K and K’ travel along one edge. This situation bears a remarkable resemblance to the quantum spin Hall effect where one edge consists of two counter-propagating spin polarized modes. Due to the broken top-bottom layer spatial inversion symmetry, bilayer graphene exhibits a quantized valley Hall effect, with $\sigma_{xy}^{\mathrm{v}} = 4e^2/h$. In the case of single-layer graphene, such a quantized valley Hall effect arises when the A-B sublattice symmetry is broken; however, there is no obvious strategy for imposing such an external potential experimentally. Through top and back gating, bilayer graphene allows for a more experimentally accessible way to produce the quantum valley Hall effect.
Our present results show that the quantum valley Hall effect can coexist with time reversal symmetry breaking, provided that the breaking of spatial inversion symmetry wins over that of time reversal symmetry. V. Conclusion ============= In summary, we have studied the quantum anomalous Hall effect in single-layer and bilayer graphene systems with strong Rashba spin-orbit interactions due to externally controlled inversion symmetry breaking, and strong exchange fields due to proximity coupling to a ferromagnet. For neutral single-layer graphene, we find that the Hall conductivity is quantized as $\sigma_{xy} = 2e^2/h$. For bilayer graphene, in which an external gate voltage can introduce an inversion symmetry breaking gap, we find a quantized Hall conductivity at neutrality equal to $4e^2/h$ when the potential difference $U$ is smaller than the exchange coupling $M$. This anomalous Hall effect is similar to the quantized anomalous Hall effect [@sahetheory; @saheexpt] which can occur spontaneously in high-quality bilayers at low temperatures, but is potentially more robust because it relies on external exchange and spin-orbit fields rather than spontaneously broken symmetries. When $U > M$, the system exhibits a quantized valley Hall effect with valley Hall conductivity $4e^2/h$. [Schematic\_comparison.pdf]{} Two obstacles stand in the way of realizing the quantum anomalous Hall effects discussed in this paper experimentally. It will be necessary first of all to introduce a sizeable Rashba spin-orbit coupling. One possibility is surface deposition of heavy-nucleus magnetic atoms that induce large spin-orbit coupling. The exchange field that is also required could be introduced through a proximity effect. From our ab initio studies [@TBpaper], an exchange field splitting of $56\,\mathrm{meV}$ and Rashba spin-orbit coupling $2.8\,\mathrm{meV}$ can be obtained by depositing Fe atoms on the graphene surface.
Another possible solution is to deposit graphene on a ferromagnetic insulating substrate. The presence of a substrate breaks spatial inversion symmetry, and therefore also produces Rashba spin-orbit coupling. Since the exchange field is induced through a proximity effect, layered antiferromagnetic insulators, which are more abundant in nature, can also be used and offer the advantage of an enlarged pool of candidate substrate materials. Acknowledgement =============== This work was supported by the Welch grant F1473 and by DOE grant (DE-FG03-02ER45958, Division of Materials Sciences and Engineering). Y.Y. was supported by NSFC (No. 10974231) and the MOST Project of China (2007CB925000, 2011CBA00100). Appendix: Envelope function analysis of edge modes ================================================== In this appendix, we present results for the edge state band structures using an envelope function approach based on the continuum Dirac model. We shall present only the single-layer graphene case below, as the bilayer case does not offer as much analytic tractability, since the Hamiltonian Eq. (\[H\]) is not analytically diagonalizable. We first calculate the envelope-function band structure and then compare the results with the tight-binding band structure. Consider below a graphene sheet of infinite extent in the $x$ direction but finite in the $y$ direction, spanning from $y = 0$ to $y = L$. From the Hamiltonian Eq. 
(\[eq1\]), one can write down the eigenvalue problem satisfied by the wavefunction $\tilde{{\boldsymbol}{\Psi}} = \left[\begin{array}{cccc} \tilde{\psi}_{\mathrm{A}\uparrow}, & \tilde{\psi}_{\mathrm{B}\uparrow}, & \tilde{\psi}_{\mathrm{A}\downarrow}, & \tilde{\psi}_{\mathrm{B}\downarrow}\end{array}\right]^{\mathrm{T}}$ $$\begin{aligned} \left[\begin{array}{cccc} -\varepsilon+M & v\left(k_x-\partial_y\right) & 0 & 0 \\ v\left(k_x+\partial_y\right) & -\varepsilon+M & -i 2 \alpha & 0 \\ 0 & i 2 \alpha & -\varepsilon-M & v\left(k_x-\partial_y\right) \\ 0 & 0 & v\left(k_x+\partial_y\right) & -\varepsilon-M \end{array}\right]\tilde{{\boldsymbol}{\Psi}} = 0. \nonumber \\ \label{eq8}\end{aligned}$$ For zigzag-edged graphene, the following boundary conditions apply $$\begin{aligned} \tilde{\psi}_{\mathrm{A}\uparrow}\left(y = L\right) &=& \tilde{\psi}_{\mathrm{A}\downarrow}\left(y = L\right) = 0, \label{eq9} \\ \tilde{\psi}_{\mathrm{B}\uparrow}\left(y = 0\right) &=& \tilde{\psi}_{\mathrm{B}\downarrow}\left(y = 0\right) = 0. \label{eq10} \end{aligned}$$ The solution of the problem Eq. (\[eq8\]) admits the ansatz $\tilde{{\boldsymbol}{\Psi}} = e^{\lambda y}{\boldsymbol}{\Psi}$. Substituting into Eq. (\[eq8\]), we obtain the energy dispersion in terms of $\lambda$ from the resulting determinantal equation $$\begin{aligned} \varepsilon &=& \mu\left\{M^2+v^2\left(k_x^2-\lambda^2\right)+2\alpha^2\right. \nonumber \\ &&\left.+2s\sqrt{\alpha^4+v^2\left(k_x^2-\lambda^2\right)\left(M^2+\alpha^2\right)}\right\}^{1/2}, \label{eq12}\end{aligned}$$ which in turn yields four characteristic lengths $\pm\lambda_{1,2}$ in terms of the energy $\varepsilon$: $$\lambda_{1,2} = \frac{1}{v}\sqrt{v^2k_x^2-\left[\varepsilon^2+M^2\pm2\sqrt{\varepsilon^2 M^2+\alpha^2\left(\varepsilon^2-M^2\right)}\right]}. \label{eq13}$$ Note that $\lambda_{1,2}$ in general can be complex, corresponding to a mixture of the edge and bulk states. 
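As a quick numerical cross-check of the algebra, the sketch below (parameter values are illustrative, not taken from the text, and $v$ is set to one) evaluates the characteristic lengths of Eq. (\[eq13\]) and verifies that they satisfy the squared form of the dispersion Eq. (\[eq12\]), namely $(\varepsilon^2-M^2-X-2\alpha^2)^2 = 4[\alpha^4+X(M^2+\alpha^2)]$ with $X=v^2(k_x^2-\lambda^2)$.

```python
import cmath

def char_lengths(eps, kx, M, alpha, v=1.0):
    # lambda_{1,2} of Eq. (13); complex values indicate edge-bulk mixing
    s = cmath.sqrt(eps**2 * M**2 + alpha**2 * (eps**2 - M**2))
    return [cmath.sqrt(kx**2 - (eps**2 + M**2 + 2 * sign * s) / v**2)
            for sign in (+1, -1)]

# illustrative values (not from the text)
eps, kx, M, alpha, v = 0.3, 0.2, 0.1, 0.05, 1.0
lam1, lam2 = char_lengths(eps, kx, M, alpha, v)

# residual of the squared dispersion relation, Eq. (12)
for lam in (lam1, lam2):
    X = v**2 * (kx**2 - lam**2)
    resid = (eps**2 - M**2 - X - 2 * alpha**2)**2 - 4 * (alpha**4 + X * (M**2 + alpha**2))
    assert abs(resid) < 1e-12
```

Complex values of $\lambda_{1,2}$ returned by this routine signal the mixed edge-bulk character noted above.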
The eigenvectors can be obtained as $${\boldsymbol}{\Psi}\left(\lambda\right) = \left[\begin{array}{c} -\left(\varepsilon+M\right)\left[v^2\left(\lambda^2-k_x^2\right)+\left(\varepsilon-M\right)^2\right] \\ i2\alpha\left(\varepsilon^2-M^2\right) \\ i2\alpha v\left(\varepsilon+M\right)\left(k_x-\lambda\right) \\ -v\left(k_x+\lambda\right)\left[v^2\left(\lambda^2-k_x^2\right)+\left(\varepsilon-M\right)^2\right] \end{array}\right], \label{eq14}$$ where we have left out an inessential normalization constant. The total wavefunction can therefore be represented as a linear superposition of the constituent basis wavefunctions $$\begin{aligned} {\boldsymbol}{\tilde{\Psi}} &=& C_1 {\boldsymbol}{\Psi}\left(\lambda_1\right)e^{\lambda_1 y}+D_1 {\boldsymbol}{\Psi}\left(-\lambda_1\right)e^{-\lambda_1 y} \nonumber \\ &&+C_2{\boldsymbol}{\Psi}\left(\lambda_2\right)e^{\lambda_2 y}+D_2 {\boldsymbol}{\Psi}\left(-\lambda_2\right)e^{-\lambda_2 y}. \label{eq15}\end{aligned}$$ Using the boundary conditions Eqs. (\[eq9\])-(\[eq10\]), we obtain the following determinantal equation $$\begin{vmatrix} f(\lambda_1) e^{\lambda_1 L} & f(\lambda_1) e^{-\lambda_1 L} & f(\lambda_2) e^{\lambda_2 L} & f(\lambda_2) e^{-\lambda_2 L} \\ 1 & 1 & 1 & 1 \\ \left(k_x-\lambda_1\right)e^{\lambda_1 L} & \left(k_x+\lambda_1\right)e^{-\lambda_1 L} & \left(k_x-\lambda_2\right)e^{\lambda_2 L} & \left(k_x+\lambda_2\right)e^{-\lambda_2 L} \\ f(\lambda_1)\left(k_x+\lambda_1\right) & f(\lambda_1)\left(k_x-\lambda_1\right) & f(\lambda_2)\left(k_x+\lambda_2\right) & f(\lambda_2)\left(k_x-\lambda_2\right) \end{vmatrix} = 0, \label{eq16}$$ where $f(\lambda) = v^2\left(\lambda^2-k_x^2\right)+\left(\varepsilon-M\right)^2$. With $\lambda_{1,2}$ given by Eq. (\[eq13\]), the band structure $\varepsilon$ as a function of $k_x$ can be obtained from solving Eq. (\[eq16\]). We illustrate in Fig. 
\[EnvelopeTB\] the resulting band structure in the vicinity of a Brillouin zone corner, from which it can be seen that both the bulk and edge bands obtained from the envelope function approach show excellent agreement with the tight-binding result. ![(Color online) Band structure from the envelope function approach and tight-binding model. The Rashba spin-orbit strength and exchange field in the tight-binding model are (in units of $t$) $t_{\mathrm{R}}=0.02$ and $M = 0.1$ in Eq. (\[SLGTB\]), corresponding to $\alpha/\varepsilon_{\mathrm{c}} = 2.12\times 10^{-3}$ and $M/\varepsilon_{\mathrm{c}} = 0.01$ in the low-energy Dirac Hamiltonian. The width of the graphene sheet is $L = 119$ in units of the nearest-neighbor lattice constant. []{data-label="EnvelopeTB"}](comparison_new.pdf){width="8.5cm"} \*On leave from University of Texas at Austin. [18]{} A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009). I. Zutic, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. **76**, 323 (2004). C. L. Kane and E. J. Mele, Phys. Rev. Lett. **95**, 146802 (2005). H. Min, J. E. Hill, N. A. Sinitsyn, B. R. Sahu, L. Kleinman, and A. H. MacDonald, Phys. Rev. B **74**, 165310 (2006). Y. G. Yao, F. Ye, X.-L. Qi, S.-C. Zhang, and Z. Fang, Phys. Rev. B **75**, 041401(R) (2007). M. Gmitra, S. Konschuh, C. Ertler, C. Ambrosch-Draxl, and J. Fabian, Phys. Rev. B **80**, 235431 (2009). E. I. Rashba, Sov. Phys. Solid State **2**, 1109 (1960). A. Varykhalov, J. Sánchez-Barriga, A. M. Shikin, C. Biswas, E. Vescovo, A. Rybkin, D. Marchenko, and O. Rader, Phys. Rev. Lett. **101**, 157601 (2008). O. Rader, A. Varykhalov, J. Sánchez-Barriga, D. Marchenko, A. Rybkin, and A. M. Shikin, Phys. Rev. Lett. **102**, 057602 (2009). Z. Qiao, S. A. Yang, W. Feng, W.-K. Tse, J. Ding, Y. G. Yao, J. Wang, and Q. Niu, Phys. Rev. B **82**, 161414(R) (2010). F.D.M. Haldane, Phys. Rev. Lett. **61**, 2015 (1988). M. Onoda and N. Nagaosa, Phys. Rev. Lett. 
**90**, 206601 (2003). C. X. Liu, X.-L. Qi, X. Dai, Z. Fang, and S.-C. Zhang, Phys. Rev. Lett. **101**, 146802 (2008). R. Yu, W. Zhang, H.-J. Zhang, S.-C. Zhang, X. Dai, and Z. Fang, Science **329**, 61 (2010). C. Wu, Phys. Rev. Lett. **101**, 186807 (2008). M. Zhang, H. Hung, C. Zhang, and C. Wu, arXiv:1009.2133v2 (2011). E. I. Rashba, Phys. Rev. B **79**, 161409(R) (2009). M. Zarea and N. Sandler, Phys. Rev. B **79**, 165442 (2009). R. van Gelderen and C. Morais Smith, Phys. Rev. B **81**, 125435 (2010). R. Winkler and U. Zulicke, Phys. Rev. B **82**, 245313 (2010). D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. **49**, 405 (1982); D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. Phys. **82**, 1959 (2010). G. E. Volovik, *The Universe in a Helium Droplet* (Oxford, 2003). K. Sengupta and V. M. Yakovenko, Phys. Rev. B **62**, 4586 (2000). E. McCann and V. I. Fal’ko, Phys. Rev. Lett. **96** 086805 (2006). One is tempted to arrive at a $4\times4$ low-energy Hamiltonian from the $8\times8$ Hamiltonian Eq. (\[H\]) by treating the tunneling perturbatively as in Ref. [@Mccann]. Such a Hamiltonian however would not predict the correct quantized anomalous Hall or valley Hall conductivity, because the higher-energy bands also contribute to the Hall response. H. Min, G. Borghi, M. Polini, and A. H. MacDonald, Phys. Rev. B **77**, 041407(R) (2008); F. Zhang, H. Min, M. Polini, and A. H. MacDonald, Phys. Rev. B **81**, 041402(R) (2010); R. Nandkishore and L. Levitov, Phys. Rev. B **82**, 115124 (2010). J. Martin, B. E. Feldman, R. T. Weitz, M. T. Allen, A. Yacoby, arXiv:1009.2069 (2010); R.T. Weitz, M. T. Allen, B. E. Feldman, J. Martin, and A. Yacoby, Science **330**, 812 (2010).
--- abstract: 'This paper examines signal detection in the presence of noise, with particular emphasis on nuclear activation analysis. The problem is to decide which of the signal-plus-background and no-signal hypotheses better fits the data and to quantify the relevant signal amplitude or detection limit. Our solution is based on the use of Bayesian inferences to test the different hypotheses.' address: - '$^1$INRIM – Istituto Nazionale di Ricerca Metrologica, Unit of Radiochemistry and Spectroscopy, c/o Department of Chemistry, University of Pavia, via Taramelli 12, 27100 Pavia, Italy' - '$^2$INRIM – Istituto Nazionale di Ricerca Metrologica, str. delle Cacce 91, 10135 Torino, Italy' - '$^3$Department of Chemistry, University of Pavia, via Taramelli 12, 27100 Pavia, Italy' author: - 'L Bergamaschi$^1$, G D’Agostino$^1$, L Giordani$^1$, G Mana$^2$, and M Oddone$^3$' title: The detection of signals buried in noise --- Introduction ============ For signals buried in noise, deciding between the detected and non-detected statements is a long-debated problem; in addition, any non-detected decision must include a detection-limit statement. For instance, in analytical chemistry, the detection limit is defined as the lowest quantity of a substance that can be distinguished from no substance at all to within a stated confidence limit [@GoldBook]. The orthodox approach to the estimate of detection limits [@Currie:1968] is based on the concept of confidence interval and its interpretation, as outlined in seminal papers by Neyman [@Neyman:1935; @Neyman:1937]. We investigate an alternative approach that uses Bayesian inferences to test the different hypotheses and to quantify the signal amplitude or detection limit. 
Using nuclear activation analysis as an example, that is, the detection of the nuclear activity of a radioisotope in a background photon-count, we illustrate how Bayesian inferences can be used to choose between the signal-plus-background and no-signal hypotheses and to quantify the signal amplitude or detection limit. The contaminant amounts are linked to what is observed – the photon numbers in given energy bins – by calibration factors. The sampling statistics applies to the counts and, therefore, our paper deals only with the observed signal and its associated noise, but the conclusions can be easily extended to the concentrations. As regards terminology, the background count is what would be observed in a non-contaminated sample, the gross count is what is actually observed, and the net count is what would be observed in the absence of background. The term measurand will indicate the mean net-count, whereas the terms background- and gross-signal will indicate the expected background- and gross-count. According to Neyman’s view, the detection limits evaluated from the results of a large set of repeated measurements must bound a fixed measurand value with a given frequency. The detection-limit calculation uses hypothesis testing and the distributions of the measurement results given opposite hypotheses. Firstly, the sampling distribution of the background is used to establish a critical limit $L_C$ such that, if the measurand is zero and the count is only noise (null hypothesis), a net count smaller than $L_C$ would be obtained with a high probability, say 95%. Next, this statement is reversed by choosing the detection limit $L_D$ in such a way that, if the measurand is more than $L_D$ (alternative hypothesis), a net count greater than $L_C$ would be obtained with a high probability, say 95%. When the measurand value matters, this frequency-of-occurrence view is not enough. 
For instance, decisions require probability assignments to propositions that assert the measurand value and, in turn, they require the application of the Bayes theorem [@Jaynes; @McKay; @Gregory; @Sivia]. In the Bayesian approach, signal detection and signal estimation are not independent problems and, in a large set of equal measurement results, the detection limit must bound different measurand values with a given frequency. Hypothesis testing requires comparing the probability that each hypothesis is true, given the data; hence, the detected or non-detected choice is made according to the maximal probability [@Mana:2012]. Only after such a choice has been made can representative values – for example, the mode, mean, or median – and confidence intervals – for example, bounding the measurand with a 95% probability – be calculated. Data model ========== Measurements of the impurity concentrations of the $^{28}$Si crystal used for the determination of the Avogadro constant [@NA:PRL; @NA:Metrologia] are essential to prevent biased results or underestimated uncertainties. The existing literature indicates that Si crystals are extremely pure, but, to obtain direct evidence of purity, we developed an analytical method based on neutron activation [@DAgostino:2012]. Nuclear activation analysis is based on the detection and counting of the $\gamma$ rays emitted by the radioactive isotopes produced by the neutron irradiation. When a neutron is captured by a nucleus, a compound nucleus is formed in an excited state. This step is followed by a prompt de-excitation to a more stable configuration; the new nucleus is usually radioactive and will de-excite by emitting delayed $\gamma$ rays or particles. In the last case the resulting nucleus is often still excited and a further $\gamma$ emission could occur. The energy spectrum of the emitted $\gamma$ rays shows discrete peaks, which identify and quantify the radioactive nuclei and, consequently, the parent contaminants. 
After calibration against a known amount of contaminant, the number of counts stored in the energy bins relevant to a peak gives the impurity content of the sample. The gross count $n_{{\rm G}}$ recorded in a given time interval in any bin of the multichannel analyzer includes a background count $n_{{\rm B}}$; in addition, owing to the high purity of the Si sample, for almost all the elements, the net count, if any, is deeply buried in the background. To extract all the available information, we assume that the $n_{{\rm G}}$ and $n_{{\rm B}}$ data are independent random numbers. Hence, had the background and gross signals been $\Lambda_{{\rm B}}$ and $\Lambda_{{\rm G}}$, their sampling statistics, $$\label{samplingB} {{\rm P}}_{{{\rm G}},{{\rm B}}}(n_{{{\rm G}},{{\rm B}}}|\Lambda_{{{\rm G}},{{\rm B}}}) = \frac{\Lambda_{{{\rm G}},{{\rm B}}}^{n_{{{\rm G}},{{\rm B}}}} \rme^{-\Lambda_{{{\rm G}},{{\rm B}}}}}{n_{{{\rm G}},{{\rm B}}}!}$$ are Poisson distributions having means $\Lambda_{{\rm B}}$ and $\Lambda_{{\rm G}}$ and their joint sampling distribution is $$\label{sampling} {{\rm P}}_{{{\rm B}}{{\rm G}}}(n_{{\rm B}},n_{{\rm G}}|\Lambda_{{\rm B}},\Lambda_{{\rm G}}) = \frac{\Lambda_{{\rm B}}^{n_{{\rm B}}} \Lambda_{{\rm G}}^{n_{{\rm G}}} \rme^{-(\Lambda_{{\rm B}}+ \Lambda_{{\rm G}})} }{n_{{\rm B}}! n_{{\rm G}}!} .$$The problem is, firstly, to decide between the detected and non-detected statements and, secondly, to quantify the net signal $\Lambda=\Lambda_{{\rm G}}-\Lambda_{{\rm B}}$ or its detection limit. ![Left: 95% critical limit (lower curve) and 5% detection limit (upper curve) calculated according to Currie’s construction for a net signal buried in the background noise. The shaded area is the 95% quantile of the background noise. Right: 5% (lower line) and 95% (upper line) quantiles for $n =n_{{\rm G}}- n_{{\rm B}}$, when $\Lambda_{{\rm B}}=20$. 
The arrow indicates Neyman’s 90% confidence interval for $\Lambda$ when $n=15$.[]{data-label="interval"}](LCLD.eps "fig:"){width="65mm"} ![](interval.eps "fig:"){width="65mm"} Classical analysis ================== Currie’s construction of the detection limit is as follows [@Currie:1968]. The distribution of the minimum-variance unbiased estimate $n = n_{{\rm G}}- n_{{\rm B}}$ of $\Lambda$ is the Skellam probability density [@Irwin:1937; @Skellam:1946] $$\label{Skellam} {\rm Pdf}_{\rm Skl}(n|\Lambda_{{\rm G}},\Lambda_{{\rm B}}) = \rme^{-(\Lambda_{{\rm G}}+\Lambda_{{\rm B}})} {\rm I}_{n} \left( 2\sqrt{\Lambda_{{\rm G}}\Lambda_{{\rm B}}} \right) \sqrt{\left(\frac{\Lambda_{{\rm G}}}{\Lambda_{{\rm B}}}\right)^n} ,$$ where ${\rm I}_{n}(x)$ is the modified Bessel function of the first kind and the mean and variance of the net count $n$ are $\langle n \rangle = \Lambda_{{\rm G}}-\Lambda_{{\rm B}}=\Lambda$ and $\sigma^2_{n}=\Lambda_{{\rm G}}+\Lambda_{{\rm B}}$, respectively. Hence, provided $\Lambda_{{\rm B}}$ is known – which is a crucial assumption – the critical limit, $L_C=\lceil x \rceil$, is the smallest integer greater than or equal to the solution of $$\label{CL} {\rm Cdf}_{\rm Skl}(x|\Lambda_{{\rm B}},\Lambda_{{\rm B}}) = \alpha ,$$ where ${\rm Cdf}_{\rm Skl}$ is the cumulative distribution of ${\rm Pdf}_{\rm Skl}$ and, for instance, $\alpha=0.95$. 
Therefore, if the net signal is zero and the gross count is only background, a net count greater than $L_C$ would be obtained with a low probability $1-\alpha$. The net signal is assumed detected if $n > L_C$ and non-detected otherwise. The detection limit, $L_D=x$, is the solution of $$\label{DL} {\rm Cdf}_{\rm Skl}(L_C|\Lambda_{{\rm B}}+x,\Lambda_{{\rm B}}) = \beta ,$$ where, for instance, $\beta=0.05$. It is worth noting that prior knowledge of the background signal $\Lambda_{{\rm B}}$ is again assumed. The figure \[interval\] (left) illustrates the procedure; if the net signal is more than $L_D$, at least 95% of the net counts are more than $L_C$. We can circumvent the need to know $\Lambda_{{\rm B}}$ in advance by using Neyman’s construction [@Neyman:1935; @Neyman:1937]; a review can be found in [@Feldman:1998]. Actually, this construction produces confidence regions for the $(\Lambda_{{\rm B}}, \Lambda)$ pair, but, for the sake of simplicity, we fix the value of $\Lambda_{{\rm B}}$ and calculate a confidence interval for the net signal alone. To this end, following Neyman, we introduce a pair of continuous and monotonic functions of $\Lambda$, $n_1(\Lambda)$ and $n_2(\Lambda)$, so chosen that $[n_1,n_2]$ is an $\alpha$-interval for $n$. That is, $${\rm Cdf}_{\rm Skl}(n_2|\Lambda_{{\rm B}}+\Lambda,\Lambda_{{\rm B}}) - {\rm Cdf}_{\rm Skl}(n_1|\Lambda_{{\rm B}}+\Lambda,\Lambda_{{\rm B}}) = \alpha .$$ Provided the net count is in the domain of the inverse functions $\Lambda_1=n_2^{-1}(n)$ and $\Lambda_2=n_1^{-1}(n)$, $${\rm Prob}\big( \Lambda \in [\Lambda_1,\Lambda_2] | \Lambda \big) = \alpha ,$$ by construction and whatever the measurand value may be. Hence, $[\Lambda_1,\Lambda_2]$ is the sought $\alpha$-interval. The figure \[interval\] (right) illustrates the procedure in the case when $\alpha=0.90$ [@Feldman:1998]. 
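Both Currie limits are easy to obtain from a library implementation of the Skellam distribution. The sketch below is a minimal illustration, assuming scipy is available and using the illustrative background signal $\Lambda_{{\rm B}}=20$ of figure \[interval\]; it solves Eq. (\[CL\]) by scanning the integers and Eq. (\[DL\]) by root finding.

```python
from scipy.optimize import brentq
from scipy.stats import skellam

lam_b, alpha, beta = 20.0, 0.95, 0.05

# critical limit L_C, Eq. (CL): smallest integer whose cumulative
# probability under the null hypothesis (no net signal) reaches alpha
lc = 0
while skellam.cdf(lc, lam_b, lam_b) < alpha:
    lc += 1

# detection limit L_D, Eq. (DL): net signal x such that a net count
# above L_C is obtained with probability 1 - beta
ld = brentq(lambda x: skellam.cdf(lc, lam_b + x, lam_b) - beta, 0.0, 50.0 * lam_b)
```

Repeating the computation as $\Lambda_{{\rm B}}$ varies traces the pair of curves in the left panel of figure \[interval\].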
According to Neyman’s viewpoint, in a long series of repeated measurements with fixed gross $\Lambda_{{\rm G}}$ and background $\Lambda_{{\rm B}}$ values, 90% of the intervals calculated as indicated by the arrow will contain the measurand $\Lambda=\Lambda_{{\rm G}}-\Lambda_{{\rm B}}$. Conceptual limits ----------------- Currie’s constructions of the critical and detection limits rely on the prior knowledge of the background signal, which is not available. In practice, the background count $n_{{\rm B}}$ substitutes for $\Lambda_{{\rm B}}$, but this does not remove the conceptual difficulty. An alternative is to use the net count $n=n_{{\rm G}}-n_{{\rm B}}$ to determine the Neyman upper limit of the net signal. However, since the sampling distribution of $n$ depends also on $\Lambda_{{\rm B}}$, the Neyman upper limit is still a function of the background signal. Additional troubles arise when $n_{{\rm G}}<n_{{\rm B}}$, because the unbiased and minimum-variance estimate of $\Lambda$ is negative and unphysical [@Feldman:1998]. Bayesian analysis ================= The problems inherent in the classical analysis can be solved by Bayesian inferences. 
They are based on the product rule of probabilities $$\label{p-rule}\fl \digamma_{{{\rm B}}{{\rm G}}}(\Lambda_{{\rm B}},\Lambda_{{\rm G}}|n_{{\rm B}},n_{{\rm G}}) Z_{{{\rm B}}{{\rm G}}}(n_{{\rm B}},n_{{\rm G}}) = {{\rm P}}_{{{\rm B}}{{\rm G}}}(n_{{\rm B}},n_{{\rm G}}|\Lambda_{{\rm B}},\Lambda_{{\rm G}}) \pi(\Lambda_{{\rm B}},\Lambda_{{\rm G}}),$$ where $\pi(\Lambda_{{\rm B}},\Lambda_{{\rm G}})$ is the joint probability distribution of the signal values before the data are available, $\digamma_{{{\rm B}}{{\rm G}}}(\Lambda_{{\rm B}},\Lambda_{{\rm G}}|n_{{\rm B}},n_{{\rm G}})$ is the joint probability distribution of the signal values – given the signal-plus-background hypothesis – after the data were collected, the likelihood that the signals are $\Lambda_{{\rm B}}$ and $\Lambda_{{\rm G}}$ is the sampling distribution ${{\rm P}}_{{{\rm B}}{{\rm G}}}(n_{{\rm B}},n_{{\rm G}}|\Lambda_{{\rm B}},\Lambda_{{\rm G}})$ evaluated at $n_{{\rm B}}$ and $n_{{\rm G}}$, and the evidence of the data model is the probability distribution $Z_{{{\rm B}}{{\rm G}}}(n_{{\rm B}},n_{{\rm G}})$ of the data, no matter what the signals may be. Pre-data distribution --------------------- A key step to calculate $\digamma_{{{\rm B}}{{\rm G}}}(\Lambda_{{\rm B}},\Lambda_{{\rm G}}|n_{{\rm B}},n_{{\rm G}})$ is to assign the density $\pi(\Lambda_{{\rm B}},\Lambda_{{\rm G}})$ at the $(\Lambda_{{\rm B}},\Lambda_{{\rm G}})$ points of the signal-value space before the measurement results are available. In fact, the only way to assign probabilities to the signal values consistent with the measurement results is to update, according to the Bayes theorem, the assignments made before the data are at hand. These prior assignments must embed all the information available, but, to avoid inferences affected by non-available data, no more than this information must be used. 
By using the product rule of the probability algebra we can write $$\pi(\Lambda_{{\rm B}},\Lambda_{{\rm G}}) = \pi_{{\rm G}}(\Lambda_{{\rm G}}|\Lambda_{{\rm B}})\pi_{{\rm B}}(\Lambda_{{\rm B}}) ,$$ where $\pi_{{\rm B}}(x)$ and $\pi_{{\rm G}}(x)$ have the same functional form, say, $\pi(x)$, both the signals are strictly positive, and $\Lambda_{{\rm G}}\ge \Lambda_{{\rm B}}$. Ultimately, $\pi$ must be uninformative. Therefore, we impose scale invariance [@Jaynes:1968]. Hence, if $\pi(x)=f(x)$, then $\pi'(kx)=f(kx)$ no matter what the $k$ value may be, where $\pi'(kx)=f(x)/k$ is the probability distribution of $x'=kx$. This ensures that the functional form of $\pi$ is independent of the duration of the counting interval. The reason is that, otherwise, we will embed into $\pi$ – through a specific $f$-choice – information about this duration. Scale invariance limits the $\pi$ choice to the Jeffreys distribution $\pi(x) \propto 1/x$. On the $[0,\infty)$ support, this distribution is not normalizable; therefore, we limit its support to $0 < \Lambda_{\min} < \Lambda_{{\rm B}}< \Lambda_{\max}$ and $\Lambda_{{\rm B}}\le \Lambda_{{\rm G}}< \Lambda_{\max}$ so that $$\fl\label{prior:0} \pi(\Lambda_{{\rm B}},\Lambda_{{\rm G}}) = \frac {{{\rm If}}(\Lambda_{\min} < \Lambda_{{\rm B}}< \Lambda_{\max}){{\rm If}}(\Lambda_{{\rm B}}\le \Lambda_{{\rm G}}< \Lambda_{\max})} {\Lambda_{{\rm B}}\Lambda_{{\rm G}}\ln(\Lambda_{\max}/\Lambda_{\min}) \ln(\Lambda_{\max}/\Lambda_{{\rm B}}) } ,$$ where ${{\rm If}}(\Box)$ is one if its argument is true and zero otherwise. 
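That (\[prior:0\]) is a properly normalized density on its truncated support can be confirmed by direct numerical integration; the sketch below is an illustration only, with arbitrary support bounds and scipy assumed.

```python
import math
from scipy.integrate import dblquad

lam_min, lam_max = 0.1, 100.0   # illustrative support bounds

def prior(lam_g, lam_b):
    # pi(Lambda_B, Lambda_G) of Eq. (prior:0); dblquad wants the inner
    # variable (here Lambda_G) as the first argument
    return 1.0 / (lam_b * lam_g
                  * math.log(lam_max / lam_min) * math.log(lam_max / lam_b))

# Lambda_B runs over (lam_min, lam_max), Lambda_G over (Lambda_B, lam_max)
total, err = dblquad(prior, lam_min, lam_max, lambda lam_b: lam_b, lam_max)
```

The inner $\Lambda_{{\rm G}}$ integral cancels the $\ln(\Lambda_{\max}/\Lambda_{{\rm B}})$ factor exactly, so the double integral evaluates to one for any choice of the bounds.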
Since this distribution does not allow us to calculate analytically the normalization integrals we will find in the following, it will be approximated as $$\fl\label{prior} \pi(\Lambda_{{\rm B}},\Lambda_{{\rm G}}) = \frac {2{{\rm If}}(\Lambda_{\min} < \Lambda_{{\rm B}}< \Lambda_{\max}){{\rm If}}(\Lambda_{{\rm B}}\le \Lambda_{{\rm G}}< \Lambda_{\max})} {\Lambda_{{\rm B}}\Lambda_{{\rm G}}\ln^2(\Lambda_{\max}/\Lambda_{\min})} .$$ The limits for the support extending from zero to infinity will be discussed where appropriate. The pre-data distribution of the net signal is the marginal distribution $$\begin{aligned} \fl \nonumber \pi_S(\Lambda) &= &\int_0^\infty \!\!\! \int_{\Lambda_{{\rm B}}}^\infty \delta(\Lambda_{{\rm G}}-\Lambda_{{\rm B}}-\Lambda) \pi(\Lambda_{{\rm B}},\Lambda_{{\rm G}})\, \rmd \Lambda_{{\rm B}}\rmd \Lambda_{{\rm G}}\nonumber \\ \fl \label{pre-data} &= &\frac {2\left[ \ln(\Lambda_{\max}-\Lambda) + \ln(\Lambda_{\min}+\Lambda) - \ln(\Lambda_{\max}\Lambda_{\min}) \right]} {\Lambda \ln^2(\Lambda_{\max}/\Lambda_{\min})} ,\end{aligned}$$ where $0 \le \Lambda < \Lambda_{\max}-\Lambda_{\min}$ and the Dirac delta function $\delta(\Lambda_{{\rm G}}-\Lambda_{{\rm B}}-\Lambda)$ is the distribution of $\Lambda$ conditional on the $\Lambda_{{\rm B}}$ and $\Lambda_{{\rm G}}$ values [@Chakraborty:2008]. Post-data distributions ----------------------- By combining (\[sampling\]), (\[p-rule\]), and (\[prior\]), the joint probability distribution of the background and gross signals after the data have been collected is $$\fl\label{post} \digamma_{{{\rm B}}{{\rm G}}}(\Lambda_{{\rm B}},\Lambda_{{\rm G}}|n_{{\rm B}},n_{{\rm G}}) = \frac {n_{{\rm B}}\Lambda_{{\rm B}}^{n_{{\rm B}}-1} \Lambda_{{\rm G}}^{n_{{\rm G}}-1} \rme^{-(\Lambda_{{\rm B}}+\Lambda_{{\rm G}})}} {(n_{{\rm B}}+n_{{\rm G}}-1)! 
\, _2F_1(n_{{\rm B}},n_{{\rm B}}+n_{{\rm G}};n_{{\rm B}}+1;-1)} ,$$ where $_2F_1(a,b;c;z)$ is the hypergeometric function, $Z_{{{\rm B}}{{\rm G}}}$ has been obtained by normalization, $[\Lambda_{\min},\Lambda_{\max}]$ has been chosen large enough that the integration limits can be extended from zero to infinity, $n_{{\rm B}}> 0$, $n_{{\rm G}}> 0$, and $\Lambda_{{\rm G}}\ge \Lambda_{{\rm B}}> 0$. The post-data distribution of the net signal is the marginal distribution, $$\begin{aligned} \label{d-marginal}\fl \digamma_S(\Lambda|n_{{\rm B}},n_{{\rm G}}) &= &\int_0^\infty \!\!\! \int_{\Lambda_{{\rm B}}}^\infty \delta(\Lambda_{{\rm G}}-\Lambda_{{\rm B}}-\Lambda) \digamma_{{{\rm B}}{{\rm G}}}(\Lambda_{{\rm B}},\Lambda_{{\rm G}}|n_{{\rm B}},n_{{\rm G}})\, \rmd \Lambda_{{\rm B}}\rmd \Lambda_{{\rm G}}\nonumber \\ &= &\int_0^\infty \frac {n_{{\rm B}}\Lambda_{{\rm B}}^{n_{{\rm B}}-1} (\Lambda_{{\rm B}}+\Lambda)^{n_{{\rm G}}-1} \rme^{-(2\Lambda_{{\rm B}}+\Lambda)} \, \rmd \Lambda_{{\rm B}}} {(n_{{\rm B}}+n_{{\rm G}}-1)! \, _2F_1(n_{{\rm B}},n_{{\rm B}}+n_{{\rm G}};n_{{\rm B}}+1;-1)} \nonumber \\ &= &\frac{n_{{\rm B}}!\, \rme^{-\Lambda} \Lambda^{n_{{\rm B}}+n_{{\rm G}}-1} {\rm U}(n_{{\rm B}}, n_{{\rm B}}+n_{{\rm G}}, 2\Lambda) }{ (n_{{\rm B}}+n_{{\rm G}}-1)! \, _2F_1(n_{{\rm B}},n_{{\rm B}}+n_{{\rm G}};n_{{\rm B}}+1;-1)} ,\end{aligned}$$ where $n_{{\rm B}}> 0$, $n_{{\rm G}}> 0$, $\Lambda > 0$, and ${\rm U}(a,b,z)$ is the confluent hypergeometric function. Representative values – for example, the mode, mean, or median – and confidence intervals can be calculated from (\[d-marginal\]), but, contrary to a Neyman interval, a Bayesian interval is such that, in a long series of repeated measurements of different net signals giving the same net count $n$, a given fraction of the net signals lies in it. 
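The normalization of the marginal can be checked without the confluent-hypergeometric machinery: integrating the unnormalized posterior over both $\Lambda_{{\rm B}}$ and $\Lambda$ must reproduce the normalizer $(n_{{\rm B}}+n_{{\rm G}}-1)!\,_2F_1(n_{{\rm B}},n_{{\rm B}}+n_{{\rm G}};n_{{\rm B}}+1;-1)$. The sketch below does so for illustrative counts, assuming scipy.

```python
import math
from scipy.integrate import dblquad
from scipy.special import hyp2f1

nb, ng = 3, 5   # illustrative counts

def kernel(lam_b, lam):
    # unnormalized posterior of the net signal, i.e. the Lambda_B
    # integrand of (d-marginal); inner variable (Lambda_B) first
    return (nb * lam_b**(nb - 1) * (lam_b + lam)**(ng - 1)
            * math.exp(-(2.0 * lam_b + lam)))

mass, err = dblquad(kernel, 0.0, math.inf, 0.0, math.inf)
norm = math.factorial(nb + ng - 1) * hyp2f1(nb, nb + ng, nb + 1, -1.0)
```

The two numbers agree to quadrature accuracy, so the marginal integrates to one.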
Model selection --------------- The no-signal hypothesis means that the joint sampling distribution of the data is $$\label{onlyB} {{\rm P}}_{BB}(n_{{\rm G}},n_{{\rm B}}|\Lambda_{{\rm B}}) = \frac{ \Lambda_{{\rm B}}^{n_{{\rm B}}+n_{{\rm G}}} \rme^{-2\Lambda_{{\rm B}}} }{n_{{\rm B}}! n_{{\rm G}}!} .$$ Consequently, given the no-signal hypothesis, the post-data probability distribution of the background signal is $$\label{post-env} \digamma_{BB}(\Lambda_{{\rm B}}|n_{{\rm G}},n_{{\rm B}}) = \frac{2^{n_{{\rm B}}+n_{{\rm G}}} \Lambda_{{\rm B}}^{n_{{\rm B}}+n_{{\rm G}}-1} \rme^{-2\Lambda_{{\rm B}}} }{(n_{{\rm B}}+n_{{\rm G}}-1)!} ,$$ where $Z_{BB}$ has been obtained by normalization, $[\Lambda_{\min},\Lambda_{\max}]$ has been chosen large enough that the integration limits can be extended from zero to infinity, $\Lambda_{{\rm B}}> 0$, and $n_{{\rm B}}+n_{{\rm G}}>0$. To choose between the signal-plus-background and no-signal hypotheses, $H_{{{\rm B}}{{\rm G}}}$ and $H_{BB}$, that is, between the joint sampling distributions (\[sampling\]) and (\[onlyB\]), we need the probability that each hypothesis is true given $n_{{\rm B}}$ and $n_{{\rm G}}$ [@Mana:2012]. On the assumption that, before the data are available, the probabilities of the two hypotheses are the same, the post-data probabilities of $H_{{{\rm B}}{{\rm G}}}$ and $H_{BB}$ are proportional through the same factor to the evidences $$\begin{aligned} \fl \nonumber Z_{{{\rm B}}{{\rm G}}} &= &\int_{\Lambda_{\min}}^{\Lambda_{\max}} \!\!\! \int_{\Lambda_{{\rm B}}}^{\Lambda_{\max}} \frac {2 \Lambda_{{\rm B}}^{n_{{\rm B}}-1} \Lambda_{{\rm G}}^{n_{{\rm G}}-1} \rme^{-(\Lambda_{{\rm B}}+ \Lambda_{{\rm G}})}\, \rmd\Lambda_{{\rm B}}\rmd\Lambda_{{\rm G}}} {n_{{\rm B}}! \,n_{{\rm G}}!\, \ln^2(\Lambda_{\max}/\Lambda_{\min})} \\ \fl \label{HSB} &= &\frac{2(n_{{\rm B}}+n_{{\rm G}}-1)! \, _2F_1(n_{{\rm B}},n_{{\rm B}}+n_{{\rm G}};n_{{\rm B}}+1;-1)}{n_{{\rm B}}n_{{\rm B}}! n_{{\rm G}}! 
\ln^2(\Lambda_{\max}/\Lambda_{\min})}\end{aligned}$$ and $$\fl\label{HB} Z_{BB} = \int_{\Lambda_{\min}}^{\Lambda_{\max}} \frac{\Lambda_{{\rm B}}^{n_{{\rm B}}+n_{{\rm G}}-1}\rme^{-2\Lambda_{{\rm B}}} \, \rmd\Lambda_{{\rm B}}} {n_{{\rm B}}!\, n_{{\rm G}}!\, \ln(\Lambda_{\max}/\Lambda_{\min})} = \frac{(n_{{\rm B}}+n_{{\rm G}}-1)! } {2^{n_{{\rm B}}+n_{{\rm G}}} n_{{\rm B}}!\, n_{{\rm G}}!\, \ln(\Lambda_{\max}/\Lambda_{\min}) } ,$$where $[\Lambda_{\min},\Lambda_{\max}]$ has been chosen large enough that the integration limits can be extended from zero to infinity and $n_{{\rm B}}>0$, $n_{{\rm G}}>0$. In (\[HSB\]) and (\[HB\]), $\ln^2(\Lambda_{\max}/\Lambda_{\min})$ and $\ln(\Lambda_{\max}/\Lambda_{\min})$ are Ockham’s penalties for the size of the signal space [@Jaynes; @McKay; @Gregory; @Sivia]. Hence, $$\begin{aligned} \fl {\rm Prob}(H_{{{\rm B}}{{\rm G}}}|n_{{\rm B}},n_{{\rm G}}) &= &\frac{Z_{{{\rm B}}{{\rm G}}}}{Z_{{{\rm B}}{{\rm G}}} + Z_{BB}} \\ \nonumber &= &\frac{2^{n_{{\rm B}}+n_{{\rm G}}+1}\, _2F_1(n_{{\rm B}},n_{{\rm B}}+n_{{\rm G}};n_{{\rm B}}+1;-1)} {2^{n_{{\rm B}}+n_{{\rm G}}+1}\, _2F_1(n_{{\rm B}},n_{{\rm B}}+n_{{\rm G}};n_{{\rm B}}+1;-1) + n_{{\rm B}}\, \ln(\Lambda_{\max}/\Lambda_{\min})}\end{aligned}$$ and $$\begin{aligned} \fl {\rm Prob}(H_{BB}|n_{{\rm B}},n_{{\rm G}}) &= &\frac{Z_{BB}}{Z_{{{\rm B}}{{\rm G}}} + Z_{BB}} \\ \nonumber &= &\frac{n_{{\rm B}}\, \ln(\Lambda_{\max}/\Lambda_{\min})} {2^{n_{{\rm B}}+n_{{\rm G}}+1}\, _2F_1(n_{{\rm B}},n_{{\rm B}}+n_{{\rm G}};n_{{\rm B}}+1;-1) + n_{{\rm B}}\, \ln(\Lambda_{\max}/\Lambda_{\min})} .\end{aligned}$$ The support of the pre-data distribution must be bounded by a non-null lower limit and a finite upper limit; otherwise, ${\rm Prob}(H_{BB}|n_{{\rm B}},n_{{\rm G}})$ tends to one and ${\rm Prob}(H_{{{\rm B}}{{\rm G}}}|n_{{\rm B}},n_{{\rm G}})$ tends to zero. 
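Away from these degenerate limits, the selection probabilities are direct to evaluate with a library hypergeometric function; the sketch below (illustrative counts and support bounds, scipy assumed) compares a case with a clear excess of gross counts to one with equal counts.

```python
import math
from scipy.special import hyp2f1

def prob_signal(nb, ng, lam_min, lam_max):
    # Prob(H_BG | n_B, n_G), assuming equal pre-data probabilities
    # for the two hypotheses
    f = 2.0**(nb + ng + 1) * hyp2f1(nb, nb + ng, nb + 1, -1.0)
    return f / (f + nb * math.log(lam_max / lam_min))

# illustrative counts and prior support
p_signal = prob_signal(5, 25, 0.1, 1.0e3)   # gross count well above background
p_null = prob_signal(25, 25, 0.1, 1.0e3)    # equal counts
```

For the first pair of counts the signal-plus-background hypothesis dominates, while for equal counts the Ockham penalty makes the no-signal hypothesis the more probable one.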
This paradoxical result is caused by the larger parameter space of the $H_{{{\rm B}}{{\rm G}}}$ hypothesis and, consequently, its larger Ockham’s penalty. This could appear to be a limitation; however, the $[\Lambda_{\min},\Lambda_{\max}]$ interval can be chosen on the basis of the background information. In addition, from a numerical viewpoint, the logarithm maps even huge $[\Lambda_{\min},\Lambda_{\max}]$ intervals into modest Ockham’s penalties.

| element | reaction | energy (keV) | $n_{{\rm B}}$ (counts) | $n_{{\rm G}}$ (counts) | $n$ (counts) | $L_C$ (counts) | $L_D$ (counts) |
|---------|----------|--------------|------|------|------|------|------|
| Au | $^{197}$Au$({{\rm n}},\gamma)$ $^{198}$Au | 411.67 | 324 | 500 | 176 | 42 | 88 |
| La | $^{139}$La$({{\rm n}},\gamma)$ $^{140}$La | 487.02 | 306 | 284 | $-22$ | 41 | 85 |
| As | $^{75}$As$({{\rm n}},\gamma)$ $^{76}$As | 559.10 | 296 | 311 | 15 | 40 | 84 |

Application example
===================

As an application example, we consider the measurements of the amounts of Au, La, and As in a sample of the natural silicon crystal WASO04 by neutron activation analysis [@DAgostino:2012]. Zooms of the emission spectra in the neighbourhoods of the channels corresponding to the energies of the $\gamma$ rays emitted in the de-excitation of the activated nuclei are shown in Fig. \[spectra\]. All the photons collected in the bins included in each peak (chosen as five times the calibrated full peak half width) were added to obtain the gross counts. The background counts were estimated by adding all the photons collected in an equal number of tail channels fairly subdivided between the left and right tails. The relevant reactions, peak energies, and background, gross, and net counts are given in table \[AsAu\]. The 95% critical and detection limits have been calculated according to Currie’s constructions (\[CL\]) and (\[DL\]); they are shown in table \[AsAu\].
Their meanings are as follows: if the net signal is zero, the probability that the net count is less than $L_C$ is 0.95; if the net signal is more than $L_D$, the probability that the net count is more than $L_C$ is 0.95. Accordingly, only a gold contamination has been detected. ![Zooms of the emission spectra in the neighbourhoods of the channels (indicated by the arrows) corresponding to the energies of the $\gamma$ rays emitted in the de-excitation of the activated Au, La, and As nuclei. The shaded areas indicate the peak widths. The horizontal lines indicate the background counts, as estimated from the peak tails; the line lengths indicate the tail-channels considered.[]{data-label="spectra"}](Au-spectrum.eps "fig:"){width="65mm"} ![Zooms of the emission spectra in the neighbourhoods of the channels (indicated by the arrows) corresponding to the energies of the $\gamma$ rays emitted in the de-excitation of the activated Au, La, and As nuclei. The shaded areas indicate the peak widths. The horizontal lines indicate the background counts, as estimated from the peak tails; the line lengths indicate the tail-channels considered.[]{data-label="spectra"}](La-spectrum.eps "fig:"){width="65mm"} ![Zooms of the emission spectra in the neighbourhoods of the channels (indicated by the arrows) corresponding to the energies of the $\gamma$ rays emitted in the de-excitation of the activated Au, La, and As nuclei. The shaded areas indicate the peak widths. The horizontal lines indicate the background counts, as estimated from the peak tails; the line lengths indicate the tail-channels considered.[]{data-label="spectra"}](As-spectrum.eps "fig:"){width="65mm"} ![Left: sampling-distributions of the unbiased minimum-variance estimates of the net signal for Au, La, and As. Right: post-data distributions of the net signals.
The filled curve is the pre-data distribution (\[pre-data\]).[]{data-label="marginal"}](Au-La-As-net-counts-sampling.eps "fig:"){width="67mm"} ![Left: sampling-distributions of the unbiased minimum-variance estimates of the net signal for Au, La, and As. Right: post-data distributions of the net signals. The filled curve is the pre-data distribution (\[pre-data\]).[]{data-label="marginal"}](Au-La-As-marginal.eps "fig:"){width="65mm"} The unbiased minimum-variance estimates of the gold, lanthanum, and arsenic net-signals are $n({{\rm Au}})=n_{{\rm G}}({{\rm Au}})-n_{{\rm B}}({{\rm Au}})$, $n({{\rm La}})=n_{{\rm G}}({{\rm La}})-n_{{\rm B}}({{\rm La}})$, and $n({{\rm As}})=n_{{\rm G}}({{\rm As}})-n_{{\rm B}}({{\rm As}})$; they are given in table \[results\] together with the relevant standard deviations. The standard deviations have been calculated by using the Skellam distribution (\[Skellam\]), where the estimates $n_{{\rm B}}$ and $n_{{\rm G}}$ of the background and gross signals have been used. The hypothetical Skellam sampling-distributions of the net counts are shown in Fig. \[marginal\] (left). To calculate the actual sampling distributions would require knowing the background- and gross-signal values in advance; in Fig. \[marginal\], they were set equal to the background- and gross-signal counts with the exception of the $n({{\rm La}})$ distribution, where both were set equal to the $[n_{{\rm G}}({{\rm La}})+n_{{\rm B}}({{\rm La}})]/2$ mean. It is worth noting that $n({{\rm La}})$, though a perfectly legitimate unbiased estimate of $\Lambda({{\rm La}})$, is negative and non-physical. Table \[results\] also gives the 95% Neyman upper-limits of the net signals, which have been calculated for $\Lambda_{{\rm B}}=n_{{\rm B}}$. Their meaning is as follows: in a large set of measurement repetitions, 95% of upper limits so calculated are more than the net signal.
In this frequency-of-occurrence sense, the probability that the net signal is less than the Neyman upper-limits is 0.95. ![Contour plots of the joint post-data distributions of the background and gross signals for Au, La, and As. The white areas are excluded by the prior information $\Lambda_{{\rm G}}\ge \Lambda_{{\rm B}}$.[]{data-label="contour"}](Au-contour.eps "fig:"){width="65mm"} ![Contour plots of the joint post-data distributions of the background and gross signals for Au, La, and As. The white areas are excluded by the prior information $\Lambda_{{\rm G}}\ge \Lambda_{{\rm B}}$.[]{data-label="contour"}](La-contour.eps "fig:"){width="65mm"} ![Contour plots of the joint post-data distributions of the background and gross signals for Au, La, and As. The white areas are excluded by the prior information $\Lambda_{{\rm G}}\ge \Lambda_{{\rm B}}$.[]{data-label="contour"}](As-contour.eps "fig:"){width="65mm"} The Bayesian joint post-data distributions of the background and gross signals are shown in Fig. \[contour\], where the support of the pre-data distribution is from $\Lambda_{\min}=10^{-4}$ to $\Lambda_{\max}=10^4$. The relevant marginal distributions of the net signals are given in Fig. \[marginal\] (right). For a comparison, Fig. \[marginal\] also shows the pre-data distribution (\[pre-data\]). The probabilities of both the detected and non-detected statements have been calculated according to the evidence of the relevant data models; the results are given in table \[evidences\]. The gold contamination is evident; the lanthanum and arsenic contaminations are very uncertain. Table \[results\] gives the median of the possible net-signal values together with the 25% and 75% quantiles. This table also gives the 95% Bayesian upper-limits of the net signals, whose meaning is as follows: in a large set of measurement repetitions giving the same background and gross counts, 95% of the net signals (in principle, different) are less than the limit so calculated.
It must be noted that the Bayesian median of the possible $\Lambda({{\rm La}})$ values is positive; further discussions of the Bayesian inference of a positive quantity from a negative measurement result can be found in [@Calonico:2009a; @Calonico:2009b].

| element | Bayesian median (counts) | Bayesian 95% limit (counts) | $n$ (counts) | Neyman 95% limit (counts) |
|---------|------|------|------|------|
| Au | $176_{-19}^{+19}$ | $<223$ | 176(29) | $<225$ |
| La | $10_{-6}^{+9}$ | $<35$ | $-22(24)$ | $<20$ |
| As | $24_{-11}^{+14}$ | $<59$ | 15(25) | $<57$ |

| hypothesis | $Z$ (Au) | Prob. (Au) | $Z$ (La) | Prob. (La) | $Z$ (As) | Prob. (As) |
|------------|------|------|------|------|------|------|
| detected | $3.6\times 10^{-8}$ | 100% | $1.2\times 10^{-8}$ | 1% | $4.7\times 10^{-8}$ | 2% |
| non-detected | $1.1\times 10^{-14}$ | 0% | $2.0\times 10^{-6}$ | 99% | $2.4\times 10^{-6}$ | 98% |

Notwithstanding their different conceptual meanings – the median of the net-signal value-space versus a draw from an unbiased minimum-variance population of net-signal estimates – the Bayesian estimate and the frequency-of-occurrence measure of $\Lambda({{\rm Au}})$ are numerically the same. The same is true for the relevant Bayesian and Neyman confidence intervals, though the first refers to an ensemble of different net-signal values with the same background and gross counts, while the second refers to an ensemble of different intervals calculated from different background and gross counts but the same net-signal value. The reason is that both approaches rely on similar, quasi-Gaussian probability distributions and that the prior information was irrelevant. On the contrary, significant differences emerge when the net count approaches zero or is negative.

Conclusions
===========

We showed that probability calculus and Bayesian inference offer a solution to the problem of deciding between the signal-plus-background and no-signal hypotheses, when looking for quantities whose magnitude is comparable with the background noise of the measurement procedure.
Given the measurement results, once the probabilities of the detected and non-detected hypotheses have been calculated, optimal decisions follow. For instance, once the signal-plus-background model has been selected, a measurand value can be optimally chosen according to the post-data probabilities of its possible values. As regards the detection-limit estimate, the Neyman approach focuses attention on the data processing and is concerned with finding a statistic having pre-determined performance in the set of the results of repeated measurements of the same measurand. The Bayesian approach – which focuses attention on the measurand-value probabilities – is concerned with the set of different measurand values consistent with repeated measurements giving the same result. This work was jointly funded by the European Metrology Research Programme (EMRP) participating countries within the European Association of National Metrology Institutes (EURAMET) and the European Union.

References {#references .unnumbered}
==========

[99]{} McNaught A D and Wilkinson A 1997 IUPAC Compendium of Chemical Terminology, 2nd ed. (Blackwell Scientific Publications, Oxford) Currie L A 1968 Limits for Qualitative Detection and Quantitative Determination – Application to Radiochemistry [*Analytical Chemistry*]{} [**40**]{} 586-93 Neyman J 1935 On the problem of confidence intervals [*Ann. Math. Stat.*]{} [**6**]{} 111-6 Neyman J 1937 Outline of a theory of statistical estimation based on the classical theory of probability [*Philos. Trans. Roy. Soc. Ser.
A*]{} [**236**]{} 333-80 Jaynes E T 2003 Probability theory: The logic of science (Cambridge: Cambridge University Press) MacKay D J C 2003 Information Theory, Inference, and Learning Algorithms (Cambridge: Cambridge University Press) Gregory P C 2005 Bayesian Logical Data Analysis for the Physical Sciences (Cambridge: Cambridge University Press) Sivia D and Skilling J 2006 Data Analysis: A Bayesian Tutorial (Oxford: Oxford University Press) Mana G, Massa E, and Predescu M 2012 Model selection in the average of inconsistent data: an analysis of the measured Planck-constant values [*Metrologia*]{} [**49**]{} 492-500 Andreas B [*et al.*]{} 2011 Determination of the Avogadro constant by counting the atoms in a $^{28}$Si crystal [*Phys. Rev. Lett.*]{} [**106**]{} 030801 Andreas B [*et al.*]{} 2011 Counting the atoms in a $^{28}$Si crystal for a new kilogram definition [*Metrologia*]{} [**48**]{} S1-13 D’Agostino G, Bergamaschi L, Giordani L, Mana G, Massa E, and Oddone M 2012 Elemental characterization of the Avogadro silicon crystal WASO 04 by neutron activation analysis [*Metrologia*]{} [**49**]{} 696-701 Feldman G J and Cousins R D 1998 Unified approach to the classical statistical analysis of small signals [*Phys. Rev. D*]{} [**57**]{} 3873-89 Irwin J O 1937 The frequency distribution of the difference between two independent variates following the same Poisson distribution [*J. R. Stat. Soc. A*]{} [**100**]{} 415-16 Skellam J G 1946 The frequency distribution of the difference between two Poisson variates belonging to different populations [*J. R. Stat. Soc. A*]{} [**109**]{} 296-6 Jaynes E T 1968 Prior Probabilities [*IEEE Trans. Sys. Sci. Cybernetics*]{} [**4**]{} 227-41 Chakraborty S 2008 Some Applications of Dirac’s Delta Function in Statistics for More Than One Random Variable [*Appl.
Math.*]{} [**3**]{} 42-54 Calonico D, Levi F, Lorini L and Mana G 2009 Bayesian inference of a negative quantity from positive measurement results [*Metrologia*]{} [**46**]{} 267-71 Calonico D, Levi F, Lorini L and Mana G 2009 Bayesian estimate of the zero-density frequency of a Cs fountain [*Metrologia*]{} [**46**]{} 629-36
--- abstract: | The joint spectral radius of a finite set of real $d \times d$ matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the *finiteness property* if there exists a periodic product which achieves this maximal rate of growth. J. C. Lagarias and Y. Wang conjectured in 1995 that every finite set of real $d \times d$ matrices satisfies the finiteness property. However, T. Bousch and J. Mairesse proved in 2002 that counterexamples to the finiteness conjecture exist, showing in particular that there exists a family of pairs of $2 \times 2$ matrices which contains a counterexample. Similar results were subsequently given by V. D. Blondel, J. Theys and A. A. Vladimirov and by V. S. Kozyakin, but no explicit counterexample to the finiteness conjecture has so far been given. The purpose of this paper is to resolve this issue by giving the first completely explicit description of a counterexample to the Lagarias-Wang finiteness conjecture. Namely, for the set $$\mathsf{A}_{\alpha_*}:= \left\{\left(\begin{array}{cc}1&1\\0&1\end{array}\right), \alpha_*\left(\begin{array}{cc}1&0\\1&1\end{array}\right)\right\}$$ we give an explicit value of $$\alpha_* \simeq 0.749326546330367557943961948091344672091327370236064317358024\ldots$$ such that $\mathsf{A}_{\alpha_*}$ does not satisfy the finiteness property. address: - 'Department of Pure Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1.' - 'Dipartimento di Matematica, Università di Roma Tor Vergata, Via della Ricerca Scientifica, 00133 Roma, Italy.' - 'School of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, United Kingdom.' - 'BNP Paribas Fortis, 3, rue Montagne du Parc, B-1000 Bruxelles' author: - 'Kevin G. Hare' - 'Ian D. 
Morris' - Nikita Sidorov - Jacques Theys title: 'An explicit counterexample to the Lagarias-Wang finiteness conjecture' --- [^1] Introduction ============ If $A$ is a $d\times d$ real or complex matrix and $\|\cdot\|$ is a matrix norm, the spectral radius $\rho(A)$ of the matrix $A$ admits the well-known characterisation $$\rho(A) = \lim_{n \to \infty}\|A^n\|^{1/n},$$ a result known as Gelfand’s formula. The *joint spectral radius* generalises this concept to sets of matrices. Given a finite set of $d \times d$ real matrices $\mathsf{A} = \{A_1,\ldots,A_r\}$, we by analogy define the joint spectral radius $\varrho(\mathsf{A})$ to be the quantity $$\varrho(\mathsf{A}):=\limsup_{n \to \infty} \max\left\{\left\|A_{i_1}\cdots A_{i_n}\right\|^{1/n} \colon i_j \in \{1,\ldots,r\}\right\},$$ a definition introduced by G.-C. Rota and G. Strang in 1960 [@RS] (reprinted in [@Rotacoll]). Note that the pairwise equivalence of norms on finite dimensional spaces implies that the quantity $\varrho(\mathsf{A})$ is independent of the choice of norm used in the definition. The joint spectral radius has been found to arise naturally in a range of mathematical contexts including control and stability [@Ba; @DHX; @Gu; @Koz], coding theory [@MO], the regularity of wavelets and other fractal structures [@DL; @DL0; @Mae2; @Prot], numerical solutions to ordinary differential equations [@GZ], and combinatorics [@BCJ; @DST]. As such the problem of accurately estimating the joint spectral radius of a given finite set of matrices is a topic of ongoing research interest [@BN; @GWZ; @Koz4; @Koz5; @QBWF; @Parr; @BT1; @Wirth]. In this paper we study a property related to the computation of the joint spectral radius of a set of matrices, termed the *finiteness property*. A set of $d \times d$ real matrices $\mathsf{A}:=\{A_1,\ldots,A_r\}$ is said to satisfy the finiteness property if there exist integers $i_1,\ldots,i_n$ such that $\varrho(\mathsf{A}) = \rho(A_{i_1}\cdots A_{i_n})^{1/n}$. 
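The definition above already suggests a crude but rigorous way of bracketing $\varrho(\mathsf{A})$: for any fixed length $n$, the maximum of $\rho(P)^{1/n}$ over all products $P$ of $n$ matrices from $\mathsf{A}$ is a lower bound, while the maximum of $\|P\|^{1/n}$ for any submultiplicative norm is an upper bound. A minimal brute-force sketch of this standard bracketing; the diagonal test pair in the usage note is our own illustration, not taken from the paper:

```python
import cmath
from itertools import product

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as ((a, b), (c, d))."""
    return ((A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
            (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]))

def spectral_radius(A):
    """Spectral radius of a 2x2 matrix via the characteristic polynomial."""
    t = A[0][0] + A[1][1]                  # trace
    d = A[0][0]*A[1][1] - A[0][1]*A[1][0]  # determinant
    disc = cmath.sqrt(t*t - 4*d)
    return max(abs((t + disc) / 2), abs((t - disc) / 2))

def frob(A):
    """Frobenius norm, which is submultiplicative."""
    return sum(x*x for row in A for x in row) ** 0.5

def jsr_bounds(mats, n):
    """Bracket the joint spectral radius using all products of length n:
    max rho(P)^(1/n) <= varrho(mats) <= max ||P||_F^(1/n)."""
    lo = hi = 0.0
    for word in product(mats, repeat=n):
        P = word[0]
        for M in word[1:]:
            P = mat_mul(M, P)
        lo = max(lo, spectral_radius(P) ** (1.0 / n))
        hi = max(hi, frob(P) ** (1.0 / n))
    return lo, hi
```

For instance, for the pair $\{\mathrm{diag}(2,1),\mathrm{diag}(1,3)\}$ the joint spectral radius is $3$, and already at $n=8$ both bounds agree with it to many digits; for less trivial sets the two bounds close only slowly as $n$ grows, which is what makes the exact computation of $\varrho$ delicate.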
The *finiteness conjecture* of J. Lagarias and Y. Wang [@LW] asserted that every finite set of $d \times d$ real matrices has the finiteness property; a conjecture equivalent to this statement was independently posed by L. Gurvits in [@Gu], where it was attributed to E. S. Pyatnitskiĭ. In special cases, this finiteness property is known to be true, see for example [@Cicone; @Jungers]. The existence of counterexamples to the finiteness conjecture was established in 2002 by T. Bousch and J. Mairesse [@BM], with alternative constructions subsequently being given by V. Blondel, J. Theys and A. Vladimirov [@BTV] and V. S. Kozyakin [@Koz3]. However, in all three of these proofs it is shown only that a certain family of pairs of $2 \times 2$ matrices must contain a counterexample, and no explicit counterexample has yet been constructed. The problem of constructing an explicit counterexample has been remarked upon as difficult, with G. Strang commenting that an explicit counterexample may never be established [@Rotacoll]. In this paper, we resolve this issue by giving the first completely explicit construction of a counterexample to the Lagarias-Wang finiteness conjecture. Let us define a pair of $2 \times 2$ real matrices by $$A_0:=\left(\begin{array}{cc}1&1\\0&1\end{array}\right), \qquad A_1:=\left(\begin{array}{cc}1&0\\1&1\end{array}\right),$$ and for each $\alpha \in [0,1]$ let us define $\mathsf{A}_\alpha:=\{A_0,\alpha A_1\}$. The construction of Blondel-Theys-Vladimirov [@BTV] shows that there exists $\alpha \in [0,1]$ for which $\mathsf{A}_\alpha$ does not satisfy the finiteness property. The proof operates indirectly by demonstrating that the set of all parameter values $\alpha$ for which the finiteness property *does* hold is insufficient to cover the interval $[0,1]$. In this paper we extend [@BTV] substantially by describing the behaviour of $\varrho(\mathsf{A}_\alpha)$ as the parameter $\alpha$ is varied in a rather deep manner. 
This allows us to prove the following theorem: \[counter\] Let $(\tau_n)_{n=0}^\infty$ denote the sequence of integers defined by $\tau_0:=1$, $\tau_1,\tau_2:=2$, and $\tau_{n+1}:=\tau_n\tau_{n-1}-\tau_{n-2}$ for all $n \geq 2$,[^2] and let $(F_n)_{n=0}^\infty$ denote the sequence of Fibonacci numbers, defined by $F_0:=0$, $F_1:=1$ and $F_{n+1}:=F_n+F_{n-1}$ for all $n \geq 1$. Define a real number $\alpha_* \in (0,1]$ by $$\label{eq:alphastar} \alpha_*:=\lim_{n \to \infty} \left(\frac{\tau_n^{F_{n+1}}}{\tau_{n+1}^{F_n}}\right)^{(-1)^n}= \prod_{n=1}^\infty \left(1-\frac{\tau_{n-1}}{\tau_n \tau_{n+1}}\right)^{(-1)^n F_{n+1}}.$$ Then this infinite product converges unconditionally, and $\mathsf{A}_{\alpha_*}$ does not have the finiteness property. The convergence in both of the limits given in Theorem \[counter\] is extremely rapid, being of order $O\left(\exp(-\delta \phi^n)\right)$ where $\delta>0$ is some constant and $\phi$ is the golden ratio. An explicit error bound is given subsequently to the proof of Theorem \[counter\]. Using this bound we may compute the approximation $$\alpha_* \simeq 0.749326546330367557943961948091344672091327370236064317358024\ldots$$ which is rigorously accurate to all decimal places shown. We shall now briefly describe the technical results which underlie the proof of Theorem \[counter\]. For each $\alpha \in [0,1]$ let us write $A_0^{(\alpha)}:=A_0$ and $A_1^{(\alpha)}:=\alpha A_1$ so that $\mathsf{A}_\alpha=\left\{A_0^{(\alpha)},A_1^{(\alpha)}\right\}$. 
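The rapid convergence makes the constant of Theorem \[counter\] easy to evaluate numerically. The sketch below (our own; the function name is hypothetical) runs both recurrences with exact Python integers and accumulates the logarithm of the product in (\[eq:alphastar\]); since the terms decay like $\exp(-\delta\phi^n)$, twenty terms already exceed double precision:

```python
from math import exp, log1p

def alpha_star(terms=20):
    """Approximate alpha_* via
    log(alpha_*) = sum_{n>=1} (-1)^n F_{n+1} log(1 - tau_{n-1}/(tau_n tau_{n+1})),
    where tau_0 = 1, tau_1 = tau_2 = 2, tau_{n+1} = tau_n tau_{n-1} - tau_{n-2},
    and F_n are the Fibonacci numbers."""
    tau = [1, 2, 2]
    while len(tau) < terms + 2:
        tau.append(tau[-1] * tau[-2] - tau[-3])
    fib = [0, 1]
    while len(fib) < terms + 2:
        fib.append(fib[-1] + fib[-2])
    s = 0.0
    for n in range(1, terms + 1):
        # tau grows doubly exponentially, so this ratio quickly underflows
        # harmlessly to 0.0 and the remaining terms contribute nothing.
        ratio = tau[n - 1] / (tau[n] * tau[n + 1])
        s += (-1) ** n * fib[n + 1] * log1p(-ratio)
    return exp(s)
```

The value returned agrees with the decimal expansion displayed above to double precision.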
The principal technical question which is addressed in this paper is the following: if we are given that for some finite sequence of values $u_1,\ldots,u_n \in \{0,1\}$, the matrix $$\label{introproduct}A^{(\alpha)}_{u_n}A^{(\alpha)}_{u_{n-1}}\cdots A^{(\alpha)}_{u_2}A^{(\alpha)}_{u_1}$$ is “large” in some suitable sense – for example, if its spectral radius is close to the value $\varrho(\mathsf{A}_\alpha)^n$ - then what may we deduce about the combinatorial structure of the sequence of values $u_i$, and how does this answer change as the parameter $\alpha$ is varied? A key technical step in the proof of Theorem \[counter\], therefore, is to show that the magnitude of the product is maximised when the sequence $u_1,u_2,\ldots,u_n$ is a *balanced word*. This result depends in a rather essential manner on several otherwise unpublished results from the fourth named author’s 2005 PhD thesis [@Theys], which are substantially strengthened in the present paper. In the following section we shall introduce the combinatorial ideas needed to describe balanced words. We are then able to state our main technical theorem, describe its relationship to previous research in ergodic theory and the theory of the joint spectral radius, and give a brief overview of how Theorem \[counter\] is subsequently deduced. The detailed structure of this paper is described at the end of the following section. Notation and statement of technical results =========================================== Throughout this paper we denote the set of all $d \times d$ real matrices by ${\mathbf{M}_d(\mathbb{R})}$. The symbol ${{|\!|\!|}}\cdot{{|\!|\!|}}$ will be used to denote the norm on ${\mathbf{M}_d(\mathbb{R})}$ which is induced by the Euclidean norm on ${\mathbb{R}}^d$, which satisfies ${{|\!|\!|}}B{{|\!|\!|}}=\rho(B^*B)^{1/2}$ for every $B \in {\mathbf{M}_d(\mathbb{R})}$. Other norms shall be denoted using the symbol $\| \cdot \|$. 
We shall say that a norm $\|\cdot\|$ on ${\mathbf{M}_2(\mathbb{R})}$ is *submultiplicative* if $\|AB\| \leq \|A\|\cdot\|B\|$ for all $A, B \in {\mathbf{M}_2(\mathbb{R})}$. For the remainder of this paper we shall also denote $\varrho(\mathsf{A}_\alpha)$ simply by $\varrho(\alpha)$. For the purposes of this paper we define a *finite word*, or simply *word*, to be a sequence $u=(u_i)$ belonging to $\{0,1\}^n$ for some integer $n \geq 0$. We will typically use $u$, $v$ or $w$ to represent finite words. If $u \in \{0,1\}^n$ then we say that $u$ has length $n$, which we denote by $|u| = n$. If $|u|$ is zero then the word $u$ is called *empty*. The number of terms of $u$ which are equal to $1$ is denoted by $|u|_1$. If $u$ is nonempty, the quantity $|u|_1/|u|$ is called the *$1$-ratio* of $u$ and is written $\varsigma(u)$. The two possible words of length one shall often be denoted simply by $0$ and $1$. We denote the set of all finite words by $\Omega$. We will define an *infinite word* to be a sequence $x = (x_i)$ belonging to $\{0,1\}^{\mathbb{N}}$. We will typically use $x$, $y$ or $z$ to represent infinite words. If the word can be either finite or infinite, we will typically use $\omega$. We denote the set of all infinite words by $\Sigma$, and define a metric $d$ on $\Sigma$ as follows. Given $x,y \in \Sigma$ with $x=(x_i)_{i=1}^\infty$ and $y=(y_i)_{i=1}^\infty$, define $\mathfrak{n}(x,y):=\inf \{i \geq 1 \colon x_i \neq y_i\}$. We now define $d(x,y):=1/2^{\mathfrak{n}(x,y)}$ for all $x,y \in \Sigma$, where we interpret the symbol $1/2^\infty$ as being equal to zero. The topology on $\Sigma$ which is generated by the metric $d$ coincides with the infinite product topology on $\Sigma=\{0,1\}^{\mathbb{N}}$. In particular $\Sigma$ is compact and totally disconnected. For any nonempty finite word $u=(u_i)_{i=1}^n$ the set $\{x \in \Sigma \colon x_i=u_i \text{ for all }1 \leq i \leq n\}$ is both closed and open.
Since every open ball in $\Sigma$ has this form for some $u$, the collection of all such sets generates the topology of $\Sigma$. We define the *shift transformation* $T \colon \Sigma \to \Sigma$ by $T[(x_i)_{i=1}^\infty]:=(x_{i+1})_{i=1}^\infty$. The shift transformation is continuous and surjective. We define the projection $\pi_n : \Sigma \to \Omega$ by $\pi_n[(x_i)_{i=1}^\infty] = (x_i)_{i=1}^n$. If $u = u_1 u_2 \dots u_n$ and $v = v_1 v_2 \dots v_m$ are finite words, then we define the *concatenation* of $u$ with $v$ as $u v = u_1 u_2 \dots u_n v_1 v_2 \dots v_m$, the finite word of length $n + m$. Note that if $u$ is the empty word then $uv=vu=v$ for every word $v$. The set $\Omega$ endowed with the operation of concatenation is a semigroup. Given a word $u$ and positive integer $n$ we let $u^n$ denote the concatenation of $n$ copies of $u$, so that for example $u^4:=uuuu$. If $u$ is a nonempty word of length $n$, we let $u^\infty$ denote the unique infinite word $x \in \Sigma$ such that $x_{kn+i}=u_i$ for all integers $i,k$ with $k \geq 0$ and $1 \leq i \leq n$. Clearly any infinite word $x \in \Sigma$ satisfies $T^nx=x$ for some integer $n \geq 1$ if and only if there exists a word $u$ such that $x=u^\infty$ and $|u|$ divides $n$. If $u$ is a nonempty word, and $\omega$ is either a finite or infinite word, we say that $u$ is a *subword* of $\omega$ if there exists an integer $k \geq 0$ such that $u_i=\omega_{k+i}$ for all integers $i$ in the range $1 \leq i \leq |u|$. We denote this relationship by $u \prec \omega$. Clearly $u \prec \omega$ if and only if there exist a possibly empty word $v \in \Omega$ and a finite or infinite word $\omega'$ such that $\omega = vu\omega'$. An infinite word $x$ is said to be *recurrent* if every finite subword $u \prec x$ occurs as a subword of $x$ an infinite number of times.
A finite or infinite word $\omega$ is called *balanced* if for every pair of finite subwords $u,v$ such that $u,v \prec \omega$ and $|u|=|v|$, we necessarily have $||u|_1-|v|_1| \leq 1$. Clearly $\omega$ is balanced if and only if every subword of $\omega$ is balanced. An infinite balanced word which is not eventually periodic is called *Sturmian*. The following standard result describes the principal properties of balanced infinite words which will be applied in this paper: \[Xphi\] If $x \in \Sigma$ is balanced then the limit $\varsigma(x):= \lim_{n \to \infty} \varsigma(\pi_n(x))$ exists. For each $\gamma \in [0,1]$, let $X_\gamma$ denote the set of all recurrent balanced infinite words $x \in \Sigma$ for which $\varsigma(x)=\gamma$. These sets have the following properties: 1. Each $X_\gamma$ is compact and nonempty. 2. For each $\gamma \in [0,1]$, the restriction of $T$ to $X_\gamma$ is a continuous, minimal, uniquely ergodic transformation of $X_\gamma$. If $\mu$ is the unique ergodic probability measure supported in $X_\gamma$, then $\mu(\{x \colon x_1=1\})=\gamma$. 3. If $\gamma = p/q \in [0,1] \cap \mathbb{Q}$ in lowest terms then the cardinality of $X_\gamma$ is $q$, and for each $x \in X_\gamma$ we have $X_\gamma = \{x,Tx,\ldots,T^{q-1}x\}$. If $\gamma \in [0,1] \setminus \mathbb{Q}$ then $X_\gamma$ is uncountably infinite. We have $X_{2/5}=\{(00101)^\infty, (01010)^\infty, (10100)^\infty, (01001)^\infty, (10010)^\infty\}$. Theorem \[Xphi\] does not appear to exist in the literature in the precise form given above, but it may be established without difficulty by combining various results from the second chapter of [@Lot]. 
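The balance condition and the finite sets $X_{p/q}$ are easy to check mechanically. The sketch below (our own helper names) tests the defining inequality $\left||u|_1-|v|_1\right| \leq 1$ on all pairs of equal-length subwords of a finite prefix, and can be used to confirm the example $X_{2/5}$ above:

```python
def is_balanced(word):
    """Check that any two equal-length subwords u, v of `word`
    (a list of 0s and 1s) satisfy | |u|_1 - |v|_1 | <= 1."""
    for m in range(1, len(word)):
        ones = [sum(word[i:i + m]) for i in range(len(word) - m + 1)]
        if max(ones) - min(ones) > 1:
            return False
    return True

def periodic_prefix(u, n):
    """First n letters of the infinite periodic word u^infinity."""
    return [u[k % len(u)] for k in range(n)]
```

Every prefix of each of the five elements of $X_{2/5}$ passes this test and has $1$-ratio tending to $2/5$, while a word such as $(1100)^\infty$ already fails it on subwords of length two.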
The key step in obtaining Theorem \[Xphi\] is to show that $x \in X_\gamma$ if and only if there exists $\delta \in [0,1)$ such that either $x_n \equiv \lfloor (n+1)\gamma+\delta \rfloor - \lfloor n\gamma + \delta\rfloor$, or $x_n \equiv \lceil (n+1)\gamma+\delta \rceil - \lceil n\gamma + \delta\rceil$, see Lemmas 2.1.14 and 2.1.15 of [@Lot]. Once this identification has been made, the dynamical properties of $X_\gamma$ under the shift transformation largely follow from the properties of the rotation map $z \mapsto z+\gamma$ defined on $\mathbb{R}/\mathbb{Z}$. Given a nonempty finite word $u=(u_i)^n_{i=1}$ and real number $\alpha \in [0,1]$, we put $$\mathcal{A}^{(\alpha)}(u):= A^{(\alpha)}_{u_n}A^{(\alpha)}_{u_{n-1}}\cdots A^{(\alpha)}_{u_2}A^{(\alpha)}_{u_1}$$ and $$\mathcal{A}(u):=A_{u_n}A_{u_{n-1}}\cdots A_{u_2}A_{u_1}=\mathcal{A}^{(1)}(u).$$ For every $x \in \Sigma$, $\alpha \in [0,1]$ and $n \geq 1$ we also define $$\mathcal{A}^{(\alpha)}(x,n):=\mathcal{A}^{(\alpha)}(\pi_n(x)),\qquad \mathcal{A}(x,n):=\mathcal{A}(\pi_n(x))=\mathcal{A}^{(1)}(x,n).$$ Note that the function $\mathcal{A}(x,n)$ satisfies the cocycle relationship $$\mathcal{A}(x,n+m) = \mathcal{A}(T^nx,m)\mathcal{A}(x,n)$$ for every $x \in \Sigma$, $n,m \geq 1$. Our main task in proving Theorem \[counter\] is to characterise those infinite words $x \in \Sigma$ for which $\mathcal{A}(x,n)$ grows rapidly in terms of the sets $X_\gamma$. To do this we must be able to specify what is meant by rapid growth. Let us therefore say that an infinite word $x \in \Sigma$ is a *strongly extremal* word for $\mathsf{A}_\alpha$ if there is a constant $\delta>0$ such that ${{|\!|\!|}}\mathcal{A}^{(\alpha)}(x,n){{|\!|\!|}}\geq \delta \varrho(\alpha)^n$ for all $n\geq 1$, and *weakly extremal* for $\mathsf{A}_\alpha$ if $\lim_{n\to\infty} {{|\!|\!|}}\mathcal{A}^{(\alpha)}(x,n){{|\!|\!|}}^{1/n}=\varrho(\alpha)$. It is obvious that every strongly extremal word is also weakly extremal. 
Note also that since all norms on ${\mathbf{M}_2(\mathbb{R})}$ are equivalent, these definitions are unaffected if another norm $\|\cdot\|$ is substituted for ${{|\!|\!|}}\cdot {{|\!|\!|}}$. We shall say that $\mathfrak{r} \in [0,1]$ is the *unique optimal $1$-ratio* of $\mathsf{A}_\alpha$ if for every $x \in \Sigma$ which is weakly extremal for $\mathsf{A}_\alpha$ we have $\varsigma(\pi_n(x)) \to \mathfrak{r}$. Note that the existence of a unique optimal $1$-ratio is a nontrivial property, and is shown in Theorem \[technical\]. For example, if $\mathsf{A}\subset {\mathbf{M}_2(\mathbb{R})}$ is a pair of isometries then no unique optimal $1$-ratio for $\mathsf{A}$ exists. It is not difficult to see that if $\mathsf{A}_\alpha$ has a unique optimal $1$-ratio which is irrational, then $\mathsf{A}_\alpha$ cannot satisfy the finiteness property, and it is this principle which underlies the present work as well as the work of Bousch-Mairesse [@BM] and Kozyakin [@Koz3]. The principal technical result of this paper is the following theorem which allows us to relate all of the concepts defined so far in this section: \[technical\] There exists a continuous, non-decreasing surjection $\mathfrak{r} \colon [0,1] \to [0,\frac{1}{2}]$ such that for each $\alpha$, $\mathfrak{r}(\alpha)$ is the unique optimal $1$-ratio of $\mathsf{A}_\alpha$. For each $\alpha \in [0,1]$, every element of $X_{\mathfrak{r}(\alpha)}$ is strongly extremal for $\mathsf{A}_\alpha$. Moreover, for every compact set $K \subset (0,1]$ there exists a constant $C_K>1$ such that $$\label{niceformula} C_K^{-1} \leq \frac{\rho\left(\mathcal{A}^{(\alpha)}(x,n)\right)}{\varrho(\alpha)^n} \leq \frac{{\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(x,n){\right|\!\right|\!\right|}}{\varrho(\alpha)^n} \leq C_K$$ whenever $\alpha \in K$, $x \in X_{\mathfrak{r}(\alpha)}$ and $n \geq 1$. 
Conversely, if $x \in \Sigma$ is a recurrent infinite word which is strongly extremal for $\mathsf{A}_\alpha$ then $x \in X_{\mathfrak{r}(\alpha)}$, and if $x \in \Sigma$ is any infinite word which is weakly extremal for $\mathsf{A}_\alpha$ then $(1/n)\sum_{k=0}^{n-1}\mathrm{dist}(T^kx,X_{\mathfrak{r}(\alpha)}) \to 0$. The definition of a strongly extremal infinite word is similar to the one previously proposed by V. S. Kozyakin [@Koz2007], whereas the definition of a weakly extremal infinite word is similar to a definition used previously by the fourth named author [@Theys]. In both instances the infinite word is simply referred to as ‘extremal’. Note that balanced/Sturmian words (and measures) arise as optimal trajectories in various optimisation problems – see, e.g., [@Bousch; @BS; @HO; @Jenk1]. A less general version of parts of Theorem \[technical\] was proved in [@Theys]. The structure of the paper is as follows: Sections \[sec3\] and \[sec4\] deal with important preliminaries, such as general properties of the joint spectral radius and of balanced words. In Section \[sec5\] we show that every strongly extremal infinite word is balanced. In Section \[section6\] we introduce an important auxiliary function $S$, defined as the logarithmic growth rate of the norm of an arbitrary matrix product taken along balanced words with a fixed 1-ratio. In Section \[sec7\] we apply results from preceding sections to prove Theorem \[technical\]. Finally, in Section \[sec8\] we deduce Theorem \[counter\] from Theorem \[technical\]. Section \[sec9\] contains some open questions and conjectures. We believe it is worth describing here briefly how Theorem \[technical\] leads to Theorem \[counter\].
Once we have established the existence of such a function $\mathfrak r$, we may take any irrational $\gamma \in \left(0,\frac{1}{2}\right)$ and conclude that any element $\alpha$ of the preimage $\mathfrak r^{-1}(\gamma)$ is a counterexample to the finiteness conjecture (since any weakly extremal word must be aperiodic). To construct a specific counterexample, we take $\gamma=\frac{3-\sqrt5}2$ and choose the *Fibonacci word* $u_\infty$ as a strongly extremal word for this 1-ratio. Recall that $u_\infty=\lim_n u_{(n)}$, where $u_{(1)}=1, u_{(2)}=0$ and $u_{(n+1)}=u_{(n)}u_{(n-1)}$ for $n\ge2$. Now consider the morphism $h:\Omega\to{\mathbf{M}_2(\mathbb{R})}$ such that $h(0)=A_0, h(1)=A_1$. Denote $B_n:=h(u_{(n)})$; we thus have $B_{n+1}=B_n B_{n-1}$. One can easily show that ${\mathrm{tr}\,}(B_n)=\tau_n$, the sequence described in Theorem \[counter\]. To obtain explicit formulae for $\alpha_*$, we show that the auxiliary function $S$ introduced in Section \[section6\] is differentiable at $\gamma = \frac{3-\sqrt{5}}{2}$ and that $-\log\alpha_*=S'\left(\frac{3-\sqrt5}2\right)$. We then compute this derivative, which will yield (\[eq:alphastar\]). General properties of the joint spectral radius and extremal infinite words {#sec3} =========================================================================== We shall begin with some general results concerning the joint spectral radius. The following characterisation of the joint spectral radius will prove useful on a number of occasions: \[bowf\] Let $\alpha \in [0,1]$ and let $\|\cdot\|$ be any submultiplicative matrix norm. Then: $$\varrho(\alpha) = \inf_{n \geq 1}\max\left\{\left\|\mathcal{A}^{(\alpha)}(x,n)\right\|^{1/n}\colon x \in \Sigma\right\} =\sup_{n \geq 1}\max\left\{\rho\left(\mathcal{A}^{(\alpha)}(x,n)\right)^{1/n}\colon x \in \Sigma\right\}.$$ We review some arguments from [@BW; @DL]. 
Fix $\alpha \in [0,1]$ and a matrix norm $\|\cdot\|$, and define $$\varrho_n^+(\alpha,\|\cdot\|):= \max\left\{\left\|A^{(\alpha)}_{i_1}\cdots A^{(\alpha)}_{i_n}\right\|\colon (i_1,\ldots,i_n)\in\{0,1\}^n\right\} = \max \left\{\left\|A^{(\alpha)}(x,n)\right\| \colon x \in \Sigma\right\}$$ and $$\varrho_n^-(\alpha):=\max\left\{\rho\left(A^{(\alpha)}_{i_1}\cdots A^{(\alpha)}_{i_n}\right)\colon i_1,\ldots,i_n\in\{0,1\}\right\} = \max \left\{\rho\left(A^{(\alpha)}(x,n) \right)\colon x \in \Sigma\right\}.$$ Clearly each $\varrho_n^+(\alpha,\|\cdot\|)$ is nonzero, and $\varrho_{n+m}^+(\alpha,\|\cdot\|) \leq \varrho_{n}^+(\alpha,\|\cdot\|)\varrho_{m}^+(\alpha,\|\cdot\|)$ for every $n, m \geq 1$. Applying Fekete’s subadditivity lemma [@Feck] to the sequence $\log \varrho_n^+(\alpha,\|\cdot\|)$ we obtain $$\lim_{n \to \infty} \varrho^+_n(\alpha,\|\cdot\|)^{1/n} = \inf_{n \geq 1} \varrho^+_n(\alpha,\|\cdot\|)^{1/n}.$$ In particular the limit superior in the definition of $\varrho(\alpha)$ is in fact a limit. A well-known result of Berger and Wang [@BW] implies that $$\lim_{n \to \infty} \varrho^+_n(\alpha,\|\cdot\|)^{1/n} = \limsup_{n \to \infty} \varrho^-_n(\alpha)^{1/n},$$ which in particular implies that the value $\varrho(\alpha)$ is independent of the choice of norm $\|\cdot\|$. Finally, note that if $\rho\left(A_{i_1}^{(\alpha)}\cdots A_{i_n}^{(\alpha)}\right)=\varrho^-_n(\alpha)$ for some $n$, then $\varrho_{nm}^-(\alpha) \geq \rho((A_{i_1}\cdots A_{i_n})^m) = \varrho_n^-(\alpha)^m$ for each $m \geq 1$, and hence the limit superior above is also a supremum. We may immediately deduce the following corollary, which was originally noted by C. Heil and G. Strang [@HS]: \[rhocts\] The function $\varrho \colon [0,1] \to\mathbb{R}$ is continuous. The first of the two identities given in Lemma \[bowf\] shows that $\varrho$ is equal to the pointwise infimum of a family of continuous functions, and hence is upper semi-continuous. 
The second identity shows that $\varrho$ also equals the pointwise supremum of a family of continuous functions, and hence is lower semi-continuous. \[extnorm\] For each $\alpha\in(0,1]$ there exists a matrix norm $\|\cdot\|_\alpha$ such that $\left\|A_i^{(\alpha)}\right\|_\alpha \leq \varrho(\alpha)$ for $i=0,1$. The matrix norms $\|\cdot\|_\alpha$ may be chosen so that the following additional property is satisfied: for every compact set $K \subset (0,1]$ there exists a constant $M_K>1$ such that $M_K^{-1}\|B\|_\alpha \leq {{|\!|\!|}}B{{|\!|\!|}}\leq M_K \|B\|_\alpha$ for all $B \in {\mathbf{M}_2(\mathbb{R})}$ and all $\alpha \in K$. Let $\mathsf{B}=\{B_1,\ldots,B_r\}$ be any finite set of $d \times d$ real matrices and let $\varrho(\mathsf{B})$ be its joint spectral radius. We say that $\mathsf{B}$ is *irreducible* if the only linear subspaces $V \subseteq{\mathbb{R}}^d$ such that $B_iV \subseteq V$ for every $i$ are $\{0\}$ and ${\mathbb{R}}^d$. A classic theorem of N. E. Barabanov [@Ba] shows that if $\mathsf{B}$ is irreducible then there exists a constant $M_{\mathsf{B}}>1$ such that for each $n \geq 1$, $$\max\{\|B_{i_1}\dots B_{i_n}\| \colon i_j \in \{1,\ldots,r\}\} \leq M_{\mathsf{B}}\varrho(\mathsf{B})^n.$$ Note in particular that necessarily $\varrho(\mathsf{B})>0$. It is then straightforward to see that if we define for each $v \in \mathbb{R}^d$ $$\|v\|_{\mathsf{B}}:= \sup_{n \geq 0}\left\{\varrho(\mathsf{B})^{-n}\max {{|\!|\!|}}B_{i_1}\cdots B_{i_n}v{{|\!|\!|}}\colon i_j \in \{1,\ldots,r\}\right\},$$ where ${{|\!|\!|}}\cdot{{|\!|\!|}}$ denotes the Euclidean norm, then $\|\cdot\|_{\mathsf{B}}$ is a norm on $\mathbb{R}^d$ which satisfies $\|B_i v\|_{\mathsf{B}} \leq \varrho(\mathsf{B})\|v\|_{\mathsf{B}}$ for every $i \in \{1,\ldots,r\}$ and $v \in \mathbb{R}^d$. It follows that the operator norm on ${\mathbf{M}_2(\mathbb{R})}$ induced by $\|\cdot\|_{\mathsf{B}}$ has the property $\|B_i\|_{\mathsf{B}} \leq \varrho(\mathsf{B})$ for each $B_i$. 
More recent results due to F. Wirth [@Wirth Thm. 4.1] and V. S. Kozyakin [@Kozmore] show that the constants $M_{\mathsf{B}}$ may be chosen so as to depend continuously on the set of matrices $\mathsf{B}$, subject to the condition that the perturbed matrix families also do not have invariant subspaces. It is easily shown that $\mathsf{A}_\alpha$ is irreducible for every $\alpha \in (0,1]$ and so the lemma follows from these general results. We immediately obtain the following: \[gtr1\] For each $\alpha \in (0,1]$ we have $\varrho(\alpha)>1$. Assume $\varrho(\alpha) \leq 1$ for some $\alpha \in (0,1]$. Then we have $\sup\{\|A^{(\alpha)}\left(0^n\right)\|_\alpha\colon n \geq 1\} \leq 1$ by Lemma \[extnorm\] and consequently $\sup\{{{|\!|\!|}}A^n_0{{|\!|\!|}}\colon n \geq 1\}<\infty$. Since $A_0^n=\left(\begin{smallmatrix} 1 & n \\ 0 & 1\end{smallmatrix}\right)$, we have $\lim_{n \to \infty}{{|\!|\!|}}A_0^n{{|\!|\!|}}=+ \infty$ and therefore we must have $\varrho(\alpha)>1$. Fix some norm $\|\cdot\|_\alpha$ which satisfies the conditions of Lemma \[extnorm\]. The following key result is a variation on part of [@QBWF Thm 2.2]. We include a proof here for the sake of completeness. \[Zset\] For each $\alpha \in (0,1]$ define $$Z_\alpha:=\bigcap_{n=1}^\infty \left\{x \in \Sigma \colon \left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha = \varrho(\alpha)^n\right\}.$$ Then each $Z_\alpha$ is compact and nonempty, and satisfies $TZ_\alpha \subseteq Z_\alpha$. Fix $\alpha \in (0,1]$ and define for each $n\geq 1$ $$Z_{\alpha,n}:= \left\{x \in \Sigma \colon \left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha = \varrho(\alpha)^n\right\}.$$ Clearly each $Z_{\alpha,n}$ is closed. If some $Z_{\alpha,n}$ were to be empty, then by Lemma \[extnorm\] we would have $\sup\left\{\left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha \colon x \in \Sigma\right\} < \varrho(\alpha)^n$, contradicting Lemma \[bowf\]. 
For each $n \geq 1$ we have $Z_{\alpha,n+1}\subseteq Z_{\alpha,n}$, since if $x \in Z_{\alpha,n+1}$ then $$\begin{aligned} \varrho(\alpha)^{n+1}= \left\|\mathcal{A}^{(\alpha)}(x,n+1)\right\|_\alpha &\leq \left\|\mathcal{A}^{(\alpha)}(T^nx,1)\right\|_\alpha \left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha \\&\leq \varrho(\alpha) \left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha \leq \varrho(\alpha)^{n+1}\end{aligned}$$ using Lemma \[extnorm\] and it follows that $x \in Z_{\alpha,n}$ also. We deduce that the set $Z_\alpha = \bigcap_{n=1}^\infty Z_{\alpha,n}$ is nonempty. Since each $Z_{\alpha,n}$ is closed, $Z_\alpha$ is closed and hence is compact. Finally, if $x \in Z_{\alpha,n+1}$ then we also have $$\begin{aligned} \varrho(\alpha)^{n+1}= \left\|\mathcal{A}^{(\alpha)}(x,n+1)\right\|_\alpha &\leq \left\|\mathcal{A}^{(\alpha)}(Tx,n)\right\|_\alpha \left\|\mathcal{A}^{(\alpha)}(x,1)\right\|_\alpha \\&\leq \varrho(\alpha)\left\|\mathcal{A}^{(\alpha)}(Tx,n)\right\|_\alpha \leq \varrho(\alpha)^{n+1}\end{aligned}$$ so that $Tx \in Z_{\alpha,n}$, and we deduce from this that $TZ_\alpha \subseteq Z_\alpha$. The remaining lemmas in this section will be applied in the proof of Theorem \[technical\] to characterise the extremal orbits of $\mathsf{A}_\alpha$. \[strongex\] Let $\alpha \in (0,1]$ and $x\in\Sigma$. If $x$ is recurrent and strongly extremal for $\mathsf A_\alpha$, then $x \in Z_\alpha$. Let $\alpha \in (0,1]$ and $x\in\Sigma \setminus Z_\alpha$, and suppose that $x$ is recurrent. We shall show that $\liminf_{n \to \infty} \varrho(\alpha)^{-n}\left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha = 0$ and therefore $x$ is not strongly extremal, which proves the lemma. Since $x \notin Z_\alpha$, there exist $\varepsilon>0$ and $n_0 \geq 1$ such that $\left\|\mathcal{A}^{(\alpha)}(x,n_0)\right\|_\alpha <(1-\varepsilon)\varrho(\alpha)^{n_0}$. 
Since $x$ is recurrent, it follows that for each $k \geq 1$ we may find integers $r_k>r_{k-1}>\ldots >r_2>r_1=0$ such that $\left\|\mathcal{A}^{(\alpha)}(T^{r_i}x,n_0)\right\|_\alpha <(1-\varepsilon)\varrho(\alpha)^{n_0}$ for each $i$. By increasing $k$ and passing to a subsequence if necessary, it is clear that we may assume additionally that $r_{i+1}>r_i+n_0$ for $1 \leq i < k$. Define also $r_{k+1}:=r_k+n_0+1$. We have $$\begin{aligned} \left\|\mathcal{A}^{(\alpha)}(x,r_{k+1})\right\|_\alpha & \leq \prod_{i=1}^{k} \left\|\mathcal{A}^{(\alpha)}(T^{r_i}x,n_0)\right\|_\alpha \left\|\mathcal{A}^{(\alpha)}(T^{r_i+n_0}x,r_{i+1}-r_i-n_0)\right\|_\alpha \\ &\leq (1-\varepsilon)^k \varrho(\alpha)^{r_{k+1}},\end{aligned}$$ and since $k$ may be taken arbitrarily large we conclude that $$\liminf_{n \to \infty} \varrho(\alpha)^{-n}\left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha = 0,$$ as desired. The following lemma is a straightforward corollary of a more general result due to S. J. Schreiber [@Sch Lemma 1]: \[schr\] Let $(f_n)$ be a sequence of continuous functions from $\Sigma$ to $\mathbb{R}$ such that $f_{n+m}(x) \leq f_n(T^mx)+f_m(x)$ for all $x \in \Sigma$ and $n,m \geq 1$. Then for each $x \in \Sigma$ and $m \geq 1$, $$\liminf_{n \to \infty}\frac{1}{nm}\sum_{k=0}^{n-1}f_m(T^kx) \geq \liminf_{n \to \infty} \frac{1}{n}f_n(x).$$ \[weakex\] Let $\alpha \in (0,1]$ and suppose that the restriction of $T$ to $Z_\alpha$ is uniquely ergodic, with $\mu$ being its unique $T$-invariant Borel probability measure. 
Then $\mathfrak{r}:=\mu\left(\{x \in \Sigma \colon x_1 =1\}\right)$ is the unique optimal $1$-ratio of $\mathsf{A}_\alpha$, and if $x \in \Sigma$ is weakly extremal, then $$\lim_{n \to \infty}\frac1n\sum_{k=0}^{n-1} \mathrm{dist}\left(T^kx,{\mathrm{supp}\,}\mu\right) =0.$$ Let $\mathcal{M}$ denote the set of all Borel probability measures on $\Sigma$ equipped with the weak-\* topology, which is defined to be the smallest topology such that $\mu \mapsto \int f\,d\mu$ is continuous for every continuous function $f \colon \Sigma \to \mathbb{R}$. This topology makes $\mathcal{M}$ a compact metrisable space [@Parth Thm. II.6.4]. Let us fix $\alpha \in (0,1]$ and suppose that $x \in \Sigma$ is weakly extremal. For each $n \geq 1$ define $\mu_n:=(1/n)\sum_{k=0}^{n-1}\delta_{T^kx} \in \mathcal{M}$, where $\delta_z \in \mathcal{M}$ denotes the Dirac probability measure concentrated at $z \in \Sigma$. We claim that $\lim_{n \to \infty} \mu_n = \mu$ in the weak-\* topology. Applying Lemma \[schr\] with $f_n(x):=\log \left\|\mathcal{A}^{(\alpha)}\left(x,n\right)\right\|_\alpha$ and noting that $f_n(x) \leq n\log\varrho(\alpha)$ for all $x$ and $n$, we obtain $$\label{calc0}\lim_{n \to \infty} \int \frac{1}{N} \log \left\|\mathcal{A}^{(\alpha)}\left(z,N\right)\right\|_\alpha d\mu_n(z)=\lim_{n \to \infty}\frac{1}{nN}\sum_{i=0}^{n - 1}\log\left\|\mathcal{A}^{(\alpha)}\left(T^ix,N\right)\right\|_\alpha =\log \varrho(\alpha)$$ for every $N \geq 1$. As in the proof of Lemma \[Zset\] we let $Z_{\alpha,N} = \{z \in \Sigma \colon \|\mathcal{A}^{(\alpha)}(z,N)\|_\alpha=\varrho(\alpha)^N\}$ for each $N \geq 1$, and we recall that $Z_{\alpha,N+1}\subseteq Z_{\alpha,N}$ for every $N$. Let $\nu \in \mathcal{M}$ be any limit point of the sequence $(\mu_n)$. 
If $f \colon \Sigma \to \mathbb{R}$ is any continuous function then it follows easily from the definition of $(\mu_n)$ that $|\int f\,d\nu - \int f\circ T \,d\nu| \leq \limsup_{n \to \infty} |\int f\circ T d\mu_n - \int f\,d\mu_n| =0$ and it follows that $\nu$ is $T$-invariant. For each $N \geq 1$ we have $$\int \frac{1}{N} \log \left\|\mathcal{A}^{(\alpha)}(z,N)\right\|_\alpha d\nu(z)=\log \varrho(\alpha),$$ and since $\left\|\mathcal{A}^{(\alpha)}(z,N) \right\|_\alpha\leq \varrho(\alpha)^N$ for all $z \in \Sigma$ it follows from this that $\nu\left(Z_{\alpha,N}\right)=1$. Since this applies for every $N$, and $Z_{\alpha,N+1} \subseteq Z_{\alpha,N}$ for every $N$, we deduce that $\nu(Z_\alpha)=1$. By hypothesis $\mu$ is the unique $T$-invariant element of $\mathcal{M}$ giving full measure to $Z_\alpha$, and it follows that $\nu = \mu$. We have shown that $\mu$ is the only weak-\* accumulation point of the sequence $(\mu_n)$, and since $\mathcal{M}$ is compact and metrisable we deduce that $\lim_{n \to \infty}\mu_n=\mu$, which completes the proof of the claim. The proof of the lemma now follows easily. Let $f \colon \Sigma \to \mathbb{R}$ be the characteristic function of the set $\{x \in \Sigma \colon x_1=1\}$, and note that $f$ is continuous since this set is both open and closed. Define a further continuous function by $g(x):=\mathrm{dist}(x,{\mathrm{supp}\,}\mu)$. Since $\mu_n \to \mu$ we may easily derive $$\lim_{n \to \infty} \varsigma(\pi_n(x)) = \lim_{n \to \infty} \frac{1}{n}\sum_{i=0}^{n-1}f\left(T^ix\right) = \lim_{n \to \infty} \int f\,d\mu_n = \int f\,d\mu = \mu(\{x \in \Sigma \colon x_1=1\})=\mathfrak{r}$$ and $$\lim_{n \to \infty} \frac{1}{n}\sum_{i=0}^{n-1}\mathrm{dist}(T^ix,{\mathrm{supp}\,}\mu) = \lim_{n \to \infty} \int g\,d\mu_n = \int g\,d\mu =0$$ as required. The proof is complete. 
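The characterisation of Lemma \[bowf\] is directly computable: for any fixed $n$, the maximum of $\rho(\mathcal{A}^{(\alpha)}(x,n))^{1/n}$ over words of length $n$ bounds $\varrho(\alpha)$ from below, while the corresponding maximum of spectral norms bounds it from above. The following minimal Python sketch (our own illustration, not part of the argument, taking $A_0=\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$ and $A_1=A_0^{\mathsf{T}}$ as in the proofs of Corollary \[gtr1\] and Lemma \[rho-norm\] below, with $A^{(\alpha)}_1=\alpha A_1$) evaluates both bounds by enumerating all $2^n$ words:

```python
import itertools

import numpy as np

A0 = np.array([[1.0, 1.0], [0.0, 1.0]])  # A_0, so that A_0^n = [[1, n], [0, 1]]
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])  # A_1, the transpose of A_0


def jsr_bounds(alpha, n):
    """Two-sided bounds on rho(alpha) from Lemma [bowf] at a fixed length n:
    max_x rho(A^(alpha)(x,n))^(1/n)  <=  rho(alpha)  <=  max_x ||A^(alpha)(x,n)||^(1/n)."""
    mats = [A0, alpha * A1]  # each occurrence of the symbol 1 is scaled by alpha
    lower, upper = 0.0, 0.0
    for word in itertools.product([0, 1], repeat=n):
        P = np.eye(2)
        for i in word:
            P = P @ mats[i]
        lower = max(lower, max(abs(np.linalg.eigvals(P))) ** (1.0 / n))
        upper = max(upper, np.linalg.norm(P, 2) ** (1.0 / n))
    return lower, upper


lo, hi = jsr_bounds(1.0, 8)
print(lo, hi)  # a bracket for rho(1); the gap shrinks as n grows
```

For $\alpha=1$ and $n=8$ the word $(01)^4$ already yields the lower bound $\rho(A_0A_1)^{1/2}=\frac{1+\sqrt5}{2}$, the golden ratio.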
General properties of balanced words {#sec4} ==================================== In this short and mostly expository section we present some combinatorial properties of balanced words which will be applied in subsequent sections. We first require some additional definitions. Given two nonempty finite words $u,v$ of equal length, we write $u < v$ if $u$ strictly precedes $v$ in the lexicographical order: that is, $u < v$ if and only if there is $k \geq 1$ such that $u_k=0$, $v_k=1$, and $u_i=v_i$ when $1 \leq i <k$. We define the *reverse* of a finite word $u$, which we denote by $\tilde u$, to be the word obtained by listing the terms of $u$ in reverse order. That is, if $u = u_1 u_2 \cdots u_n$ then $\tilde u = u_n u_{n-1} \cdots u_1$. We say that a finite word $p$ is a *palindrome* if $\tilde p = p$. Since the reverse of the empty word is also the empty word, the empty word is a palindrome. We say that two finite words $u$ and $v$ of equal length are *cyclic permutations* of each other, and write $u \simeq v$, if there exist finite words $a$ and $b$ such that $u=ab$ and $v=ba$. For each $n\geq 0$ this defines an equivalence relation on the set of words of length $n$. We begin by collecting together some standard results from [@Lot]: \[nolongwords\] Let $\gamma \in (0,1)$ and $x \in X_\gamma$, and choose any $N > \max\{\lceil \gamma^{-1}\rceil, \lceil (1-\gamma)^{-1}\rceil\}$. Then neither $0^N$ nor $1^N$ is a subword of $x$. Let $u \prec x$ with $|u|=N$. By [@Lot Prop. 2.1.10] we have $\gamma|u| + 1 \geq |u|_1 \geq \gamma |u| -1$. In particular we have $|u|_1 > \gamma \lceil \gamma^{-1}\rceil - 1 \geq 0$ and $|u|-|u|_1 > (1-\gamma)\lceil (1-\gamma)^{-1}\rceil -1\geq 0$, so $0<|u|_1<|u|$ and $u$ cannot be equal to $0^N$ or $1^N$. Let $\mathcal{W} \subset \Omega \times \Omega$ be the smallest set with the following two properties: $(0,1) \in \mathcal{W}$; if $(u,v)\in \mathcal{W}$, then $(uv,v) \in \mathcal{W}$ and $(u,vu)\in\mathcal{W}$. 
We say that $u \in \Omega$ is a *standard word* if either $(v,u) \in \mathcal{W}$ or $(u,v)\in \mathcal{W}$ for some $v \in \Omega$. \[standard\] The set of standard words has the following properties: (i) If $u$ is standard, with $|u|=q$ and $|u|_1=p$, then $u^\infty \in X_{p/q}$. (ii) For every $\gamma \in [0,1]$ there exists $x \in X_\gamma$ such that for infinitely many $q \in \mathbb{N}$ the word $\pi_q(x)$ is standard. (i). If $q=1$ then the result is trivial. For $q>1$, [@Lot Prop. 2.2.15] shows that every standard word is balanced. If $u$ is standard, then it is clear from the definition that $u^n$ is a subword of a standard word for every $n \geq 1$. In particular every $u^n$ is balanced and therefore $u^\infty$ is balanced. (ii). Let $x$ be the infinite word defined by $x_n:= \lfloor \gamma(n+2)\rfloor - \lfloor \gamma(n+1)\rfloor \in \{0,1\}$ for all $n \geq 1$. This word is called the *characteristic word* for $\gamma$. It is shown in [@Lot Prop 2.2.15] that $x$ has the required properties. The following result is given in the proof of [@Lot Prop. 2.1.3]. Note that $p$ may be the empty word; for example, this is true in the case $w = 0011$. \[lothaire1\] Let $w$ be a finite word which is not balanced, let $u$ and $v$ be subwords of $w$ of equal length such that $|u|_1 \geq 2+|v|_1$, and suppose that $u,v$ have the minimum possible length for which this property may be satisfied. Then there is a palindrome $p$ such that $u=1p 1$ and $v = 0 p 0$. The following two results arise in the fourth named author’s PhD thesis [@Theys]: \[theys1\] Let $w$ be a finite word and $p$ a palindrome, and suppose that $0p0$ and $1p1$ are subwords of $w$. Then there is a finite word $b$, which may be empty, such that either $0p0b1p1$ or $1p1b0p0$ is a subword of $w$. Recall that $u\prec v$ means that $u$ is a subword of $v$. 
Since $0p0$ and $1p1$ are both subwords of $w$, the only alternative is that they occur in an overlapping manner: that is, there are finite words $d,e,f$ such that $0d1e0f1 \prec w$, where $d1e=e0f=p$, or similarly with $0$ and $1$ interchanged. Since $\tilde p = p$, the relation $d1e=e0f=p$ implies $\tilde e 1 \tilde d = e0f$, and since $|\tilde e | = |e|$ we obtain $1=0$, a contradiction. We conclude that the words $0p0$ and $1p1$ cannot overlap, and the result follows. \[theys2\] Let $u$ be a finite word which is not balanced. Then there exist words $a,w,b$ such that $a w b \prec u$ and one of the following two possibilities holds: either $\tilde b > a$ and $\tilde w > w$, or $\tilde a > b$ and $w > \tilde w$. Combining Lemmas \[lothaire1\] and \[theys1\] we find that there exist words $p,v$ such that $\tilde p = p$ and either $0p0v1p1 \prec u$, or $1p1v0p0 \prec u$. In the former case we may take $a:=0p$, $b:=p1$ and $w:=0v1$, and in the latter case we may take $a:=1p$, $b:=p0$ and $w:=1v0$. Finally, we require the following lemma which characterises those finite words for which all cyclic permutations are balanced. This result appears to be something of a “folklore theorem” in the theory of balanced words; to the best of our knowledge, the proof which we present here is original. A version of this result appears as [@Ret Thm 6.9]. Note that the word $u:=1001$ is an example of a balanced word with the property that $u^\infty$ is not balanced. \[cyclic-balanced\] Let $u$ be a nonempty finite word. Then the following are equivalent: (i) Every cyclic permutation of $u$ is balanced. (ii) The finite word $u^2$ is balanced. (iii) The infinite word $u^\infty$ is balanced. It is clear that (iii)$\implies$(ii)$\implies$(i). To prove the implication (i)$\implies$(ii) we shall show that if $u$ is a nonempty finite word such that $u^2$ is not balanced, then there is a cyclic permutation of $u$ which is not balanced. 
Let us then suppose that $u$ is a finite nonempty word such that $u^2$ is not balanced. Let $a,b$ be subwords of $u^2$ of equal length such that $||a|_1-|b|_1|\geq 2$, and suppose that no pair of shorter subwords may be found which also has this property. Clearly we have $||a|_1-|b|_1|=2$, and without loss of generality we shall assume that $|a|_1=2+|b|_1$. By Lemma \[lothaire1\] there exists a palindrome $p$ such that $a=1p1$ and $b=0p0$, and it follows from Lemma \[theys1\] that $|a|,|b| \leq |u|$. We may therefore choose words $c$ and $d$ such that $|c|=|d| = |u|-|a|=|u|-|b|$ and $ac \simeq bd \simeq u$. Since $|ac|_1=|bd|_1=|u|_1$ we have $|d|_1=2+|c|_1$, and since $a$ and $b$ are the shortest words with this property we must have $|b|=|a|\leq |c|$. Now, since $ac \simeq u$, it is not difficult to see that every word which is a subword of some cyclic permutation of $u$ and has length at most $|c|$ must occur as a subword of the word $cac$. In particular $b \prec cac$, and since $|b|=|a|$ we have either $b \prec ca$ or $b \prec ac$. In either case we have shown that there exists a cyclic permutation of $u$ which has both $a$ and $b$ as subwords, and no word with that property may be balanced. We conclude that (i) cannot hold when (ii) does not hold, and so (i)$\implies$(ii) as required. It is now straightforward to show that (ii)$\implies$(iii). Let $u$ be a finite nonempty word such that $u^2$ is balanced; then every cyclic permutation of $u$ is balanced, since the cyclic permutations of $u$ are precisely the subwords of $u^2$ with length $|u|$. Now, the cyclic permutations of $u^2$ are precisely the words of the form $v^2$ where $v \simeq u$; but since (i)$\implies$(ii), all of these cyclic permutations must be balanced also. Applying the implication (i)$\implies$(ii) again we deduce that $u^4$ is balanced. Repeating this procedure inductively shows that $u^{2^k}$ is balanced for every $k \geq 1$, and this yields (iii). 
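The equivalence just established is easily confirmed by machine for short words. A minimal Python sketch (our own illustration; `is_balanced` implements the definition of balancedness directly, namely that any two subwords of equal length contain numbers of $1$s differing by at most one) checks the example $u=1001$ noted above and verifies (i)$\iff$(ii) exhaustively for all nonempty words of length at most $10$:

```python
from itertools import product


def is_balanced(w):
    """A 0-1 word is balanced if any two subwords of equal length
    contain numbers of 1s differing by at most one."""
    for m in range(1, len(w)):
        ones = [w[i:i + m].count("1") for i in range(len(w) - m + 1)]
        if max(ones) - min(ones) > 1:
            return False
    return True


def cyclic_permutations(w):
    return {w[k:] + w[:k] for k in range(len(w))}


# u = 1001 is balanced, yet u^2 (and hence u^infty) is not:
# both 00 and 11 occur as subwords of 10011001.
assert is_balanced("1001") and not is_balanced("1001" * 2)

# Exhaustive check of the equivalence (i) <=> (ii) of Lemma [cyclic-balanced]:
for n in range(1, 11):
    for bits in product("01", repeat=n):
        u = "".join(bits)
        assert all(is_balanced(v) for v in cyclic_permutations(u)) == is_balanced(u * 2)
print("verified for all nonempty words of length at most 10")
```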
Relationships between balanced words and extremal orbits {#sec5} ======================================================== The principal goal of this section is to show that for each $\alpha \in (0,1]$, every recurrent $x \in \Sigma$ which is strongly extremal for $\mathsf{A}_\alpha$ is balanced. We also prove some related ancillary results which will be applied in the following section. The following valuable lemma shows that under quite mild conditions the trace, spectral radius, Euclidean norm and smallest diagonal element of a matrix of the form $\mathcal{A}(u)$ approximate each other quite closely. For every $B \in {\mathbf{M}_2(\mathbb{R})}$ we define $\mathfrak{d}(B)$ to be the minimum modulus of the diagonal entries of $B$. \[rho-norm\] Let $\alpha \in [0,1]$ and $N \geq 2$, and let $u$ be a nonempty finite word such that $0^N, 1^N \nprec u$. Then, $$\frac{1}{2N^2}{\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(u){\right|\!\right|\!\right|}\leq \mathfrak{d}\left(\mathcal{A}^{(\alpha)}(u)\right) \leq \frac{1}{2}{\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u) \leq \rho\left(\mathcal{A}^{(\alpha)}(u)\right) \leq {\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(u){\right|\!\right|\!\right|}.$$ Let $\mathfrak{m}(B)$ denote the maximum of the entries of a non-negative matrix $B \in {\mathbf{M}_2(\mathbb{R})}$. The inequalities $${{|\!|\!|}}B{{|\!|\!|}}= \sqrt{\rho(B^*B)} \leq \sqrt{{\mathrm{tr}\,}(B^*B)} \leq 2\mathfrak{m}(B)$$ and $$\mathfrak{d}(B) \leq \frac{1}{2}{\mathrm{tr}\,}B \leq \rho(B) \leq {{|\!|\!|}}B{{|\!|\!|}}$$ are elementary. To prove the lemma, it therefore suffices to show that $\mathfrak{m}\left(\mathcal{A}^{(\alpha)}(u)\right) \leq N^2 \mathfrak{d}(\mathcal{A}^{(\alpha)}(u))$ whenever $0^N, 1^N \nprec u$. Since $\mathcal{A}^{(\alpha)}(u)\equiv \alpha^{|u|_1}\mathcal{A}(u)$ it is clearly sufficient to consider only the case $\alpha=1$. Let us prove this inequality. 
We shall suppose that the final symbol occurring in $u$ is $0$, since the opposite case is easily dealt with by symmetry. Let $n \geq 1$ and $a_1,\ldots,a_n \geq 1$ be integers such that either $u=0^{a_n}1^{a_{n-1}}0^{a_{n-2}}\cdots1^{a_2}0^{a_1}$ with $n$ odd, or $u=1^{a_n}0^{a_{n-1}}1^{a_{n-2}}\cdots1^{a_2}0^{a_1}$ with $n$ even. By hypothesis we have $a_k \leq N-1$ for every $k$. For $1 \leq k \leq n$ let us define $$\frac{p_k}{q_k}:= \cfrac{1}{a_1+ \cfrac{1}{a_2+\dotsb+ \cfrac{1}{a_{k-1}+ \cfrac{1}{a_k} }}}$$ in least terms, and define also $p_0,q_{-1}:=0$ and $p_{-1},q_0:=1$. The integers $p_k, q_k$ then satisfy the recurrence relations $p_{k}=a_{k}p_{k-1}+p_{k-2}$ and $q_{k}=a_{k}q_{k-1}+q_{k-2}$ for all $k$ in the range $1\leq k\leq n$. A well-known formula for $p_k/q_k$ implies $$\mathcal{A}(u)=A_0^{a_n}A_1^{a_{n-1}}\cdots A_0^{a_1} = \left(\begin{array}{cc}1&a_n\\0&1\end{array}\right) \left(\begin{array}{cc}1&0\\a_{n-1}&1\end{array}\right)\cdots \left(\begin{array}{cc}1&a_1\\0&1\end{array}\right)= \left(\begin{array}{cc}p_n&q_n\\p_{n-1}&q_{n-1}\end{array} \right)$$ if $n$ is odd, and $$\mathcal{A}(u)=A_1^{a_n}A_0^{a_{n-1}}\cdots A_0^{a_1} = \left(\begin{array}{cc}1&0\\a_n&1\end{array}\right) \left(\begin{array}{cc}1&a_{n-1}\\0&1\end{array}\right)\cdots \left(\begin{array}{cc}1&a_1\\0&1\end{array}\right)= \left(\begin{array}{cc}p_{n-1}&q_{n-1}\\p_{n}&q_{n} \end{array}\right)$$ if $n$ is even (see, e.g., [@Frame]). If $n$ is odd then clearly $\mathfrak{d}(\mathcal{A}(u))=\min\{p_n,q_{n-1}\}$, and since $q_n=a_{n}q_{n-1}+q_{n-2} \leq (a_{n}+1)q_{n-1} \leq Nq_{n-1}$ and $p_{n}/q_{n} \geq 1/(a_1+1) \geq 1/N$ we obtain $\mathfrak{m}(\mathcal{A}(u))=q_n \leq \min\{Np_n,Nq_{n-1}\}< N^2 \mathfrak{d}(\mathcal{A}(u))$ as required. If $n$ is even then similarly $\mathfrak{m}(\mathcal{A}(u))=q_n \leq N q_{n-1} \leq N^2 p_{n-1}=N^2\mathfrak{d}(\mathcal{A}(u))$. The proof is complete. Let $a,w,b$ be nonempty finite words with $|a|=|b|$. 
We shall say that $(a,w,b)$ is a *suboptimal triple* if either $\tilde a > b$ and $w > \tilde w$, or $\tilde b > a$ and $\tilde w > w$. We require the following lemma due to V. Blondel, J. Theys and A. Vladimirov [@BTV Lemma 4.2]: \[beeteevee\] Let $w$ be a nonempty finite word. Then $\mathcal{A}(\tilde w) - \mathcal{A}(w) = k(w)J$, where $k(w) \in \mathbb{Z}$ and $$J := A_0A_1 - A_1A_0 = \left(\begin{array}{cc}1&0\\0&-1\end{array}\right).$$ Moreover, $k(w)$ is positive if and only if $w > \tilde w$, and negative if and only if $w < \tilde w$. The following is a slightly strengthened version of [@BTV Lemma 4.3]: \[sot2\] Let $(a,w,b)$ be a suboptimal triple, let $B_1,B_2$ be non-negative matrices, and let $\alpha \in [0,1]$. Then $${\mathrm{tr}\,}\left(B_1 \mathcal{A}^{(\alpha)}(a \tilde w b)B_2\right)\geq {\mathrm{tr}\,}\left(B_1\mathcal{A}^{(\alpha)}(a w b)B_2\right) + \alpha^{|a w b|_1}\mathfrak{d}(B_1)\mathfrak{d}(B_2).$$ Since ${\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u)=\alpha^{|u|_1}{\mathrm{tr}\,}\mathcal{A}(u)$ for every finite word $u$ it is clearly sufficient to treat only the case $\alpha=1$. We shall deal first with the case where $\tilde a >b$ and $w > \tilde w$, the alternative case being similar. Since $\tilde a > b$ we may write $a=u 1 c$, $b=\tilde c 0 \tilde v$ for some finite words $c$, $u$ and $v$ (which may be empty). 
Note that $J$ satisfies the relations $$A_1JA_1= A_0JA_0 = J,\qquad A_0JA_1 = \left(\begin{array}{cc}0 & -1\\ -1 & -1\end{array}\right),\qquad A_1JA_0 = \left(\begin{array}{cc}1 & 1\\ 1 & 0\end{array}\right),$$ so that in particular $\mathcal{A}(c)J\mathcal{A}(\tilde c)=J$ by induction on $|c|$ using the first relation, and hence by Lemma \[beeteevee\], $${\mathrm{tr}\,}\left(\mathcal{A}(a)(\mathcal{A}(\tilde w) - \mathcal{A}(w))\mathcal{A}(b)\right)=k(w) {\mathrm{tr}\,}\left(\mathcal{A}(u)\left(\begin{array}{cc}1 & 1\\ 1 & 0\end{array}\right)\mathcal{A}(\tilde v)\right) \geq 1.$$ Now, a direct calculation shows that for any non-negative matrix $C \in \mathbf{M}_2(\mathbb{R})$ we have ${\mathrm{tr}\,}(B_1CB_2) \geq \mathfrak{d}(B_1)\mathfrak{d}(B_2){\mathrm{tr}\,}(C)$. Since the matrix $\mathcal{A}(a)(\mathcal{A}(\tilde w) - \mathcal{A}(w))\mathcal{A}(b)$ is non-negative, we deduce that $$\begin{aligned} {\mathrm{tr}\,}(B_1 \mathcal{A}(a \tilde w b)B_2)- {\mathrm{tr}\,}(B_1\mathcal{A}(a w b)B_2) &={\mathrm{tr}\,}\left(B_1 \mathcal{A}(a)(\mathcal{A}(\tilde w) - \mathcal{A}(w))\mathcal{A}(b) B_2\right)\\ &\geq \mathfrak{d}(B_1)\mathfrak{d}(B_2) {\mathrm{tr}\,}\left( \mathcal{A}(a)(\mathcal{A}(\tilde w) - \mathcal{A}(w))\mathcal{A}(b) \right)\\ & \geq \mathfrak{d}(B_1)\mathfrak{d}(B_2)\end{aligned}$$ as required. In the case where $\tilde b > a$ and $\tilde w > w$, the integer $k(w)$ and the matrix $A_0JA_1$ each contribute a negative sign to the product $\mathcal{A}(a)(\mathcal{A}(\tilde w) - \mathcal{A}(w))\mathcal{A}(b)$ and the same conclusion may be reached. We may now prove the following two results which will allow us to characterise extremal orbits in terms of balanced words: \[finite-balanced-best\] Let $0 \leq \frac{p}{q} \leq 1$, with the integers $p$ and $q$ not necessarily coprime. Suppose that $|u|=q$, $|u|_1=p$ and $$\label{jt}\rho(\mathcal{A}(u)) = \max\left\{\rho(\mathcal{A}(v)) \colon |v|=q\text{ and }|v|_1=p\right\}.$$ Then the infinite word $u^\infty$ is balanced. We shall begin by showing that if $u$ has the properties described then it is balanced. 
Let us assume for a contradiction that $u$ has these properties but is *not* balanced. By Lemma \[theys2\], there exists a suboptimal triple $(a,w,b)$ such that $a w b \prec u$. Let us write $u = s_1 a w b s_2$ and define $\hat u:=s_1 a \tilde w b s_2$. By Lemma \[sot2\] we have ${\mathrm{tr}\,}(\mathcal{A}(\hat u)) > {\mathrm{tr}\,}(\mathcal{A}(u))$. Since $\mathcal{A}(\hat u)$ and $\mathcal{A}(u)$ are both non-negative matrices with unit determinant, it follows that $$\begin{aligned} \rho(\mathcal{A}(\hat u)) &= \frac{1}{2}\left({\mathrm{tr}\,}(\mathcal{A}(\hat u)) + \sqrt{{\mathrm{tr}\,}(\mathcal{A}(\hat u))^2 -4}\right)> \frac{1}{2}\left({\mathrm{tr}\,}(\mathcal{A}(u)) + \sqrt{{\mathrm{tr}\,}(\mathcal{A}(u))^2 -4}\right) \\ &= \rho(\mathcal{A}(u)).\end{aligned}$$ Since clearly $|\hat u| = |u|$ and $|\hat u |_1 = |u|_1$ this is a contradiction, so $u$ must be balanced as required. Now, suppose that $u$ satisfies (\[jt\]) with $|u|_1=p$ and $|u|=q$, and that $v$ is a cyclic permutation of $u$. It is a well-known property of the spectral radius that $\rho(B_1B_2)=\rho(B_2B_1)$ for any $B_1,B_2 \in {\mathbf{M}_2(\mathbb{R})}$, and it follows from this that $\rho(\mathcal{A}(v))=\rho(\mathcal{A}(u))$. By applying the preceding argument to $v$ it follows that $v$ is also balanced. We conclude that all of the cyclic permutations of $u$ are balanced, and by Lemma \[cyclic-balanced\] this implies that $u^\infty$ is balanced as required. \[balanced1\] Let $\alpha \in (0,1]$ and suppose that $x \in Z_\alpha$. Then $x$ is balanced. To prove the proposition, let us suppose that there exists a recurrent infinite word $x \in Z_\alpha$ which is not balanced. We shall then be able to deduce a contradiction, and the result follows. The general principle of the proof is that if $x$ is recurrent and not balanced, then we can construct a word based on $x$ along which the trace of the product $\mathcal{A}^{(\alpha)}(x,n)$ grows “too rapidly”. 
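The trace-gain mechanism underlying this construction (Lemmas \[beeteevee\] and \[sot2\]) can be checked on a concrete small case. The following minimal Python sketch (our own illustration; the suboptimal triple $(a,w,b)=(1,10,0)$ is an illustrative choice satisfying $\tilde a > b$ and $w > \tilde w$) verifies the identity of Lemma \[beeteevee\] and the conclusion of Lemma \[sot2\] with $B_1=B_2=I$ and $\alpha=1$:

```python
import numpy as np

A = {"0": np.array([[1, 1], [0, 1]]),   # A_0
     "1": np.array([[1, 0], [1, 1]])}   # A_1


def mat(word):
    """The product matrix A(word), multiplying factors from left to right."""
    P = np.eye(2, dtype=int)
    for s in word:
        P = P @ A[s]
    return P


# Lemma [beeteevee] for w = 10: A(reverse(w)) - A(w) = k(w) J with
# J = A_0 A_1 - A_1 A_0 = diag(1, -1) and k(w) = 1 since w > reverse(w).
J = mat("01") - mat("10")
assert (J == np.diag([1, -1])).all()

# The suboptimal triple (a, w, b) = (1, 10, 0): reversing the middle block
# strictly increases the trace, as Lemma [sot2] predicts.
a, w, b = "1", "10", "0"
t_orig = int(np.trace(mat(a + w + b)))         # word a w b        = 1100
t_flip = int(np.trace(mat(a + w[::-1] + b)))   # word a ~w b       = 1010
print(t_orig, t_flip)  # Lemma [sot2] gives t_flip >= t_orig + 1 here
```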
Fix a real number $C_\alpha>1$ such that $C_\alpha^{-1}\|B\|_\alpha \leq {{|\!|\!|}}B{{|\!|\!|}}\leq C_\alpha \|B\|_\alpha$ for all $B\in {\mathbf{M}_2(\mathbb{R})}$, which is possible by Lemma \[extnorm\]. By Lemma \[gtr1\] we have $\varrho(\alpha)>1$, and by Gelfand’s formula we have $\left\|\mathcal{A}^{(\alpha)}\left(0^n\right)\right\|^{1/n}_\alpha \to 1$ as $n \to \infty$. It follows in particular that there is an integer $N_0 \geq 2$ such that $\left\|\mathcal{A}^{(\alpha)}\left(0^{N_0}\right)\right\|_\alpha < \varrho(\alpha)^{N_0}$ and therefore $0^{N_0} \nprec z$ for every $z \in Z_\alpha$. Similarly we may choose $N_1 \geq 2$ such that $1^{N_1} \nprec z$ for every $z \in Z_\alpha$. Let $N:= \max\{N_0,N_1\}$, and choose a further integer $M \geq 2$ such that $$\max\left\{\left\|\mathcal{A}^{(\alpha)}\left(0^M\right)\right\|_\alpha, \left\|\mathcal{A}^{(\alpha)}\left(1^M\right)\right\|_\alpha\right\} <\frac{\varrho(\alpha)^M }{ 2C_\alpha N^2}.$$ If $v$ is any subword of $x$, then there exists $n \geq 0$ such that $\mathcal{A}^{(\alpha)}(v)=\mathcal{A}^{(\alpha)}(T^nx,|v|)$, and since $T^nx \in Z_\alpha$ this implies $$\label{pfbal1} \mathfrak{d}\left(\mathcal{A}^{(\alpha)}(v)\right)\geq \frac{1}{2N^2}{\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(v){\right|\!\right|\!\right|}\geq \frac{1}{2C_\alpha N^2}\left\|\mathcal{A}^{(\alpha)}\left(T^nx,|v|\right)\right\|_\alpha = \frac{\varrho(\alpha)^{|v|}}{2C_\alpha N^2},$$ where we have used Lemma \[rho-norm\]. On the other hand, for any nonempty finite word $u$, $$\label{pfbal4}{\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u) \leq 2\rho\left(\mathcal{A}^{(\alpha)}(u)\right) \leq 2 \left\|\mathcal{A}^{(\alpha)}(u)\right\|_\alpha \leq 2\varrho(\alpha)^{|u|}.$$ Now, since $x$ is not balanced, it by definition has a subword which is not balanced. Applying Lemma \[theys2\] to this subword we deduce that there exists a suboptimal triple $(a, w,b)$ such that $a w b \prec x$. 
Define $\ell:=|a w b|$, and fix an integer $K \geq 1$ such that $$\left(1+\frac{\alpha^\ell}{16C_\alpha^2 N^4 M^2 \varrho(\alpha)^\ell}\right)^K> 2C_\alpha N^2.$$ Since $x$ is recurrent there are infinitely many occurrences of the word $a w b$ as a subword of $x$, and so we may choose words $s_1,\ldots,s_{K+1}$ such that the word $$u^{(0)}:= s_1 (a w b) s_2 (a w b) s_3 \ldots s_K (a w b) s_{K+1}$$ is a subword of $x$. Let $L:=|u^{(0)}|$, and for $i=1,\ldots,K$ define a new word $u^{(i)}$ by reversing the first $i$ explicit instances of the word $w$ in $u^{(0)}$; that is, $$u^{(1)}:= s_1 (a \tilde w b) s_2 (a w b) s_3 \ldots s_K (a w b) s_{K+1},$$ $$u^{(2)}:= s_1 (a \tilde w b) s_2 (a \tilde w b) s_3 \ldots s_K (a w b) s_{K+1},$$ and so forth, up to $$u^{(K)}:= s_1 (a \tilde w b) s_2 (a \tilde w b) s_3 \ldots s_K (a \tilde w b) s_{K+1}.$$ Note that for each $i$ we have, by applying Lemma \[sot2\] $i$ times and using \eqref{pfbal1}, $$\label{pfbal5}{\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u^{(i)}) \geq {\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u^{(0)}) \geq 2\mathfrak{d}\left(\mathcal{A}^{(\alpha)}(u^{(0)}) \right)\geq \frac{\varrho(\alpha)^L}{C_\alpha N^{2}},$$ since $u^{(0)}$ is a subword of $x$. As a consequence we observe that $0^M\nprec u^{(i)}$ for every $i$, since if we were to have $0^M \prec u^{(i)}$ for some $i$ then we could obtain $$\begin{aligned} \frac{\varrho(\alpha)^L}{2C_\alpha N^2} \leq \frac{1}{2}{\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u^{(i)}) &\leq \rho\left(\mathcal{A}^{(\alpha)}(u^{(i)})\right) \leq \left\|\mathcal{A}^{(\alpha)}(u^{(i)})\right\|_\alpha \\&\leq \left\|\mathcal{A}^{(\alpha)}\left(0^M\right)\right\|_\alpha \cdot \varrho(\alpha)^{L-M} <\frac{\varrho(\alpha)^L}{2C_\alpha N^2},\end{aligned}$$ a contradiction. Clearly an analogous contradiction would arise if we were to have $1^M \prec u^{(i)}$ and we conclude that $1^M \nprec u^{(i)}$ also.
Now, for $i=1,\ldots,K$ let $c^{(i)}$, $d^{(i)}$ be those words such that $u^{(i-1)}=c^{(i)} a w b d^{(i)}$ and $u^{(i)}=c^{(i)} a \tilde w b d^{(i)}$. Note that $|c^{(i)}|+|d^{(i)}|+\ell = L$ for each $i$. Making $i$ applications of Lemma \[sot2\] and using \eqref{pfbal1} yields $$\begin{aligned} {\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(c^{(i)}) &={\mathrm{tr}\,}\mathcal{A}^{(\alpha)}\left(s_1(a\tilde w b)s_2 \ldots s_{i-1} (a\tilde w b)s_i\right) \\ &\geq {\mathrm{tr}\,}\mathcal{A}^{(\alpha)}\left(s_1(a w b)s_2 \ldots s_{i-1} (a w b)s_i\right) \geq \frac{\varrho(\alpha)^{|c^{(i)}|}}{2C_\alpha N^2},\end{aligned}$$ since the last of these words is a subword of $u^{(0)}$, and $u^{(0)}$ is a subword of $x$. Since $c^{(i)} \prec u^{(i)}$ and $0^M, 1^M\nprec u^{(i)}$ we have $0^M , 1^M \nprec c^{(i)}$, and by Lemma \[rho-norm\] in combination with the preceding inequality this implies $$\label{pfbal2} \mathfrak{d}\left(\mathcal{A}^{(\alpha)}(c^{(i)})\right) \geq \frac{1}{4M^2}{\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(c^{(i)}) \geq \frac{\varrho(\alpha)^{|c^{(i)}|}}{4C_\alpha N^2 M^2}.$$ Equally, since $d^{(i)} \prec u^{(0)}$ and $u^{(0)}$ is a subword of $x$, we may apply \eqref{pfbal1} to obtain $$\label{pfbal3}\mathfrak{d} \left(\mathcal{A}^{(\alpha)}(d^{(i)}) \right)\geq \frac{\varrho(\alpha)^{|d^{(i)}|}}{2C_\alpha N^2}.$$ We may now complete the proof.
Combining \eqref{pfbal2}, \eqref{pfbal3} and \eqref{pfbal4} we obtain for each $i$ $$\alpha^{|a w b|_1}\mathfrak{d}\left( \mathcal{A}^{(\alpha)}(c^{(i)})\right) \mathfrak{d}\left( \mathcal{A}^{(\alpha)}(d^{(i)})\right) \geq \frac{\alpha^\ell\varrho(\alpha)^{L-\ell}}{8C_\alpha^2 N^4 M^2}\geq \frac{\alpha^\ell}{16C_\alpha^2 N^4 M^2\varrho(\alpha)^\ell} {\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u^{(i-1)}),$$ and hence by Lemma \[sot2\], $${\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u^{(i)}) \geq \left(1+\frac{\alpha^\ell}{16C_\alpha^2 N^4M^2\varrho(\alpha)^\ell}\right){\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u^{(i-1)}).$$ In combination with \eqref{pfbal4} and \eqref{pfbal5} this yields $$\begin{aligned} 2\varrho(\alpha)^L \geq {\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u^{(K)}) &\geq \left(1+\frac{\alpha^\ell}{16C_\alpha^2 N^4M^2\varrho(\alpha)^\ell}\right)^K{\mathrm{tr}\,}\mathcal{A}^{(\alpha)}(u^{(0)}) \\&\geq \left(1+\frac{\alpha^\ell}{16C_\alpha^2 N^4M^2\varrho(\alpha)^\ell}\right)^K \cdot\frac{\varrho(\alpha)^L}{C_\alpha N^2},\end{aligned}$$ contradicting our choice of $K$. The proof is complete. Study of the growth of matrix products along balanced words {#section6} =========================================================== In this section we analyse in detail the exponential growth rate of $\mathcal{A}(x,n)$ in the limit as $n \to \infty$ for $x \in X_\gamma$, investigating in particular the manner in which this value depends on $\gamma$. A construction with similar properties is discussed briefly in [@BM §4.3]. The results of this section are summarised in the following proposition: \[Sproposition\] - There exists a continuous concave function $S \colon [0,1] \to \mathbb{R}$ such that for each $\gamma \in [0,1]$, $$\lim_{n \to \infty} \frac{1}{n}\log{\left|\!\left|\!\left|}\mathcal{A}(x,n){\right|\!\right|\!\right|}= \lim_{n \to \infty} \frac{1}{n}\log\rho(\mathcal{A}(x,n)) = S(\gamma)$$ uniformly for $x \in X_\gamma$. - If $\gamma=p/q \in [0,1]\cap\mathbb{Q}$ then $S(\gamma)=q^{-1}\log \rho(\mathcal{A}(x,q))$ for every $x \in X_\gamma$.
- The function $S$ also satisfies $\inf_{\gamma\in[0,1]} S = S(0)=S(1)=0$, $\sup S = S(1/2)=\log \varrho(1)$, and $S(\gamma)=S(1-\gamma)$ for all $\gamma \in [0,1]$. - The function $S$ is non-decreasing on $\left[0,\frac12\right]$. The proof of Proposition \[Sproposition\] is given in the form of a sequence of lemmas. Specifically, the result follows by combining Lemmas \[Sfunction\]–\[Sbounds\] and Lemma \[Smain\] below. \[Sfunction\] Let $\gamma \in [0,1]$. Then there exists a real number $S(\gamma)$ such that $$\lim_{n \to \infty}\frac{1}{n}\log {\left|\!\left|\!\left|}\mathcal{A}(x,n){\right|\!\right|\!\right|}=\lim_{n \to \infty}\frac{1}{n}\log\rho( \mathcal{A}(x,n))=S(\gamma)$$ uniformly over $x \in X_\gamma$. In the cases $\gamma=0$, $\gamma=1$ the lemma is trivial, since by Theorem \[Xphi\] the set $X_\gamma$ consists of a single point which is fixed under $T$, and the result follows by Gelfand’s formula. To prove the lemma in the nontrivial cases we use a result due to A. Furman [@F] on uniform convergence for linear cocycles over homeomorphisms. Since in general the transformations $T \colon X_\gamma \to X_\gamma$ are not homeomorphisms, this is achieved via an auxiliary construction. Let us fix $\gamma \in (0,1)$. Define a space of two-sided sequences ${\hat X}_\gamma \subset \{0,1\}^{\mathbb{Z}}$ as follows: the sequence $x=(x_n)_{n \in \mathbb{Z}} \in \{0,1\}^{\mathbb{Z}}$ belongs to ${\hat{X}}_{\gamma}$ if and only if there exists $\delta \in [0,1]$ such that either $x_n \equiv \lceil (n+1)\gamma + \delta \rceil - \lceil n\gamma + \delta \rceil$ for all $n \in \mathbb{Z}$, or $x_n \equiv \lfloor (n+1)\gamma + \delta \rfloor - \lfloor n\gamma + \delta \rfloor$ for all $n \in \mathbb{Z}$. 
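For concreteness, the mechanical (Sturmian) formulas just used to define ${\hat X}_\gamma$ can be evaluated directly. The following sketch, which assumes only the displayed floor-difference formula (restricted to non-negative indices), generates such a word and checks that it is balanced with $1$-ratio close to $\gamma$:

```python
import math

def mechanical_word(gamma, delta, n):
    """First n letters of the lower mechanical word with slope gamma and
    intercept delta: x_k = floor((k+1)*gamma + delta) - floor(k*gamma + delta)."""
    return [math.floor((k + 1) * gamma + delta) - math.floor(k * gamma + delta)
            for k in range(n)]

def is_balanced(w):
    """|u|_1 and |v|_1 differ by at most 1 for equal-length subwords u, v."""
    for m in range(1, len(w)):
        ones = [sum(w[i:i + m]) for i in range(len(w) - m + 1)]
        if max(ones) - min(ones) > 1:
            return False
    return True
```

The telescoping sum $\sum_k x_k = \lfloor n\gamma+\delta\rfloor - \lfloor\delta\rfloor$ makes the $1$-ratio converge to $\gamma$, matching the definition of $X_\gamma$.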
It follows from the discussion subsequent to the statement of Theorem \[Xphi\] that the two-sided sequence $(x_i)_{i \in \mathbb{Z}}$ belongs to ${\hat{X}}_\gamma$ if and only if the one-sided sequence $(x_{i+k})_{i =1}^\infty$ belongs to $X_\gamma$ for every $k \in \mathbb{Z}$. We equip ${\hat{X}}_\gamma$ with the topology it inherits from the infinite product topology on $\{0,1\}^{\mathbb{Z}}$, and define $\hat{T} \colon {\hat{X}}_\gamma \to {\hat{X}}_\gamma$ by $\hat{T}[(x_i)_{i \in \mathbb{Z}}]:=(x_{i+1})_{i \in \mathbb{Z}}$ analogously to the definition of $T$. In the same manner as for the transformation $T \colon X_\gamma \to X_\gamma$, one may show that $\hat T \colon {\hat{X}}_\gamma \to {\hat{X}}_\gamma$ is a continuous, uniquely ergodic transformation of a compact metrisable space. Finally, we define $\hat{\mathcal{A}} \colon {\hat{X}}_\gamma \times \mathbb{Z} \to {\mathbf{M}_2(\mathbb{R})}$ in the following manner: given $x=(x_i)_{i \in \mathbb{Z}} \in {\hat{X}}_\gamma$ and $n \geq 1$, we define $\hat{\mathcal{A}}(x,n):=A_{x_n}\cdots A_{x_1}$, $\hat{\mathcal{A}}(x,-n):= A_{x_{-(n-1)}}^{-1}A_{x_{-(n-2)}}^{-1}\cdots A_{x_{0}}^{-1} = \hat{\mathcal{A}}(\hat{T}^{-n}x,n)^{-1}$, and $\hat{\mathcal{A}}(x,0)=I$. It may be directly verified that $\hat{\mathcal{A}}$ is continuous and satisfies the following cocycle relation: for all $x \in {\hat{X}}_\gamma$ and $n,m \in \mathbb{Z}$, we have $\hat{\mathcal{A}}(x,n+m)= \hat{\mathcal{A}}(\hat{T}^nx,m)\hat{\mathcal{A}}(x,n)$. Now let $N\geq 1$ be as given by Lemma  \[nolongwords\]. For each $x \in X_\gamma$ we have $0^N, 1^N \nprec x$. Since for each $x = (x_i)_{i \in \mathbb{Z}} \in \hat{X}_\gamma$ we have $(x_i)_{i=1}^\infty \in X_\gamma$, it follows from this that the matrix product which defines $\hat{\mathcal{A}}(x,N)$ is a product of mixed powers of $A_0$ and $A_1$, and does not simply equal $A_0^N$ or $A_1^N$. 
A simple calculation shows that this implies that for each $x \in {\hat{X}}_\gamma$, all of the entries of the matrix $\hat{\mathcal{A}}(x,N)$ are strictly positive. We may therefore apply [@F Theorem 3] to deduce that there exists a real number $S(\gamma)$ such that $\frac{1}{n}\log \|\hat{\mathcal{A}}(x,n)\|$ converges uniformly to $S(\gamma)$ for $x \in {\hat{X}}_\gamma$. Since clearly for each $n \geq 1$, $$\left\{\hat{\mathcal{A}}(x,n) \colon x \in {\hat{X}}_\gamma\right\} = \left\{\mathcal{A}(x,n)\colon x \in X_\gamma\right\},$$ this implies that $\frac{1}{n}\log \|\mathcal{A}(x,n)\|$ converges uniformly to $S(\gamma)$ for $x \in X_\gamma$. Since as previously noted we have $0^N, 1^N \nprec x$ for all $x \in X_\gamma$, it follows immediately from Lemma \[rho-norm\] that also $\frac{1}{n}\log \rho(\mathcal{A}(x,n)) \to S(\gamma)$ uniformly over $x \in X_\gamma$. The proof is complete. \[Sestimates\] The function $S$ has the following properties: (i) Let $\gamma=p/q \in [0,1]$, not necessarily in least terms: then $S(\gamma)=q^{-1}\log \rho(\mathcal{A}(x,q))$ for every $x \in X_{p/q}$. (ii) Let $u$ be a finite word such that $|u|=q$, $|u|_1=p$. Then $S(p/q) \geq q^{-1}\log\rho(\mathcal{A}(u))$. (iii) Let $\gamma \in [0,1]$ be irrational. Then there exist $x \in X_\gamma$ and a sequence of rational numbers $(p_n/q_n)_{n=1}^\infty$ converging to $\gamma$ such that $S(p_n/q_n)=q_n^{-1}\log \rho(\mathcal{A}(x,q_n))$ for every $n \geq 1$. (iv) For every $\gamma \in [0,1]$ we have $S(\gamma)=S(1-\gamma)$. (i). By Theorem \[Xphi\] we have $T^qx=x$ for every $x \in X_{p/q}$, and so for every $x\in X_{p/q}$, $$S(p/q) = \lim_{k \to \infty} \frac{1}{kq}\log{\left|\!\left|\!\left|}\mathcal{A}(x,kq){\right|\!\right|\!\right|}= \lim_{k \to \infty}\frac{1}{kq}\log {\left|\!\left|\!\left|}\mathcal{A}(x,q)^k{\right|\!\right|\!\right|}= \frac{1}{q}\log \rho\left(\mathcal{A}(x,q)\right).$$ (ii).
Clearly the set of all words $v$ such that $|v|=q$ and $|v|_1=p$ is finite, so there exists a word $v$ which attains the maximum value of $\rho(\mathcal{A}(v))$ within this set. In particular we have $\rho(\mathcal{A}(v))\geq \rho(\mathcal{A}(u))$. By Lemma \[finite-balanced-best\] the infinite word $v^\infty \in \Sigma$ is balanced, and since it is clearly recurrent we have $v^\infty \in X_{p/q}$ by Theorem \[Xphi\]. By part (i) this implies $q^{-1}\log \rho(\mathcal{A}(v))=S(p/q)$ as required. (iii). Let $x \in X_\gamma$ be as given by Lemma \[standard\](ii), and let $(q_n)_{n=1}^\infty$ be a strictly increasing sequence of natural numbers such that $\pi_{q_n}(x)$ is a standard word for every $n$. Define $p_n:=|\pi_{q_n}(x)|_1$ for each $n \geq 1$. By the definition of $X_\gamma$ we have $p_n/q_n \to \gamma$. Since each $\pi_{q_n}(x)$ is standard, $[\pi_{q_n}(x)]^\infty \in X_{p_n/q_n}$ for each $n$ by Lemma \[standard\](i), and by part (i) of the present lemma this implies $S(p_n/q_n)=q_n^{-1}\log \rho(\mathcal{A}(x,q_n))$. (iv). For each finite or infinite word $\omega$, define $\overline{\omega}$ to be the *mirror image* of $\omega$, i.e., the unique word such that $\overline{\omega}_i=1$ if and only if $\omega_i=0$. It is clear that $x \in X_\gamma$ if and only if $\overline{x}\in X_{1-\gamma}$. Define $R=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ and note that $R^{-1}A_0R=A_1$ and $R^{-1}A_1R=A_0$. If $x \in X_\gamma$ and $n \geq 1$, then $$R^{-1}\mathcal{A}(x,n)R=(R^{-1}A_{x_n}R)\cdots (R^{-1}A_{x_2}R)(R^{-1}A_{x_1}R) = \mathcal{A}(\overline{x},n)$$ and in particular $\rho(\mathcal{A}(x,n))=\rho(\mathcal{A}(\overline{x},n))$. It follows easily that $S(\gamma)=S(1-\gamma)$. \[Sbounds\] The function $S$ satisfies $S(0)=\inf S = 0$ and $S(\frac{1}{2})=\sup S = \log \varrho(1)$.
The reader may easily verify that $$\label{S1}{{|\!|\!|}}A_1{{|\!|\!|}}={{|\!|\!|}}A_0{{|\!|\!|}}={{|\!|\!|}}A_0A_1{{|\!|\!|}}^{\frac{1}{2}}={{|\!|\!|}}A_1A_0{{|\!|\!|}}^{\frac{1}{2}}=\rho(A_0A_1)^{1/2}= \frac{1+\sqrt{5}}{2}.$$ By Theorem \[Xphi\], we have $X_{1/2}=\{(01)^\infty,(10)^\infty\}$, so by Gelfand’s formula we have $$\lim_{n \to \infty}{{|\!|\!|}}\mathcal{A}(x,n){{|\!|\!|}}^{1/n} = \rho(A_0A_1)^{\frac{1}{2}} = \rho(A_1A_0)^{\frac{1}{2}} = \frac{1+\sqrt{5}}{2}$$ when $x \in X_{1/2}$. Let us show that $\varrho(1)=\frac{1+\sqrt{5}}{2}$. In other words, we will prove that $$\limsup_{n\to\infty}\sup\left\{{{|\!|\!|}}\mathcal{A}(x,n){{|\!|\!|}}^{1/n}\colon x\in\Sigma\right\} = \lim_{n\to\infty}{{|\!|\!|}}\mathcal{A}((01)^\infty,n){{|\!|\!|}}^{1/n} =\frac{1+\sqrt{5}}{2}.$$ Suppose $x$ has a tail different from $(01)^\infty$. Then it must contain one of the following subwords: $w_1=11(01)^n1,\ w_2=11(01)^n00, w_3=00(10)^n0, w_4=00(10)^n11$ with $n\ge0$. In view of mirror symmetry, it suffices to deal with $w_1$ and $w_2$. We will show that it is possible to replace them with subwords of $(01)^\infty$, $w_1'$ and $w_2'$ respectively, in such a way that the corresponding growth exponent does not decrease. Namely, put $w_1'=(10)^{n+1}1$ and $w_2'=(10)^{n+2}$. It is easy to see that for $n\ge1$, $$\begin{aligned} (A_0A_1)^n &= \begin{pmatrix} F_{2n} & F_{2n-1} \\ F_{2n-1} & F_{2n-2} \end{pmatrix} \\ (A_1A_0)^n &= \begin{pmatrix} F_{2n-2} & F_{2n-1} \\ F_{2n-1} & F_{2n} \end{pmatrix},\end{aligned}$$ where, as above, $(F_n)_{n=0}^\infty$ is the Fibonacci sequence (with $F_0=F_1=1$). Hence $$A_1^2(A_0A_1)^nA_1 = \begin{pmatrix} F_{2n+1} & F_{2n-1} \\ F_{2n+3} & F_{2n+1} \end{pmatrix},$$ whereas $$(A_1A_0)^{n+1}A_1 = \begin{pmatrix} F_{2n+2} & F_{2n+1} \\ F_{2n+3} & F_{2n+2} \end{pmatrix},$$ i.e., $\mathcal A(w_1')$ dominates $\mathcal A(w_1)$ entry-by-entry.
Similarly, $$A_1^2(A_0A_1)^nA_0^2 = \begin{pmatrix} F_{2n} & F_{2n+2} \\ F_{2n+2} & F_{2n+4} \end{pmatrix}$$ and $$(A_1A_0)^{n+2} = \begin{pmatrix} F_{2n+2} & F_{2n+3} \\ F_{2n+3} & F_{2n+4} \end{pmatrix},$$ so that $\mathcal A(w_2')$ again dominates $\mathcal A(w_2)$ entry-by-entry, and in both cases the growth exponent does not decrease. Thus, $\varrho(1)= \frac{1+\sqrt{5}}{2}=e^{S(\frac{1}{2})}$, and since clearly $S(\gamma)\leq \log\varrho(1)$ for every $\gamma \in [0,1]$ this implies that $\sup S = S(1/2)$. On the other hand, it is clear that $X_0$ contains a single point $x$ corresponding to an infinite sequence of zeroes, and for this $x$ we have $S(0)=\log \rho(A_0)=0$. Finally, since every matrix $\mathcal{A}(x,n)$ is a non-negative integer matrix with determinant one, and hence has an entry of absolute value at least one, every $x \in \Sigma$ has $\frac{1}{n}\log {{|\!|\!|}}\mathcal{A}(x,n){{|\!|\!|}}\geq 0$ for all $n$ and therefore $S(\gamma)\geq 0$ for every $\gamma$. \[Sconcave\] The restriction of $S$ to $(0,1)\cap\mathbb{Q}$ is concave in the following sense: if $\gamma_1,\gamma_2,\lambda \in (0,1) \cap \mathbb{Q}$ then $S(\lambda \gamma_1+(1-\lambda)\gamma_2) \geq \lambda S(\gamma_1)+(1-\lambda)S(\gamma_2)$. For $i=1,2$ let $\gamma_i = p_i/q_i$ in least terms, and let $\lambda =k/m$. Let $M=\max\{q_1,q_2\}$. As a consequence of Lemma \[Sestimates\](i) there exist finite words $u^{(1)},u^{(2)} \in \Omega$ such that $|u^{(i)}|_1=p_i$, $|u^{(i)}|=q_i$ and $S(\gamma_i) = q_i^{-1}\log \rho(\mathcal{A}(u^{(i)}))$ for each $i$. Since $0<\gamma_1,\gamma_2<1$ we have $0<p_i<q_i$ and therefore $0^M, 1^M \nprec (u^{(i)})^\ell$ for $i=1,2$ and every $\ell \geq 1$.
In particular, for each $\ell_1,\ell_2 \geq 1$ the word $(u^{(1)})^{\ell_1} (u^{(2)})^{\ell_2}$ does not have $0^{2M}$ or $1^{2M}$ as a subword, and hence by Lemma \[rho-norm\], $$\begin{aligned} \rho\left(\mathcal{A}\left((u^{(1)})^{\ell_1}(u^{(2)})^{\ell_2}\right)\right) & \geq \mathfrak{d}\left(\mathcal{A}\left((u^{(1)})^{\ell_1}(u^{(2)})^{\ell_2}\right)\right) \geq \mathfrak{d}\left(\mathcal{A}\left((u^{(1)})^{\ell_1}\right)\right)\mathfrak{d}\left(\mathcal{A} \left((u^{(2)})^{\ell_2}\right)\right) \\ &\geq \frac{1}{64M^4}\rho\left(\mathcal{A}(u^{(1)})^{\ell_1}\right) \rho\left(\mathcal{A}(u^{(2)})^{\ell_2}\right).\end{aligned}$$ Applying this inequality together with Lemma \[Sestimates\](ii), for each $n \geq 1$ we obtain $$\begin{aligned} S(\lambda \gamma_1 + (1-\lambda)\gamma_2) &= S\left(\frac{kp_1q_2 + (m-k)q_1p_2}{mq_1q_2}\right)\\ &\geq \frac{1}{nmq_1q_2}\log \rho(\mathcal{A}((u^{(1)})^{nkq_2}(u^{(2)})^{n(m-k)q_1}))\\ &\geq \frac{1}{nmq_1q_2}\left(\log \rho(\mathcal{A}((u^{(1)})^{nkq_2})) + \log \rho(\mathcal{A}((u^{(2)})^{n(m-k)q_1})) - \log 64M^4\right)\\ &= \frac{k}{mq_1}\log \rho(\mathcal{A}(u^{(1)})) + \frac{m-k}{mq_2}\log \rho(\mathcal{A}(u^{(2)})) - \frac{\log 64M^4}{nmq_1q_2}\\ &= \lambda S(\gamma_1) + (1-\lambda)S(\gamma_2) -\frac{\log 64M^4}{nmq_1q_2}.\end{aligned}$$ Taking the limit as $n \to \infty$ we obtain the desired result. \[Smain\] The function $S \colon [0,1]\to \mathbb{R}$ is continuous and concave. By Lemma \[Sconcave\], the restriction of $S$ to $(0,1)\cap\mathbb{Q}$ is concave. Define a function $\widetilde S \colon [0,1] \to \mathbb{R}$ by $$\widetilde S(\gamma) := \lim_{\varepsilon \to 0}\sup\left\{S(\gamma_*) \colon \gamma_* \in (0,1) \cap \mathbb{Q} \text{ and }|\gamma_*-\gamma|<\varepsilon\right\}.$$ Note that $\widetilde S$ is well-defined since $S$ is bounded by Lemma  \[Sbounds\]. We shall show in several stages that $\widetilde S$ is continuous, concave, and equal to $S$ throughout $[0,1]$. We first shall show that $\widetilde S$ is concave.
Let $\gamma_1,\gamma_2,\lambda \in [0,1]$, and choose sequences of rationals $\left(\gamma_1^{(n)}\right)$, $\left(\gamma_2^{(n)}\right)$ and $\left(\lambda_n\right)$ belonging to $(0,1)$, converging respectively to $\gamma_1, \gamma_2$ and $\lambda$, such that $\lim_{n \to \infty} S\left(\gamma_i^{(n)}\right) = \widetilde S(\gamma_i)$ for $i=1,2$. We then have $$\begin{aligned} \widetilde S\left(\lambda \gamma_1 + \left(1-\lambda\right)\gamma_2\right) &\geq \limsup_{n \to \infty} S\left(\lambda_n \gamma_1^{(n)} + \left(1-\lambda_n\right)\gamma_2^{(n)}\right)\\ &\geq \limsup_{n \to \infty} \lambda_n S\left(\gamma_1^{(n)}\right) + \left(1-\lambda_n\right)S\left(\gamma_2^{(n)}\right)\\ &= \lim_{n \to \infty}\lambda_n S\left(\gamma_1^{(n)}\right) + \left(1-\lambda_n\right)S\left(\gamma_2^{(n)}\right)\\& = \lambda \widetilde S(\gamma_1)+\left(1-\lambda\right)\widetilde S(\gamma_2)\end{aligned}$$ using Lemma \[Sconcave\], and $\widetilde S$ is concave as claimed. In particular the restriction of $\widetilde S$ to the interval $(0,1)$ is continuous (see for example [@Rock Thm 10.3]). We next claim that $\widetilde S(\gamma)=S(\gamma)$ for rational values $0<\gamma<1$. Given $\gamma \in (0,1) \cap \mathbb{Q}$, choose a sequence of rationals $(\gamma_n)$ such that $\gamma_n \to \gamma$ and $S(\gamma_n) \to \widetilde S(\gamma)$. If $0<\gamma \leq \gamma_n$ for some $n$ then $$S(\gamma) \geq \left(1-\frac{\gamma}{\gamma_n}\right)S(0)+ \frac{\gamma}{\gamma_n}S(\gamma_n)= \frac{\gamma}{\gamma_n}S(\gamma_n),$$ and similarly if $\gamma_n<\gamma < 1$ then $$S(\gamma) \geq \left(\frac{1-\gamma}{1-\gamma_n}\right)S(\gamma_n) +\left(\frac{\gamma-\gamma_n}{1-\gamma_n}\right)S(1) \geq \left(\frac{1-\gamma}{1-\gamma_n}\right)S(\gamma_n).$$ It follows that by taking the limit as $n \to \infty$ we may obtain $S(\gamma) \geq \widetilde S(\gamma)$, and the converse inequality $\widetilde S(\gamma) \geq S(\gamma)$ is obvious from the definition of $\widetilde S$. This proves the claim. 
We now claim that $\lim_{\gamma \to 0} \widetilde S(\gamma)=\widetilde S(0)=0=S(0)$ and $\lim_{\gamma \to 1}\widetilde S(\gamma)=\widetilde S(1)=0=S(1)$. Since $S(\gamma)=S(1-\gamma)$ for every $\gamma \in [0,1]$ by Lemma \[Sestimates\](iv) it is sufficient to prove only the first assertion. By Lemma \[Sbounds\] we have $S(0)=\inf S =0 $ and therefore $\inf \widetilde S \geq 0$. Since $\widetilde S$ is concave there must exist $\delta>0$ such that the restriction of $\widetilde S$ to $[0,\delta)$ is monotone, and so if we can show that $\lim_{n \to \infty} \widetilde S(1/n)=0$ then the desired result will follow. By the preceding claim it is sufficient to show that $\lim_{n \to \infty} S(1/n)=0$. For each $n \geq 1$ it is easily verified using Lemma \[cyclic-balanced\] that $(0^n1)^\infty \in X_{1/(n+1)}$, so using Lemma \[Sestimates\](i) we may estimate $$0 \leq S\left(\frac{1}{n+1}\right)=\frac{1}{n+1}\log \rho(A_0^n A_1) \leq \frac{1}{n+1} \log {\mathrm{tr}\,}(A_0^n A_1) = \frac{\log (n+2)}{n+1}$$ and therefore $S(1/n) \to 0$. This completes the proof of the claim. To complete the proof of the lemma it suffices to show that in fact $\widetilde S(\gamma) = S(\gamma)$ when $\gamma$ is irrational. Given $\gamma \in [0,1] \setminus \mathbb{Q}$, let $x \in X_\gamma$ and $(p_n/q_n)_{n=1}^\infty$ be as given by Lemma \[Sestimates\](iii). Since $\widetilde S$ is continuous and agrees with $S$ on the rationals, we may apply parts (iii) and (i) of Lemma \[Sestimates\] to obtain $$S(\gamma)= \lim_{n \to \infty} \frac{1}{q_n}\log \rho(\mathcal{A}(x,q_n)) = \lim_{n \to \infty} S\left(\frac{p_n}{q_n}\right) = \lim_{n \to \infty}\widetilde S\left(\frac{p_n}{q_n}\right) = \widetilde S(\gamma),$$ and we conclude that $\widetilde S \equiv S$ as desired. To conclude the proof of Proposition \[Sproposition\], we note that the function $S$ being non-decreasing on $\left[0,\frac12\right]$ follows from its concavity and the fact that $\max\limits_{\gamma\in[0,1/2]} S(\gamma)=S(1/2)$.
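The rational values of $S$ established in this section are computable by brute force for small denominators: by Lemma \[Sestimates\](i)–(ii), $S(p/q)$ is the maximum of $q^{-1}\log\rho(\mathcal A(u))$ over the finitely many words with $|u|=q$ and $|u|_1=p$. The following numerical sketch (illustrative only; it assumes the pair $A_0=\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$, $A_1=\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$, which is consistent with the Fibonacci identities displayed above) spot-checks $S(0)=0$, $S(1/2)=\log\frac{1+\sqrt5}{2}$, the symmetry $S(\gamma)=S(1-\gamma)$, concavity, and the bound $S(1/(n+1))\le\frac{\log(n+2)}{n+1}$ just used:

```python
import math
from itertools import combinations

A0 = ((1, 1), (0, 1))
A1 = ((1, 0), (1, 1))

def mul(x, y):
    """Product of 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def rho(m):
    """Spectral radius of a non-negative integer matrix with determinant 1."""
    t = m[0][0] + m[1][1]
    return (t + math.sqrt(t * t - 4)) / 2 if t > 2 else 1.0

def word_matrix(w):
    """A(w) = A_{w_q} ... A_{w_1}, following the paper's convention."""
    m = ((1, 0), (0, 1))
    for letter in w:
        m = mul(A1 if letter else A0, m)
    return m

def S(p, q):
    """S(p/q) = max over words with q letters and p ones of (1/q) log rho."""
    best = 0.0
    for ones in combinations(range(q), p):
        w = [1 if i in ones else 0 for i in range(q)]
        best = max(best, math.log(rho(word_matrix(w))) / q)
    return best
```

This exhaustive maximisation is exponential in $q$ and is meant only as a sanity check, not as an efficient algorithm.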
Proof of Theorem \[technical\] {#sec7} ============================== Before commencing the proof of Theorem \[technical\], we require the following simple lemma: \[techpf\] For each $\alpha \in [0,1]$ we have $\varrho(\alpha) \geq e^{S(\gamma)}\alpha^\gamma$ for all $\gamma \in [0,1]$. If $\alpha \in (0,1]$ and $X_\gamma \cap Z_\alpha \neq \emptyset$, then $X_\gamma \subseteq Z_\alpha$ and $\varrho(\alpha)=e^{S(\gamma)}\alpha^\gamma$. In the case $\alpha=0$, an easy calculation using Proposition \[Sproposition\] and the definition of $\varrho$ shows that $\varrho(\alpha)=\rho(A_0)=1=e^{S(0)}$. It is therefore clear in this case that $\varrho(\alpha)=e^{S(\gamma)}\alpha^\gamma$ if and only if $\gamma=0$. For the rest of the proof let us fix $\alpha \in (0,1]$ and $\gamma \in [0,1]$. For each $x \in X_\gamma$, we have $$\begin{aligned} \log \varrho(\alpha) &=\limsup_{n \to \infty} \sup\left\{\frac{1}{n}\log {\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(z,n){\right|\!\right|\!\right|}\colon z \in \Sigma\right\}\geq \lim_{n \to \infty} \frac{1}{n}\log {\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(x,n){\right|\!\right|\!\right|}\\&= \lim_{n \to \infty} \left(\frac{1}{n}\log {{|\!|\!|}}\mathcal{A}(x,n){{|\!|\!|}}+ \varsigma(\pi_n(x))\log \alpha\right) = S(\gamma)+\gamma \log \alpha\end{aligned}$$ so that $\varrho(\alpha) \geq e^{S(\gamma)}\alpha^\gamma$. If $x \in X_\gamma \cap Z_\alpha$ then by the definition of $Z_\alpha$ we have $$S(\gamma)+\gamma\log\alpha=\lim_{n \to \infty} \frac{1}{n}\log {\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(x,n){\right|\!\right|\!\right|}= \lim_{n \to \infty} \frac{1}{n}\log \left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha = \log \varrho(\alpha)$$ so that $\varrho(\alpha)=e^{S(\gamma)}\alpha^\gamma$, and since by Theorem \[Xphi\] the restriction of $T$ to $X_\gamma$ is minimal it is clear that $X_\gamma \subseteq Z_\alpha$. 
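Lemma \[techpf\] says that each curve $\gamma\mapsto e^{S(\gamma)}\alpha^\gamma$ lies below $\varrho(\alpha)$, with equality exactly at $\gamma=\mathfrak r(\alpha)$. A rough numerical illustration: for fixed $\alpha$ and word length $n$, maximise $\bigl(\alpha^{|u|_1}\rho(\mathcal A(u))\bigr)^{1/n}$ over all $2^n$ words; this is a lower bound for $\varrho(\alpha)$, attained at words whose $1$-ratio approximates $\mathfrak r(\alpha)$. The sketch below is an assumption-laden illustration, not part of the proof, and uses the same matrix pair as before:

```python
import math
from itertools import product

A0 = ((1, 1), (0, 1))
A1 = ((1, 0), (1, 1))

def mul(x, y):
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def rho(m):
    t = m[0][0] + m[1][1]
    return (t + math.sqrt(t * t - 4)) / 2 if t > 2 else 1.0

def best_exponent(alpha, n):
    """Max over words of length n of (alpha^{ones} * rho(A(u)))^{1/n},
    a finite-length lower bound for varrho(alpha), with the maximiser."""
    best, best_w = 0.0, None
    for w in product((0, 1), repeat=n):
        m = ((1, 0), (0, 1))
        for letter in w:
            m = mul(A1 if letter else A0, m)
        val = (alpha ** sum(w) * rho(m)) ** (1.0 / n)
        if val > best:
            best, best_w = val, w
    return best, best_w
```

At $\alpha=1$ the maximiser is an alternating word ($1$-ratio $\tfrac12=\mathfrak r(1)$, value $\frac{1+\sqrt5}2$); for very small $\alpha$ the maximiser degenerates to $0^n$, consistent with $\mathfrak r(\alpha)\to 0$.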
We also require the following lemma, which is an easy consequence of a result in [@BTV]: \[varsi\] Let $\alpha \in [0,1]$ and let $u,v$ be nonempty finite words such that $\rho(\mathcal{A}^{(\alpha)}(u))^{1/|u|} =\rho(\mathcal{A}^{(\alpha)}(v))^{1/|v|} =\varrho(\alpha)$. Then $\varsigma(u)=\varsigma(v)$. In [@BTV], Blondel, Theys and Vladimirov define two nonempty finite words $u,v$ to be *essentially equal* if there exist finite words $a,b$ such that $au^\infty = bv^\infty$. In particular it is clear that if $u$ and $v$ are essentially equal then necessarily $\varsigma(u)=\varsigma(v)$. Blondel *et al.* then associate to each nonempty finite word $\omega$ the set $J_\omega=\{\alpha \in [0,1] \colon \rho(\mathcal{A}^{(\alpha)}(\omega)) = \varrho(\alpha)^{|\omega|}\}$. In [@BTV Lemma 4.4] it is shown that if $J_u \cap J_v \neq \emptyset$ then $u$ and $v$ are essentially equal. We deduce from this that if $u$ and $v$ are nonempty finite words which satisfy $\rho(\mathcal{A}^{(\alpha)}(u))^{1/|u|} =\rho(\mathcal{A}^{(\alpha)}(v))^{1/|v|} =\varrho(\alpha)$ for some fixed $\alpha \in [0,1]$, then $\alpha \in J_u \cap J_v$ by definition; this implies that $u$ and $v$ are essentially equal, and therefore $\varsigma(u)=\varsigma(v)$. Now we are ready to prove Theorem \[technical\]. **1. Existence of $\mathfrak r$.** We shall begin by showing that for each $\alpha \in (0,1]$ there exists a unique $\gamma \in [0,1]$ such that $X_{\gamma} \cap Z_\alpha \neq \emptyset$. Let $\alpha \in (0,1]$. By Lemma \[Zset\] the set $Z_\alpha$ is compact and invariant under $T$, and this implies that it contains a recurrent point (see e.g. [@KH p.130]). It follows by Proposition \[balanced1\] that $Z_\alpha$ contains a recurrent balanced infinite word, and hence there exists $\gamma_\alpha \in [0,1]$ such that $X_{\gamma_\alpha} \cap Z_\alpha \neq \emptyset$. By Lemma \[techpf\] it follows that $e^{S(\gamma_\alpha)}\alpha^{\gamma_\alpha}=\varrho(\alpha)$.
We claim that $\gamma_\alpha$ is the unique element of $ [0,1]$ with this property. By Lemma \[techpf\] this further implies that $X_\gamma \cap Z_\alpha = \emptyset$ when $\gamma\neq\gamma_\alpha$. To prove this claim, let us suppose that $0 \leq \gamma_1 < \gamma_2 \leq 1$ with $e^{S(\gamma_1)}\alpha^{\gamma_1} =e^{S(\gamma_2)}\alpha^{\gamma_2} =\varrho(\alpha)$, and derive a contradiction. Choose $\lambda_1,\lambda_2 \in [0,1]$ such that $\tilde\gamma_1:=\lambda_1 \gamma_1 + (1-\lambda_1)\gamma_2$ and $\tilde\gamma_2:=\lambda_2 \gamma_1 + (1-\lambda_2)\gamma_2$ are both rational with $\gamma_1\leq \tilde \gamma_1 <\tilde \gamma_2 \leq \gamma_2$. Applying Proposition \[Sproposition\] we deduce $$\begin{aligned} S(\tilde\gamma_i)+\tilde\gamma_i\log\alpha &= S(\lambda_i\gamma_1+(1-\lambda_i)\gamma_2)+ (\lambda_i\gamma_1+(1-\lambda_i)\gamma_2)\log\alpha\\ &\geq \lambda_i(S(\gamma_1)+\gamma_1\log\alpha) +(1-\lambda_i)(S(\gamma_2)+\gamma_2\log\alpha) = \log\varrho(\alpha),\end{aligned}$$ and hence $e^{S(\tilde\gamma_i)}\alpha^{\tilde\gamma_i} \geq \varrho(\alpha)$, for $i=1,2$. Applying Lemma \[techpf\] it follows that $e^{S(\tilde\gamma_1)}\alpha^{\tilde\gamma_1} =e^{S(\tilde\gamma_2)}\alpha^{\tilde\gamma_2}=\varrho(\alpha)$. Write $\tilde\gamma_1=p_1/q_1$ and $\tilde\gamma_2=p_2/q_2$ in least terms, let $x \in X_{\tilde\gamma_1}$ and $y \in X_{\tilde\gamma_2}$, and let $u:=\pi_{q_1}(x)$ and $v:=\pi_{q_2}(y)$. By Proposition \[Sproposition\] we have $\varrho(\alpha)=\rho(\mathcal{A}^{(\alpha)}(u))^{1/|u|} = \rho(\mathcal{A}^{(\alpha)}(v))^{1/|v|}$, and since $\varsigma(u)=\tilde\gamma_1<\tilde\gamma_2=\varsigma(v)$ this contradicts Lemma \[varsi\]. The claim is proved. Let us define $\mathfrak{r}(\alpha):=\gamma_\alpha$ for all $\alpha \in (0,1]$, and $\mathfrak{r}(0):=0$. Note that $\varrho(0)=\rho(A_0)=1=e^{S(0)}$ as a consequence of Lemma \[bowf\] and Proposition \[Sproposition\].
It follows from this and the previous arguments that for all $\alpha,\gamma \in [0,1]$ we have $\varrho(\alpha) \geq e^{S(\gamma)}\alpha^\gamma$ with equality if and only if $\gamma=\mathfrak{r}(\alpha)$, and for all $\alpha \in (0,1]$ we have $X_\gamma \cap Z_\alpha \neq \emptyset$ precisely when $\gamma = \mathfrak{r}(\alpha)$, in which case $X_{\mathfrak{r}(\alpha)}\subseteq Z_\alpha$. **2. Monotonicity of $\mathfrak r$.** We now show that the function $\mathfrak{r}$ thus defined is non-decreasing. Let us suppose that $\alpha_1,\alpha_2 \in [0,1]$ with $\mathfrak{r}(\alpha_1)<\mathfrak{r}(\alpha_2)$; this implies in particular that $\alpha_2$ is nonzero. By the preceding result we have $\varrho(\alpha_1)= e^{S(\mathfrak{r}(\alpha_1))}\alpha_1^{\mathfrak{r}(\alpha_1)}> e^{S(\mathfrak{r}(\alpha_2))}\alpha_1^{\mathfrak{r}(\alpha_2)}$ and similarly $\varrho(\alpha_2)= e^{S(\mathfrak{r}(\alpha_2))}\alpha_2^{\mathfrak{r}(\alpha_2)}> e^{S(\mathfrak{r}(\alpha_1))}\alpha_2^{\mathfrak{r}(\alpha_1)}$. Consequently $\alpha_1^{\mathfrak{r}(\alpha_2)-\mathfrak{r}(\alpha_1)} <e^{S(\mathfrak{r}(\alpha_1))-S(\mathfrak{r}(\alpha_2))} < \alpha_2^{\mathfrak{r}(\alpha_2)-\mathfrak{r}(\alpha_1)}$, and since $\mathfrak{r}(\alpha_2)-\mathfrak{r}(\alpha_1)>0$ we deduce that $\alpha_1<\alpha_2$. We conclude that if $0\leq \alpha_1<\alpha_2\leq 1$ then necessarily $\mathfrak{r}(\alpha_1)\leq \mathfrak{r}(\alpha_2)$ and therefore $\mathfrak{r}$ is non-decreasing as required. **3. Continuity of $\mathfrak r$.** We may now show that $\mathfrak{r}$ is continuous. Given $\alpha_0 \in (0,1]$ let $\mathfrak{r}_-$ be the limit of $\mathfrak{r}(\alpha)$ as $\alpha \to \alpha_0$ from the left, which exists since $\mathfrak{r}$ is monotone. For every $\alpha \in (0,1]$ we have $\varrho(\alpha)= e^{S(\mathfrak{r}(\alpha))}\alpha^{\mathfrak{r}(\alpha)}$. 
By Lemma \[rhocts\] and Proposition \[Sproposition\], $\varrho$ and $S$ are continuous, so taking the left limit at $\alpha_0$ yields $e^{S(\mathfrak{r}(\alpha_0))}\alpha_0^{\mathfrak{r}(\alpha_0)}= \varrho(\alpha_0)=e^{S(\mathfrak{r}_-)}\alpha_0^{\mathfrak{r}_-}$. Since $\mathfrak{r}(\alpha_0)$ is the unique value of $\gamma$ for which the equality $e^{S(\gamma)}\alpha_0^{\gamma}=\varrho(\alpha_0)$ may hold, we deduce that $\mathfrak{r}(\alpha_0)= \mathfrak{r}_-$ as required. Similarly for every $\alpha_0 \in [0,1)$ the limit of $\mathfrak{r}(\alpha)$ as $\alpha \to\alpha_0$ from the right is equal to $\mathfrak{r}(\alpha_0)$, and we conclude that $\mathfrak{r}$ is continuous. Since $\mathfrak{r}(0)=0$ and $\mathfrak{r}(1)=1/2$ as a consequence of Proposition \[Sproposition\], and we have shown that $\mathfrak{r}$ is continuous and monotone, we deduce that $\mathfrak{r}$ maps $[0,1]$ surjectively onto $[0,\frac{1}{2}]$ as claimed. **4. 1-ratio and characterisation of extremal orbits.** It remains to show that for each $\alpha$ the extremal orbits of $\mathsf{A}_\alpha$ may be characterised in terms of $X_{\mathfrak{r}(\alpha)}$ in the manner described by the Theorem, and that $\mathfrak{r}(\alpha)$ is the unique optimal $1$-ratio of $\mathsf{A}_\alpha$. In the case $\alpha=0$ it is obvious that $x \in \Sigma$ is weakly extremal if and only if it is strongly extremal, if and only if $x =0^\infty \in X_0$, and in this case the proof is complete. For each $\alpha \in (0,1]$, Lemma \[strongex\] shows that every recurrent strongly extremal infinite word belongs to $Z_\alpha$, and therefore belongs to $X_{\mathfrak{r}(\alpha)}$ by Proposition \[balanced1\] and the uniqueness property of $\mathfrak{r}(\alpha)$. To show that weakly extremal infinite words accumulate on $X_{\mathfrak{r}(\alpha)}$ in the desired manner we require an additional claim.
Given $\alpha \in (0,1]$, we assert that there is a unique $T$-invariant Borel probability measure whose support is contained in $Z_\alpha$, and that this support is equal to $X_{\mathfrak{r}(\alpha)}$. Indeed, let $\mu_{\mathfrak{r}(\alpha)}$ be the unique $T$-invariant measure with support equal to $X_{\mathfrak{r}(\alpha)}$, the existence of which is given by Theorem \[Xphi\]. If $\nu$ is a $T$-invariant Borel probability measure with ${\mathrm{supp}\,}\nu \subseteq Z_\alpha$, define $\widetilde X := \{x \in {\mathrm{supp}\,}\nu \colon x \text{ is recurrent}\}$. It follows from the Poincaré recurrence theorem that $\widetilde X$ is dense in ${\mathrm{supp}\,}\nu$ (see e.g. [@KH Prop. 4.1.18]). By Proposition \[balanced1\] every element of $\widetilde X$ is balanced, and since $\mathfrak{r}(\alpha)$ is the unique $\gamma \in [0,1]$ for which $X_\gamma \cap Z_\alpha \neq \emptyset$, it follows that $\widetilde X \subseteq X_{\mathfrak{r}(\alpha)}$. We conclude that ${\mathrm{supp}\,}\nu \subseteq X_{\mathfrak{r}(\alpha)}$ and therefore $\nu=\mu_{\mathfrak{r}(\alpha)}$ since the restriction of $T$ to $X_{\mathfrak{r}(\alpha)}$ is known to be uniquely ergodic, which proves the claim. By Theorem \[Xphi\] we have $\mu_{\mathfrak{r}(\alpha)}(\{x \in \Sigma \colon x_1=1\}) =\mathfrak{r}(\alpha)$, and we may now apply Lemma \[weakex\] to see that if $x \in \Sigma$ is weakly extremal for $\mathsf{A}_\alpha$, then $(1/n) \sum_{k=0}^{n-1} \mathrm{dist}(T^kx,X_{\mathfrak{r}(\alpha)}) \to 0$ and $\varsigma(\pi_n(x)) \to \mathfrak{r}(\alpha)$ as required. It remains only to show that for each $\alpha \in (0,1]$, every $x \in X_{\mathfrak{r}(\alpha)}$ is strongly extremal in the strict fashion described by the Theorem. Given any compact set $K \subseteq (0,1]$, choose an integer $N_K$ large enough that $N_K>\max\{\lceil \mathfrak{r}(\alpha)^{-1}\rceil, \lceil (1-\mathfrak{r}(\alpha))^{-1}\rceil\}$ for every $\alpha \in K$, and let $M_K>1$ be the constant given by Lemma \[extnorm\].
Let $\alpha \in K$ and $x \in X_{\mathfrak{r}(\alpha)}$. By Lemma \[nolongwords\] we have $0^{N_K}, 1^{N_K} \nprec x$, and since $X_{\mathfrak{r}(\alpha)} \subseteq Z_\alpha$ we have $\|\mathcal{A}^{(\alpha)}(x,n)\|_\alpha=\varrho(\alpha)^n$ for all $n \geq 1$. Applying Lemma \[rho-norm\] and Lemma \[extnorm\], $$\begin{aligned} \frac{\varrho(\alpha)^n}{2M_KN_K^2} &= \frac{1}{2M_KN_K^2}\left\|\mathcal{A}^{(\alpha)}(x,n) \right\|_\alpha \leq \frac{1}{2N_K^2}{\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(x,n){\right|\!\right|\!\right|}\leq \rho\left(\mathcal{A}^{(\alpha)}(x,n)\right)\\& \leq {\left|\!\left|\!\left|}\mathcal{A}^{(\alpha)}(x,n){\right|\!\right|\!\right|}\leq M_K\left\|\mathcal{A}^{(\alpha)}(x,n)\right\|_\alpha \leq M_K \varrho(\alpha)^n<2M_KN_K^2\varrho(\alpha)^n\end{aligned}$$ so that the required chain of estimates holds with $C_K:=2M_K N^2_K$. In particular this shows that for each $\alpha \in (0,1]$, every $x \in X_{\mathfrak{r}(\alpha)}$ is strongly extremal. The proof of the Theorem is complete. Proof of Theorem \[counter\] {#sec8} ============================ Recall from Proposition \[Sproposition\] that there exists a continuous concave function $S \colon [0,1] \to \mathbb{R}$ such that for each $\gamma \in [0,1]$, $$S(\gamma)= \lim_{n \to \infty} \frac{1}{n} \log {{|\!|\!|}}\mathcal{A}(x,n){{|\!|\!|}}=\lim_{n \to \infty} \frac{1}{n} \log \rho(\mathcal{A}(x,n))$$ uniformly for $x \in X_\gamma$. We saw in the course of the proof of Theorem \[technical\] that the function $\mathfrak{r} \colon [0,1] \to [0,\frac{1}{2}]$ is characterised by the fact that $\varrho(\alpha) \geq e^{S(\gamma)}\alpha^\gamma$ for all $\alpha,\gamma \in [0,1]$ with equality if and only if $\gamma=\mathfrak{r}(\alpha)$. Readers who have skipped the proof of Theorem \[technical\] may note that this characterisation can be deduced easily from the definition of $S$ and the statement of Theorem \[technical\].
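The lower bound $\varrho(\alpha) \geq e^{S(\gamma)}\alpha^\gamma$ can be probed numerically. The sketch below is ours, not part of the paper: it assumes the defining pair of $\mathsf{A}_\alpha$ consists of the Fibonacci substitution matrices $A_0=\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$, $A_1=\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$, with the symbol $1$ carrying the scalar factor $\alpha$ (an assumption consistent with the value $S(1/3)=\frac13\log\rho(A_0^2A_1)=\frac13\log(2+\sqrt3)$ quoted later in this section). Since $\rho(M)^{1/m}\leq\varrho(\alpha)$ for any product $M$ of $m$ matrices from the pair, maximising over short products gives a numerical lower bound for $\varrho(\alpha)$ that can be compared with $e^{S(1/3)}\alpha^{1/3}=((2+\sqrt{3})\alpha)^{1/3}$.

```python
import itertools
import math

# Fibonacci substitution matrices (assumed form of the pair defining A_alpha;
# the symbol 1 carries the scalar factor alpha).
A0 = ((1, 1), (0, 1))
A1 = ((1, 0), (1, 1))

def mat_mul(M, N):
    return ((M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]),
            (M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]))

def spec_rad(M):
    # Spectral radius of a non-negative 2x2 matrix with real eigenvalues.
    t = M[0][0] + M[1][1]
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return 0.5 * (t + math.sqrt(max(t*t - 4*d, 0.0)))

def jsr_lower_bound(alpha, maxlen=8):
    # max over all words w with |w| = m <= maxlen of rho(A^{(alpha)}(w))^{1/m};
    # each such value is a lower bound for the joint spectral radius varrho(alpha).
    best = 0.0
    for m in range(1, maxlen + 1):
        for w in itertools.product((0, 1), repeat=m):
            M = ((1, 0), (0, 1))
            for s in w:
                M = mat_mul(M, A0 if s == 0 else A1)
            j = sum(w)  # number of 1s contributes the factor alpha**j
            best = max(best, (alpha**j * spec_rad(M)) ** (1.0 / m))
    return best

# At alpha = 1 the word 01 already attains rho((A0 A1)^k)^{1/2k}, the golden ratio.
print(jsr_lower_bound(1.0))
# The bound always dominates e^{S(1/3)} alpha^{1/3} = ((2+sqrt(3)) alpha)^{1/3}.
print(jsr_lower_bound(0.5) >= ((2 + math.sqrt(3)) * 0.5) ** (1/3) - 1e-12)
```

The word $(001)^\infty$ alone already forces the bound past $((2+\sqrt3)\alpha)^{1/3}$, in line with the equality case $\gamma=\mathfrak{r}(\alpha)$ of the characterisation.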
The proof of Theorem \[counter\] operates by exploiting the concavity of $S$ and the above relationship between $S$ and $\mathfrak{r}$ to compute a value $\alpha_* \in [0,1]$ such that $\mathfrak{r}(\alpha_*) \notin \mathbb{Q}$ as the limit of a sequence of approximations. We begin with a result from convex analysis. \[deriv\] For each $\gamma \in \left(0,\frac{1}{2}\right)$, we have $\mathfrak{r}^{-1}(\gamma)=\{\alpha_0\}$ with $\alpha_0 \in (0,1]$ if and only if $S$ is differentiable at $\gamma$ and $S'(\gamma)=-\log \alpha_0$. See Figure \[fig:S(gamma)\] for a graph of $S(\gamma)$ along with the tangent line of slope $-\log \alpha_*$. $$\includegraphics[width=250pt,height=250pt]{alpha_0_0.5-2.ps}$$ Recall that if $f \colon [a,b] \to \mathbb{R}$ is a concave function then $\eta \in \mathbb{R}$ is called a *subgradient* of $f$ at $z \in [a,b]$ if $f(y) \leq f(z) + \eta(y-z)$ for all $y \in [a,b]$. Furthermore, $f$ is differentiable at $z \in (a,b)$ with $f'(z)=\eta$ if and only if $\eta$ is the unique subgradient of $f$ at $z$ (see for example [@Rock Thm 25.1]). To prove the lemma it therefore suffices to show that for each $\gamma \in (0,\frac{1}{2})$, $\eta\in \mathbb{R}$ is a subgradient of $S$ at $\gamma$ if and only if $e^{-\eta} \in (0,1]$ and $\mathfrak{r}(e^{-\eta})=\gamma$. Let us prove that this is the case. For every $\alpha,\gamma\in [0,1]$ we have $e^{S(\mathfrak{r}(\alpha))}\alpha^{\mathfrak{r}(\alpha)} \geq e^{S(\gamma)}\alpha^\gamma$ with equality if and only if $\gamma=\mathfrak{r}(\alpha)$. For each fixed $\alpha \in (0,1)$ it now follows by a simple rearrangement that $-\log \alpha$ is a subgradient of $S$ at $\mathfrak{r}(\alpha)$: taking logarithms gives $S(\gamma) \leq S(\mathfrak{r}(\alpha)) + (-\log \alpha)(\gamma - \mathfrak{r}(\alpha))$ for all $\gamma \in [0,1]$. Conversely, suppose that $\eta \in \mathbb{R}$ is a subgradient of $S$ at some $\gamma_0 \in (0,\frac{1}{2})$. By Proposition \[Sproposition\], $S$ is monotone increasing on the interval $[0,\frac{1}{2}]$ and therefore we must have $\eta \geq 0$.
Since $\eta$ is a subgradient we have $e^{S(\gamma_0)-\eta \gamma_0} \geq e^{S(\gamma)-\eta \gamma}$ for all $\gamma \in [0,1]$, and since $e^{-\eta} \in (0,1]$ it follows that $\gamma_0=\mathfrak{r}(e^{-\eta})$ as required. The following corollary is not needed in this paper but since it is straightforward, we believe it is worth mentioning. The function $S$ is strictly concave on $[0,1]$ and strictly increasing on $[0,1/2]$. If $S$ were not strictly concave, there would be an interval $(\gamma_1,\gamma_2)$ such that $S$ would be linear on this interval. Hence $S'$ would be constant on $(\gamma_1,\gamma_2)$, which would mean, in view of the previous lemma, that $\mathfrak r^{-1}(\gamma)$ would be constant for all $\gamma\in (\gamma_1,\gamma_2)$. This contradicts $\mathfrak r$ being well defined (Theorem \[technical\]), whence $S$ is strictly concave. Since $S$ is non-decreasing, continuous and strictly concave on $[0,1/2]$, it is strictly increasing. Throughout this section we let $\phi:=\frac{1+\sqrt{5}}{2}$ denote the golden ratio. Recall that a real number $\gamma$ is said to be *Liouville* if for every $k>0$ there exist integers $p,q$ such that $0<|\gamma-p/q|< 1/q^k$. A classical theorem of Liouville asserts that no algebraic number can be Liouville (see, e.g., [@HW Theorem 191]). In particular $\phi^{-2}$ is not Liouville. \[FIM\] Let $\gamma \in [0,\frac{1}{2}]$ and suppose that $\gamma$ is an irrational number which is not Liouville. Then there exists a unique $\alpha \in [0,1]$ such that $\mathfrak{r}(\alpha)=\gamma$. By Theorem \[technical\] the function $\mathfrak{r}$ is surjective and monotone, so the set $\mathfrak{r}^{-1}(\gamma)$ is either a point or an interval. To show that this set cannot be an interval, we shall suppose that there exist $\alpha_0 \in (0,1)$ and $\varepsilon>0$ such that $\mathfrak{r}(\alpha)=\gamma$ for all $\alpha \in [e^{-\varepsilon}\alpha_0, e^{\varepsilon}\alpha_0]$, and derive a contradiction.
Since $\gamma$ is irrational but not Liouville, we may choose an integer $k>0$ such that for all integers $p,q$ with $q$ nonzero we have $\left|\gamma - p/q\right| > 1/q^k$. A theorem due to the second named author [@QBWF Thm 1.2] implies that for every $r>0$, $$\max\left\{\rho\left(\mathcal{A}^{(\alpha_0)}(x,m)\right)^{\frac{1}{m}} \colon x \in \Sigma \text{ and }1 \leq m \leq n\right\} = \varrho(\alpha_0)+O\left(\frac{1}{n^r}\right)$$ in the limit as $n \to \infty$. In particular it follows that if $n$ is some sufficiently large integer, then there exist an integer $m$ and an infinite word $x \in \Sigma$ such that $1 \leq m \leq n$ and $$\label{rpoint2}\rho\left(\mathcal{A}^{(\alpha_0)}(x,m)\right)^{1/m} >\left(1-\frac{1}{n^{k+1}}\right)\varrho(\alpha_0)>e^{-\varepsilon n^{-k}}\varrho(\alpha_0).$$ Let $\varsigma(\pi_m(x))=p/q$ in least terms; we shall suppose firstly that $\frac{p}{q}-\gamma>0$, the opposite case being similar. By hypothesis we have $\varrho(\lambda\alpha_0)= e^{S(\mathfrak{r}(\lambda\alpha_0))}(\lambda\alpha_0)^{\mathfrak{r}(\lambda\alpha_0)} = e^{S(\gamma)}(\lambda\alpha_0)^\gamma=\lambda^{\gamma}\varrho(\alpha_0)$ for every $\lambda \in [e^{-\varepsilon},e^{\varepsilon}]$, and also $\frac{p}{q}-\gamma = |\frac{p}{q}-\gamma| > q^{-k} \geq n^{-k}$. Combining this with \[rpoint2\] and Lemma \[bowf\] we obtain $$\begin{aligned} \varrho\left(e^{\varepsilon}\alpha_0\right) \geq \rho\left(\mathcal{A}^{\left(e^{\varepsilon}\alpha_0\right)}(x,m)\right)^{1/m} &=e^{\varepsilon p/q} \rho\left(\mathcal{A}^{(\alpha_0)}(x,m)\right)^{1/m}\\&> e^{\varepsilon p/q-\varepsilon n^{-k}}\varrho(\alpha_0) = e^{\varepsilon\left(p/q - \gamma- n^{-k}\right)}\varrho\left(e^{\varepsilon}\alpha_0\right)\\&> \varrho\left(e^{\varepsilon}\alpha_0\right),\end{aligned}$$ a contradiction.
In the case $\frac{p}{q}-\gamma<0$ we may similarly arrive at the expression $$\varrho\left(e^{-\varepsilon}\alpha_0\right) >e^{-\varepsilon p/q - \varepsilon n^{-k}}\varrho(\alpha_0)= e^{\varepsilon\left(\gamma-p/q-n^{-k}\right)} \varrho\left(e^{-\varepsilon}\alpha_0\right) >\varrho\left(e^{-\varepsilon}\alpha_0\right)$$ which is also a contradiction. The proof is complete. Let $(F_n)_{n=0}^\infty$ denote the Fibonacci sequence, which is defined by $F_0:=0$, $F_1:=1$ together with the recurrence relation $F_{n+2}:=F_{n+1}+F_n$, and recall that $F_n = (\phi^n - (-1/\phi)^n)/\sqrt{5}$ for every $n \geq 0$. Define a sequence of integers $(\tau_n)_{n=0}^\infty$ by $\tau_0:=1$, $\tau_1=\tau_2:=2$, and $\tau_{n+1}:=\tau_{n}\tau_{n-1}-\tau_{n-2}$ for every $n \geq 2$. Finally, define a sequence of matrices $(B_n)_{n=1}^\infty$ by $B_1:=A_1$, $B_2:=A_0$ and $B_{n+1}:=B_{n}B_{n-1}$ for every $n \geq 2$. The key properties of $F_n$, $B_n$ and $\tau_n$ are summarised in the following three lemmas. \[bn-matrices\] For each $n \geq 2$ the identities $S(F_{n-2}/F_n)=F_n^{-1}\log \rho(B_n)$ and $F_{n}F_{n-1}-F_{n+1}F_{n-2}=(-1)^n$ hold, and the value $\phi^{-2}$ lies strictly between $F_{n-2}/F_{n}$ and $F_{n-1}/F_{n+1}$. Define a sequence of finite words by $u_{(1)}:=1$, $u_{(2)}:=0$, and $u_{(n+1)}:=u_{(n)}u_{(n-1)}$ for every $n \geq 2$. Clearly we have $\mathcal{A}(u_{(n)})=B_n$ for all $n \geq 1$. A simple induction argument shows that each $u_{(n)}$ is a standard word in the sense defined in Lemma \[standard\], and that $|u_{(n)}|=F_n$, $|u_{(n)}|_1=F_{n-2}$ for every $n \geq 2$. By Lemma \[standard\] and Lemma \[Sestimates\](i) we therefore have $[u_{(n)}]^\infty \in X_{F_{n-2}/F_n}$ and consequently $S(F_{n-2}/F_{n}) = F_n^{-1}\log \rho(\mathcal{A}(u_{(n)}))=F_n^{-1}\log \rho(B_n)$ for every $n \geq 2$ as required. The remaining parts of the lemma follow from the fact that the fractions $F_{n-2}/F_n$ are precisely the continued fraction convergents of $\phi^{-2}$. 
Alternatively these results can be derived from the explicit formula for $(F_n)$. \[taun-sequence-1\] For each $n \geq 1$ we have ${\mathrm{tr}\,}B_n = \tau_n$. By direct evaluation the reader may obtain ${\mathrm{tr}\,}B_1 = {\mathrm{tr}\,}B_2 =2=\tau_1 = \tau_2$ and ${\mathrm{tr}\,}B_3 =3= \tau_3$, so it suffices to show that the sequence $({\mathrm{tr}\,}B_n)$ satisfies the same recurrence relation as $(\tau_n)$ for all $n \geq 3$. Let us write $$B_n = \left(\begin{array}{cc}a_n&b_n\\c_n&d_n\end{array}\right)$$ for each $n\geq 1$. Notice that we have $a_nd_n-b_nc_n = \det B_n = 1$ for every $n$, and for each $n \geq 2$ the definition $B_{n+1}:=B_nB_{n-1}$ implies the identity $$\left(\begin{array}{cc}a_{n+1}&b_{n+1}\\ c_{n+1}&d_{n+1}\end{array}\right) = \left(\begin{array}{cc}a_na_{n-1}+b_nc_{n-1}& a_nb_{n-1}+b_nd_{n-1}\\c_na_{n-1}+d_nc_{n-1}& c_nb_{n-1}+d_nd_{n-1}\end{array}\right).$$ Fix any $n \geq 3$. By definition we have $${\mathrm{tr}\,}B_{n+1} = a_{n+1}+d_{n+1} = a_n a_{n-1} + b_nc_{n-1} + c_n b_{n-1} + d_n d_{n-1}$$ and $$({\mathrm{tr}\,}B_n)({\mathrm{tr}\,}B_{n-1})= a_n a_{n-1} + a_n d_{n-1} + d_na_{n-1}+d_n d_{n-1},$$ so we may compute $$\begin{aligned} ({\mathrm{tr}\,}B_n)({\mathrm{tr}\,}B_{n-1})-{\mathrm{tr}\,}B_{n+1} &= a_n d_{n-1} +d_n a_{n-1}- b_n c_{n-1} - c_n b_{n-1}\\ &= d_{n-1}(a_{n-1}a_{n-2} + b_{n-1}c_{n-2}) + a_{n-1}(c_{n-1}b_{n-2}+d_{n-1}d_{n-2})\\ &\quad- c_{n-1}(a_{n-1}b_{n-2} + b_{n-1}d_{n-2}) - b_{n-1}(c_{n-1}a_{n-2}+d_{n-1}c_{n-2})\\ &= a_{n-2}(a_{n-1}d_{n-1}-b_{n-1}c_{n-1}) + d_{n-2}( a_{n-1}d_{n-1}-b_{n-1}c_{n-1} )\\ & = a_{n-2}+d_{n-2} = {\mathrm{tr}\,}B_{n-2},\end{aligned}$$ which establishes the required recurrence relation. \[taun-sequence-2\] There exist constants $\delta_1,\delta_2>0$ such that $$\left|\log \tau_n - \log \rho(B_n)\right| = O\left(e^{-\delta_1F_n}\right)$$ and $$\left|\log \left(1-\frac{\tau_{n-1}}{\tau_{n+1}\tau_{n}}\right)\right| = O\left(e^{-\delta_2F_n}\right)$$ in the limit as $n \to \infty$.
It is clear that $F_{n-2}/F_n \to \phi^{-2}$ using the formula for $F_n$, and since $S$ is continuous it follows via Lemma \[bn-matrices\] that $F_n^{-1}\log \rho(B_n) \to S(\phi^{-2})>0$. Since $\det B_n=1$ and $B_n$ is non-negative, the eigenvalues of $B_n$ are $\rho(B_n)$ and $\rho(B_n)^{-1}$, so for each $n \geq 1$ we have $\tau_n = {\mathrm{tr}\,}B_n = \rho(B_n) + \rho(B_n)^{-1}$, where we have used Lemma \[taun-sequence-1\]. Hence, $$0 \leq \log \tau_n - \log \rho(B_n) = \log \left(\frac{\rho(B_n)+\rho(B_n)^{-1}}{\rho(B_n)}\right)\leq \frac{1}{\rho(B_n)^2} = O\left(e^{-F_n S(\phi^{-2})}\right),$$ where we have used the elementary inequality $\log (1+x) \leq x$, which holds for all $x > -1$, and this proves the first part of the lemma. It follows from this result that $\lim_{n \to \infty} F_n^{-1} \log \tau_n = S(\phi^{-2})$. We may therefore apply this to obtain $$\begin{aligned} \lim_{n \to \infty}\frac{1}{F_n}\log \left(\frac{\tau_{n-1}}{\tau_{n+1}\tau_{n}}\right) &= \lim_{n \to \infty}\left(\frac{1}{F_n} \log \tau_{n-1} - \frac{1}{F_n} \log \tau_{n+1}- \frac{1}{F_n} \log \tau_{n}\right)\\&= S(\phi^{-2})(\phi^{-1} - \phi - 1) = -2 S(\phi^{-2})<0,\end{aligned}$$ from which the second part of the lemma follows easily. *Proof of Theorem \[counter\]*. We will show that $S'(\phi^{-2})=-\log \alpha_*$, where $\alpha_*$ satisfies the product and limit formulas given in the statement of the Theorem. By Lemma \[deriv\] this implies that $\mathfrak{r}(\alpha_*)=\phi^{-2} \notin \mathbb{Q}$, and by Theorem \[technical\] this implies that $\mathsf{A}_{\alpha_*}$ does not satisfy the finiteness property. By Lemma \[deriv\] together with Lemma \[FIM\], the derivative $S'(\phi^{-2})$ exists and is finite.
Using Lemma  \[bn-matrices\] and Lemma \[taun-sequence-2\], we may now compute $$\begin{aligned} S'(\phi^{-2})&=\lim_{n \to \infty}\frac{S\left(\frac{F_{n-1}} {F_{n+1}}\right)-S\left(\frac{F_{n-2}} {F_{n}}\right)}{\frac{F_{n-1}} {F_{n+1}}-\frac{F_{n-2}}{F_{n}}}\\ &=\lim_{n \to \infty}\frac{\frac{1}{F_{n+1}} \log\rho\left(B_{n+1}\right)-\frac{1}{F_{n}}\log \rho\left(B_{n}\right)}{\frac{F_{n-1}} {F_{n+1}}-\frac{F_{n-2}}{F_{n}}}\\ &=\lim_{n \to \infty}\frac{{F_{n}} \log\rho\left(B_{n+1}\right)-{F_{n+1}}\log \rho\left(B_{n}\right)}{F_nF_{n-1}- F_{n+1}F_{n-2}}\\ &=\lim_{n \to \infty}(-1)^n(F_{n}\log\rho\left(B_{n+1}\right)-F_{n+1}\log \rho(B_n))\\ &=\lim_{n \to \infty}(-1)^n(F_{n}\log\tau_{n+1}-F_{n+1}\log \tau_n).\end{aligned}$$ Let us define $$\alpha_* := e^{-S'(\phi^{-2})} = \lim_{n \to \infty} \left(\frac{\tau_n^{F_{n+1}}}{\tau_{n+1}^{F_n}}\right)^{(-1)^n}$$ which yields the first of the two expressions for $\alpha_*$. We shall derive the second expression. Let us write $\alpha_n:= (\tau_n^{F_{n+1}} / \tau_{n+1}^{F_n})^{(-1)^n}$ for each $n \geq 1$ so that $\alpha_* = \lim_{n \to \infty} \alpha_n$. Applying the recurrence relations for $(F_n)$ and $(\tau_n)$ once more, we obtain for each $n \geq 1$ $$\begin{aligned} \frac{\alpha_{n+1}}{\alpha_n}&= \frac{\left(\tau_{n+1}^{F_{n+2}}/ \tau_{n+2}^{F_{n+1}}\right)^{(-1)^{n+1}}} {\left(\tau_n^{F_{n+1}}/\tau_{n+1}^{F_n}\right)^{(-1)^n}} = \left(\frac{\tau_{n+2}^{F_{n+1}} \tau_{n+1}^{F_n}}{\tau_{n+1}^{F_{n+2}}\tau_n^{F_{n+1}}}\right)^{(-1)^n}\\ &= \left(\frac{\tau_{n+2}}{\tau_{n+1}\tau_{n}}\right)^{(-1)^n F_{n+1}} = \left(\frac{\tau_{n+1}\tau_n - \tau_{n-1}}{\tau_{n+1}\tau_{n}}\right)^{(-1)^n F_{n+1}} = \left(1-\frac{\tau_{n-1}}{\tau_{n+1}\tau_{n}}\right)^{(-1)^nF_{n+1}}.\end{aligned}$$ Since $\tau_1=\tau_2=2$ and $F_1 = F_2 = 1$ we have $\alpha_1=1$. 
Using the formula above we may now obtain for each $N\geq 2$ $$\alpha_{N} = \alpha_1 \prod_{n=1}^{N-1}\frac{\alpha_{n+1}}{\alpha_n} = \prod_{n=1}^{N-1}\left(1-\frac{\tau_{n-1}} {\tau_{n+1}\tau_n}\right)^{(-1)^nF_{n+1}}.$$ It follows from Lemma \[taun-sequence-2\] that these partial products converge unconditionally in the limit $N \to \infty$, and taking this limit we obtain the desired infinite product expression for $\alpha_*$. The proof of Theorem \[counter\] may be extended to give an explicit estimate for the difference $|\alpha_*-\alpha_N|$ as follows. Note that for each $n \geq 3$ we have $1/3 \leq F_{n-2}/F_n \leq 1/2$ and therefore, by Proposition \[Sproposition\], $$\begin{aligned} F_n^{-1}\log \tau_n &\geq F_n^{-1}\log \rho(B_n) = S\left(\frac{F_{n-2}}{F_n}\right) \\ &\geq S\left(\frac{1}{3}\right)=\frac{\log \rho(A_0^2A_1)}{3}=\frac{\log (2+\sqrt{3})}{3}.\end{aligned}$$ On the other hand, if we define a sequence $(\tilde\tau_n)_{n=1}^\infty$ by $\tilde\tau_1=\tilde \tau_2 =\tau_1=\tau_2=2$ and $\tilde\tau_{n+1}:=\tilde\tau_n \tilde\tau_{n-1}$ for $n \geq 2$, then it is clear that $\tau_n \leq \tilde\tau_n = 2^{F_n}$ for every $n \geq 1$.
Combining these estimates yields $$\begin{aligned} \left|\log\alpha_N-\log\alpha_*\right| &\leq \sum_{n=N}^\infty F_{n+1} \left|\log \left(1-\frac{\tau_{n-1}}{\tau_{n+1}\tau_n}\right)\right| \leq 2\sum_{n=N}^\infty \frac{F_{n+1}\tau_{n-1}}{\tau_{n+1}\tau_n}\\ &\leq 2\sum_{n=N}^\infty F_{n+1}\frac{2^{F_{n-1}}}{(2+\sqrt{3})^{F_{n+2}/3}}< C_1 \sum_{n=N}^\infty (\phi^{n+1}+1)\theta^{\phi^{n}}\\&\leq 120\sum_{n=N}^\infty \left(\frac{3}{4}\right)^{\phi^n}< 780\left(\frac{3}{4}\right)^{\phi^N}\end{aligned}$$ for all $N \geq 3$, where $$C_1:=\frac{4(2+\sqrt{3})^{1/3}}{\sqrt{5}}= 2.77475\ldots,\qquad\theta:=\left(\frac{8} {(2+\sqrt{3})^{\phi^3}}\right)^{\frac{1}{3\phi \sqrt{5}}}=0.72441\ldots$$ It follows in particular that the value $\alpha_{13}:=\tau_{14}^{F_{13}}/\tau_{13}^{F_{14}}$ satisfies $|\alpha_*-\alpha_{13}|<10^{-62}$, which yields the approximation given in the introduction. Further questions {#sec9} ================= [**1.**]{} [*Is it true that $\alpha_*$ is irrational or transcendental?*]{} The fast rate of convergence of the sequence $\left(\frac{\tau_n^{F_{n+1}}}{\tau_{n+1}^{F_n}}\right)^{(-1)^n}$ suggests that $\alpha_*$ is probably irrational; however, perhaps unexpectedly, this rate itself is not fast enough to claim this. Roughly, to apply known results (see, e.g., [@Nabut]), we need $\tau_n$ to grow like $A^{B^n}$ with $A>1$ and $B>2$. Then Theorem 1 from the aforementioned paper would apply. In our setting however we “only” have $B=\phi<2$. A good illustration of the tightness of the quoted result is the famous Cantor infinite product $$\prod_{n=0}^\infty \left(1+\frac1{2^{2^n}}\right)$$ which equals $2$ despite its “superfast” convergence rate. However, a similar product $$\prod_{n=0}^\infty \left(1+\frac1{2^{3^n}}\right)$$ is indeed irrational. We conjecture that $\alpha_{**}=\mathfrak{r}^{-1}(1-1/\sqrt2)$ (which corresponds to the substitution $0\to001,\ 1\to0$ similarly to $\alpha_*$ corresponding to the Fibonacci substitution $0\to01,\ 1\to0$) is irrational.
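The sequences $(F_n)$, $(\tau_n)$, $(B_n)$ and the convergents $\alpha_N$ are easy to check numerically. The following sketch is ours, not from the paper (it assumes the Fibonacci substitution matrices $A_1=\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$, $A_0=\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$ as the seeds $B_1$, $B_2$); it verifies $\mathrm{tr}\,B_n=\tau_n$ and $F_nF_{n-1}-F_{n+1}F_{n-2}=(-1)^n$ exactly, and exhibits the rapid convergence of $\alpha_N$ using exact integer arithmetic plus high-precision decimals.

```python
from decimal import Decimal, getcontext

getcontext().prec = 120  # enough digits to see convergence far below 1e-60

# Fibonacci numbers F_0, F_1, ...
F = [0, 1]
while len(F) < 22:
    F.append(F[-1] + F[-2])

# tau_0 = 1, tau_1 = tau_2 = 2, tau_{n+1} = tau_n * tau_{n-1} - tau_{n-2}
tau = [1, 2, 2]
while len(tau) < 20:
    n = len(tau) - 1
    tau.append(tau[n] * tau[n - 1] - tau[n - 2])

def mul(M, N):
    return ((M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]),
            (M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]))

# B_1 = A_1, B_2 = A_0, B_{n+1} = B_n B_{n-1} (Fibonacci matrices assumed)
B = [None, ((1, 0), (1, 1)), ((1, 1), (0, 1))]
while len(B) < 18:
    B.append(mul(B[-1], B[-2]))

# Exact integer identities from Lemma [bn-matrices] and Lemma [taun-sequence-1]
assert all(B[n][0][0] + B[n][1][1] == tau[n] for n in range(1, 18))
assert all(F[n]*F[n-1] - F[n+1]*F[n-2] == (-1)**n for n in range(2, 20))

def alpha(n):
    # alpha_n = (tau_n^{F_{n+1}} / tau_{n+1}^{F_n})^{(-1)^n}
    r = Decimal(tau[n]) ** F[n + 1] / Decimal(tau[n + 1]) ** F[n]
    return r if n % 2 == 0 else 1 / r

print(alpha(13))  # agrees with alpha_* to well over 60 decimal places
```

The printed value matches $\alpha_{13}$ above, and successive convergents agree far beyond the $10^{-62}$ error bound, consistent with the estimate $|\alpha_*-\alpha_N|<780(3/4)^{\phi^N}$.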
[**2.**]{} [*Is $\mathfrak{r}^{-1}(\gamma)$ always a point when $\gamma$ is irrational?*]{} We know this to be true if $\gamma$ is not Liouville (i.e., for all irrational $\gamma$ except a set of zero Hausdorff dimension) but the method used in Lemma \[FIM\] is somewhat limited. We hope to close this gap in a follow-up paper. [**3.**]{} [*If the answer to the previous question is yes, then is it true that $\mathfrak{r}^{-1}(\gamma)\notin\mathbb Q$ whenever $\gamma\notin\mathbb Q$?*]{} This question is pertinent to a conjecture of Blondel and Jungers, which says that the finiteness property holds for all finite sets of matrices with rational entries [@BJ]. Our model should not, therefore, yield a counterexample to this conjecture. [**4.**]{} [*Is $\mathfrak{r}^{-1}(\gamma)$ always an interval with nonempty interior when $\gamma$ is rational?*]{} It was shown by the fourth named author in his thesis [@Theys] that $\mathfrak{r}^{-1}\left(\frac12\right)=\left[\frac45,1\right]$, and all other known examples indicate that the answer is positive. However proving this for a general $\gamma\in\mathbb Q$ seems like a difficult question. [**5.**]{} [*Does the set of all $\alpha$ such that $\mathfrak{r}(\alpha)\notin \mathbb{Q}$ have zero measure? Does it have zero Hausdorff dimension?*]{} Analogues of these properties are claimed for Bousch-Mairesse’s example but proofs are not given [@BM]. We conjecture that the graph of $\mathfrak{r}$ is a devil’s staircase with the plateau regions corresponding to $\{\alpha: \mathfrak r(\alpha)\in\mathbb Q\}$ – see Figure \[fig:frakr(gamma)\]. $$\includegraphics[width=250pt,height=250pt,angle=270]{frakr.ps}$$ Between the time of submission and the present, some progress has been made on some of the questions above. Interested readers are welcome to contact the authors to find out the current progress on these problems. Acknowledgement {#acknowledgement .unnumbered} =============== The authors are indebted to V. S.
Kozyakin for his helpful remarks and suggestions. [10]{} , [*On the [L]{}yapunov exponent of discrete inclusions. [I]{}*]{}, Avtomat. i Telemekh., (1988), pp. 40–46. , [*Bounded semigroups of matrices*]{}, Linear Algebra Appl., 166 (1992), pp. 21–27. , [ *Combinatorics on words*]{}, vol. 27 of CRM Monograph Series, American Mathematical Society, Providence, RI, 2009. Christoffel words and repetitions in words. , [*On the number of $\alpha$-power-free binary words of $2<\alpha\leq 7/3$*]{}, Theoret. Comp. Sci., 410 (2009), pp. 2823–2833. , [*Computationally efficient approximations of the joint spectral radius*]{}, SIAM J. Matrix Anal. Appl., 27 (2005), pp. 256–272 (electronic). , [*An elementary counterexample to the finiteness conjecture*]{}, SIAM J. Matrix Anal. Appl., 24 (2003), pp. 963–970 (electronic). , [*Le poisson n’a pas d’arêtes*]{}, Ann. Inst. H. Poincaré Probab. Statist., 36 (2000), pp. 489–508. , [*Asymptotic height optimization for topical [IFS]{}, [T]{}etris heaps, and the finiteness conjecture*]{}, J. Amer. Math. Soc., 15 (2002), pp. 77–111 (electronic). , [*Ordered orbits of the shift, square roots, and the devil’s staircase*]{}, Math. Proc. Cambridge Philos. Soc., 115 (1994), pp. 451–481. , [ *Finiteness property of pairs of [$2\times 2$]{} sign-matrices via real extremal polytope norms*]{}, Linear Algebra Appl., 432 (2010), pp. 796–816. , [*Almost sure stability of discrete-time switched linear systems: a topological point of view*]{}, SIAM J. Control Optim., 47 (2008), pp. 2137–2156. , [*Sets of matrices all infinite products of which converge*]{}, Linear Algebra Appl., 161 (1992), pp. 227–263. , [*Two-scale difference equations. [II]{}. [L]{}ocal regularity, infinite products of matrices and fractals*]{}, SIAM J. Math. Anal., 23 (1992), pp. 1031–1079. , [*Number of representations related to a linear recurrent basis*]{}, Acta Arith., 88 (1999), pp. 371–396.
, [*Über die [V]{}erteilung der [W]{}urzeln bei gewissen algebraischen [G]{}leichungen mit ganzzahligen [K]{}oeffizienten*]{}, Math. Z., 17 (1923), pp. 228–249. , [*Continued fractions and matrices*]{}, Amer. Math. Monthly, 56 (1949), pp. 98–103. , [*On the multiplicative ergodic theorem for uniquely ergodic systems*]{}, Ann. Inst. H. Poincaré Probab. Statist., 33 (1997), pp. 797–815. , [*Complex polytope extremality results for families of matrices*]{}, SIAM J. Matrix Anal. Appl., 27 (2005), pp. 721–743 (electronic). , [*On the zero-stability of variable stepsize multistep methods: the spectral radius approach*]{}, Numer. Math., 88 (2001), pp. 445–458. , [*Stability of discrete linear inclusion*]{}, Linear Algebra Appl., 231 (1995), pp. 47–85. , [*An introduction to the theory of numbers*]{}, Oxford University Press, Oxford, sixth ed., 2008. Revised by D. R. Heath-Brown and J. H. Silverman. , [*Continuity of the joint spectral radius: application to wavelets*]{}, in Linear algebra for signal processing ([M]{}inneapolis, [MN]{}, 1992), vol. 69 of IMA Vol. Math. Appl., Springer, New York, 1995, pp. 51–61. , [*Optimal periodic orbits of chaotic systems occur at low period*]{}, Phys. Rev. E, 54 (1996), pp. 328–337. , [*Frequency locking on the boundary of the barycentre set*]{}, Experiment. Math., 9 (2000), pp. 309–317. , [*The joint spectral radius*]{}, vol. 385 of Lecture Notes in Control and Information Sciences, Springer-Verlag, Berlin, 2009. Theory and applications. , [*On the finiteness property for rational matrices*]{}, Linear Algebra Appl., 428 (2008), pp. 2283–2295. , [*Introduction to the modern theory of dynamical systems*]{}, vol. 54 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, 1995. With a supplementary chapter by Katok and Leonardo Mendoza. , [*Algebraic unsolvability of a problem on the absolute stability of desynchronized systems*]{}, Avtomat. i Telemekh., (1990), pp. 41–47. 
, [*A dynamical systems construction of a counterexample to the finiteness conjecture*]{}, in [P]{}roceedings of the 44th [IEEE]{} [C]{}onference on [D]{}ecision and [C]{}ontrol, and the [E]{}uropean [C]{}ontrol [C]{}onference 2005, Seville, Spain, December 2005, pp. 2338–2343. , [*Structure of extremal trajectories of discrete linear systems and the finiteness conjecture*]{}, Automat. Remote Control, 68 (2007), pp. 174–209. , [*On the computational aspects of the theory of joint spectral radius*]{}, Dokl. Akad. Nauk, 427 (2009), pp. 160–164. , [*An explicit [L]{}ipschitz constant for the joint spectral radius*]{}, [L]{}inear [A]{}lgebra [A]{}ppl., 433 (2010), pp. 12–18. , [*On explicit a priori estimates of the joint spectral radius by the generalized [G]{}elfand formula*]{}, Differential Equations Dynam. Systems, 18 (2010), pp. 91–103. , [*The finiteness conjecture for the generalized spectral radius of a set of matrices*]{}, Linear Algebra Appl., 214 (1995), pp. 17–42. , [*Algebraic combinatorics on words*]{}, Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, 2002. , [*Calculating joint spectral radius of matrices and [H]{}ölder exponent of wavelets*]{}, in Approximation theory [IX]{}, [V]{}ol. 2 ([N]{}ashville, [TN]{}, 1998), Innov. Appl. Math., Vanderbilt Univ. Press, Nashville, TN, 1998, pp. 205–212. , [*On codes that avoid specified differences*]{}, IEEE Trans. Inform. Theory, 47 (2001), pp. 433–442. , [*A rapidly-converging lower bound for the joint spectral radius via multiplicative ergodic theory*]{}. to appear in Adv. Math. , [*Irrationality of limits of quickly convergent algebraic numbers sequences*]{}, Proc. Amer. Math. Soc., 102 (1988), pp. 473–479.
, [*Approximation of the joint spectral radius of a set of matrices using sum of squares*]{}, in Hybrid systems: computation and control, vol. 4416 of Lecture Notes in Comput. Sci., Springer, Berlin, 2007, pp. 444–458. , [*Probability measures on metric spaces*]{}, AMS Chelsea Publishing, Providence, RI, 2005. Reprint of the 1967 original. , [*The joint spectral radius and invariant sets of linear operators*]{}, Fundam. Prikl. Mat., 2 (1996), pp. 205–231. , [*Convex analysis*]{}, Princeton Landmarks in Mathematics, Princeton University Press, Princeton, NJ, 1997. Reprint of the 1970 original, Princeton Paperbacks. , [*Gian-[C]{}arlo [R]{}ota on analysis and probability*]{}, Contemporary Mathematicians, Birkhäuser Boston Inc., Boston, MA, 2003. Selected papers and commentaries, Edited by Jean Dhombres, Joseph P. S. Kung and Norton Starr. , [*A note on the joint spectral radius*]{}, Nederl. Akad. Wetensch. Proc. Ser. A 63 = Indag. Math., 22 (1960), pp. 379–381. , [*On growth rates of subadditive functions for semiflows*]{}, J. Differential Equations, 148 (1998), pp. 334–350. , [*[J]{}oint [S]{}pectral [R]{}adius: theory and approximations*]{}. thesis, [U]{}niversité [C]{}atholique de [L]{}ouvain, 2005. , [*The [L]{}yapunov exponent and joint spectral radius of pairs of matrices are hard—when not impossible—to compute and to approximate*]{}, Math. Control Signals Systems, 10 (1997), pp. 31–40. , [*The generalized spectral radius and extremal norms*]{}, Linear Algebra Appl., 342 (2002), pp. 17–40. [^1]: Research of K. G. Hare supported, in part, by NSERC of Canada. Research of I. D. Morris supported by the EPSRC grant EP/E020801/1. [^2]: This is the sequence A022405 from Sloane’s On-Line Encyclopedia of Integer Sequences.
--- abstract: 'Given a set $P$ of $n$ points on $\mathbb R^{2}$, we address the problem of computing an axis-parallel empty rectangular annulus $A$ of maximum-width such that no point of $P$ lies inside $A$, while every point of $P$ lies inside, outside or on the boundaries of the two parallel rectangles forming the annulus $A$. We propose an $O(n^3)$ time and $O(n)$ space algorithm to solve the problem. In the particular case when the inner rectangle of an axis-parallel empty rectangular annulus reduces to an input point we can solve the problem in $O(n \log n)$ time and $O(n)$ space.' author: - 'Arpita Baral, Abhilash Gondane, Sanjib Sadhu, Priya Ranjan Sinha Mahapatra,' bibliography: - 'ptdom-April-2015.bib' title: 'Maximum-width Axis-Parallel Empty Rectangular Annulus' --- Introduction ============ A set of $n$ points $P$ on $\mathbb R^{2}$ is said to be [enclosed]{} by a geometric (or enclosing) object $C$ if every point of $P$ lies inside $C$ or on the boundary of $C$. The problem of enclosing the input point set $P$ using a [minimum sized]{} geometric object $C$ such as a circle [@ps-cgi-90], rectangle [@tou-sgprc-83], triangle [@oamb-oafmet-86], circular annulus [@w-nmpp-86; @rz-epccmrsare-92; @efnn-rauvd-89; @ast-apsgo-94; @as-eago-98], rectilinear annulus [@ght-oafepranw-09], rectangular annulus [@jmkd-mwra-2012] etc. has been extensively studied in computational geometry over the last few decades. Here we start the discussion with enclosing problems that use various annuli as enclosing objects. Among the various types of annuli, the enclosing problem using a circular annulus has been studied extensively [@w-nmpp-86; @rz-epccmrsare-92; @efnn-rauvd-89; @ast-apsgo-94; @as-eago-98]. The objective of this problem is to find a circular annulus of minimum-width that encloses $P$. Here the circular annulus region is formed by two concentric circles. Gluchshenko et al.
[@ght-oafepranw-09] considered the problem of finding a rectilinear annulus of minimum-width which encloses $P$. For this problem, the annulus region is formed by two concentric axis-parallel squares. Recently, Bae [@bae-cmwea-2017] studied this square annulus problem in arbitrary orientation, where the annulus is the open region between two concentric squares. Mukherjee et al. [@jmkd-mwra-2012] considered the problem of identifying a rectangular annulus of minimum-width which encloses $P$. In this problem, it is interesting to note that the two mutually parallel rectangles forming the annulus region are not required to be [concentric]{}. Moreover the orientation of such rectangles is not restricted to be axis-parallel. Further details on various annulus problems can be found in [@bae-cmwea-2017; @bg-opapp-14; @bbdg-opap-98; @bbbrw-ccmwaps-98; @AHIMPR-03; @DuncanGR-97] and the references therein.\ As far as we are aware, there has been little work on finding an empty annulus of maximum-width. Díaz-Báñez et al. [@dhmrs-leap-03] first studied the problem of finding an empty circular annulus of maximum-width and proposed an $O(n^3\log n)$ time and $O(n)$ space algorithm to solve it. Mahapatra [@mahapatra-larea-2012] considered the problem of identifying an axis-parallel empty rectangular annulus of maximum-width for the point set $P$ and proposed an [incorrect]{} $O(n^2)$ time algorithm to solve it. Given a point set $P$, note that, for an axis-parallel minimum-width rectangular annulus which encloses $P$, the outer or larger rectangle is always the minimum enclosing rectangle of $P$ [@jmkd-mwra-2012]. This observation leads to an $O(n)$ time algorithm for finding an axis-parallel rectangular annulus of minimum-width which encloses $P$ [@jmkd-mwra-2012]. However, for the empty axis-parallel rectangular annulus problem of maximum-width, the number of potential outer rectangles forming an empty rectangular annulus is $O(n^4)$.
This implies that an $O(n^5)$ time algorithm can be developed to solve this empty annulus problem using the result in [@jmkd-mwra-2012]. Here we propose an $O(n^3)$ time and $O(n)$ space algorithm for finding an axis-parallel empty rectangular annulus of maximum-width for a given point set $P$. Note that the problem of computing an axis-parallel empty rectangular annulus of maximum-width is equivalent to the variant in which the empty annulus region is generated by two concentric rectangles. The paper is organized as follows: In Section \[problemdefandter\] we discuss the problem of identifying an axis-parallel empty rectangular annulus of maximum-width after introducing some notations. In Section \[proposedalgorithm\] we describe our new algorithm and prove its correctness. Section \[conclusion\] concludes the paper. Problem definition and terminologies {#problemdefandter} ==================================== We begin by introducing some notations. Let $P = \{p_1, \ldots, p_n\}$ be a set of $n$ points on $\mathbb R^{2}$. Let the $x$- and $y$-coordinates of a point $p_i$ be denoted as $x(i)$ and $y(i)$ respectively. Two axis-parallel rectangles $R$ and $R'$ are said to be parallel to each other when one of the sides of rectangle $R$ is parallel to a side of $R'$. Let $R_{in}$ and $R_{out}$ be two axis-parallel rectangles such that $R_{in} \subset R_{out}$. The rectangular annulus $A$ formed by two such axis-parallel rectangles $R_{in}$ and $R_{out}$ is the [open region]{} between $R_{in}$ and $R_{out}$, where $R_{in}$ has non-zero area. See Fig. \[fig1:Cases\] for a demonstration. We use the term inner (resp. outer) for the smaller (resp. larger) rectangle of the rectangular annulus $A$. In this paper, a rectangle will always mean an axis-parallel rectangle. The [top-width]{} of the rectangular annulus $A$ is the perpendicular distance between the top sides of its inner and outer rectangles. Similarly we define the [bottom-width]{}, [right-width]{} and [left-width]{} of $A$.
The minimum width among the top-, right-, bottom- and left-widths of a rectangular annulus $A$ is defined as the [width]{} of $A$ and is denoted by $W(A)$. A rectangular annulus $A$ formed by rectangles $R_{in}$ and $R_{out}$ ($R_{in} \subset R_{out}$) is said to be [empty]{} if the following two conditions are satisfied.

- No points of $P$ lie inside $A$.

- Every point of $P$ lies either inside the rectangle $R_{in}$ or outside the rectangle $R_{out}$.

The input points may lie on the boundaries of both $R_{in}$ and $R_{out}$. The objective of our problem is to compute an axis-parallel empty rectangular annulus of [maximum-width]{} for the given point set $P$. Note that the solution of this problem is not unique. From now onwards, the term annulus is used to mean an axis-parallel empty rectangular annulus.

Proposed Algorithm {#proposedalgorithm}
==================

An annulus is defined by its eight edges (four edges of the outer rectangle and four edges of the inner rectangle). Each edge of an annulus passes through a point $p \in P$. See Fig. \[fig1:Cases\] as an illustration.\ ![[]{data-label="fig1:Cases"}](Fig1_cases.eps) Initially, we sort the $n$ points of $P$ in ascending order of their $x$-coordinates and in descending order of their $y$-coordinates. Throughout the paper we assume that all points are in general position, i.e., no horizontal or vertical line passes through two points. Two horizontal lines $Top_{out}$ and $Bot_{out}$ sweep vertically from top to bottom over the plane; these two lines denote the current positions of the top and bottom sides of the outer rectangle defining an empty annulus.
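The width and emptiness conditions above can be checked directly. The following minimal sketch is ours, not part of the paper; rectangles are encoded as $(x_1, y_1, x_2, y_2)$ tuples with the bottom-left corner first.

```python
def width(R_in, R_out):
    """Width W(A) of the annulus with inner rectangle R_in and outer
    rectangle R_out: the minimum of the four side widths."""
    ix1, iy1, ix2, iy2 = R_in
    ox1, oy1, ox2, oy2 = R_out
    return min(oy2 - iy2,   # top-width
               iy1 - oy1,   # bottom-width
               ix1 - ox1,   # left-width
               ox2 - ix2)   # right-width

def is_empty(P, R_in, R_out):
    """The annulus is empty iff every point of P lies inside the closed
    R_in or outside the open R_out (boundary points are allowed)."""
    def in_closed(p, R):
        x, y = p
        return R[0] <= x <= R[2] and R[1] <= y <= R[3]
    def in_open(p, R):
        x, y = p
        return R[0] < x < R[2] and R[1] < y < R[3]
    return all(in_closed(p, R_in) or not in_open(p, R_out) for p in P)

P = [(0, 0), (10, 0), (0, 10), (10, 10), (4, 4), (6, 6)]
print(width((4, 4, 6, 6), (0, 0, 10, 10)))        # -> 4
print(is_empty(P, (4, 4, 6, 6), (0, 0, 10, 10)))  # -> True
```

For this sample set, the annulus between $(4,4,6,6)$ and $(0,0,10,10)$ is empty and all four side widths equal $4$, so $W(A)=4$.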
Depending on the positions of $Top_{out}$ and $Bot_{out}$, a horizontal strip is defined as follows.\ [**Definition 1**]{} A horizontal strip $S(a,b)$ is defined as the open region bounded by two parallel lines $Top_{out}$ and $Bot_{out}$, where the lines $Top_{out}$ and $Bot_{out}$ pass through the points $a$ and $b$ respectively, with $y(a) > y(b)$ and $a, b \in P$.\ [**Definition 2**]{} $E(a,b)$ is the set of all empty annuli in $S(a,b)$ such that the top ($Top_{out}$) and bottom ($Bot_{out}$) edges of the outer rectangle of any annulus $A \in E(a,b)$ pass through the points $a$ and $b$ respectively.\ We now state the following simple observation. [@jmkd-mwra-2012]\[obs1\] Given an outer rectangle $R_{out}$ generated from the point set $P$ in $\mathbb R^{2}$, the empty annulus $A$ having $R_{out}$ as its outer rectangle can be computed in $O(m)$ time, where $m$ is the number of points inside $R_{out}$. Our proposed algorithm computes an empty annulus $A^{max}_{(a,b)}$ of maximum-width within each strip $S(a,b)$, for all such possible pairs $(a,b)$, where $a,b \in P$. Finally, an annulus of maximum-width among all those annuli ($A^{max}_{(a,b)}$) is reported.\

Finding an empty annulus of maximum-width in a horizontal strip {#finding_cand}
---------------------------------------------------------------

Consider a strip $S(a,b)$ in which we are looking for an empty annulus of maximum-width from $E(a,b)$. The following approach achieves this goal.\ ![[]{data-label="fig:points_strip"}](Fig_points_in_strip.eps) Let $Q$ be the set of points in the strip $S(a,b)$, including the points $a$ and $b$, ordered by increasing $x$-coordinate. Note that $Q$ can be determined from the ordered point set $P$ in linear time. Also, let the leftmost and rightmost points of $Q$ be $l$ and $r$ respectively. Without loss of generality we assume that $x(a) > x(b)$ in $S(a,b)$.
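Observation \[obs1\] can be sketched as follows: for a candidate outer rectangle, the tightest inner rectangle that keeps the annulus empty is the bounding box of the points strictly inside $R_{out}$, found in one $O(m)$ pass. This sketch (our own naming, not the paper's code) also returns the resulting width.

```python
def annulus_from_outer(P, R_out):
    """Return (R_in, width) of the empty annulus with outer rectangle
    R_out = (x1, y1, x2, y2): R_in is the bounding box of the points of P
    strictly inside R_out.  Returns None if R_in would have zero area."""
    ox1, oy1, ox2, oy2 = R_out
    inside = [(x, y) for x, y in P if ox1 < x < ox2 and oy1 < y < oy2]
    if len(inside) < 2:
        return None
    ix1 = min(x for x, _ in inside); ix2 = max(x for x, _ in inside)
    iy1 = min(y for _, y in inside); iy2 = max(y for _, y in inside)
    if ix1 == ix2 or iy1 == iy2:      # inner rectangle must have non-zero area
        return None
    w = min(oy2 - iy2, iy1 - oy1, ix1 - ox1, ox2 - ix2)
    return (ix1, iy1, ix2, iy2), w

P = [(0, 0), (10, 0), (0, 10), (10, 10), (4, 4), (6, 6)]
print(annulus_from_outer(P, (0, 0, 10, 10)))   # -> ((4, 4, 6, 6), 4)
```

Points on the boundary of $R_{out}$ are deliberately excluded from the bounding box, since the definition allows input points on both boundaries.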
To compute the elements of $E(a,b)$, we use two vertical segments $Left_{out}$ and $Right_{out}$. These two segments define the left and right edges of the outer rectangle $R_{out}$ of an annulus $A \in E(a,b)$. It can be observed that $Left_{out}$ is required to sweep over the points of $Q$ to the left of $b$, and $Right_{out}$ is required to sweep over the points of $Q$ to the right of $a$, to generate the elements of $E(a,b)$. Whenever one of these segments moves to a point of $Q$, the annulus defined by the current positions of $Left_{out}$ and $Right_{out}$ must be updated; therefore the points of $Q$ are the [event]{} points of the proposed sweep line algorithm. We now partition $Q$ into $3$ groups: $(i)$ points with $x$-coordinate from $x(l)$ to $x(b)$ are in set $L$, $(ii)$ points inside the rectangle with opposite corner points $a$ and $b$ are in set $M$, and $(iii)$ points with $x$-coordinate from $x(a)$ to $x(r)$ are in set $R$ (see Fig. \[fig:points\_strip\]). $Left_{out}$ starts sweeping from $x(b)$ and moves towards $x(l)$, where the event points of $Left_{out}$ are the input points in $L$. Similarly, $Right_{out}$ starts sweeping from $x(a)$ and moves towards $x(r)$, and its event points are the input points in $R$.\ Let $p$ and $q$ denote the immediate right point of $a$ and the immediate left point of $b$ respectively. Also let $p'$ and $q'$ denote the immediate right and immediate left points of $p$ and $q$ respectively. See Fig. \[fig:points\_strip\].\ Depending on the cardinality of $M$ we have the following three cases: $(i)$ $|M| \geq 2$, $(ii)$ $|M| = 0$, and $(iii)$ $|M| = 1$.\ [**Case I ($|M| \geq 2$):**]{} We first take an initial annulus $A$ in $S(a,b)$ from which we generate the other annuli in the strip. This annulus $A$ has an outer rectangle, say $R_{out}$. The top-right corner and bottom-left corner of $R_{out}$ are at the points $a$ and $b$ respectively. We construct an inner rectangle $R_{in}$ within this $R_{out}$ using Observation \[obs1\].
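The strip bookkeeping just described can be sketched as follows (a hedged illustration with our own names; $a$ and $b$ are points of $P$ with $y(a) > y(b)$ and, per the running assumption, $x(a) > x(b)$):

```python
def strip_partition(P, a, b):
    """Return (Q, L, M, R) for the strip S(a,b).

    Q: strip points (a and b included) in increasing x order;
    L: points up to x(b)            -- event points of Left_out;
    M: points strictly between x(b) and x(a);
    R: points from x(a) rightwards  -- event points of Right_out.
    """
    xa, ya = a
    xb, yb = b
    Q = sorted((p for p in P if yb <= p[1] <= ya), key=lambda p: p[0])
    L = [p for p in Q if p[0] <= xb]
    M = [p for p in Q if xb < p[0] < xa]
    R = [p for p in Q if p[0] >= xa]
    return Q, L, M, R

a, b = (8, 9), (2, 1)
P = [a, b, (0, 4), (5, 5), (9, 3), (4, 12), (6, -2)]
Q, L, M, R = strip_partition(P, a, b)
print(L, M, R)   # -> [(0, 4), (2, 1)] [(5, 5)] [(8, 9), (9, 3)]
```

With $P$ already sorted by $x$-coordinate, the filtering step is a single linear pass; the `sorted` call here is only for self-containment.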
Let $Top_{in}$, $Right_{in}$, $Bot_{in}$ and $Left_{in}$ denote the top, right, bottom and left edges of $R_{in}$ respectively. $W(A)$ is the width of annulus $A$.\ \[lem1\] Consider any annulus $K \in E(a,b)$ and assume that $W(K)$ is determined by the top-, bottom- or left-width of annulus $K$. If the right edges of the outer and inner rectangles of annulus $K$ are shifted towards the right to obtain another annulus $K'$, then $W(K') \leq W(K)$. ![](Fig_lemma1.eps) \[FigLemma1\] Let the top- ($Top_{in}$), right- ($Right_{in}$), bottom- ($Bot_{in}$) and left- ($Left_{in}$) edges of the inner rectangle ($R_{in}$) of annulus $K$ pass through the points $p_t$, $p_r$, $p_b$, and $p_l$ respectively. The $Left_{out}$ and $Right_{out}$ edges of $K$ pass through the points $s$ and $t$, where $s \in L$, $t \in R$, and $W(K)$ is determined by the left-width of $K$. See Fig.\[FigLemma1\](a) for an illustration. We now shift $Right_{out}$ of annulus $K$ from $t$ to a point $t'$ to its right, where $x(t)<x(t')$ and $t' \in R$, keeping $Left_{out}$ fixed at $s$. An annulus $K' \in E(a,b)$ is constructed; let us assume that $W(K')>W(K)$. Since the left edges of the outer and inner rectangles of both $K$ and $K'$ pass through the same points $s$ and $p_l$, their left-widths are equal. If the $Bot_{in}$ edge of $K'$ is determined by a point, say $p_b'$, such that $x(p_b') \geq x(t)$ and $p_b' \in R$, then the bottom-width of $K'$ is less than the bottom-width of $K$. Similarly we can argue for the top-width of $K'$. The right-width of $K'$ can be equal to, greater than or smaller than the right-width of $K$. If any one of the top-, right- or bottom-widths of $K'$ is smaller than its left-width then $W(K')<W(K)$, which contradicts our assumption. If the left-width of $K'$ determines $W(K')$ then we have $W(K')=W(K)$.\ Now consider an annulus $K$ whose top-width determines $W(K)$.
Now we shift $Right_{out}$ of annulus $K$ from $t$ to any point $t'$ to its right, where $x(t)<x(t')$ and $t, t' \in R$, keeping $Left_{out}$ fixed at $s$, $s \in L$ (see Fig.\[FigLemma1\](b)). An annulus $K'$ is formed. $Right_{in}$ of $K'$ passes through $p'_r$, where $p'_r \in R$. To achieve a better solution, i.e. $W(K')>W(K)$, we would have to increase the top-width of $K'$. However, this is not possible because the point $p_t$ would lie in the open region between $R_{in}$ and $R_{out}$ of annulus $K'$. This means that no further sweeping of $Right_{out}$ and $Right_{in}$ of annulus $K$ is required. By symmetry, the assertion that $W(K') \leq W(K)$ holds when $W(K)$ is determined by the bottom-width of $K$ and $K'$ is any annulus whose left edge of the outer rectangle lies at the same position as $Left_{out}$ of $K$, and whose right edge of the outer rectangle lies to the right of $Right_{out}$ of $K$. Similarly we can prove the following result. \[Lem2\] Assume that $K$ is an annulus in $E(a,b)$ where $W(K)$ is determined by the top-, bottom- or right-width of annulus $K$. If the left edges of the outer and inner rectangles of annulus $K$ are shifted towards the left to obtain another annulus $K'$, then $W(K') \leq W(K)$. ![](Fig_shift.eps) \[fig:shift\] Algorithm \[alg:algostrip\] is based on the computation of a new annulus from an initial configuration. Let $A'$ be a given annulus in $E(a,b)$. See Fig.\[fig:shift\] as an illustration. Depending on $W(A')$ we [shift]{} $Left_{out}$ of $A'$ from $p'_l$ to the next event point $s$ to the left, where $p'_l, s \in L$. Therefore a new outer rectangle is formed. Since $p'_l$ lies in the open region between this new outer rectangle and $R_{in}$ of $A'$, $p'_l$ is compared with the points $p_t$, $p_b$ and $p_l$. We thus create a new annulus $A_1$. Note that this operation requires constant time. Now we describe Algorithm \[alg:algostrip\] to compute the set $E(a,b)$ for Case I. In each step our algorithm keeps information about the best solution computed so far.
Let the best solution in $S(a,b)$ be stored in $W(A^{max}_{(a,b)})$, where $A^{max}_{(a,b)}$ is a maximum-width annulus in the strip. $W(A^{max}_{(a,b)}) \gets W(A)$.\ Return $W(A^{max}_{(a,b)})$. Note that if the top- (or bottom-) width becomes the width of an annulus and is equal to its left- (or right-) width, we consider its top- (or bottom-) width as its width (this follows from Lemma \[lem1\]). Also, if the width of an annulus is attained by its left-width and right-width simultaneously, then we consider either one of them as its width and proceed accordingly. ![](Fig1_lem2.eps) \[lem2\_fig\] Algorithm \[alg:algostrip\] does not compute all elements of $E(a,b)$. It starts with the initial configuration of annulus $A$. Depending on $W(A)$ we shift either the left edge or the right edge of its outer rectangle. Assume that the left-width of $A$ determines $W(A)$. Now consider two annuli $A'$ and $A_1$ computed in Case I whose left-widths determine $W(A')$ and $W(A_1)$ respectively. See Fig.\[lem2\_fig\](a) as an illustration. $Left_{out}$ of $A'$ and $A_1$ pass through $s'$ and $s$, with $s', s \in L$. $Right_{out}$ of $A'$ and $A_1$ pass through the point $t$, $t \in R$. Let $A_L$ be the set of all annuli whose left edges ($Left_{out}$) of the outer rectangle pass through any point between $s'$ and $s$. The right edge of the outer rectangle of any annulus in $A_L$ is fixed at $t$. If we shift $Right_{out}$ of $A'$ to any point $t'$ such that $x(t') > x(t)$ and $t' \in R$, then the annulus that will be created has width less than or equal to $W(A')$ (by Lemma \[lem1\]). This fact is true for all annuli in $A_L$. This means that there is no need to generate all those annuli whose $Left_{out}$ passes through any point from $s'$ to $s$ and whose $Right_{out}$ passes through any point to the right of $t$. It may happen that $Left_{out}$ of $A_1$ reaches $l$ and the left-width of $A_1$ determines $W(A_1)$; then our algorithm terminates and reports the best solution in $S(a,b)$.
If the right-width of $A_1$ determines $W(A_1)$ then we shift $Right_{out}$ of $A_1$ and compute further annuli. Now assume that $W(A')$ and $W(A_1)$ are determined by the right-widths of $A'$ and $A_1$. $Right_{out}$ of $A'$ and $A_1$ pass through $t'$ and $t$, with $t', t \in R$, and their $Left_{out}$ is fixed at $s'$, $s' \in L$. Let $A_R$ be the set of all annuli whose right edges of the outer rectangle pass through any point between $t'$ and $t$ and whose left edges of the outer rectangle are fixed at $s'$ (see Fig.\[lem2\_fig\](b)). In a similar way we can say that there is no need to shift the left edge of the outer rectangle of any annulus in $A_R$.\ We now consider the case when $|M| = 0$. [**Case II ($|M| = 0$):**]{} We use Algorithm \[alg:algostrip\] to generate the annuli of $E(a,b)$. It requires an initial configuration. We need at least two points to create an inner rectangle. These two points forming the inner rectangle can lie to the left of both $a$ and $b$, to the right of both $a$ and $b$, or one to the right of $a$ and the other to the left of $b$. Thus we need three initial configurations of the annuli from which we can generate the other annuli in $E(a,b)$. They are as follows.\ $(i)$ Outer rectangle formed by the point $a$ at the top-right corner, $b$ at the bottom and $Left_{out}$ passing through the point $q_1$, where $q_1$ is the immediate left point of $q'$ (see Fig. \[case2\](a)). The two points $q$ and $q'$ lie on two opposite corners of the inner rectangle. We name this annulus $A_1$.\ $(ii)$ Outer rectangle formed by the point $b$ at the lower-left corner, $a$ at the top and $Right_{out}$ passing through the point $p_1$, where $p_1$ is the immediate right point of $p'$ (see Fig. \[case2\](b)). The two points $p$ and $p'$ lie on two opposite corners of the inner rectangle. We name this annulus $A_2$.\ $(iii)$ Outer rectangle formed by the point $q'$ on the left, $a$ at the top, $p'$ on the right and $b$ at the bottom.
The inner rectangle is formed by the two opposite corner points $p$ and $q$. Call this annulus $A_3$ (see Fig. \[case2\](c)). ![Demonstration shows $A_1, A_2, A_3$.[]{data-label="case2"}](Fig4_case2.eps) For each initial configuration we invoke Algorithm \[alg:algostrip\]. We compare the solutions obtained from them and finally report an empty annulus of maximum-width in $S(a,b)$.\ [**Case III ($|M| = 1$):**]{} A single point, say $z$, is present inside the outer rectangle formed by the two opposite corner points $a$ and $b$ in $S(a,b)$. Algorithm \[alg:algostrip\] requires an initial configuration to start with. We need at least two points to create an inner rectangle. One of them is $z$ and the other point can lie either to the left or to the right of $z$. Therefore we form two initial configurations of annuli to compute the other annuli in $E(a,b)$.\ $(i)$ Outer rectangle formed by the point $a$ at the top-right corner, $b$ at the bottom and $Left_{out}$ passing through the point $q'$ (see Fig. \[case3\](a)). The opposite corner points $q$ and $z$ form the inner rectangle. This annulus is $A_1$.\ $(ii)$ Outer rectangle formed by the point $b$ at the lower-left corner, $a$ at the top and $Right_{out}$ passing through the point $p'$. Here $p$ and $z$ are used to form the inner rectangle. See Figure \[case3\](b). Let this annulus be $A_2$.\ ![Demonstration shows $A_1, A_2$.[]{data-label="case3"}](Fig5_case3.eps) We invoke Algorithm \[alg:algostrip\] separately on $A_1$ and $A_2$ and report an empty annulus of maximum-width in $S(a,b)$. As stated in Case I, we do not compute all elements of $E(a,b)$ in Case II and Case III. We report an empty annulus of maximum-width in $S(a,b)$ from those annuli which are computed in Case II (resp. Case III).\ Now we have the following result.\ \[timecom\] For a given set of points $P = \{p_1, \ldots, p_n\}$ in $\mathbb R^{2}$, an empty annulus of maximum-width can be computed in $O(n^3)$ time using $O(n)$ space.
Note that the number of horizontal strips formed by any two points of $P$ is $O(n^2)$. Algorithm \[alg:algostrip\] requires $O(m)$ time, where $m$ ($\leq n$) is the number of input points in any such strip. Thus the result follows. In the above axis-parallel empty rectangular annulus problem, the inner rectangle forming such an empty annulus always has [non-zero]{} area. However, if $R_{in}$ reduces to a single point $p \in P$ then we have the following result. \[coro111\] Given a set $P$ of $n$ points in $\mathbb R^{2}$, an axis-parallel empty rectangular annulus $A$ of maximum-width can be computed in $O(n\log n)$ time using $O(n)$ space when the inner rectangle of the annulus $A$ reduces to a single point $p \in P$. One can construct the Voronoi diagram for the point set $P$ in $O(n \log n)$ time using an $O(n)$ size data structure [@bcko-cgaa-08]. For any query point $q \in P$, a nearest point of $q$ among the points of $P$ can be computed in $O(\log n)$ time. Therefore the computation of the nearest input points for all points of $P$ requires $O(n \log n)$ time. Hence the result follows.

Conclusion and discussion {#conclusion}
=========================

In the annulus problem studied by Mukherjee et al. [@jmkd-mwra-2012], the outer rectangle of an annulus of minimum-width which encloses $P$ must be the minimum enclosing rectangle of $P$, where $P$ is the set of $n$ input points in $\mathbb R^{2}$. However, in the empty axis-parallel rectangular annulus problem of maximum-width, the number of potential outer rectangles is $O(n^4)$. This observation implies that an $O(n^5)$ time algorithm to find an empty axis-parallel rectangular annulus of maximum width is trivial. Therefore the proposed $O(n^3)$ time algorithm to solve the maximum-width empty annulus problem is a non-trivial one. Note that we did not give any lower bound for this problem, but proposed an $O(n\log n)$ time algorithm in Corollary \[coro111\] to solve the problem in a particular case.
In this context, it would be interesting to give a sub-quadratic algorithm or to prove a quadratic lower bound for the problem. Note that for each empty rectangular annulus problem discussed so far, the orientation is fixed. It remains a challenge to solve this problem when the annuli are of [arbitrary orientation]{}.

Acknowledgements
================

This work is supported by Project (Ref. No. $248 (19)~2014~$R $\&$ D II$~\/ 1045$) from The National Board for Higher Mathematics (NBHM), Government of India, awarded to P. Mahapatra, under which Arpita Baral is a research scholar.
--- abstract: 'Extrinsic faulting has been discussed previously within the so-called difference method and random walk calculation. In this contribution it is revisited within the framework of computational mechanics, which allows us to derive expressions for the statistical complexity, entropy density and excess entropy as functions of the faulting probability. The approach allows the disordering process of extrinsic faulting to be compared with other faulting types. The $\epsilon$-machine description of the faulting mechanics is presented. Several useful analytical expressions, such as the probability of consecutive symbols in the Hägg coding, are presented, as well as the hexagonality. The analytical expression for the pairwise correlation function of the layers is derived and compared to previously reported results. The effect of faulting on the interference function is discussed in relation to the diffraction pattern.' author: - Ernesto - Raimundo - Arbelio - Massimo title: 'Extrinsic faulting in $3C$ close packed crystal structures: Computational mechanics analysis' ---

Introduction
============

Close packed structures (CPS) are OD (Order-Disorder) structures built by stacking hexagonal layers in the direction perpendicular to the layer [@durovic97]. The stacking ambiguity arising from the two possible positions of a layer with respect to the previous one leads to a theoretically infinite number of possible polytypes if no constraint is made on the periodicity. However, by far the commonest ones are the cubic close packed or $3C$ (Bravais lattice of type $cF$) and the hexagonal close packed or $2H$ (Bravais lattice of type $hP$). These are MDO (Maximum Degree of Order) polytypes, meaning that their structure contains the minimal number of layer triples, quadruples etc. (one, in both cases). CPS usually exhibit some kind of planar disorder, or stacking faults, which, viewed as disruptions in the otherwise periodic arrangement of layers, can be analyzed as non-interacting defects.
This is the basis of the so-called random faulting model (RFM), which has been the most widely used model of faulting in layer structures, dating back to the early times of diffraction analysis [@wilson; @wagner; @warren]. The idea of the RFM is to consider certain types of faulting, such as intrinsic (removal of a layer from the sequence), extrinsic (addition of a layer in the sequence) and twinning (change of orientation in the sequence), assigning to each a fixed probability of occurrence, independent of the density of faulting and neglecting any spatial interaction between the faults present in the material. This simplifying assumption means that the RFM, if suitable at all, applies at very low densities of faulting, in which case it is justified, during the derivation of the corresponding expression for the displacement correlation function between layers (also known as the pairwise correlation) and subsequently the diffraction equation, to drop all terms above linear in the faulting probabilities. Analytical expressions are then found: see such mathematical development in the classical works of [@warren] for deformation and twin faulting in $FCC$ and $HCP$ structures. Besides the assumption of a low density of defects, the RFM also assumes that faults, when they occur, go through the whole coherently diffracting domain, avoiding the need to account for the appearance of partial dislocations. Further, faults are considered to happen along the stacking direction but not along any direction that is crystallographically equivalent in the non-faulted polytype. For example, in the case of the unfaulted $3C$ polytype, the four directions $\langle 111 \rangle$ are equivalent, as in any cubic crystal, but this is no longer the case if faulting occurs in one of the four. The reader can refer to [@estevez07] for a further historical account of the subject.
In recent years there have been attempts to extend the mathematical applicability of the model without modifying its fundamental assumptions. First, [@velterop] observed that, even for a low density of faulting, the assumption of only one faulting direction is an unrealistically simplifying assessment of the diffraction behavior, which can lead to misleading conclusions. Another issue is the need to accommodate larger, more realistic, faulting densities within the model. Even if the physical assumption of non-interacting faults is too strong for larger faulting probabilities, it is still interesting to pursue such an extension for several reasons. The RFM can be used as a reference model for other approaches. The fact that only one parameter for each faulting type is needed makes it very attractive in the practical analysis of materials. Additionally, the RFM can be used as a suitable starting model in computer simulations of faulting. In this case, a good starting proposal is essential for the convergence, and the convergence speed, of numerical calculations. In the last years, independently, Varn *et al.* [@varn01; @varn01a; @varn02; @varn04] and [@estevez08] have attempted to rewrite the RFM in a modern framework, using a Hidden Markov Model (HMM) description of the faulting dynamics. The more ambitious idea is to go beyond the faulting model and try to understand the disordering process in layer structures as a dynamical process of a system capable of performing (physical) computation and, in this sense, able to store and process information [@crutchfield92]. The attractiveness of the proposal is that such an approach can draw on a powerful set of tools, developed within the study of complexity, grounded in information theory concepts such as Shannon entropy and mutual information. This framework is known as computational mechanics and has been used in a wide range of subjects [@crutchfield12].
A first attempt to use the HMM description of random faulting was made by [@estevez08] for intrinsic and twinning faults. Their analysis allowed them to calculate the displacement correlation function and the diffraction equation for the whole range of faulting probabilities. They also derived useful expressions concerning the hexagonality of the stacking arrangement, the average size of cubic and hexagonal neighborhood blocks and the correlation length, all as functions of the faulting probabilities. While correct, this approach is ad hoc in nature, only applicable to the problem considered by them, namely using as starting structure the $3C$ layer ordering, and working through the appropriate equations. A more recent breakthrough came from the work of [@riechers15], who proved that the calculation of the pairwise correlation function could be systematized in an elegant way, allowing its applicability to a wide number of situations such as those found in close packed arrangements. The idea is to find the description of the stacking arrangement as a HMM and, from there, build the transition matrix, find the stationary probabilities of the HMM states and the pairwise correlation function \[See equation (13) in [@riechers15] or in this contribution further on\]. In their contribution, they also discussed a number of examples that showed how the formalism can reproduce previous results, such as those reported by [@estevez08], and can also be applied to other cases. The result of [@riechers15] opens the possibility to study, in a systematic way, the RFM for different types of faulting and their combinations, something which proved to be at least cumbersome and, in certain instances, intractable with previous tools. This is what we intend to do in this contribution for extrinsic faults. Extrinsic faulting has been dealt with before [@johnson63; @warren63; @lele67; @holloway69; @holloway69a; @howard77; @takahashi78; @howard79].
The main goal of the manuscript is to report several analytical expressions for the disorder of extrinsically faulted CPS. These expressions relate disorder magnitudes, such as those derived within computational mechanics, to the extrinsic faulting probability, which in turn allows comparison with similar expressions already reported for twin and deformation faults [@estevez08]. Also, closed analytical expressions for the probability of finding different stacking sequences in the faulted structure are reported, and from there expressions are derived for the hexagonality and the average length of perfectly coherent $FCC$ sequences within the CPS. The analytical expression for the pair correlation function as a function of the faulting probability is derived and its decay and oscillation behavior are discussed. Finally, the expression for the interference function is reported and the peak shift and asymmetry resulting from extrinsic faulting are commented on. First, the main concepts used and the notation are introduced.

Order and disorder in close packed stacking arrangements and the pairwise correlation function
==============================================================================================

In the OD structures built from hexagonal layers, the layers can be found in only three positions perpendicular to the stacking direction, which are usually labeled $A$, $B$, and $C$ [@durovic97; @pandey2]. Close packing is the constraint that two layers which bear the same letter, and are thus exactly overlapped in the projection along the stacking direction, can not occur consecutively. According to this description, the ideal $FCC$ structure is described by $ABCABCAB\ldots$ sequences [@verma], while the ideal $HCP$ structure has a stacking order described by $ABABABA\ldots$, and the double hexagonal close packed ($DHCP$) in turn is described by the stacking $ABCBABCB\ldots$.
An equivalent and less redundant coding is the Hägg code [@hagg43], where each pair of consecutive layers is given a plus (or 1) symbol if they form a “forward” sequence $AB$, $BC$ or $CA$, and a minus (or 0) sign otherwise[^1]. There is a one-to-one relation between the two codings [@estevez05a]. It is also important to introduce the three-layer hexagonal environment as one where a layer $X$ has its two adjacent layers in the same position (e.g. $ABA$, $ACA$, $BAB$, $BCB$, $CAC$, $CBC$); if a layer environment is not hexagonal then it is cubic. A hexagonal environment is denoted by a letter $h$ and a cubic environment by a letter $k$; this is the basis of the Jagodzinski coding of the stacking arrangement. As before, there is a one-to-one correspondence between the $ABC$ coding and the Jagodzinski coding [@estevez08a]. Hexagonality then refers to the fraction of hexagonal environments in the stacking sequence. Also, it can be easily checked that a hexagonal environment occurs wherever the pair of characters $10$ or $01$ is found in the Hägg code. Faulting is generically meant as a disruption of the ideal periodic ordering of a stacking arrangement and therefore constitutes a defect in the structure. In close packed structures, the simplest types of faults that are usually considered are (1) deformation faults, which are jogs in the otherwise perfect periodic sequence; (2) extrinsic or double-deformation faults, which are insertions of an extraneous layer in the perfect sequence; and (3) twin faults, which cause reversions in the stacking ordering. In what follows, the probability of occurrence of a deformation fault will be denoted by $\alpha$, that of an extrinsic fault by $\gamma$, while the probability of occurrence of a twin fault will be denoted by $\beta$.
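These codings are easy to state programmatically. The following minimal sketch (our own, assuming the cyclic order $A\rightarrow B\rightarrow C\rightarrow A$) converts an $ABC$ sequence to its Hägg code and computes the hexagonality via the $10$/$01$ criterion:

```python
CYCLIC = {'A': 'B', 'B': 'C', 'C': 'A'}   # "forward" pairs AB, BC, CA

def hagg(abc):
    """Hägg code of an ABC stacking sequence: '1' for a forward pair
    AB, BC or CA, and '0' otherwise."""
    assert all(x != y for x, y in zip(abc, abc[1:])), "close-packing violated"
    return ''.join('1' if CYCLIC[x] == y else '0' for x, y in zip(abc, abc[1:]))

def hexagonality(abc):
    """Fraction of hexagonal (h) three-layer environments: a layer whose
    two neighbours sit in the same position, i.e. a '10' or '01' pair in
    the Hägg code."""
    h = hagg(abc)
    pairs = list(zip(h, h[1:]))
    return sum(x != y for x, y in pairs) / len(pairs)

print(hagg("ABCABC"))             # -> 11111
print(hagg("ABABAB"))             # -> 10101
print(hexagonality("ABCABCABC"))  # -> 0.0
print(hexagonality("ABABABAB"))   # -> 1.0
```

The expected limits are recovered: hexagonality $0$ for the $3C$ ($FCC$) sequence and $1$ for the $2H$ ($HCP$) sequence.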
The pair correlation function between layers, known as the pairwise correlation function $Q_{\xi}(\Delta)$, is the key to calculating the effect of the stacking arrangement on the diffraction intensity [@estevez01; @estevez03a]. Consider a stacking direction and sense; $Q_{\xi}(\Delta)$, where $\xi\in\{c,a,s\}$, is the probability of finding two layers, $\Delta$ layers apart, the first displaced with respect to the second as (1) $\xi=c$: $A-B$ or $B-C$, or $C-A$; (2) $\xi=a$: $B-A$ or $C-B$, or $A-C$; and (3) $\xi=s$: $A-A$ or $B-B$, or $C-C$[^2]. It should be noted that $Q_{s}(1)=0$ due to the close packed constraint and $Q_{s}(0)=1$ by construction. It is possible, for any of the described codings ABC, Hägg and Jagodzinski, to construct a Hidden Markov Model (HMM) describing a broad range of both ordered and disordered stacking processes. A HMM description comprises a finite, or at least enumerable, set of states $\mathcal{S}$ and the associated initial probability distribution $\pi_0$ over the states; the set of transition matrices $\mathbf{T}$; and a set of symbols drawn from a finite alphabet $\mathcal{A}$. Each transition matrix $\mathcal{T}^{[\upsilon]}$ is a square matrix with number of rows equal to the number of states, where each entry $t^{[\upsilon]}_{ij}$ represents the probability of jumping from state $i$ to state $j$ while emitting the symbol $\upsilon\in\mathcal{A}$. The HMM transition matrix $\mathcal{T}$ is defined as the sum of the $\mathcal{T}^{[\upsilon]}$ over all symbols $\upsilon$ in the alphabet. Figure \[fig:perfseq\] shows the HMMs for the $FCC$, $HCP$ and $DHCP$ stacking structures. For further details the reader is referred to previous papers on the subject [@varn01; @varn01a; @varn02; @varn04; @estevez08]. When seen through a HMM description, a stacking arrangement is cast as an information processing system that sequentially outputs symbols as it makes transitions between states.
The system output is then an infinite string of characters $\Upsilon=\ldots \upsilon_{-2} \upsilon_{-1} \upsilon_{0} \upsilon_{1} \upsilon_{2}\ldots$, each character $\upsilon_i\in\mathcal {A}$. For the purposes of analysis it is common, at a given point, to divide the output string into two halves: the left half $\overleftarrow{\Upsilon}=\ldots \upsilon_{-2} \upsilon_{-1}$ is known as the past, while the right half $\overrightarrow{\Upsilon}=\upsilon_{0} \upsilon_{1} \upsilon_{2}\ldots$ is called the future [^3] [@varn01; @varn01a; @varn02; @varn04]. There can be many HMMs describing the same process; the minimal HMM describing the system dynamics is considered to be optimal in the sense of using fewer resources while providing the best predictive power, and will be the one relevant in this contribution. Such a model is called an $\epsilon$-machine [@crutchfield92; @crutchfield12]. The $\epsilon$-machine has, among others, the important property of unifilarity, which means that, from a given state, the emitted symbol determines unambiguously the transition to the next state. Let us denote, following the common use of bras and kets in physics, by $\langle \pi |$ the vector of state probabilities and by $|1\rangle$ a vector of $1s$. If the HMM description is known, then the probability of any finite sequence $\upsilon^N=\upsilon_i \upsilon_{i+1} \upsilon_{i+2}\ldots \upsilon_{i+N-1}$ will be given by $$P(\upsilon^N)=\langle \pi | \mathcal{T}^{[\upsilon_{i}]} \mathcal{T}^{[\upsilon_{i+1}]}\ldots\mathcal{T}^{[\upsilon_{i+N-1}]}|1\rangle.\label{eq:psn}$$ Here $\langle x | A | y\rangle$ is the real number resulting from the product of the corresponding vectors and matrices. Several information theory magnitudes can be defined once the minimal HMM description of the stacking process is known.
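Equation \[eq:psn\] can be implemented directly. As an illustration (ours, not from the paper), the machine below is a two-state HMM consistent with the alternating $1010\ldots$ Hägg sequence of the $2H$ structure, and plain nested lists stand in for the matrices:

```python
def seq_probability(pi, T, word):
    """P(word) = <pi| T^[v1] ... T^[vN] |1>, with T a dict mapping each
    symbol to its labeled transition matrix (lists of lists)."""
    v = list(pi)
    for s in word:
        M = T[s]
        v = [sum(v[i] * M[i][j] for i in range(len(v)))
             for j in range(len(M[0]))]
    return sum(v)

# Two-state machine emitting the alternating Hägg sequence of 2H (HCP):
# state 0 emits '1' and moves to state 1; state 1 emits '0' and moves back.
# Its stationary distribution is (1/2, 1/2).
T_hcp = {'1': [[0.0, 1.0], [0.0, 0.0]],
         '0': [[0.0, 0.0], [1.0, 0.0]]}
pi_hcp = [0.5, 0.5]

print(seq_probability(pi_hcp, T_hcp, '1010'))  # -> 0.5 (only the phase matters)
print(seq_probability(pi_hcp, T_hcp, '11'))    # -> 0.0 ('11' never occurs in 2H)
```

Any allowed word of this machine has probability $1/2$, fixed by the initial phase, while forbidden words such as `'11'` get probability zero.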
Shannon defined the information entropy $H(X)$ of an event set $X$ with discrete probability distribution $p(x)$ as [@arndt] $$H(X)=-\sum_x p(x) \log p(x),$$ where the sum is taken over the whole probability distribution; here and in what follows the logarithm is taken base two, so that entropies are measured in bits. For the $\epsilon$-machine, the statistical complexity $C_\mu$ is defined as the Shannon entropy over the HMM states, $$C_\mu=H(\mathcal{S})=-\sum_i p_i \log p_i,\label{eq:sc}$$ where $p_i$ is the stationary probability of the $i$th state in the minimal HMM description and the sum runs over all states. $C_\mu$ measures the amount of information the system stores. The excess entropy $E$ is also used to characterize the system's information processing capabilities and serves as a measure of predictability; it is defined as the mutual information between the left half and the right half of the system output, $$E=H(\overleftarrow{\Upsilon})+H(\overrightarrow{\Upsilon})-H(\Upsilon).$$ The entropy density $h_\mu$ [@arndt] is defined as $$h_\mu=\lim_{N\rightarrow\infty}\frac{H(\Upsilon^N)}{N},$$ when such a limit exists, where $\Upsilon^N$ denotes the substrings of $\Upsilon$ of length $N$. $h_\mu$ is used to answer how random the process is [@feldman03]. Finally, [@riechers15] described a procedure for computing the pairwise correlation function from the transition matrices, which can be summarized as follows: 1. The HMM of the stacking process in the ABC notation is given together with $\{\mathcal{A}, \mathcal{S}, \pi_0, \mathbf{T}\}$. If this description is given in the Hägg coding, then the expansion to the ABC coding must be performed [@riechers15]. 2. The stationary probabilities $\pi$ over the HMM states are calculated as the normalized left eigenvector of the transition matrix $\mathcal{T}$ with eigenvalue unity: $$\langle\pi|=\langle\pi|\mathcal{T}.\label{eq:eigen}$$ 3. 
The pairwise correlation function follows from the definition and the use of equation (\[eq:psn\]): $$\displaystyle Q_{\xi}(\Delta)=\sum_{x_0\in\mathcal{A}}\langle \pi | \mathcal{T}^{[x_0]}\mathcal{T}^{\Delta-1}\mathcal{T}^{[\hat{\xi}(x_0)]}|\mathbf{1}\rangle,\label{eq:Q}$$ where $\hat{\xi}\in\{\hat{c}, \hat{a}, \hat{s}\}$ is a family of permutation functions given by $$\begin{array}{lll} \hat{c}(A)=B & \hat{c}(B)=C & \hat{c}(C)=A\\ \hat{a}(A)=C & \hat{a}(B)=A & \hat{a}(C)=B\\ \hat{s}(A)=A & \hat{s}(B)=B & \hat{s}(C)=C \end{array}$$ and $\mathbf{1}$ represents a vector of $1$'s (see also equations (20) and (24) in [@riechers15] for alternative expressions of equation (\[eq:Q\])). Extrinsic fault in the face centered cubic stacking order ========================================================= An extrinsic fault (in the case of the FCC structure also known as a double deformation fault) in a $3C$ stacking is depicted in Fig. \[fig:ef\] alongside a perfect sequence for comparison. It can be seen that, in the Hägg code, the extrinsic fault is equivalent to the flip (bitwise negation) of two consecutive characters. The probability of occurrence of such faulting will be denoted by $\gamma$, and it will be assumed that $\gamma$ can take any value between 0 and 1. Building from the effect of the extrinsic fault on the Hägg code, the HMM of the faulting process is shown in Fig. \[fig:fsaext\], where it is assumed that the ideal $3C$ structure follows the $A\rightarrow B \rightarrow C$ sequence. The $p$ state represents the non-faulted condition: as long as the system stays in that state, the output symbol $\upsilon=1$ corresponds to the perfect $3C$ structure. If faulting occurs, a $0$ is emitted and the system goes to the $e$ state, where a second $0$ is printed with certainty while returning to the $p$ state[^4]. The HMM of figure \[fig:fsaext\] represents a biased even process (see Appendix A of [@crutchfield13] and Example D in [@varn13]). 
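Equation (\[eq:Q\]) can be exercised on the simplest possible case. The sketch below assumes, for illustration only, the three-state ABC machine of the perfect $3C$ structure (state = last emitted layer; the next layer is always the cyclic successor) and evaluates $Q_{\xi}(\Delta)$ directly.

```python
import numpy as np

ABC = "ABC"
hat_c = {"A": "B", "B": "C", "C": "A"}   # cyclic permutation
hat_a = {"A": "C", "B": "A", "C": "B"}   # anticyclic permutation
hat_s = {"A": "A", "B": "B", "C": "C"}   # identity ("same")

# ABC-coded HMM of the perfect 3C structure: from the state labelled by the
# last emitted layer x, the machine emits the cyclic successor with certainty.
T_sym = {v: np.zeros((3, 3)) for v in ABC}
for i, x in enumerate(ABC):
    T_sym[hat_c[x]][i, ABC.index(hat_c[x])] = 1.0
T = sum(T_sym.values())
pi, one = np.full(3, 1 / 3), np.ones(3)

def Q(hat_xi, delta):
    """Equation (Q): sum over x0 of <pi| T^[x0] T^(delta-1) T^[xi(x0)] |1>."""
    Td = np.linalg.matrix_power(T, delta - 1)
    return float(sum(pi @ T_sym[x] @ Td @ T_sym[hat_xi[x]] @ one for x in ABC))

print([round(Q(hat_s, d), 12) for d in (1, 2, 3)])  # [0.0, 0.0, 1.0]
```

As expected for perfect $3C$, layers repeat with period three: $Q_s$ is one for $\Delta$ a multiple of $3$ and zero otherwise, and the three components sum to one at every $\Delta$.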
It should be observed that any sequence with an odd number of $0$'s cannot be the result of such a HMM. Such sequences will be called forbidden; a forbidden sequence is called irreducible if it does not contain a proper subsequence which is itself forbidden. The number of irreducible forbidden sequences in the even process is infinite; in such a case the process is called a sofic system [@feldman03]. The fact that any sequence from the HMM of the even process contains an even number of $0$'s has important consequences, as will be discussed further down. The corresponding transition matrices will be given by $$\begin{array}{ll} \mathcal{T}^{[1]}=\left ( \begin{array}{ll}\overline{\gamma}&0\\0&0\end{array} \right ) & \mathcal{T}^{[0]}=\left ( \begin{array}{ll}0&\gamma\\1&0\end{array} \right ) \\\\ \mathcal{T}=\left ( \begin{array}{ll}\overline{\gamma}&\gamma\\1&0\end{array} \right ), & \end{array}$$ where $\overline{\gamma}$ stands for $1-\gamma$. The stationary probabilities over the recurrent states $p$ and $e$ can be calculated following equation (\[eq:eigen\]), which results in $$\displaystyle \langle\pi|=\left \{\frac{1}{1+\gamma},\frac{\gamma}{1+\gamma}\right \},\label{eq:pi}$$ where the first value corresponds to the $p$ state. Hexagonality, in terms of computational mechanics, has been analyzed in a more general context previously [@varn07]. Hexagonality can be calculated from the probability of occurrence of $01$ or $10$ in the Hägg code of the sequence. Both probabilities are equal and, from equation (\[eq:psn\]), given by $$P(01)=\langle \pi | \mathcal{T}^{[0]} \mathcal{T}^{[1]}|1\rangle=\gamma \frac{1-\gamma}{1+\gamma},$$ from which the hexagonality is given by $2P(01)$. Hexagonality has a maximum value of $2(3-2\sqrt{2})\approx 0.343$ at $\gamma=\sqrt{2}-1\approx 0.414$ (Fig. \[fig:hexhe\]a). 
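The steps above are easily reproduced numerically. The sketch below builds the two symbol-labelled matrices, obtains the stationary distribution as the left eigenvector of equation (\[eq:eigen\]), and evaluates the hexagonality $2P(01)$; the choice $\gamma=0.25$ is arbitrary and only for illustration.

```python
import numpy as np

def machine(g):
    """Symbol-labelled matrices of the biased even machine (Fig. fsaext)."""
    T1 = np.array([[1 - g, 0.0],
                   [0.0,   0.0]])
    T0 = np.array([[0.0, g],
                   [1.0, 0.0]])
    return T0, T1

def stationary(T):
    """Normalized left eigenvector of T with eigenvalue 1, equation (eigen)."""
    w, V = np.linalg.eig(T.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

def hexagonality(g):
    """2 P(01), with P(01) = <pi| T^[0] T^[1] |1> as in the text."""
    T0, T1 = machine(g)
    pi = stationary(T0 + T1)
    return 2.0 * float(pi @ T0 @ T1 @ np.ones(2))

g = 0.25
print(stationary(sum(machine(g))), hexagonality(g))
```

The stationary vector reproduces equation (\[eq:pi\]), $\{1/(1+\gamma),\gamma/(1+\gamma)\}$, and scanning $\gamma$ locates the hexagonality maximum at $\gamma=\sqrt{2}-1$.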
The statistical complexity can be derived from equation (\[eq:sc\]) using equation (\[eq:pi\]) and is given by $$\displaystyle C_{\mu}=\frac{1}{1+\gamma}\left ( \log (1+\gamma)-\gamma \log \frac{\gamma}{1+\gamma}\right ).$$ The logarithm is taken in base two, and thus the units of $C_\mu$ are bits. For an $\epsilon$-machine the entropy density is given by [@crutchfield13] $$\displaystyle h_\mu=-\sum_{k\in \mathcal{S}}P(k)\sum_{x \in \mathcal{A}} P(x|k)\log P(x|k),$$ where $P(a|b)$ denotes the probability of $a$ conditioned on $b$. The units of the entropy density are bits/site. The expression for the entropy density will not be derived explicitly and the reader is referred to [@crutchfield13]; the resulting expression is $$\displaystyle h_{\mu}=-\frac{1}{1+\gamma}\left [ \gamma\log \gamma+(1-\gamma)\log (1-\gamma)\right ].$$ The calculation of the excess entropy is more involved and is explained in detail in the Appendix. The result is $$\displaystyle E=\frac{1}{1+\gamma}\left ( \log (1+\gamma)-\gamma \log \frac{\gamma}{1+\gamma}\right ),$$ which is identical to the statistical complexity. Figure \[fig:hexhe\]b shows the behavior of the excess entropy as a function of $\gamma$. Observe that at $\gamma=1$ the excess entropy has a discontinuity, as $E$ drops to zero when the finite state automaton description changes its topology to a process with only one state, always emitting a $0$ symbol. This discontinuity is not seen in the entropy density (Fig. \[fig:hexhe\]a), which has a maximum at $\gamma=(3-\sqrt{5})/2\approx 0.382$ with value $h_\mu=0.6942$ bits/site and then smoothly drops to zero as $\gamma$ approaches $1$. The probability of a chain of 0's of length $n$ is given by $$P(0^n)=\left \{ \begin{array}{ll}\gamma^l\left(1-\frac{2 \sqrt{\gamma}}{1+\gamma}\right)& n=2 l\\0 & n=2l+1\end{array}\right. 
.\label{eq:0}$$ For chains of 1's, $$P(1^n)=\frac{(1-\gamma)^n}{1+\gamma}.\label{eq:1}$$ From equations (\[eq:0\]) and (\[eq:1\]) the average lengths of the blocks of 0's and 1's can be calculated: $$\begin{array}{l} \langle L_0 \rangle=\sum_{n=1}^{\infty}n P(0^n)=\frac{4 \gamma}{(1-\gamma)^2}\\ \\ \langle L_1\rangle=\sum_{n=1}^{\infty}n P(1^n)=\frac{1}{\gamma^2}\frac{1-\gamma}{1+\gamma}. \end{array}$$ $\langle L_0\rangle=\langle L_1\rangle$ at $\gamma=0.3623$. In Fig. \[fig:hexvse\], hexagonality is shown as a function of excess entropy and of entropy density. The higher the entropy density, the higher the hexagonality, which comes as no surprise, as hexagonal neighborhoods are the result of faulting events, which in turn imply larger disorder. It can be seen, though, that hexagonality is not a function of entropy density; on the contrary, hexagonality seems to be a function of excess entropy. A maximum value of hexagonality is found for an excess entropy of $0.8724$ bits. The pairwise correlation function. ---------------------------------- The HMM over the $ABC$ coding describing the extrinsic fault can be constructed from the Hägg description and is shown in Fig. \[fig:abcfsa\]. For each state in the HMM over the Hägg code (Fig. \[fig:fsaext\]), three states are induced in the HMM over the $ABC$ coding, corresponding to subsequences starting with $A$, $B$ and $C$. Using the same procedure described for the Hägg HMM, the transition matrices can be written and the stationary probabilities over the recurrent states calculated for the HMM over the $ABC$ coding: $$\langle \pi_{abc}|=\frac{1}{3(1+\gamma)}\{\gamma, \gamma, \gamma,1,1,1\},\label{eq:piabc}$$ where the order of the states has been taken as $\{A_e, B_e, C_e, A, B, C\}$. 
Using equation (\[eq:Q\]) the pairwise correlation follows: $$\begin{array}{l} \displaystyle Q_{s}(\Delta)=\frac{1}{3}\left [ 1+\right.\\\\ \left (\frac{|p|}{4}\right )^{\Delta}\left(\left[1+\frac{\cos(3 \phi_r)|r|}{\sqrt{3}(1+\gamma)}\right]\cos(\Delta\phi_p)+\frac{\sin(3\phi_r)|r|}{\sqrt{3}(1+\gamma)}\sin(\Delta \phi_p)\right)+\\\\ \left.\left (\frac{|q|}{4}\right )^{\Delta}\left(\left[1-\frac{\cos(3 \phi_r)|r|}{\sqrt{3}(1+\gamma)}\right]\cos(\Delta\phi_q)-\frac{\sin(3\phi_r)|r|}{\sqrt{3}(1+\gamma)}\sin(\Delta \phi_q)\right)\right ]\\\\ =\frac{1}{3}\left( 1+Q^{[1]}_{s}(\Delta)+Q^{[2]}_{s}(\Delta)\right ), \label{eq:q0} \end{array}$$ where $$\begin{array}{l} r=|r|e^{i \phi_r}=\sqrt{i \sqrt{3}(6\gamma-\gamma^2-1)-(1+\gamma)^2},\\\\ x=(\gamma-1)(1-i\sqrt{3}),\\\\ p=|p|e^{i\phi_p}=x+\sqrt{2}r,\\\\ q=|q|e^{i\phi_q}=x-\sqrt{2}r. \end{array}$$ The obtained equation is equivalent to the result given by [@holloway69], as can be seen by comparing numerical results from equation (\[eq:q0\]) for $\Delta=0,1,2,3$ with those reported in equations (35), (36), (37), (38) in [@holloway69a] (making $\alpha=0$)[^5]. In turn, these authors have shown that their result reduces to that of [@johnson63]. Holloway and Klamkin do not give a closed form of $Q_{s}(\Delta)$ for $\Delta > 3$. There are two terms in the expression for $Q_{s}(\Delta)$, each with an oscillating and a decaying part. Figure \[fig:qp\]a shows the behavior of both decaying terms with faulting probability. $p$ and $q$ have a jump (discontinuity) at the same value $\gamma \approx \gamma_0= 0.1716$, where the real part of $r$ has a minimum and the imaginary part jumps from a negative value to a positive one. Interestingly, the combined plot of both terms results in two smooth continuous curves. At $\gamma=0$, $p$ is zero while $q=1$, and the oscillating part of the second term in $Q_{s}$ dominates. 
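The closed form above is intricate, so a direct numerical evaluation of $Q_{\xi}(\Delta)$ from equation (\[eq:Q\]) is a useful cross-check. The sketch below is built on our reading of the ABC machine of Fig. \[fig:abcfsa\] (an assumption: an unfaulted step emits the cyclic successor, a fault emits the anticyclic successor and passes through the corresponding $e$ state); it verifies equation (\[eq:piabc\]) and the constraints $Q_s(1)=0$ and $\sum_\xi Q_\xi(\Delta)=1$.

```python
import numpy as np

ABC = "ABC"
cyc  = {"A": "B", "B": "C", "C": "A"}
anti = {"A": "C", "B": "A", "C": "B"}

def abc_machine(g):
    """Six-state ABC expansion, state order {A_e, B_e, C_e, A, B, C}."""
    states = [x + "e" for x in ABC] + list(ABC)
    idx = {s: i for i, s in enumerate(states)}
    T = {v: np.zeros((6, 6)) for v in ABC}
    for x in ABC:
        T[cyc[x]][idx[x], idx[cyc[x]]] = 1 - g        # regular (cyclic) step
        T[anti[x]][idx[x], idx[anti[x] + "e"]] = g    # first 0 of a fault
        T[anti[x]][idx[x + "e"], idx[anti[x]]] = 1.0  # forced second 0
    return T

g = 0.2
T_sym = abc_machine(g)
T = sum(T_sym.values())
w, V = np.linalg.eig(T.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()                     # should reproduce equation (piabc)
one = np.ones(6)

def Q(perm, delta):
    Td = np.linalg.matrix_power(T, delta - 1)
    return float(sum(pi @ T_sym[x] @ Td @ T_sym[perm[x]] @ one for x in ABC))

same = {x: x for x in ABC}
print(pi)            # {g, g, g, 1, 1, 1} / (3 (1 + g))
print(Q(same, 2))    # should equal the hexagonality 2 g (1 - g) / (1 + g)
```

The value $Q_s(2)$ must coincide with the hexagonality, since two layers two positions apart repeat exactly when the intervening Hägg pair is $01$ or $10$.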
At $\gamma=1$, both $p$ and $q$ have the same value of $1$, and the combined effect of both oscillating terms determines the pairwise correlation function. For both cases ($\gamma=0$ and $\gamma=1$), $Q_{s}(\Delta)$ reduces to $$\displaystyle Q_{s}(\Delta)=\frac{1}{3}\left(1+2 \cos \left[\frac{2 \pi}{3}\Delta\right] \right ),$$ describing the correlation function of the perfect $3C$ stacking. At $\gamma_0$ the oscillating parts of both terms in $Q_{s}(\Delta)$ become equal for all values of $\Delta$. At $\gamma=\sqrt{2}-1$, where the hexagonality reaches its maximum value, the oscillating part of $Q^{[1]}_{s}(\Delta)$ is the prevailing one at large $\Delta$ values. For small ($\gamma\approx 0$) and large ($\gamma\approx 1$) values it is the oscillating part of $Q^{[2]}_{s}(\Delta)$ which determines the underlying stacking sequence. In any case, the lower curve in Fig. \[fig:qp\]a determines the faster decaying contribution to the pairwise correlation function, while the upper curve determines the dominant behavior at larger $\Delta$ values. Figure \[fig:qp\]b shows the correlation lengths derived from both decaying terms. At large values of $\Delta$, the $p$ term is the dominant factor in the pairwise correlation function for values of $\gamma > \gamma_0$, while the opposite happens for values below $\gamma_0$. 
A similar deduction for $Q_{c}(\Delta)$ results in $$\begin{array}{l} Q_c(\Delta)=\frac{1}{3} \left(1+\left ( \frac{\left| p\right|}{4}\right )^\Delta \left [C_p \cos (\Delta \phi_p)+S_p \sin (\Delta \phi_p)\right]+\right.\\\\ \left.\left ( \frac{\left| q\right|}{4}\right )^\Delta \left[C_q \cos (\Delta\phi_q)+S_q \sin (\Delta \phi_q)\right]\right),\end{array}$$ with $$\begin{array}{l} C_p=\frac{\sqrt{2}}{\left| r \right|}\frac{ \gamma^2-4 \gamma+1 }{1+\gamma }\cos\phi_r+2 \frac{\sqrt{6}}{\left| r \right|}\frac{\gamma}{1+\gamma}\sin \phi_r-\frac{1}{2},\\\\ S_p=\frac{\sqrt{2}}{\left| r \right|}\frac{ \gamma^2-4 \gamma+1 }{1+\gamma}\sin\phi_r-2 \frac{ \sqrt{6}}{\left| r \right|}\frac{\gamma}{1+\gamma} \cos \phi_r+\frac{\sqrt{3}}{2},\\\\ C_q=-\frac{\sqrt{2}}{\left| r \right|}\frac{ \gamma^2-4 \gamma+1 }{1+\gamma }\cos\phi_r-2 \frac{\sqrt{6}}{\left| r \right|}\frac{\gamma}{1+\gamma}\sin \phi_r-\frac{1}{2},\\\\ S_q=-\frac{\sqrt{2}}{\left| r \right|}\frac{ \gamma^2-4 \gamma+1 }{1+\gamma}\sin\phi_r+2 \frac{ \sqrt{6}}{\left| r \right|}\frac{\gamma}{1+\gamma} \cos \phi_r+\frac{\sqrt{3}}{2}. \end{array}$$ $Q_a(\Delta)$ follows from the normalization condition. The interference function. ========================== The diffraction pattern of an OD structure can be decomposed into two contributions: that of the layer and that of the stacking sequence. The reduced diffracted intensities (i.e. once the necessary corrections are applied: Lorentz, polarization, absorption, etc.) can be deconvoluted in terms of these two contributions, so that the stacking sequence leaves its fingerprint in the form of an interference function showing a periodic distribution of deconvoluted intensities. In the case of complex sequences, like that of micas, in which adjacent layers can be stacked in six different orientations, the interference function has been called PID (Periodic Intensity Distribution: [@nespolo99]). 
For close packed structures the situation is simpler, because adjacent layers may take only two relative positions. The consequence of extrinsic faulting on the diffracted intensity is visible in the interference function, which follows from the use of the expressions for $Q_{s}$, $Q_{c}$ and $Q_{a}$ [@estevez01]: $$\displaystyle {\cal I}({r}^{*})= 1+2 \sum_{\Delta=1}^{N_{c}-1} A_{\Delta} \cos(2 \pi \Delta l)+B_{\Delta} \sin(2 \pi \Delta l), \label{Qfinal}$$ where $$\label{fcoef} \begin{array}{l} \displaystyle A_{\Delta}=(1-\frac{\Delta}{N_{c}}) \left \{ Q_s(\Delta) +\left[Q_c(\Delta)+Q_a(\Delta)\right] \cos[\frac{2 \pi}{3} (h-k)]\right \}\label{fcoefa}\\\\ \displaystyle B_{\Delta}=(1-\frac{\Delta}{N_{c}}) \left[Q_c(\Delta)-Q_a(\Delta)\right] \sin[\frac{2 \pi}{3} (h-k)], \end{array}$$ and $N_c$ is the number of layers in the stacking sequence. For $h-k$ a multiple of $3$, the coefficients reduce to $A_{\Delta}=(1-\frac{\Delta}{N_{c}})$ and $B_{\Delta}=0$, and this family of reflections is not affected by the extrinsic faulting. For $h-k=3n+1$, with $n$ an integer, the coefficients are $$\label{fcoef1} \begin{array}{l} \displaystyle A_{\Delta}=(1-\frac{\Delta}{N_{c}}) \left \{ Q_s(\Delta) -\frac{Q_c(\Delta)+Q_a(\Delta)}{2}\right \}\\\\ \displaystyle B_{\Delta}= \frac{\sqrt{3}}{2}(1-\frac{\Delta}{N_{c}})\left[Q_c(\Delta)-Q_a(\Delta)\right] . \end{array}$$ In the last case, $h-k=3n+2$ with $n$ an integer, the coefficients are $$\label{fcoef2} \begin{array}{l} \displaystyle A_{\Delta}=(1-\frac{\Delta}{N_{c}}) \left \{ Q_s(\Delta) -\frac{Q_c(\Delta)+Q_a(\Delta)}{2}\right \}\\\\ \displaystyle B_{\Delta}= \frac{\sqrt{3}}{2}(1-\frac{\Delta}{N_{c}})\left[Q_a(\Delta)-Q_c(\Delta)\right] . \end{array}$$ An analytical expression for the interference function can be deduced from the above equations but is too long and cumbersome to be of any particular interest[^6]. The result has been discussed already by [@johnson63; @warren63; @holloway69a]. 
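Even without the cumbersome analytical expression, equation (\[Qfinal\]) is easy to evaluate numerically. The sketch below is an illustration under stated assumptions: the six-state ABC machine is our reading of Fig. \[fig:abcfsa\], and the values $\gamma=0.15$, $N_c=40$ are arbitrary choices. It computes $Q_s$, $Q_c$ and $Q_a$ from equation (\[eq:Q\]) and assembles ${\cal I}$ with the coefficients (\[fcoef\]).

```python
import numpy as np

ABC = "ABC"
cyc  = {"A": "B", "B": "C", "C": "A"}
anti = {"A": "C", "B": "A", "C": "B"}
g, Nc = 0.15, 40

# Six-state ABC machine of the extrinsic fault, order {A_e, B_e, C_e, A, B, C}.
states = [x + "e" for x in ABC] + list(ABC)
idx = {s: i for i, s in enumerate(states)}
T_sym = {v: np.zeros((6, 6)) for v in ABC}
for x in ABC:
    T_sym[cyc[x]][idx[x], idx[cyc[x]]] = 1 - g
    T_sym[anti[x]][idx[x], idx[anti[x] + "e"]] = g
    T_sym[anti[x]][idx[x + "e"], idx[anti[x]]] = 1.0
T = sum(T_sym.values())
w, V = np.linalg.eig(T.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))]); pi /= pi.sum()
one = np.ones(6)

def Q(perm, d):
    Td = np.linalg.matrix_power(T, d - 1)
    return float(sum(pi @ T_sym[x] @ Td @ T_sym[perm[x]] @ one for x in ABC))

same = {x: x for x in ABC}
Qs = {d: Q(same, d) for d in range(1, Nc)}
Qc = {d: Q(cyc, d) for d in range(1, Nc)}
Qa = {d: Q(anti, d) for d in range(1, Nc)}

def intensity(l, hk=1):
    """Equation (Qfinal) with coefficients (fcoef) for h - k = hk (mod 3)."""
    phase = 2 * np.pi * hk / 3
    I = 1.0
    for d in range(1, Nc):
        w_d = 1 - d / Nc
        A = w_d * (Qs[d] + (Qc[d] + Qa[d]) * np.cos(phase))
        B = w_d * (Qc[d] - Qa[d]) * np.sin(phase)
        I += 2 * (A * np.cos(2 * np.pi * d * l) + B * np.sin(2 * np.pi * d * l))
    return I
```

Two sanity checks hold by construction: the average of ${\cal I}$ over a full period in $l$ equals one (the $\Delta\geq 1$ harmonics average out), and the function is non-negative, as expected for a windowed intensity.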
With increasing faulting probability $\gamma$, the peak asymmetrically broadens, lowers its intensity and shifts (Figure \[fig:peakshift\]). For $h-k=1\;(mod \, 3)$ the peak originally at $l=3n+1$ ($n\in \mathbb{Z}$) shifts towards lower $l$ values, while the opposite occurs for $h-k=2\;(mod \, 3)$, where the peak originally at $l=3n-1$ shifts towards higher $l$ values. Additionally, at high faulting probability an additional peak appears near the so-called twin position. For $h-k=1\;(mod \, 3)$ ($h-k=2\;(mod \, 3)$) the twin position is at $l=3n-1$ ($l=3n+1$); the additional peak appears at a lower (larger) value of $l$ and gradually shifts towards the twin peak position as $\gamma$ increases, while strengthening its intensity and decreasing its broadening. The behaviors of the original peak and the twin one are not symmetrical; that is, they do not behave the same for $\gamma$ and $1-\gamma$, respectively. The non-symmetric behavior of the peaks can be explained by the non-symmetrical character of the HMM describing extrinsic faulting (Figure \[fig:fsaext\]). A similar profile for a single crystal, in the particular case of $\gamma=1/2$, has been reported by [@varn13]. Observing the interference function for $\gamma=0.333$ (Figure \[fig:peakshift\]), it is all too common in the literature for peak deformations with a geometry such as this to be fitted with models involving more than one phase. The fact that such distortions can be the result of a single type of faulting, one that does not lead to any polytype, should be taken as a warning against introducing new structures too easily in profile fitting. In Figure \[fig:asym\] the peak shift and asymmetry are shown as functions of the faulting probability. Asymmetry has been defined as the ratio between the half width at half maximum (HWHM) of the right side ($W_r$) and that of the left side ($W_l$); by construction, the asymmetry equals $1$ for a perfectly symmetric peak. 
For powder diffraction it must be considered that the components of a family of planes like $\{111\}$ (where all members of the family are crystallographically equivalent for the unfaulted crystal and share the same interplanar distance) are no longer equivalent when faulting occurs. For example, when indexed with respect to hexagonal axes, the $\{111\}$ family includes the following planes: $(0,0,3)$, $(0,0,\bar{3})$, $(\bar{1},1,1)$, $(1,0,1)$, $(0,\bar{1},1)$, $(0,1,\bar{1})$, $(1,\bar{1},\bar{1})$, $(\bar{1},0,\bar{1})$; the first two are unaffected by extrinsic faulting, the next three are of the type $h-k=1\;(mod\, 3)$, and the last three of the type $h-k=2\;(mod\, 3)$. Thus, when simulating the faulted powder diffraction profiles, each component of a plane family must be considered individually. Figure \[fig:powder\] shows the powder peak profile for $\{111\}$, where the components not affected by faulting have been left out. The reader can compare with the single crystal profiles of figure \[fig:peakshift\]. Conclusions =========== Stacking disorder can be viewed in a number of cases as a dynamical system capable of storing and processing information. From this point of view, it has been shown that extrinsic faulting in the Hägg code is a sofic system, where predictability of the future is linked to long range memory of the past for faulting probabilities within $]0,1[$. A sofic system, such as the one considered here, has no description as a finite range Markov process. This inability to describe such a simple faulting process by a finite range model is interesting, as it is common in the literature to try to model faulting by this type of finite range Markov model[^7]. In spite of this, the HMM model for extrinsic faulting is simple enough; it just belongs to a different type of processing machinery. 
This is precisely the underlying idea of computational mechanics, which attempts to find the least sophisticated model for a given process by climbing up a hierarchy of possible computational machines until such a description is found. This character has several interesting consequences. First, the excess entropy equals the statistical complexity of the system. Excess entropy is linked to the structured output of the system, while statistical complexity measures the memory stored in the system. In consequence, structure is linked to memory, a result that is not surprising once it is acknowledged that the HMM of the process is equivalent to a biased even process. In an even process, the occurrence of consecutive 0's has to be tracked completely to determine in which state the system is. As increasing faulting probability means longer runs of 0's, excess entropy grows monotonically with increasing $\gamma$. Excess entropy has a discontinuity at $\gamma=1$, where the topology of the HMM changes to a one-state system with a certain output and therefore zero $E$. Entropy density, on the other hand, is a smooth function of the faulting probability over the whole probability range, with a maximum at $\gamma \approx 0.382$, close to, but slightly below, the value at which hexagonality is maximal. Extrinsic faulting, as treated here, implies that no faulting probability changes the underlying periodic sequence: no phase transformation happens. Hexagonality reaches a maximum of $2(3-2\sqrt{2})\approx0.34314$ at $\gamma =\sqrt{2}-1\approx 0.414214$, and therefore the system is always more “cubic” than hexagonal. In the text, several useful analytical expressions have been derived for different entropic magnitudes, probabilities, lengths and correlations, all as functions of the faulting probability $\gamma$. To the knowledge of the authors, such expressions have not been reported before. 
The pairwise correlation function of the layers has been derived and from there the interference function was obtained. The correlation function is composed of two terms, each with a decaying and an oscillating part. The numerical values of the obtained expression coincide with those that can be found using previous treatments. The shift and asymmetric broadening of the reflections as a result of extrinsic faulting were also discussed. Acknowledgment ============== This work was partially financed by FAPEMIG under the project BPV-00047-13 and computational infrastructure support under project APQ-02256-12. EER wishes to thank the Université de Lorraine for a visiting professor grant. He also would like to acknowledge the financial support under the PVE/CAPES grant 1149-14-8 that allowed the visit to the UFU. RLS wants to thank the support of CNPq through the projects 309647/2012-6 and 304649/2013-9. We would like to thank the anonymous referees for the careful reading and the valuable suggestions that greatly improved the manuscript. Appendix ======== Calculation of the excess entropy --------------------------------- In order to calculate the excess entropy, the mixed state representation of the system dynamics must be deduced. Any model derived by the observer of the system output that reproduces (statistically) the output is called a presentation of the process [@upper89]. The observer can then follow the evolution of the system by updating mixed states, defined as distributions over the states of the HMM description. The reader is referred to [@upper89] and [@crutchfield13] for a detailed explanation; the latter will be closely followed here. The mixed state representation of the biased even process of Figure \[fig:fsaext\] is shown in Figure \[fig:mixedfsm\] (compare with Fig. 2 in [@crutchfield13]). 
Each state in the set $\mathcal{S}$ now has a probability distribution associated with it, $$\begin{array}{ll} S: & \delta_S= \left \{\frac{1}{1+\gamma}, \frac{\gamma}{1+\gamma} \right \}\\\\ S_2: & \delta_{S_2}=\left \{1/2, 1/2 \right \}\\\\ S_3: & \delta_{S_3}=\left \{1,0 \right \}\\\\ S_4: & \delta_{S_4}=\left \{0, 1\right \}, \end{array}$$ as well as transition probabilities $$\begin{array}{ll} P(0|S)= & 2 \frac{\gamma}{1+\gamma}\\\\ P(1|S)= & \frac{1-\gamma}{1+\gamma}\\\\ P(0|S_2)= & \frac{1+\gamma}{2}\\\\ P(1|S_2)= & \frac{1-\gamma}{2}\\\\ P(0|S_3)= & \gamma\\\\ P(1|S_3)= & 1-\gamma\\\\ P(0|S_4)= & 1\\\\ P(1|S_4)= & 0.\\\\ \end{array}$$ Observe that the emission of a $1$ implies, from any state, a transition to the state $S_3$. States $S$ and $S_2$ are transient, while the recurrent states reproduce the original HMM. The stationary probabilities over the states are given by $$\displaystyle \langle \pi_{mix}|=\left \{0,0,\frac{1}{1+\gamma}, \frac{\gamma}{1+\gamma}\right \}.$$ The state transition matrix will be $$\displaystyle W=\left ( \begin{array}{cccc} 0 & \frac{2\gamma}{1+\gamma}& \frac{1-\gamma}{1+\gamma} & 0\\ \frac{1+\gamma}{2} & 0 & \frac{1-\gamma}{2} & 0 \\ 0 & 0 & 1-\gamma & \gamma \\ 0 & 0 & 1 & 0 \end{array} \right ),$$ with eigenvalues $$\Lambda_{W}=\left \{ 1,-\gamma, -\sqrt{\gamma}, \sqrt{\gamma} \right \}.$$ The projection operator $W_{\lambda}$ for each eigenvalue is obtained using $$\displaystyle W_\lambda=\prod_{\xi\in\Lambda_W,\xi \neq \lambda}\frac{W-\xi I}{\lambda-\xi},$$ where $I$ represents the identity matrix and the restriction in the product avoids the singularity in the denominator. 
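The appendix computation can be carried out numerically. The sketch below is only an illustration: the symbol-resolved matrices $W^{(0)}$ and $W^{(1)}$ are our reading of the mixed-state transitions above (their sum reproduces $W$), and the final spectral sum implements equation (8) of [@crutchfield13] quoted at the end of this appendix. The value $\gamma=0.3$ is an arbitrary choice at which the four eigenvalues are distinct.

```python
import numpy as np
from math import log2, sqrt

g = 0.3
# Mixed-state transitions, split by emitted symbol; W0 + W1 reproduces W.
W0 = np.array([[0, 2 * g / (1 + g), 0, 0],
               [(1 + g) / 2, 0, 0, 0],
               [0, 0, 0, g],
               [0, 0, 1, 0]], float)          # transitions emitting a 0
W1 = np.array([[0, 0, (1 - g) / (1 + g), 0],
               [0, 0, (1 - g) / 2, 0],
               [0, 0, 1 - g, 0],
               [0, 0, 0, 0]], float)          # transitions emitting a 1
W = W0 + W1
eigs = [1.0, -g, -sqrt(g), sqrt(g)]

def projector(lam):
    """W_lambda via the product over the other eigenvalues."""
    P = np.eye(4)
    for xi in eigs:
        if xi != lam:
            P = P @ (W - xi * np.eye(4)) / (lam - xi)
    return P

def plog2p(p):
    return p * log2(p) if p > 0 else 0.0

# |H(W^A)>: entropy of the next emitted symbol from each mixed state.
H = np.array([-sum(plog2p(float((Wx @ np.ones(4))[i])) for Wx in (W0, W1))
              for i in range(4)])
delta_pi = np.array([1.0, 0.0, 0.0, 0.0])     # start in the state S

E = sum((delta_pi @ projector(lam) @ H) / (1 - lam)
        for lam in eigs if abs(lam) < 1)
C_mu = (log2(1 + g) - g * log2(g / (1 + g))) / (1 + g)
print(E, C_mu)    # the spectral sum should reproduce C_mu, as stated in the text
```

The projector associated with the unit eigenvalue has every row equal to $\langle\pi_{mix}|$, in agreement with the matrix $W_1$ listed below.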
The results are $$\begin{array}{l} \displaystyle W_1=\left ( \begin{array}{cccc} 0 & 0 & \frac{1}{1+\gamma}& \frac{\gamma}{1+\gamma}\\ 0 & 0 & \frac{1}{1+\gamma}& \frac{\gamma}{1+\gamma}\\ 0 & 0 & \frac{1}{1+\gamma}& \frac{\gamma}{1+\gamma}\\ 0 & 0 & \frac{1}{1+\gamma}& \frac{\gamma}{1+\gamma} \end{array} \right )\\\\ \displaystyle W_{-\gamma}=\left ( \begin{array}{cccc} 0 & 0 & 0 & 0\\ 0 & 0 & \frac{1}{2}\frac{\gamma-1}{1+\gamma}& \frac{1}{2}\frac{1-\gamma}{1+\gamma}\\ 0 & 0 & \frac{\gamma}{1+\gamma}& -\frac{\gamma}{1+\gamma}\\ 0 & 0 &- \frac{1}{1+\gamma}& \frac{1}{1+\gamma} \end{array} \right )\\\\ \displaystyle W_{-\sqrt{\gamma}}=\left ( \begin{array}{cccc} \frac{1}{2} & -\frac{\sqrt{\gamma}}{1+\gamma} & \frac{1}{2}\frac{\sqrt{\gamma}-1}{1+\gamma}& \frac{1}{2}\frac{\sqrt{\gamma}-\gamma}{1+\gamma}\\ -\frac{1}{4}\frac{1+\gamma}{\sqrt{\gamma}} & \frac{1}{2} & \frac{1}{4}\frac{1-\sqrt{\gamma}}{\sqrt{\gamma}}& \frac{1}{4}(\sqrt{\gamma}-1)\\ 0 & 0 & 0& 0\\ 0 & 0 & 0&0 \end{array} \right )\\\\ \displaystyle W_{\sqrt{\gamma}}=\left ( \begin{array}{cccc} \frac{1}{2} & \frac{\sqrt{\gamma}}{1+\gamma} & -\frac{1}{2}\frac{\sqrt{\gamma}+1}{1+\gamma}& -\frac{1}{2}\frac{\sqrt{\gamma}+\gamma}{1+\gamma}\\ \frac{1}{4}\frac{1+\gamma}{\sqrt{\gamma}} & \frac{1}{2} & -\frac{1}{4}\frac{1+\sqrt{\gamma}}{\sqrt{\gamma}}& -\frac{1}{4}(\sqrt{\gamma}+1)\\ 0 & 0 & 0& 0\\ 0 & 0 & 0&0 \end{array} \right ). \end{array}$$ Defining $$\langle \delta_\pi|=\{ \begin{array}{llll}1 & 0 & 0 & 0\end{array}\}$$ then $$| H(W^{\mathcal{A}})\rangle=-\sum_{\eta \in \mathcal{S}}|\delta_{\eta}\rangle\sum_{x\in \{0,1\}}\langle \delta_\eta|W^{(x)} |\mathbf{1}\rangle \log \langle \delta_\eta|W^{(x)} |\mathbf{1}\rangle,$$ and the excess entropy follows from $$E=\sum_{\lambda\in\Lambda_W, |\lambda|<1}\frac{1}{1-\lambda}\langle \delta_{\pi_{mix}}|W_\lambda|H(W^{\mathcal{A}})\rangle$$ which is equation (8) from [@crutchfield13]. [35]{} Arndt, C. 2001. *Information measures*. Springer Verlag. Crutchfield, J.  
& Feldman, D. P. 2003. *Chaos*, [13]{}, 25–54. Crutchfield, J. P. 1992. In *Modeling complex phenomena*, edited by L. Lam & V. Narodditsty, pp. 66–101. Springer, Berlin. Crutchfield, J. P. 2012. *Nature Physics*, [8]{}, 17–24. Crutchfield, J. P., Ellison, C. J. & Riechers, P. M. 2016. *Phys. Lett.*, [A380]{}, 998–1002. Ďurovič, S. 1997. *In Modular Aspect of Minerals, EMU Notes in Mineralogy*. Eötvös University Press, Budapest. Estevez-Rams, E., Aragon-Fernandez, B., Fuess, H. & Penton-Madrigal, A. 2003. *Phys. Rev. B*, [68]{}, 064111–064123. Estevez-Rams, E., Azanza-Ricardo, C., Martinez-Garcia, J. & Aragon-Fernandez, B. 2005. *Acta Cryst.* [A61]{}, 201–208. Estevez-Rams, E., Leoni, M., Penton-Madrigal, A. & Scardi, P. 2007. *Z. f. Kristallographie*, [suppl. 26]{}, 99–104. Estevez-Rams, E., Martinez-Garcia, J., Penton-Madrigal, A. & Lora-Serrano, R. 2001. *Phys. Rev. B*, [63]{}, 54109–54118. Estevez-Rams, E. & Martinez-Mojicar, J. 2008. *Acta Cryst.* [A64]{}, 529–536. Estevez-Rams, E., Welzel, U., Penton-Madrigal, A. & Mittemeijer, E. J. 2008. *Acta Cryst.* [A64]{}, 537–548. Frank, F. C. 1951. *Philos. Mag.* [42]{}, 1014–1021. Hägg, G. 1943. *Arkiv. Kemi. Mineralogi Geologi*, [16B]{}, 1–6. Holloway, H. & Klamkin, M. S. 1969*a*. *J. Appl. Phys.* [40]{}, 1681–1689. Holloway, H. & Klamkin, M. S. 1969*b*. *J. Appl. Phys.* [40]{}, 1681–1689. Howard, C. J. 1977. *Acta Cryst.*, [A33]{}, 29–32. Howard, C. J. 1979. *Acta Cryst.*, [A35]{}, 337–338. Johnson, C. A. 1963. *Acta Cryst.* [16]{}, 490–497. Lele, S., Anantharaman, T. R. & Johnson, C. A. 1967. *Phys. Stat. Sol.* [20]{}, 59–68. Nespolo, M., Takeda, H., Kogure, T. & Ferraris, G. 1999. *Acta Cryst.*, [A55]{}, 659–676. Pandey, D. & Krishna, P. 2004. *International Tables for Crystallography Volume C*. Kluwer Academic. Riechers, P. M., Varn, D. P. & Crutchfield, J. P. 2015. *Acta Cryst.*, [A71]{}, 423–443. Takahashi, H. 1978. *Acta Cryst.*, [A34]{}, 344–346. Upper, D. R. 1989. 
*Theory and algorithms for hidden Markov models and generalized hidden Markov models*. Thesis dissertation. The University of Rice, U.S. Varn, D. P. 2001. *Language extraction from ZnS*. Thesis dissertation. The University of Tennessee, U.S. Varn, D. P. & Canright, G. S. 2001. *Acta Cryst.*, [A57]{}, 4–19. Varn, D. P., Canright, G. S. & Crutchfield, J. P. 2002. *Phys. Rev. B*, [66]{}, 174110–174113. Varn, D. P. & Crutchfield, J. P. 2004. *Phys. Lett. A*, [324]{}, 299–317. Varn, D. P., Canright, G. S. & Crutchfield, J. P. 2007. *Acta Cryst.*, [B63]{}, 169–182. Varn, D. P., Canright, G. S. & Crutchfield, J. P. 2013. *Acta Cryst.*, [A69]{}, 413–426. Velterop, L., Delhez, R., de Keijser, T. H., Mittemeijer, E. J. & Reefman, D. 2000. *J. Appl. Cryst.* [33]{}, 296–306. Verma, A. R. & Krishna, P. 1966. *Polymorphism and Polytypism in Crystals*. Wiley, New York. Wagner, A. J. C. 1957. *Acta Metall.* [5]{}, 427. Warren, B. E. 1963. *J. Appl. Phys.* [34]{}, 1973–1975. Warren, B. E. 1969. *X-Ray Diffraction*. Addison-Wesley, New York. Wilson, A. J. C. 1942. *Proc. Roy. Soc. A*, [180]{}, 277–285. \[fig:perfseq\] \[fig:ef\] \[fig:fsaext\] \[fig:hexhe\] \[fig:hexvse\] \[fig:abcfsa\] \[fig:qp\] \[fig:mixedfsm\] \[fig:peakshift\] \[fig:asym\] \[fig:powder\] [^1]: An alternative notation by Nabarro-Frank [@frank51] uses $\bigtriangledown$ and $\bigtriangleup$ for $+$ (or 1) and $-$ (or 0), respectively. [^2]: The notation is that used by Varn et al. [@varn01; @varn02], where $c$ stands for “cyclic”, $a$ stands for “anti-cyclic” and $s$ for “same”. [^3]: The terms past and future are taken from the analysis usually carried out in dynamical systems and are kept even when the considered variable is not time, as in the case of stacking order, where the pertinent variable is the layer position in the stacking. In any case, stacking and faulting are usually cast as sequential processes [@warren63]; the HMM analysis just makes this explicit. 
One could understand the meaning of past and future in this sense. [^4]: The described dynamics implicitly assumes that an inserted layer cannot follow another inserted layer. The latter case has been approached by [@howard77]. [^5]: When comparing with the results of [@holloway69a], it must be noticed that in their notation $Q_{s}(\Delta)=P(m)$, $Q_{c}(\Delta)=Q(m)$ and $Q_{a}(\Delta)=R(m)$. [^6]: In Riechers, Varn, and Crutchfield, arXiv:1410.5028 (2014), a more elegant way to deduce the interference function directly from the HMM is derived and could lead to a more manageable expression, as has been rightly pointed out by an anonymous referee. [^7]: We thank one of the anonymous referees for her/his enlightening comment on this issue.
--- abstract: 'In this paper we define a kind of decomposition for a quantum access structure. We propose the notion of a minimal maximal quantum access structure and obtain a necessary and sufficient condition for it, which reveals the relationship between the number of minimal authorized sets and the number of players. Moreover, we investigate the construction of efficient quantum secret sharing schemes using these two techniques, a decomposition and the minimal maximal quantum access structure. A major advantage of these techniques is that they yield a method to realize a general quantum access structure. For such quantum access structures, we present two quantum secret sharing schemes, one via the idea of concatenation and one via a decomposition of the quantum access structure. As a consequence, these techniques allow us to save quantum shares and reduce cost compared with the existing scheme.' author: - 'Chen-Ming Bai' - 'Zhi-Hui Li' - 'Yong-Ming Li' title: Quantum Access Structure and Secret Sharing --- Introduction ============ Secret sharing, first introduced by Shamir${[1]}$ and Blakley${[2]}$, is an important cryptographic primitive which was later extended to the quantum field${[3-6]}$. The central aim of such a protocol is for a dealer to distribute a piece of secret information (called the secret) among a finite set of players $\mathcal{P}$ such that only qualified subsets can collaboratively recover the secret. Traditionally both the secret and the shares are classical information, whereas the secret in a quantum scheme may be either an unknown quantum state or a classical one. In the quantum scenario all players possess quantum systems, and they can utilize quantum communication techniques. Compared to classical secret sharing, quantum secret sharing (QSS) is more secure due to the application of quantum communication techniques. 
In 1999, Hillery [*et al.*]{} ${[3]}$ first proposed a protocol of QSS using GHZ states, in which an unknown qubit can be shared between two players such that, to recover the original qubit, the players have to put their pieces of quantum information together. Cleve, Gottesman and Lo ${[4]}$ presented a more general scheme. In 2004, Xiao [*et al.*]{} ${[7]}$ generalized the QSS of Hillery [*et al.*]{} to arbitrarily many parties. Since then, with the development of quantum cryptography, which is unconditionally secure in theory, QSS has attracted much attention and progressed quickly in recent years ${[8-24]}$ (for an incomplete list). The access structure of a secret sharing scheme is the family of all authorized sets. In classical secret sharing, researchers have obtained many interesting results${[25,26,27]}$, and in the quantum case there are also many nice results. For example, Cleve [*et al.*]{} ${[4]}$ proposed an efficient construction of all threshold schemes and introduced the quantum access structure. Adam Smith ${[28]}$ studied the quantum access structure in detail and used monotone span programs to design a quantum secret sharing scheme. Marin [*et al.*]{} ${[29]}$ gave a graphical characterisation of the access structure for both classical and quantum information. Gheorghiu ${[14]}$ provided a systematic way of determining the access structure. In Ref.${[5]}$ Gottesman systematically presented a variety of results on the theory of QSS and also defined a maximal quantum access structure. This access structure has some special properties; for example, the authorized and unauthorized sets are complements of each other. These properties play an important role in constructing secret sharing schemes. Moreover, the maximal quantum access structure has a very close relationship with pure state quantum secret sharing schemes, which encode pure state secrets as pure states (when all of the shares are available). 
Gottesman also showed that there is always a pure quantum secret sharing scheme realizing a maximal quantum access structure. However, in that reference, Gottesman did not discuss it in detail. In this paper we further analyze the maximal access structure and give a formal definition. Building on this analysis, we present the minimal maximal quantum access structure, in which the number of minimal authorized sets cannot be reduced while the number of participants is unchanged. We also obtain a necessary and sufficient condition for a minimal maximal quantum access structure, which reveals the relationship between the number of minimal authorized sets and the number of players. Our analysis shows that the minimal maximal access structure is more compact and easier to obtain than the maximal one. On the other hand, Gottesman combined the maximal access structure with the original access structure and proposed a quantum secret sharing protocol for a general access structure via a threshold cascade scheme. If $\mathcal{S}_1$ and $\mathcal{S}_2$ are quantum secret sharing schemes, then the scheme formed by concatenating them (expanding each share of $\mathcal{S}_1$ as the secret of $\mathcal{S}_2$) is also a secret sharing scheme. This idea is elegant and interesting. However, his scheme has two disadvantages. One is that selecting threshold schemes according to the number of minimal authorized sets is complex, which leads the scheme to require more quantum resources. As quantum data is expensive and hard to deal with, it is desirable to use as little quantum data as possible in order to share a secret. The other is that the scheme is based on the maximal quantum access structure, which is complicated: in the process of testing the real authorized sets, a great deal of extra work arises because the number of minimal authorized sets is uncertain. This reduces the efficiency of the scheme. 
In this paper we define a decomposition of the quantum access structure to solve the first problem. Among these decompositions, we can find an optimal one and use it to reduce the amount of quantum data. For the second problem, we replace the maximal quantum access structure with the minimal maximal quantum access structure. Such minimal maximal quantum access structures are easy to obtain, and each of them is itself a maximal one. By combining the optimal decomposition and the minimal maximal quantum access structure, we improve Gottesman’s scheme and present a more convenient solution than before. Based on the optimal decomposition, we also propose a quantum secret sharing scheme realizing a general access structure and compare these schemes. The paper is organized as follows. In Sec.II, we define a decomposition of a quantum access structure, explore the maximal quantum access structure and propose the minimal maximal quantum access structure. We also show some results about the minimal maximal quantum access structure. In Sec.III, we propose two schemes realizing the general access structure: one uses a decomposition of the quantum access structure, and the other is based on the optimal decomposition and the cascade method. The paper ends with the conclusion and discussion in Sec.IV. Quantum Access structure ======================== Decomposition of Quantum Access structure ----------------------------------------- The quantum access structure plays an important role in quantum secret sharing, so let us first give its definition. Let $\mathcal{P}$ be a set of players; the access structure of a secret sharing scheme is the family of authorized sets, $\Gamma\subseteq2^\mathcal{P}$. $\Gamma$ is called a quantum access structure on $\mathcal{P}$ if it satisfies \(a) If $A\subseteq B$ for some $A$ in $\Gamma$, then $B\in\Gamma$; \(b) If $A, B\in\Gamma$, then $A\cap B\neq \emptyset$. 
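Both conditions of Definition 1 can be checked mechanically once $\Gamma$ is presented by its minimal authorized sets: monotonicity (a) holds automatically for the up-closure of the minimal sets, and the no-cloning condition (b) reduces to pairwise non-empty intersections of the minimal sets. A minimal Python sketch (our own illustration; the frozenset representation is not part of the paper):

```python
from itertools import combinations

def is_quantum_access_structure(minimal_sets):
    """Condition (b) of Definition 1: any two authorized sets intersect.
    For the up-closure of the minimal authorized sets this reduces to
    pairwise non-empty intersections of the minimal sets, since any two
    authorized sets contain the intersection of the minimal sets they
    extend.  Condition (a), monotonicity, holds by construction."""
    return all(a & b for a, b in combinations(minimal_sets, 2))

# The structure Gamma used in Example 1 below, on P = {P1, ..., P5}
gamma = [frozenset(s) for s in
         ({"P1", "P2"}, {"P1", "P4", "P5"},
          {"P2", "P3", "P5"}, {"P2", "P3", "P4"})]
```

A family such as $\{P_1P_2, P_3P_4\}$ fails the test, since two disjoint authorized sets would allow cloning the secret.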
By Definition 1, it is obvious that a quantum access structure must satisfy monotonicity and the no-cloning theorem ${[30,31]}$. For convenience, in the following $\mathcal{P}$ denotes the set of participants and $\Gamma$ an access structure on $\mathcal{P}$. In classical secret sharing, many researchers have proposed decompositions of access structures ${[26,32]}$. Similarly, we present a decomposition of a quantum access structure and will later use this decomposition to realize a general access structure in Sec.III.A. Given a quantum access structure $\Gamma$ containing $r$ minimal authorized sets, a decomposition of $\Gamma$ is a set $\{\Gamma_1, \Gamma_2,\cdots,\Gamma_l\}$, where the $\Gamma_i \ (i=1,2,\cdots,l)$ satisfy the following conditions: \(a) $\Gamma_i\subseteq \Gamma$ and $\Gamma=\bigcup_{i=1}^{l}\Gamma_i$; \(b) $\Gamma_i\cap\Gamma_j =\emptyset$ for any $\Gamma_i,\Gamma_j\subseteq \Gamma\ (i\neq j)$; \(c) there exists a quantum secret sharing protocol realizing the quantum access structure $\Gamma_i \ (i=1,2,\cdots,l)$. Furthermore, if $l=r$, the decomposition is trivial; if $l< r$, the decomposition is called an $l$-decomposition. An $l$-decomposition is optimal if there does not exist an $l'$-decomposition with $l'<l$. **Remark:** A decomposition of the access structure is defined in terms of a partition of the quantum access structure $\Gamma$. Because the partition of $\Gamma$ is not unique, neither is the decomposition. Suppose that $\Gamma$ is a quantum access structure and $\{\Gamma_1, \Gamma_2,\cdots,\Gamma_l\}$ is a decomposition of $\Gamma$. 
If $\Gamma_i=\{A_{i1},A_{i2},\cdots,A_{ir_i}\}\ (i=1,2,\cdots,l)$, then $\Gamma$ is denoted by $\Gamma=\{A_{11},A_{12},\cdots,A_{1r_1},\cdots,A_{l1},A_{l2},\cdots,A_{lr_l}\}.$ Minimal Maximal Quantum Access structure ---------------------------------------- In Ref.${[5]}$, Gottesman introduced a maximal quantum access structure, in which the authorized and unauthorized sets are complements of each other. In the following we formally define the maximal quantum access structure. Let $\mathcal{P}$ be a set of players, $\Gamma$ a quantum access structure and $\mathcal{A}$ the set of all unauthorized groups. Then $\Gamma$ is said to be a maximal quantum access structure, denoted by $\Gamma_M$, if it satisfies \(a) If $A\in\Gamma$, then $\overline{A}\in\mathcal{A}$; \(b) If $B\in\mathcal{A}$, then $\overline{B}\in \Gamma$, where $\mathcal{A}=\{A\in 2^\mathcal{P}|A\notin\Gamma \ \rm{and}\ A\neq{\emptyset}\}$, $\overline{A}=\mathcal{P}\setminus A$ and $\overline{B}=\mathcal{P}\setminus B$. By Definition 3, we see that if $A\subsetneqq B$ for some minimal authorized set $B$ in $\Gamma_M$, then the complement of the set $A$ must be authorized. Next we analyze the characterization and properties of a maximal quantum access structure. First, we give the following lemma. (\[33\]) Let $\Gamma\subseteq2^\mathcal{P}$ be a general quantum access structure and $ \mathcal{A}= \mathcal{A}_{1} \cup\mathcal{A}_{2}$ the set of all unauthorized groups, where $\mathcal{A}_{1}=\{A\in\mathcal{A}\ |\ \exists\ B\in\Gamma, A\cap B=\emptyset \}$ and $\mathcal{A}_{2}=\{A\in\mathcal{A}\ |\ \forall\ B\in\Gamma, A\cap B\neq\emptyset \}$. \(i) If $A\in \mathcal{A}_{1}$, then $\overline{A}\in\Gamma$. \(ii) If $A\in \mathcal{A}_{2}$, then $\overline{A}\in\mathcal{A}_{2}\subseteq\mathcal{A}$. 
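Definition 3 can be tested by brute force over all non-empty proper subsets of $\mathcal{P}$: $\Gamma$ is maximal precisely when every such set or its complement is authorized. A hedged Python sketch (our own illustration, exponential in the number of players and intended only for small examples such as the five-player structures of Example 1 below):

```python
from itertools import combinations

def authorized(coalition, minimal_sets):
    """A coalition is authorized iff it contains a minimal authorized set."""
    return any(m <= coalition for m in minimal_sets)

def is_maximal(minimal_sets, players):
    """Definition 3: Gamma is maximal iff for every non-empty proper
    subset A of P, either A or its complement P \\ A is authorized."""
    players = frozenset(players)
    return all(
        authorized(frozenset(s), minimal_sets)
        or authorized(players - frozenset(s), minimal_sets)
        for k in range(1, len(players))
        for s in combinations(sorted(players), k))

# Five-player illustration (players abbreviated to characters "1".."5"):
players = set("12345")
gamma = [frozenset(s) for s in ["12", "145", "235", "234"]]
gamma_m = gamma + [frozenset("13")]
```

For `gamma`, the test fails because $P_1P_3$ and its complement $P_2P_4P_5$ are both unauthorized; adding $P_1P_3$ yields a maximal structure.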
Let $\Gamma$ be a quantum access structure and $ \mathcal{A}= \mathcal{A}_{1} \cup\mathcal{A}_{2}$ the set of all unauthorized sets, where $\mathcal{A}_{1}=\{A\in\mathcal{A}\ |\ \exists\ B\in\Gamma, A\cap B=\emptyset \}$ and $\mathcal{A}_{2}=\{A\in\mathcal{A}\ |\ \forall\ B\in\Gamma, A\cap B\neq\emptyset \}$. Then $\Gamma$ is a maximal quantum access structure if and only if $\mathcal{A}= \mathcal{A}_{1}$, i.e., $\mathcal{A}_{2}=\emptyset$. [ Proof.]{} Suppose that $\mathcal{A}_{2}\neq\emptyset$. By Lemma 1, we obtain that $\overline{A}\in \mathcal{A}_{2}\subseteq \mathcal{A}$ for any $A$ in $\mathcal{A}_{2}$. Thus both $A$ and $\overline{A}$ are unauthorized sets, which contradicts the maximality of the quantum access structure $\Gamma$. Therefore $\mathcal{A}_{2}=\emptyset$. For the converse, we get $\mathcal{A}= \mathcal{A}_{1}$ since $\mathcal{A}_{2}=\emptyset$. By Lemma 1, this implies that $\overline{A}\in \Gamma$ for all $A$ in $\mathcal{A}_{1}$. Since the quantum access structure satisfies the no-cloning theorem, we find that $\overline{B}\in \mathcal{A}_{1}=\mathcal{A} $ for all $B$ in $\Gamma $. According to Definition 3, $\Gamma$ must be a maximal quantum access structure.  $\blacksquare$ Let $\mathcal{P}$ be a set of players and $\Gamma$ a quantum access structure on $\mathcal{P}$. Some subsets of $\mathcal{P}$ can always be added to $\Gamma$ so that $\Gamma$ becomes a maximal quantum access structure. [[ Proof.]{} Let $\mathcal{P}$ be a set of players, $\Gamma\subseteq 2^\mathcal{P}$ a quantum access structure and $\mathcal{A}$ the set of all unauthorized groups. Suppose that $\Gamma$ can be denoted by $\Gamma=\{A_1,A_2,\cdots,A_r\}$, where each $A_i\in2^\mathcal{P}\ (i=1,2,\cdots,r)$ is a minimal authorized set. If $\Gamma$ is a maximal quantum access structure, the proposition is obviously true. If $\Gamma$ is not a maximal quantum access structure, we can construct a maximal one from it. 
Since $\Gamma$ is not maximal, we can find sets $B_1,B_2,\cdots,B_m$ in $\mathcal{A}$ whose complements $\overline{B}_1,\overline{B}_2,\cdots,\overline{B}_m$ are also in $\mathcal{A}$. For convenience, $\mathcal{S}$ denotes the set $\{B_1,B_2,\cdots,B_m,\overline{B}_1,\overline{B}_2,\cdots,\overline{B}_m\}$. Adding a set $C_{j_1}\in\mathcal{S}$ to the access structure $\Gamma$, we obtain a new quantum access structure $\Gamma'=\{A_1,A_2,\cdots,A_r,C_{j_1}\}$. We continue by adding a set $C_{j_2}\in \mathcal{S}$ to $\Gamma'$, where $C_{j_2}$ satisfies the conditions $C_{j_1}\nsubseteq C_{j_2}$ and $C_{j_2}\cap C_{j_1}\neq\emptyset$, so that we obtain another access structure $\Gamma''=\{A_1,A_2,\cdots,A_r,C_{j_1},C_{j_2}\}$. We repeat the above process until no set meeting the conditions remains. Since $2^\mathcal{P}$ is finite, we obtain a maximal quantum access structure.  $\blacksquare$]{} Theorem 3 tells us that a maximal quantum access structure can be obtained from any quantum access structure, and its proof shows how to construct one. To aid understanding, we provide an example of an access structure on a set $\mathcal{P}$ with five players. Given the set of players $\mathcal{P}=\{P_1,P_2,P_3,P_4,P_5\}$ and the quantum access structure $\Gamma=\{P_1P_2,P_1P_4P_5,P_2P_3P_5,P_2P_3P_4\}$. The access structure $\Gamma$ must satisfy monotonicity, so all sets containing an authorized set of $\Gamma$ are authorized. By the no-cloning theorem, the complements of these authorized sets are unauthorized. Apart from the above sets, the remaining sets are $P_1P_3,P_2P_4,P_2P_5,P_2P_4P_5,P_1P_3P_5,P_1P_3P_4$. If we add $P_1P_3$ to $\Gamma$, that is, $P_1P_3$ becomes an authorized set, then $P_1P_3P_5$ and $P_1P_3P_4$ are authorized and the others are unauthorized. 
So we obtain a maximal quantum access structure $$\Gamma_M=\{P_1P_2,P_1P_3,P_1P_4P_5,P_2P_3P_5,P_2P_3P_4\}.$$ If we instead add $P_2P_4P_5$, $P_1P_3P_5$ and $P_1P_3P_4$ to $\Gamma$, we obtain another maximal quantum access structure $$\begin{aligned} \Gamma_M'&=&\{P_1P_2,P_1P_3P_5,P_1P_3P_4,P_1P_4P_5,P_2P_3P_5,\\ &\quad&P_2P_3P_4,P_2P_4P_5\}.\end{aligned}$$ This example shows that different maximal quantum access structures are obtained after adding different sets to the same quantum access structure, and the numbers of minimal authorized sets contained in the resulting maximal access structures need not be equal. In Example 1, if some authorized sets in $\Gamma_M'$ are changed, for example, the two sets $P_1P_3P_4$ and $P_1P_3P_5$ are replaced with $P_1P_3$ and the set $P_2P_4P_5$ is deleted, then we obtain a new maximal quantum access structure, namely $\Gamma_M$. If we continue to change the minimal authorized sets in $\Gamma_M$, we find that the number of participants appearing in the new maximal access structure would be reduced. Based on this fact, we propose the definition of the minimal maximal quantum access structure. Let $\mathcal{P}$ be a set of players and $\Gamma_M$ a maximal quantum access structure. $\Gamma_M$ is called a minimal maximal quantum access structure on $\mathcal{P}$, denoted by $\Gamma_M^{(m)}$, if the number of minimal authorized sets in $\Gamma_M$ cannot be reduced while the number of participants is unchanged. It is easy to verify that $\Gamma_M$ in Example 1 is a minimal maximal access structure. How do we transform a given maximal access structure into a minimal maximal one? The following theorem shows the construction. Let $\Gamma_M$ be a maximal quantum access structure. Then a minimal maximal quantum access structure can be obtained by changing some authorized sets of $\Gamma_M$. [[ Proof.]{} Let $\Gamma_M$ be a maximal quantum access structure, denoted by $\Gamma_M=\{A_1,A_2,\cdots,A_r\}$, where each $A_i\ (i=1,2,\cdots,r)$ is a minimal authorized set. 
First we take some minimal authorized sets $A_{j_1},A_{j_2},\cdots,A_{j_k}\ (k<r)$ such that $\bigcap_{j\in\{j_1,j_2,\cdots,j_k\}}A_j=B_l$ contains at least 2 players. We use $B_l$ instead of the sets $A_j(\supseteq B_l)$ and delete the set $\overline{B}_l$. Hence we obtain a new maximal quantum access structure. We repeat the above process until the number of minimal authorized sets cannot be reduced, at which point we stop. At this point, we have obtained a minimal maximal quantum access structure.  $\blacksquare$]{} Given the set of players $\mathcal{P}=\{P_1,P_2,P_3,P_4,P_5,P_6\}$, the maximal quantum access structure is denoted by $$\begin{aligned} \Gamma_M&=&\{P_1P_2,P_1P_3P_4,P_1P_3P_5,P_1P_3P_6,\\ &\quad&P_1P_4P_5,P_1P_4P_6,P_1P_5P_6,P_2P_3P_5P_6,\\ &\quad&P_2P_4P_5P_6,P_2P_3P_4P_5,P_2P_3P_4P_6\}.\end{aligned}$$ Without loss of generality, we may take $P_1P_3P_4, P_1P_3P_5$ and $P_1P_3P_6$. Since $(P_1P_3P_4)\cap(P_1P_3P_5)\cap(P_1P_3P_6)=P_1P_3$, we can replace $P_1P_3P_4$, $P_1P_3P_5$ and $P_1P_3P_6$ with $P_1P_3$ and delete the set $P_2P_4P_5P_6$. Then we obtain the new access structure $$\begin{aligned} \Gamma'&=&\{P_1P_2,P_1P_3,P_1P_4P_5,P_1P_4P_6,P_1P_5P_6,\\ &\quad&P_2P_3P_4P_5,P_2P_3P_4P_6,P_2P_3P_5P_6\}.\end{aligned}$$ By the same method, we can continue to replace $P_1P_4P_5$ and $P_1P_4P_6$ with $P_1P_4$ and delete the set $P_2P_3P_5P_6$. Hence we obtain the new maximal quantum access structure $$\Gamma''=\{P_1P_2,P_1P_3,P_1P_4,P_1P_5P_6,P_2P_3P_4P_5,P_2P_3P_4P_6\}.$$ If we continue to change the minimal authorized sets, some participants will no longer appear in the new authorized sets. Hence $\Gamma''$ is a minimal maximal quantum access structure. This example shows the relationship between the number of minimal authorized sets and the number of participants. Thus we present a necessary and sufficient condition for the minimal maximal quantum access structure. In order to prove the condition, we first need the following lemma. 
Let $\mathcal{P'}=\{P_1,\cdots,P_{n-1}\}$ be a set of players and $\Gamma_M=\{A_1,A_2,\cdots,A_r\}$ a maximal quantum access structure on $\mathcal{P'}$, where each $A_i\ (i=1,2,\cdots,r)$ is a minimal authorized set. If a player $P_n$ is added to some $A_i$ in $\Gamma_M$, then the new quantum access structure is not maximal. [ Proof.]{} Suppose that $\mathcal{P}=\mathcal{P'}\cup\{P_n\}$; then the new quantum access structure on $\mathcal{P}$ can be denoted by $\Gamma'=\{A_1,\cdots,A_{i-1},A_i\cup\{P_n\},A_{i+1},\cdots,A_r\}$, where $A_i\in\Gamma_M (i=1,2,\cdots,r)$. Since $A_i\cup\{P_n\}$ is a minimal authorized set of $\Gamma'$ and $A_i\subsetneqq A_i\cup\{P_n\}$, we know that $A_i$ is an unauthorized set. Next we need to prove that $\mathcal{P}\setminus A_i$ is an unauthorized set, that is, $A_i\cup\{P_n\}\nsubseteq \mathcal{P}\setminus A_i$ and $A_j\nsubseteq\mathcal{P}\setminus A_i\ (j\neq i)$, where $\mathcal{P}\setminus A_i=\overline{A}_i\cup\{P_n\}$. If $A_i\cup\{P_n\}\subseteq \overline{A}_i\cup\{P_n\}$, then $A_i\subseteq \overline{A}_i$, which is obviously a contradiction. If $A_j\subseteq \overline{A}_i\cup\{P_n\}$, then $A_j\subseteq \overline{A}_i$, i.e., $A_j\cap A_i=\emptyset$. This contradicts the fact that $A_j\cap A_i\neq\emptyset$ for any $A_j, A_i$ in $\Gamma_M$. Hence $\mathcal{P}\setminus A_i$ is also an unauthorized set. Both $A_i$ and $\mathcal{P}\setminus A_i$ are unauthorized, so $\Gamma'$ is not a maximal quantum access structure on $\mathcal{P}$.  $\blacksquare$ Let $\mathcal{P}$ be a set with $n$ players and $\Gamma_M$ a maximal quantum access structure containing $r$ minimal authorized sets. Then $\Gamma_M$ is a minimal maximal quantum access structure if and only if $r=n$. [ Proof.]{} ($\Leftarrow$) Since the maximal quantum access structure $\Gamma_M$ contains $r$ minimal authorized sets and $r=n$, we can denote $\Gamma_M=\{A_1,A_2,\cdots,A_n\}$. 
If some minimal authorized sets of $\Gamma_M$ are changed, then we obtain a new quantum access structure $\Gamma'=\{B_1,B_2,\cdots,B_m\}\ (m<n)$. If $\Gamma'$ is not a maximal quantum access structure, then the theorem is true. If $\Gamma'$ is a maximal quantum access structure, then we can find a player $P_{i_0} \notin B_j$ for each $B_j$ in $\Gamma'$; otherwise there would exist a set $B_{j'}$ such that $P_{i_0} \in B_{j'}$, and by Lemma 5, $\Gamma'$ would not be a maximal quantum access structure, a contradiction. Hence $\Gamma_M$ is a minimal maximal quantum access structure. ($\Rightarrow$) For $\mathcal{P}=\{P_1,P_2,P_3\}$, the minimal maximal quantum access structure on $\mathcal{P}$ can be denoted by $\Gamma_M^{(m)}=\{P_1P_2,P_1P_3,P_2P_3\}$, and the conclusion obviously holds. For $\mathcal{P}=\{P_1,P_2,P_3,P_4\}$, the minimal maximal quantum access structure on $\mathcal{P}$ can be denoted by $\Gamma_M^{(m)}=\{P_1P_2,P_1P_3,P_1P_4,P_2P_3P_4\}$, and again the conclusion holds. For $\mathcal{P}=\{P_1,P_2,P_3,P_4,P_5\}$, all minimal maximal quantum access structures on $\mathcal{P}$ can be denoted by $$\begin{aligned} \Gamma_{M_1}^{(m)}&=&\{P_1P_2,P_1P_3,P_1P_4,P_1P_5,P_2P_3P_4P_5\} \\ \Gamma_{M_2}^{(m)}&=&\{P_1P_2,P_1P_3,P_1P_4P_5,P_2P_3P_4,P_2P_3P_5\}\end{aligned}$$ Obviously, the number of minimal authorized sets in each minimal maximal quantum access structure equals the number of players, so the conclusion is true. When there are $n-1$ players, i.e., $\mathcal{P'}=\{P_1,P_2,\cdots,P_{n-1}\}$, the minimal maximal quantum access structure on $\mathcal{P'}$ can be denoted by $\Gamma_M=\{A_1,A_2,\cdots,A_r\}$, where each $A_i \ (i=1,2,\cdots,r)$ is a minimal authorized set. We assume that the conclusion is true in this case, that is, $r=n-1$. Next we need to prove that the conclusion is also true when there are $n$ players. 
Suppose that $\mathcal{P}=\mathcal{P'}\cup\{P_n\}$. We add the player $P_n$ to $A_i$ in $\Gamma_M$, where $A_i$ satisfies that for each $B\subsetneqq\overline{A}_i=\mathcal{P'}\setminus A_i$ there exists $A_j\ (j\neq i)$ in $\Gamma_M$ such that $B\cap A_j=\emptyset$. Then we obtain a new quantum access structure $\Gamma'$, denoted by $\Gamma'=\{A_1,\cdots,A_{i-1},A_i\cup\{P_n\},A_{i+1},\cdots,A_r\}$. By Lemma 5, we know that $\Gamma'$ is not maximal. From the proof of Lemma 5 we find the unauthorized sets $A_i$ and $\mathcal{P}\setminus A_i=\overline{A}_i\cup\{P_n\}$. Adding $\overline{A}_i\cup\{P_n\}$ to $\Gamma'$, we obtain the quantum access structure $$\Gamma''=\{A_1,\cdots,A_{i-1},A_i\cup\{P_n\},A_{i+1},\cdots,A_r,\overline{A}_i\cup\{P_n\}\}$$ It is easy to verify that $\Gamma''$ is a maximal quantum access structure. Without loss of generality, in the following we take $A_i\cup\{P_n\}\in \Gamma''$ as an example; the others can be analyzed by the same method. Case 1: Since $A_i\subsetneqq A_i\cup\{P_n\}$, the set $A_i$ is unauthorized. The complement of $A_i$ is $\mathcal{P}\setminus A_i=\overline{A}_i\cup\{P_n\}$, which is an authorized set, so this case holds. Case 2: If $B\subsetneqq A_i$, then $B\cup\{P_n\}\subsetneqq A_i\cup\{P_n\}$, so $B\cup\{P_n\}$ is an unauthorized set. The complement of $B\cup\{P_n\}$ is $\mathcal{P'}\setminus B$. Hence there exists $A_j\in\Gamma_M \ (j\neq i)$ such that $A_j\subseteq\mathcal{P'}\setminus B$, that is, $\mathcal{P'}\setminus B$ is authorized. Otherwise, if $A_j\nsubseteq\mathcal{P'}\setminus B$ for every $A_j$ in $\Gamma_M$, then $A_1\cap A_2\cap\cdots\cap A_r\neq\emptyset$, i.e., there exists $P_l$ such that $P_l\in A_1\cap A_2\cap\cdots\cap A_r$. Then we obtain that $\{P_l\}$ and $\{P_1\cdots P_{l-1}P_{l+1}\cdots P_{n}\}$ are both unauthorized sets, which contradicts the maximality of $\Gamma_M$. By the induction hypothesis, $r= n-1$. 
Therefore, $r+1= n-1+1=n$, so the proposition is true for $n$ players. This completes the proof.  $\blacksquare$ If $\mathcal{P}$ is a set with $n$ players and $\Gamma_M$ is a maximal quantum access structure containing $r$ minimal authorized sets, then $r\geq n$. From the proof of Theorem 6, we have obtained a construction method for the minimal maximal quantum access structure. Compared to the maximal quantum access structure, the minimal maximal quantum access structure is more concise and easier to construct. We take the access structure with five participants as an example. Suppose that $\mathcal{P}=\{P_1,P_2,P_3,P_4,P_5\}$ is a set of players; all minimal maximal quantum access structures on $\mathcal{P}$ can be denoted by $$\begin{aligned} \Gamma_{M_1}^{(m)}&=&\{P_1P_2,P_1P_3,P_1P_4,P_1P_5,P_2P_3P_4P_5\} \\ \Gamma_{M_2}^{(m)}&=&\{P_1P_2,P_1P_3,P_2P_3P_4,P_1P_4P_5,P_2P_3P_5\}\end{aligned}$$ If we want to revoke a player $P_5$ for some reason, then we only need to change two authorized sets in $\Gamma_{M_1}^{(m)}$ or $\Gamma_{M_2}^{(m)}$, after which we can reconstruct the new minimal maximal access structure. If we want to add a player $P_6$, then we likewise only need to change and add two authorized sets in $\Gamma_{M_1}^{(m)}$ or $\Gamma_{M_2}^{(m)}$. ![(a) The minimal maximal quantum access structure $\Gamma_{M_2}^{(m)}$; (b) deleting a player $P_5$ from $\Gamma_{M_2}^{(m)}$; (c) adding a player $P_6$ to $\Gamma_{M_2}^{(m)}$, where a red circle marks the authorized set to be changed.](1){width="7cm"} FIG.1 shows the minimal maximal quantum access structure $\Gamma_{M_2}^{(m)}$ after adding or removing a participant, and we can see that adding or deleting a participant has only a minor effect on the minimal authorized sets of the minimal maximal access structure. Therefore, it is relatively easy to handle changes of quantum shares, which helps guarantee the security of the secret sharing. 
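The constructions of this section can be collected into a short executable sketch: the greedy completion from the proof of Theorem 3, one reduction step from the proof of Theorem 4, and the $r=n$ criterion of Theorem 6. The Python code below is our own illustration (brute force over subsets, suitable only for small player sets such as those of Examples 1 and 2):

```python
from itertools import combinations

def authorized(coalition, minimal_sets):
    """A coalition is authorized iff it contains a minimal authorized set."""
    return any(m <= coalition for m in minimal_sets)

def is_maximal(minimal_sets, players):
    """Definition 3: every non-empty set or its complement is authorized."""
    players = frozenset(players)
    return all(
        authorized(frozenset(s), minimal_sets)
        or authorized(players - frozenset(s), minimal_sets)
        for k in range(1, len(players))
        for s in combinations(sorted(players), k))

def make_maximal(minimal_sets, players):
    """Greedy completion from the proof of Theorem 3: while some set A
    and its complement are both unauthorized, declare a smallest such A
    authorized.  Adding such an A never violates no-cloning, since a
    set whose complement is unauthorized must meet every authorized set.
    Returns the minimal authorized sets of the resulting structure."""
    players = frozenset(players)
    gamma = [frozenset(m) for m in minimal_sets]
    while True:
        bad = [frozenset(s)
               for k in range(1, len(players))
               for s in combinations(sorted(players), k)
               if not authorized(frozenset(s), gamma)
               and not authorized(players - frozenset(s), gamma)]
        if not bad:
            return [g for g in gamma if not any(h < g for h in gamma)]
        gamma.append(min(bad, key=len))

def reduce_step(minimal_sets, players, group):
    """One step of the Theorem 4 reduction: B is the intersection of the
    chosen group of minimal authorized sets (|B| >= 2); every minimal
    set containing B is replaced by B, and the complement of B is
    deleted.  The choice of group is left to the caller."""
    b = frozenset.intersection(*[frozenset(g) for g in group])
    assert len(b) >= 2, "the common intersection must contain >= 2 players"
    comp = frozenset(players) - b
    kept = {frozenset(s) for s in minimal_sets if not b <= frozenset(s)}
    return (kept - {comp}) | {b}

# Example 1 (players abbreviated to characters): completing Gamma.
p5 = set("12345")
gamma = [frozenset(s) for s in ["12", "145", "235", "234"]]
gamma_m = make_maximal(gamma, p5)

# Example 2: two reduction steps on the 6-player maximal structure.
p6 = set("123456")
gm6 = [frozenset(s) for s in
       ["12", "134", "135", "136", "145", "146", "156",
        "2356", "2456", "2345", "2346"]]
g1 = reduce_step(gm6, p6, [set("134"), set("135"), set("136")])
g2 = reduce_step(g1, p6, [set("145"), set("146")])
```

Starting from $\Gamma$ of Example 1, the completion adds $P_1P_3$ and returns $\Gamma_M$ with $r=n=5$ minimal authorized sets, hence a minimal maximal structure by Theorem 6; the two reduction steps on Example 2 reproduce $\Gamma'$ and $\Gamma''$, and the final structure has $r=n=6$.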
Construction of a General Access Structure ========================================== Two Schemes ----------- In this part, we propose two schemes for a general access structure. One is based on a decomposition of the quantum access structure, and the other combines the decomposition of the access structure with the minimal maximal quantum access structure. Scheme I. In classical secret sharing, there is a perfect secret sharing scheme for a general access structure based on a decomposition of the access structure. In Sec.II.A, we introduced the decomposition of a quantum access structure. Hence we can also propose a quantum secret sharing scheme realizing a general access structure by using the optimal decomposition. Suppose $\mathcal{P}=\{P_1,P_2,\cdots,P_{n}\}$ is a set of players and $\Gamma$ is a quantum access structure on $\mathcal{P}$. We can find an optimal decomposition $\{\Gamma_1, \Gamma_2,\cdots,\Gamma_l\}$, where each $\Gamma_i$ can be realized by a quantum secret sharing protocol. Therefore, we can put the particles held by $P_i$ into the register $R_i$ and then distribute the register $R_i$ to the participant $P_i$ (FIG.2). Note that each participant has a register; the corresponding particles in different registers are entangled, while the particles in the same register are independent of each other. Any attack will destroy the entanglement between the particles, so that the secret cannot be restored. For different authorized sets, participants can choose different particles and cooperate with others to restore the original secret. ![Distribution of particles in Scheme I](2 "fig:"){width="4cm"}\ Scheme II. In the following we give a secret sharing scheme realizing a general quantum access structure via the idea of the concatenation scheme. Moreover, we also make use of the minimal maximal access structure and the decomposition. 
Preparatory phase: Given a quantum access structure $\Gamma=\{A_1,A_2,\cdots,A_r\}$, where each $A_i\in2^\mathcal{P}\ (i=1,2,\cdots,r)$ is a minimal authorized set. By Theorem 3 and Theorem 6, we can obtain a minimal maximal quantum access structure $\Gamma^{(m)}_M$ from $\Gamma$. By the fact noted above, for any maximal quantum access structure there exists a pure quantum secret sharing scheme realizing it. By Definition 2, there is a decomposition of the access structure $\Gamma$. Without loss of generality, we take a decomposition $\Gamma=\Gamma_1\cup\Gamma_2\cup\cdots\cup\Gamma_l\ (l\leq r)$, where each $\Gamma_i \ (i=1,2,\cdots,l)$ can be realized by a quantum secret sharing protocol. If $l=r$, this decomposition is trivial; Gottesman used this trivial decomposition to design his protocol. If there does not exist an $l'$-decomposition with $l'<l$, this decomposition is optimal. In this paper we utilize the optimal decomposition to construct the QSS in order to save resources. Distribution phase: From the optimal decomposition of $\Gamma$, we get $l$ sub-access structures. According to them, we take the $((l,2l-1))$ quantum threshold scheme realized in Ref.${[4]}$. \(i) Distribute $l$ shares of the $((l,2l-1))$ quantum threshold scheme to $\Gamma_1,\Gamma_2,\cdots,\Gamma_l$, respectively. Without loss of generality, share $i$, for $i=1,2,\cdots,l$, is mapped to $\Gamma_i$ as its secret. Each $\Gamma_i$ can be realized by a quantum secret sharing scheme. \(ii) Distribute the remaining $l-1$ shares of the $((l,2l-1))$ scheme to a minimal maximal quantum access structure $\Gamma^{(m)}_M$. By fact (i) above, there exists a pure state scheme realizing $\Gamma^{(m)}_M$. Reconstruction phase: We analyze the access structure of this concatenation scheme and verify the real authorized sets. Only the players in the real authorized sets can cooperatively obtain the original information. 
\(i) Suppose that a set $A\in\Gamma$ contains a certain $A_i$, i.e., $A_i\subseteq A$. The set $A$ is also an authorized set of $\Gamma^{(m)}_M$ since $\Gamma\subseteq\Gamma_M^{(m)}$. For the authorized set $A$, we can reconstruct $l$ shares of the $((l,2l-1))$ scheme, where $l-1$ shares come from $\Gamma^{(m)}_M$ and the remaining one from $\Gamma_i$. Hence $A$ is also an authorized set of the concatenation scheme. \(ii) Suppose that there is a set $B$ such that $A_i\nsubseteq B$ for every $A_i\in\Gamma$; thus $B$ is an unauthorized set of $\Gamma$. Even if $B$ is an authorized set of $\Gamma^{(m)}_M$, we can only reconstruct the $l-1$ shares from $\Gamma^{(m)}_M$. Hence $B$ is also an unauthorized set of the concatenation scheme. From (i) and (ii) above, we see that the access structure of the concatenation scheme is exactly $\Gamma$; that is, only the sets of $\Gamma$ are authorized and can restore the original secret. Comparison ---------- In this section, we compare Scheme I and Scheme II. In addition, we compare our Scheme II with Gottesman’s construction by example. In Scheme I, we make use of the decomposition of the quantum access structure, and it is easy to find an optimal decomposition. Hence the advantage of this scheme is that it realizes a general quantum access structure. In this scheme each participant holds many particles, but register storage capacity is limited; if each participant holds too many information shares, the register capacity may be insufficient. In addition, each participant directly holds a large amount of share information about the original secret, so if the scheme were attacked by a conspiracy of participants, the original information could easily be leaked. In Scheme II, we use the idea of the concatenation scheme and combine the minimal maximal access structure with the decomposition. 
Compared to Scheme I, the original secret in Scheme II is divided into secret shares, and each share is treated as the secret of a sub-access structure. This ensures that the participants do not have direct access to the secret shares and reduces the chance of information leakage. In this scheme, participants first cooperate to recover the secret shares and then cooperate to restore the original secret. Hence Scheme II is more secure; we illustrate it with an example. In Example 1, we gave the quantum access structure $\Gamma=\{P_1P_2,P_1P_4P_5,P_2P_3P_5,P_2P_3P_4\}$. For this access structure, we can add some sets to obtain a minimal maximal quantum access structure $\Gamma_M^{(m)}=\{P_1P_2,P_1P_3, P_1P_4P_5,P_2P_3P_5,P_2P_3P_4\}$. Moreover, we can find an optimal decomposition of $\Gamma$, denoted $\Gamma=\Gamma_1\cup\Gamma_2$, where $\Gamma_1=\{P_1P_2,P_1P_4P_5\}$ and $\Gamma_2=\{P_2P_3P_5,P_2P_3P_4\}$. In Ref.${[33]}$, there exists a generalized quantum secret sharing scheme (GQSS) realizing each $\Gamma_i\ (i=1,2)$. Hence we can use the $((2,3))$ quantum threshold scheme. The three rows below represent the shares of a $((2,3))$ scheme, so any two rows suffice to reconstruct the secret. $$\begin{aligned} ((2,3))\ \text{scheme} \left\{ \begin{array}{ll} \text{GQSS}:& \Gamma_1=\{P_1P_2,P_1P_4P_5\}\\ \text{GQSS}:& \Gamma_2=\{P_2P_3P_5,P_2P_3P_4\}\\ \Gamma_M^{(m)}& \end{array}\right. \end{aligned}$$ The first two rows are realized by GQSS schemes, while $\Gamma^{(m)}_M$ is a minimal maximal quantum access structure containing $\Gamma_1$ and $\Gamma_2$. It is easy to verify that the set $P_1P_3$ is unauthorized. In our construction we first divide the quantum access structure $\Gamma$ into two parts, $\Gamma_1$ and $\Gamma_2$; this is an optimal decomposition. According to this optimal decomposition, we adopt the $((2,3))$ quantum threshold scheme to realize the access structure $\Gamma$. 
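The reconstruction analysis above reduces to a simple membership test: a player set is authorized for the concatenation scheme exactly when it contains some minimal authorized set $A_i$ of $\Gamma$, since only then can the missing share be recovered from some $\Gamma_i$. A minimal Python sketch of this test, using the sets of Example 1 (the function name and integer player labels are ours):

```python
def is_authorized(candidate, minimal_sets):
    """A set is authorized iff it contains some minimal authorized
    set A_i: only then can it recover one share from some Gamma_i in
    addition to the l-1 shares held by the minimal maximal structure,
    reaching the threshold of the ((l, 2l-1)) scheme."""
    players = set(candidate)
    return any(set(a) <= players for a in minimal_sets)

# Minimal authorized sets of Example 1, with player P_k written as k.
gamma = [{1, 2}, {1, 4, 5}, {2, 3, 5}, {2, 3, 4}]

print(is_authorized({1, 2, 4}, gamma))  # True: contains A_1 = P_1P_2
print(is_authorized({1, 3}, gamma))     # False: P_1P_3 is unauthorized
```

This reproduces the check that $P_1P_3$, although authorized in $\Gamma_M^{(m)}$, is unauthorized for the concatenation scheme.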
In Ref.${[5]}$, Gottesman gave a trivial decomposition of $\Gamma$, so he would use a $((4,7))$ scheme to realize the same access structure (see below). His scheme is clearly more cumbersome and uses more quantum shares than ours. $$\begin{aligned} ((4,7))\ \text{scheme} \left\{ \begin{array}{ll} ((2,2)):& \{P_1P_2\}\\ ((3,3)):& \{P_1P_4P_5\}\\ ((3,3)):& \{P_2P_3P_5\}\\ ((3,3)):& \{P_2P_3P_4\}\\ \Gamma_M& \end{array}\right. \end{aligned}$$ Compared to Gottesman’s construction, we utilize a minimal maximal quantum access structure instead of a maximal one. On the one hand, each maximal quantum access structure is included in the minimal maximal one. On the other hand, because the minimal maximal quantum access structure reduces the number of minimal authorized sets, the number of tests in the process of verifying an authorized set is greatly reduced, and the efficiency of the scheme is greatly improved. In TABLE I, we give a comparison between them for five and six participants. Moreover, as the number of participants increases, constructing the maximal access structure becomes difficult, whereas the minimal maximal access structure is easily obtained by our method in Theorem 6. It is also easy to see that our scheme based on the optimal decomposition is more convenient and saves more quantum resources. Hence the optimal decomposition of a quantum access structure is also valuable for the construction of secret sharing schemes. 
\begin{tabular}{c|c|c}
 & Quantum access structure ($\Gamma_M,\Gamma^{(m)}_M$) & Verification times \\
\hline
5 participants & $\Gamma_M=\{P_1P_2P_3,P_1P_2P_4,P_1P_2P_5,P_1P_3P_4,P_1P_3P_5,$ & 10 \\
 & $P_1P_4P_5,P_2P_3P_4,P_2P_3P_5,P_2P_4P_5,P_3P_4P_5\}$ & \\
 & $\Gamma^{(m)}_M=\{P_1P_2,P_1P_3,P_1P_4P_5,P_2P_3P_4,P_2P_3P_5\}$ & 5 \\
\hline
6 participants & $\Gamma_M=\{P_1P_2,P_1P_3P_4,P_1P_3P_5,P_1P_3P_6,P_1P_4P_5,P_1P_4P_6,P_1P_5P_6,$ & 11 \\
 & $P_2P_3P_5P_6,P_2P_4P_5P_6,P_2P_3P_4P_5,P_2P_3P_4P_6\}$ & \\
 & $\Gamma^{(m)}_M=\{P_1P_2,P_1P_3,P_1P_4,P_1P_5P_6,P_2P_3P_4P_5,P_2P_3P_4P_6\}$ & 6 \\
\end{tabular}

Conclusions and Discussion
===========================

In this work we first proposed a definition of the decomposition of a quantum access structure. Second, we formally defined a maximal quantum access structure and, after analysing it, introduced the minimal maximal quantum access structure. Next, we gave a necessary and sufficient condition to determine the minimal maximal quantum access structure, together with further conclusions about it, and we discussed the relationship between the number of minimal authorized sets in a minimal maximal access structure and the number of participants. Finally, we gave an application of the decomposition of a quantum access structure and the minimal maximal access structure to secret sharing, and proposed two quantum secret sharing schemes that realize a general access structure. Our Scheme II is based on the method of concatenation and the decomposition of an access structure. Compared to the existing scheme, our Scheme II saves quantum resources and reduces the cost. In addition, for QSS many factors may change the access structure of a secret sharing scheme, such as new security requirements or attacks on the participants. Therefore, dynamic secret sharing schemes have very important research value ${[34-36]}$. 
If a participant in the system is suspected of being compromised, we can change the access structure to reduce that member’s role in the reconstruction phase, so the whole system can remain secure. Compared with ordinary secret sharing, dynamic secret sharing offers higher security and greater flexibility in applications. If a dynamic scheme uses a minimal maximal access structure, then adding or deleting a participant becomes relatively easy, so changes of quantum shares are simple to handle while the security of the secret sharing is guaranteed. It is an interesting open question how to give a specific dynamic secret sharing scheme realizing the minimal maximal quantum access structure.

ACKNOWLEDGEMENT {#acknowledgement .unnumbered}
===============

We want to express our gratitude to the anonymous referees for their valuable and constructive comments. This work was sponsored by the National Natural Science Foundation of China under Grant No.61373150 and No.61602291, and the Industrial Research and Development Project of Science and Technology of Shaanxi Province under Grant No.2013k0611.

[00]{} A. Shamir, How to Share a Secret, Commun. ACM, 22, pp.612-613, 1979. G. R. Blakley, Safeguarding cryptographic keys, Proc. of the National Computer Conference, America, pp.313-317, 1979. M. Hillery, V. Buzek, A. Berthiaume, Quantum Secret Sharing, Phys. Rev. A, 59, pp.1829-1834, 1999. R. Cleve, D. Gottesman, H.-K. Lo, How to Share a Quantum Secret, Phys. Rev. Lett., 83, pp.648-651, 1999. D. Gottesman, Theory of Quantum Secret Sharing, Phys. Rev. A, 61, 042311, 2000. A. Karlsson, M. Koashi, N. Imoto, Quantum entanglement for secret sharing and secret splitting, Phys. Rev. A, 59, pp.162-168, 1999. L. Xiao, G.L. Long, F.G. Deng, J.W. Pan, Efficient multiparty quantum-secret-sharing schemes, Phys. Rev. A, 69, 052307, 2004. F.G. Deng, X.H. Li, C.Y. 
Li, [*et al.*]{}, Multiparty quantum-state sharing of an arbitrary two-particle state with Einstein-Podolsky-Rosen pairs, Phys. Rev. A, 72, 044301, 2005. G. Gordon, G. Rigolin, Generalized quantum-state sharing, Phys. Rev. A, 73, 062316, 2006. V. Gheorghiu, B.C. Sanders, Accessing quantum secrets via local operations and classical communication, Phys. Rev. A, 88, 022340, 2013. H.W. Qin, X.H. Zhu, Y.W. Dai, $(t, n)$ Threshold quantum secret sharing using the phase shift operation, Quantum Inf. Process, 14, pp.2997-3004, 2015. P. Sarvepalli, R. Raussendorf, Matroids and quantum-secret-sharing schemes, Phys. Rev. A, 81, 052333, 2010. R. Rahaman, M.G. Parker, Quantum scheme for secret sharing based on local distinguishability, Phys. Rev. A, 91, 022330, 2015. V. Gheorghiu, Generalized semiquantum secret-sharing schemes, Phys. Rev. A, 85, 052309, 2012. A. Tavakoli, I. Herbauts, M. Zukowski, M. Bourennane, Secret sharing with a single d-level quantum system, Phys. Rev. A, 92, 030302(R), 2015. F.G. Deng, H.Y. Zhou, G.L. Long, Circular quantum secret sharing, J. Phys. A: Math. Gen, 39, pp.14089-14099, 2006. J. Bogdanski, N. Rafiei, M. Bourennane, Experimental quantum secret sharing using telecommunication fiber, Phys. Rev. A, 78, 062307, 2008. C. Schmid, [*et al.*]{}, Experimental single qubit quantum secret sharing, Phys. Rev. Lett., 95, 230505, 2005. I.C. Yu, F.L. Lin, C.Y. Huang, Quantum secret sharing with multilevel mutually (un)biased bases, Phys. Rev. A, 78, 012344, 2008. A. Maitra, S.J. De, G. Paul, A.K. Pal, Proposal for quantum rational secret sharing, Phys. Rev. A, 92, 022305, 2015. H.W. Qin, Y.W. Dai, d-Dimensional quantum state sharing with adversary structure, Quantum Inf. Process, 15, pp.1689-1701, 2016. L.Y. Hsu, C.M. Li, Quantum secret sharing using product states, Phys. Rev. A, 71, 022321, 2005. V. Karimipour, M. Asoudeh, Quantum secret sharing and random hopping: Using single states instead of entanglement, Phys. Rev. A, 92, 030301(R), 2015. G.L. Long, X.S. 
Liu, Theoretically efficient high-capacity quantum-key-distribution scheme, Phys. Rev. A, 65, 032302, 2002. C. Blundo, A.D. Santis, D.R. Stinson, U. Vaccaro, Graph decompositions and secret sharing schemes, Journal of Cryptology, 8(1): 39-64, 1995. W.A. Jackson, K.M. Martin, Perfect Secret Sharing Schemes on Five Participants, Des. Codes and Cryptogr., 9, pp.267-286, 1996. J. Martí-Farré, C. Padró, Secret Sharing Schemes with Three or Four Minimal Qualified Subsets, Des. Codes and Cryptogr., 34, pp.17-34, 2005. A. Smith, Quantum secret sharing for general access structures, arXiv preprint quant-ph/0001087, 2000. A. Marin, D. Markham, S. Perdrix, Access Structure in Graphs in High Dimension and Application to Secret Sharing, In TQC 2013-8th Conference on the Theory of Quantum Computation, Communication and Cryptography, 22, pp.308-324, 2013. W.K. Wootters, W.H. Zurek, A single quantum cannot be cloned, Nature, 299, pp.802-803, 1982. D. Dieks, Communication by EPR devices, Physics Letters A, 92, pp.271-272, 1982. G.D. Crescenzo, C. Galdi, Hypergraph decomposition and secret sharing, Discrete Applied Mathematics, 157(5): 928-946, 2009. C.M. Bai, Z.H. Li, [*et al.*]{}, A Generalized Information Theoretical Model for Quantum Secret Sharing, Int J Theor Phys, 55, pp.4972-4986, 2016. Y. G. Yang, Y. Wang, H.P. Chai, [*et al.*]{}, Member expansion in quantum $(t, n)$ threshold secret sharing schemes, Optics Communications, 284(13): 3479-3482, 2011. J.L. Hsu, S.K. Chong, T. Hwang, [*et al.*]{}, Dynamic quantum secret sharing, Quantum Inf. Process, 12(1): 331-344, 2013. H.Y. Jia, Q.Y. Wen, F. Gao, [*et al.*]{}, Dynamic quantum secret sharing, Physics Letters A, 376(10): 1035-1041, 2012.
--- abstract: 'We present a full Bayesian algorithm designed to perform automated searches of the parameter space of caustic-crossing binary-lens microlensing events. This builds on previous work implementing priors derived from Galactic models and geometrical considerations. The geometrical structure of the priors divides the parameter space into well-defined boxes that we explore with multiple Monte Carlo Markov Chains. We outline our Bayesian framework and test our automated search scheme using two data sets: a synthetic lightcurve, and the observations of OGLE-2007-BLG-472 that we analysed in previous work. For the synthetic data, we recover the input parameters. For OGLE-2007-BLG-472 we find that while $\chi^2$ is minimised for a planetary mass-ratio model with extremely long timescale, the introduction of priors and minimisation of BIC, rather than $\chi^2$, favours a more plausible lens model, a binary star with components of $0.78$ and $0.11~\msun$ at a distance of $6.3$ kpc, compared to our previous result of $1.50$ and $0.12~\msun$ at a distance of 1 kpc.' author: - | N. Kains$^{1}$ [^1], P. Browne$^{2}$, K. Horne$^{2}$, M. Hundertmark$^{2}$, A. Cassan$^{3}$\ \ $^{1}$European Southern Observatory, Karl-Schwarzschild Straße 2, 85748 Garching bei München, Germany\ $^{2}$SUPA, School of Physics and Astronomy, University of St. Andrews, North Haugh, St Andrews, KY16 9SS, United Kingdom\ $^{3}$Institut d’Astrophysique de Paris, UMR7095 CNRS-Université Pierre & Marie Curie, 98 bis boulevard Arago, 75014 Paris, France bibliography: - '../thesisbib.bib' date: 'Accepted ... Received ... ; in original form ...' title: 'A Bayesian algorithm for model selection applied to caustic-crossing binary-lens microlensing events' ---

\[firstpage\]

gravitational lensing, extrasolar planets, modelling, bayesian methods

Introduction {#sec:intro}
============

Gravitational microlensing [@einstein36] is a well-established technique to detect extrasolar planets (e.g. 
@maopaczynski91, @beaulieu06, @muraki11), and is complementary to other methods, being able to probe low-mass cool planets that are inaccessible to them from the ground. This allows us to carry out statistical studies of planets of all masses located at a few AU from their host star [@cassan12]. Microlensing occurs when one or several compact objects are located between a source star and the observer, leading to a gravitational deflection of the light from the source star by the “lens" objects. As the source and lens move in and out of alignment, this deflection is observable in the form of a simple characteristic brightening and fading pattern when the lensing object is a single star (@paczynski86), but takes a much more complex form when the lens is made up of more than one object. When that happens, the lightcurve typically features “anomalies", which can be modelled to determine the nature of the lensing system. One of the configurations that can lead to anomalies is when the lensing system contains one or more planets. In order to determine the properties of these planets, the anomalies must be analysed through detailed modelling; this paper is concerned with cases where the lens consists of two components. Analysing anomalous microlensing lightcurves can be a significant computational challenge for a number of reasons. The calculation of a full binary-lens lightcurve, including the effects of having an extended source, is an expensive process computationally, and the parameter space to be explored is complex, with several degeneracies (e.g. @kubas05). This is the case even when second-order effects, such as that of parallax due to the Earth’s orbit or orbital motion in the lensing system, are ignored. A significant number of the $\sim 1500$ microlensing events now being discovered by survey teams in a season exhibit anomalies due to stellar or planetary companions to the lens star. 
Many of these are caustic-crossing events in which the lightcurve exhibits rapid jumps, brightening when a new pair of images forms and fading when two images merge and disappear. [@cassan08] introduced an advantageous parameterisation for caustic-crossing events by linking two parameters, $\tin$ and $\tout$, to the caustic-crossing times and two parameters, $\sin$ and $\sout$, to the ingress and egress points where the source-lens trajectory crosses the caustic curve. These parameters make it easier to locate all possible source-lens trajectories that fit the observed caustic-crossing features. [@kains09] used the [@cassan08] parameters to analyse the observed lightcurve of the microlensing event OGLE-2007-BLG-472, which exhibits two strong caustic-crossing features separated by about 3 days. This short duration suggested that the anomaly could be due to the source crossing a small planetary caustic, motivating detailed modelling to rule out alternative binary star lens models. The lowest-$\chi^2$ model has a planetary mass ratio, but an extremely long event timescale, $\te \sim 2000$ days, much longer than the 2-200 day range typical of Galactic Bulge microlensing events. On this basis [@kains09] rejected the global $\chi^2$ minimum by placing an ad-hoc 300 day cutoff on $\te$, and suggested that a Bayesian approach including appropriate priors on all the parameters would more naturally shift the posterior probability to local $\chi^2$ minima with less extreme parameters. [@cassan09] derived analytic formulae for the prior ${{\pi}\left(\sin,\sout\right)}$ corresponding to a uniform and isotropic distribution of lens-source trajectories, which are specified by an angle $\alpha$ and impact parameter $\uz$. 
A suitable prior on $\te$ arises by using a model of microlensing in the Galaxy to determine distributions for the lens and source distances and their relative proper motion, or alternatively by using a parameterised model fitted to the observed distribution of $\te$ among all the events found in the microlensing survey. In either case a prior on $\te$ effectively penalises very long and very short events, lowering the posterior probability of the $\te\sim2000$ d global $\chi^2$ minimum found for OGLE-2007-BLG-472 and favouring local minima with more typical event timescales. Priors on other parameters can also be derived from models of stellar population synthesis such as the Besançon model [@robin03], which we use in this work. In this paper we develop further the Bayesian analysis of caustic-crossing events, exploiting intrinsic features of the ${{\pi}\left(\sin,\sout\right)}$ prior to specify and test a procedure suitable for automatic exploration of the full parameter space. We test the procedure using synthetic lightcurve data, and we re-analyse the OGLE-2007-BLG-472 data to compare the results of maximum likelihood analysis ($\chi^2$ minimisation) with the full Bayesian analysis including appropriate priors.

Binary-lens microlensing
========================

In the context of microlensing, caustics are locations in the source plane, behind the lens, where the magnification is infinite. A point-mass lens produces a point caustic directly behind the lens where formation of an Einstein Ring gives infinite magnification for a point source, or very large magnification for a finite size source. The point-lens gives a symmetric magnification pattern $A(u)$, with $u$ the projected source-lens distance in the source plane, in units of the Einstein Ring radius. 
The linear source trajectory has impact parameter $\uz$, expressed in units of the angular Einstein radius [@einstein36], $$\label{eq:thetae} \thetae = \sqrt{ \frac{ 4\,G\,M }{ c^2 } \left( \frac{ D_\mathrm{S} - D_\mathrm{L} } {D_\mathrm{S}\, D_\mathrm{L} } \right)} \ ,$$ where $M$ is the lens mass, and $\ds$ and $\dl$ are the distances to the source and the lens respectively, and a timescale $\te$, the time taken to cross $\thetae$. This produces a symmetric lightcurve with magnification $A(u(t))$ peaking at $A_0$ at time $\tz$. Thus 3 parameters, $\uz$, $\tz$, and $\te$, define the shape of a point-source point-lens lightcurve. Finite source effects alter the peak of the lightcurve when $\uz$ is of order $\rhostar=\theta_\star/\thetae$, the source star radius in Einstein radius units. 
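For reference, the point-source point-lens lightcurve implied by these definitions can be evaluated directly; the closed form $A(u)=(u^2+2)/(u\sqrt{u^2+4})$ is the standard point-lens magnification (not restated in the text above), and the parameter values in the usage line are those quoted for the synthetic event:

```python
import math

def pspl_magnification(t, t0, tE, u0):
    """Point-source point-lens magnification A(u(t)), with
    u(t) = sqrt(u0^2 + ((t - t0)/tE)^2) the projected source-lens
    separation in Einstein radii and A(u) = (u^2 + 2)/(u sqrt(u^2 + 4))
    the standard point-lens formula."""
    u = math.hypot(u0, (t - t0) / tE)
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

# u0 = 0.1 gives a peak magnification A ~ 10, as quoted in the text.
print(round(pspl_magnification(0.0, 0.0, 27.2, 0.1), 2))
```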
The alternative parameterisation formalised by [@cassan08] replaces ($\uz$, $\alpha$, $\tz$, $\te$, $\rhostar$) by equivalent parameters ($\sin$, $\sout$, $\tin$, $\tout$, $\dtcc$) that are more closely related to observable lightcurve features, and therefore better constrained by observations. Of the “standard" binary-lens microlensing parameters, two are retained: the mass ratio of the lens components $q$ ($\leq 1$), and their projected separation $d$. [@kains09] show that the alternative parameters are better suited to fitting caustic-crossing event lightcurves, finding models that are widely separated and easily missed with the standard binary-lens parameterisation. However, the global $\chi^2$ minimum found in the [@kains09] analysis of OGLE-2007-BLG-472 was a model with a “planetary" mass ratio $q\sim10^{-4}$, but with an extremely long timescale, $\te\sim2000$ days. This model was rejected through a qualitative discussion with the expectation that a Bayesian analysis would more naturally shift the best fit to a different local $\chi^2$ minimum with less exotic parameters. In this paper, we add priors on relevant parameters, attempting to remove the need for such qualitative arguments by using a badness-of-fit statistic that includes both the likelihood and additional terms originating from prior information.

Data
====

Synthetic Lightcurve Data
-------------------------

To test our automated algorithm, we generated a data set using the parameters given in [Table \[tab:888\_trueparameters\]]{}, selected to reproduce features seen in observed anomalous microlensing lightcurves. The chosen parameters correspond to a caustic-crossing binary-lens event with crossings separated by 7 days and occurring near the lightcurve peak (Fig. \[fig:data\]). 
  Parameter     Value      Units
  ------------- ---------- -------
  $\tz$         $5503.6$   MHJD
  $\te$         $27.2$     days
  $\alpha$      $1.68$     rad
  $\uz$         $0.1$      $-$
  $\rhostar$    $0.003$    $-$
  $d$           $1.22$     $-$
  $q$           $0.08$     $-$
  $g=\fb/\fs$   $5$        $-$

  : Standard binary-lens parameters used to generate our synthetic data. \[tab:888\_trueparameters\]

For an observation at time $t_i$, when the source is magnified by a factor $A(t_i)$, the true model magnitude is $$\label{eq:mi} \mu_i = -2.5\, \mathrm{log_{10}}( \fs\, A(t_i) + \fb ) \ ,$$ where the un-magnified source flux $\fs$ was chosen to be 1/5 of the blend flux $\fb$, which represents un-magnified stars that are blended with the microlensing target. The source-lens trajectory’s impact parameter $\uz=0.1$ is small enough to reach magnification $A\sim10$ near the closest approach at time $\tz$. We obtain synthetic magnitude data $m_i$ by using a pseudo-random number generator to sample a Gaussian distribution with mean $\mu_i$ and standard deviation $\sigma_i$, given by $$\label{eq:sigma} \sigma_i = \frac{0.01}{1+\left| m_0 - \mu_i \right|} \ ,$$ where $m_0=-2.5\, \mathrm{log_{10}}(\fs+\fb)$ is the baseline magnitude, corresponding to the un-magnified source flux plus the blend flux. The fractional error bars are thus 1% at the baseline and decrease when the source is magnified. After generating the synthetic magnitude data, we re-scaled these error bars to obtain a $\chi^2$ of 1 per degree of freedom for the true model. This approximates the common practice of rescaling the nominal error bars when fitting to observed microlensing lightcurves. We employed a non-uniform cadence emulating a typical microlens observing strategy. We start with a baseline cadence of one observation per night, increasing to 3 observations per night as the event nears the peak predicted by a point-source point-lens (PSPL) fit to the earlier data. When the anomaly is detected, i.e. 
when the synthetic lightcurve data departs significantly from the PSPL fit, the cadence increases to 5 observations per night. From the resulting lightcurve, a random sample of $N$ points is selected to emulate data losses, e.g. due to bad weather or technical issues. The resulting synthetic lightcurve, retaining 199 data points, is shown in [Fig. \[fig:data\]]{}. A plot of the true model lightcurve with the parameters given in [Table \[tab:888\_trueparameters\]]{} is shown in [Fig. \[fig:888\_truelc\]]{}. ![Synthetic data used in this paper, plotted with 1-$\sigma$ error bars calculated using [Eq. (\[eq:sigma\])]{}. \[fig:data\]](fig/plotdat.ps){width="6cm"} ![*Top*: Synthetic data and true model lightcurve, generated with the parameters given in [Table \[tab:888\_trueparameters\]]{}. *Bottom*: The ($\sin, \sout$) prior map, with the location of the true model shown with a filled yellow circle. \[fig:888\_truelc\]](fig/OB08888-model0-plotblfit.ps "fig:"){width="6cm"} ![*Top*: Synthetic data and true model lightcurve, generated with the parameters given in [Table \[tab:888\_trueparameters\]]{}. *Bottom*: The ($\sin, \sout$) prior map, with the location of the true model shown with a filled yellow circle. \[fig:888\_truelc\]](fig/OB08888-model0-pmap.ps "fig:"){width="6cm"}

OGLE-2007-BLG-472
-----------------

This event was alerted during the 2007 microlensing observing season by the OGLE collaboration, and followed up from two observing sites by the PLANET collaboration. It was used as the test event by Kains et al. (2009, see that paper for full details on the data sets) to illustrate the capabilities of a modelling scheme based on the parameters defined by [@cassan08]. We use the same event here for comparison and in particular to show that the full Bayesian analysis shifts the posterior probability away from the exotic parameters found in the previous maximum likelihood analysis. 
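The synthetic-magnitude sampling of Eqs. (\[eq:mi\]) and (\[eq:sigma\]) used for the lightcurve above can be sketched as follows; the magnification values and the seed are placeholders, while the flux ratio follows the paper's choice $g=\fb/\fs=5$:

```python
import math
import random

def synth_magnitudes(A_values, fs=1.0, fb=5.0, seed=42):
    """Draw synthetic magnitudes m_i ~ N(mu_i, sigma_i), with
    mu_i = -2.5 log10(fs*A_i + fb) as in Eq. (eq:mi) and
    sigma_i = 0.01 / (1 + |m0 - mu_i|) as in Eq. (eq:sigma),
    m0 being the baseline magnitude of source plus blend."""
    rng = random.Random(seed)
    m0 = -2.5 * math.log10(fs + fb)
    data = []
    for A in A_values:
        mu = -2.5 * math.log10(fs * A + fb)
        sigma = 0.01 / (1.0 + abs(m0 - mu))
        data.append((rng.gauss(mu, sigma), sigma))
    return data

# Error bars shrink as the source brightens: sigma(A=10) < sigma(A=1).
pts = synth_magnitudes([1.0, 10.0])
print(pts[0][1] > pts[1][1])
```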
Bayesian framework
==================

Our analysis implements a Bayesian framework for fitting microlensing events involving caustic crossings. For $M$ parameters $\theta$ and data $D$, the posterior probability density in the $M$-dimensional parameter space is $$P(\theta|D) = \frac{ P(D|\theta)\, {{\pi}\left(\theta\right)} }{ \int P(D|\theta)\, {{\pi}\left(\theta\right)}\, d^M \theta} \ .$$ Here ${{\pi}\left(\theta\right)}$ is the prior on the $M$ parameters and $P(D|\theta)$ is the likelihood, e.g. for Gaussian errors with known standard deviations $\sigma_i$ the likelihood is $$L(\theta) \propto P(D|\theta) = \frac{ \exp{ \left\{ -\frac{1}{2}\chi^2(\theta,D) \right\} } }{Z_D} \ ,$$ with $$\chi^2(\theta,D)=\sum_{i=1}^N \left( \frac{ D_i-\mu_i(\theta)}{\sigma_i} \right)^2 \ ,$$ where $\mu_i(\theta)$ is the model prediction for data $D_i$, and $$Z_D = \left( 2\,\pi \right)^{N/2}\, \prod_{i=1}^N \sigma_i \ ,$$ is a measure of the $N$-dimensional volume admitted by the data. In fitting the binary lens model to microlensing lightcurve data, we project the posterior distribution in the full $M$-dimensional parameter space onto the $(d,q)$ plane, a process known as [*marginalising*]{} over the $m= M-2$ [*nuisance*]{} parameters, which we denote collectively by $\beta$: $$\label{eq:subset} \begin{array}{rl} P(d,q|D) & {\mbox{$\!\!\!$}}= \int P(d,q, \beta|D)\, d^m\beta \\ \\ & {\mbox{$\!\!\!$}}= {{\pi}\left(d,q\right)}\, \int P(\beta |D,d,q)\, d^m\beta \ , \end{array}$$ where ${{\pi}\left(d,q\right)}$ is the prior distribution on the $(d,q)$ plane. We take ${{\pi}\left(d,q\right)}$ to be uniform in log ${d}$ and log ${q}$. This choice comes from the fact that the sizes of the caustics behave like power laws of $d$ and $q$. 
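The likelihood just defined can be sketched numerically; the data values below are made up and only illustrate that $\chi^2$ and the normalisation $Z_D$ enter as stated:

```python
import math

def chi2(data, sigmas, model):
    """chi^2(theta, D) = sum_i ((D_i - mu_i(theta)) / sigma_i)^2."""
    return sum(((di - mi) / si) ** 2 for di, mi, si in zip(data, model, sigmas))

def log_likelihood(data, sigmas, model):
    """log P(D|theta) = -chi^2/2 - log Z_D, with
    Z_D = (2 pi)^(N/2) * prod_i sigma_i."""
    n = len(data)
    log_zd = 0.5 * n * math.log(2.0 * math.pi) + sum(math.log(s) for s in sigmas)
    return -0.5 * chi2(data, sigmas, model) - log_zd

# A model matching the data exactly maximises the likelihood.
d = [1.0, 2.0, 3.0]
s = [0.1, 0.1, 0.1]
print(log_likelihood(d, s, d) > log_likelihood(d, s, [1.1, 2.0, 3.0]))
```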
We then marginalise over nuisance parameters by simply averaging over the samples of our Markov Chain Monte Carlo algorithm (MCMC, see e.g. @gelmanbook for more background information on this), $$\int X(\theta)\, P(\beta|D,d,q)\, d^m\beta \approx \left< X \right> \ ,$$ where $X(\theta)$ is any function of the parameters, and we use the notation $\left< X \right>$ to refer to a simple unweighted average over the MCMC samples. The result is a map of the posterior probability distribution $P(d,q|D)$. We will find that the maximum a posteriori (MAP) estimates of $(d,q)$, which maximise $P(d,q|D)$, can be quite different from the maximum likelihood (ML) estimates, which maximise $P(D|q,d,\beta)$, or minimise $\chi^2$.

Feature-based Parameters and Structure of the Prior ${{\pi}\left(\sin,\sout\right)}$
------------------------------------------------------------------------------------

The benefit of using the [@cassan08] parameters $(\sin,\sout,\tin,\tout,\dtcc)$, rather than the standard parameters $(\uz,\alpha,\te,\tz,\rhostar)$, is two-fold. First, the caustic-crossing time parameters $\tin$, $\tout$ and $\dtcc$ can often be tightly constrained by features in the observed lightcurve. Second, the $(\sin,\sout)$ parameters bring together onto a compact square all models that have caustic crossings at those times. In contrast, with the standard $(\uz,\alpha,\te,\tz,\rhostar)$ parameters, the models with caustic crossings at times $\tin$ and $\tout$ are widely separated and difficult to locate. The [@kains09] analysis used a genetic algorithm and assumed uniform priors on the [@cassan08] parameters. This has obvious problems: for example, since the caustic folds that make up a caustic structure are concave (see e.g. upper panel inset of [Fig. \[fig:888\_truelc\]]{}), a linear source trajectory cannot enter and then exit a caustic along the same caustic fold. 
This needs to be reflected in suitable priors on the corresponding parameters, in this example, on the $(\sin,\sout)$ parameters, which determine where the source-lens trajectory crosses the caustic folds. [@cassan09] derived analytic formulae for the prior ${{\pi}\left(\sin,\sout | \tin, \tout, \dtcc\right)}$, hereafter shortened to ${\pi}(\sin, \sout)$, corresponding to a uniform isotropic distribution of source-lens trajectories, and introduced also a prior ${{\pi}\left(\te\right)}$ on the event timescale, showing how ${{\pi}\left(\te\right)}$ effectively modifies ${{\pi}\left(\sin,\sout\right)}$. The analytic prior is proportional to the Jacobian of the transformation between the standard and [@cassan08] parameters, $$\label{eq:jacobian} J = \left| \frac{\partial\left(\uz,\alpha,\tE,\tz, \rhostar \right)} {\partial\left(\sin,\sout,\tin,\tout,\dtcc \right)} \right| \ .$$ [@cassan09] evaluated this Jacobian to find the analytic form of ${{\pi}\left(\sin,\sout\right)}$ corresponding to uniform priors on all standard parameters. As can be seen in e.g. [Fig. \[fig:888\_truelc\]]{}, the prior ${{\pi}\left(\sin,\sout\right)}$ covers a compact square, since $\sin$ and $\sout$ run over the same range as we move around the closed caustic curve. The square naturally sub-divides into “sub-boxes", the boundaries of which correspond to the cusps. For a caustic with $N_c$ cusps, there are thus $N_c^2$ sub-boxes. However, the sub-boxes on the anti-diagonal of the $(\sin,\sout)$ square have $\sin$ and $\sout$ on the same caustic fold, which cannot occur due to the concave geometry of the folds. This means that the anti-diagonal sub-boxes must have zero probability. There are thus $N_c\,(N_c-1)$ sub-boxes to consider for each caustic. 
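The sub-box bookkeeping above is straightforward to mechanise: index the $N_c$ fold segments between consecutive cusps and discard the pairs where ingress and egress would lie on the same fold. A minimal sketch (the function name is ours):

```python
def admissible_subboxes(n_cusps):
    """Enumerate (ingress fold, egress fold) sub-boxes of the
    (s_in, s_out) square, discarding same-fold pairs: the concave
    fold geometry forbids entering and exiting a caustic along one
    fold, leaving N_c * (N_c - 1) admissible sub-boxes."""
    return [(i, j) for i in range(n_cusps) for j in range(n_cusps) if i != j]

# A 4-cusp caustic gives 4 * 3 = 12 sub-boxes to explore.
print(len(admissible_subboxes(4)))
```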
The event timescale prior ${{\pi}\left(\te\right)}$ can in principle be obtained by considering models of microlensing in the Galaxy, mapping the joint distribution of lens mass, lens and source distances, and relative proper motion onto the corresponding distribution of $\te$ (e.g. @dominik06). A convenient alternative is to use the observed distribution of $\te$, e.g. from the OGLE survey. Caution is needed because of possible biases in fitting $\te$ to observed lightcurves, and selection effects lowering the occurrence of short and long $\te$ events in the survey. [@cassan09] considered two different priors on $\te$ to illustrate their effect on ${{\pi}\left(\sin,\sout\right)}$. One was a distribution of event timescales observed in past microlensing seasons, and another was the model distribution of [@woodmao95]. These two distributions were shown to be in excellent agreement with each other. In this paper we derive a 2-dimensional joint prior on the event timescale and source size using simulations of synthetic stellar populations obtained with the Besançon model [@robin03]. We briefly describe our method to derive this in the next section.

### Deriving priors from a Galactic model

The initial maximum likelihood analysis of the microlensing event OGLE-2007-BLG-472 [@kains09] suggests an unusual parameter combination as the best description of the data. The plausibility of such a solution can be tested by using a Galactic model that reflects our prior knowledge of the Galactic structure. For interpreting microlensing lightcurves, different Galactic models (@hangould95b, @bennettrhie02, @dominik06) are in use. These models differ in details of the assumed spatial, kinematic, and mass distributions of the Galactic Bulge and Disk stellar populations. A different model, adapted to reflect the observed star counts in the optical and near-infrared, is the so-called Besançon model [@robin03]. 
As indicated by [@kerins09], this model can be used to predict the optical depth of gravitational microlensing events. Moreover it can be used to set detailed parameter constraints when combined with the adopted parameter estimates and the source star properties. Based on the available online catalogue simulation, we generated a sample of stars between Earth and 11 kpc. To ensure that potential lenses, which are typically faint, are included, the apparent magnitude was not constrained. Based on the value used in the previous paper [@kains09], we assumed a visual extinction $A_V=0.7$ mag kpc$^{-1}$, where the resulting model extinction curve stops increasing after several kpc. For a more accurate description, the calibrated spatial extinction in $K_S$ [@marshall06] could have been used, but this would have required a calibration for the $I$ band, which is typically used in microlensing observations. In order to infer microlensing distributions from the simulation, lens-source pairs were randomly drawn from the sample. These were then accepted or rejected depending on the area of their corresponding angular Einstein ring, which gives the instantaneous lensing probability. The simulated bolometric magnitude and effective temperature allowed us to estimate $\rho_*$. Including the simulated proper motion provided us with an estimate for the Einstein time, the only observable parameter directly connected to the lens mass. We did not include the lens-source relative proper motion in the resampling procedure, but doing so would lead to a distribution that favours shorter Einstein times, as fast lenses lead to larger detection zones on the sky. This is a consequence of moving the lensing cross-section of the instantaneous lensing probability along the lens-source relative proper motion. The correction depends on the annual survey observations and the survey efficiency in $\te$.
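The accept/reject step described above can be sketched as follows. The weight is the angular Einstein-ring area, $\theta_{\rm E}^2 \propto M_{\rm L}\,(D_{\rm S}-D_{\rm L})/(D_{\rm L} D_{\rm S})$; the numerical prefactor and the function names below are ours, for illustration only, and are not taken from the paper.

```python
import random

def einstein_radius_sq(m_lens, d_lens, d_source):
    """theta_E^2 in mas^2 for a lens mass in solar masses and distances
    in kpc.  The prefactor ~8.14 mas^2 kpc / M_sun is the standard
    4 G M_sun / (c^2 AU) value (quoted from memory; to be checked)."""
    return 8.14 * m_lens * (d_source - d_lens) / (d_lens * d_source)

def resample_pairs(pairs, n_draw, seed=0):
    """Draw lens-source pairs (m_lens, d_lens, d_source), d_lens < d_source,
    with probability proportional to their Einstein-ring area, i.e. the
    instantaneous lensing probability, via simple rejection sampling."""
    rng = random.Random(seed)
    weights = [einstein_radius_sq(*p) for p in pairs]
    w_max = max(weights)
    accepted = []
    while len(accepted) < n_draw:
        i = rng.randrange(len(pairs))
        if rng.random() < weights[i] / w_max:
            accepted.append(pairs[i])
    return accepted
```

A pair with ten times the lens mass (at the same distances) is drawn roughly ten times as often, as expected for a weight linear in the mass.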
For events much longer than the sampling rate, the increase in lensing probability can be modelled as a stripe on the sky. For a coverage of 240 days, which we assume here, the actual prior distribution of the event duration changes its expected value by an amount that is negligible in comparison to the error ellipse of the instantaneous case. Our estimates illustrate that the value found by [@kains09] for $\te$ is much larger and that of $\rho_*$ much smaller than for typical samples drawn from the Besançon model. Consequently, we determine a bivariate Gaussian prior based on the covariance matrix of $\te$ and angular source star radius of the simulated sample. This joint prior is plotted on [Fig. \[fig:priorcontour\]]{}.

![Contour plot of the logarithm of our joint prior on $\rho_*$ and $\te$. The locations of the best-fit models identified for event OGLE-2007-BLG-472 are shown with white filled circles and labelled for the different statistics we use, as discussed in the text. \[fig:priorcontour\]](fig/priorcontour_OB070472.ps){width="8cm"}

Since trajectories requiring very large values of $\te$ and/or very small $\rho_*$ are suppressed by this prior, so too are the corresponding regions of the $(\sin, \sout)$ plane. That is, for given values of $\tin$ and $\tout$, regions of $(\sin, \sout)$ where the source enters and exits the caustic structure very close to the same cusp are suppressed. This is evident on the ${{\pi}\left(\sin,\sout\right)}$ maps, e.g. the bottom panel of [Fig. \[fig:888\_truelc\]]{}, where the prior is low in the corners along the anti-diagonal line. Because the [@cassan08] parameters assume that the source trajectory crosses a caustic, and because we are comparing caustics of different sizes as we move across the $(d,q)$ plane, a full implementation of the uniform isotropic prior on source-lens trajectories must account for large caustics being easier to hit than small ones.
If two models have equal $\chi^2$ but cross caustics of different sizes, the prior should favour the model with a larger probability of being hit. As each $(\sin,\sout)$ corresponds to a different source trajectory angle $\alpha$, we quantify this by defining ${{\pi}_{\rm H}}(d,q,\alpha)$, the probability that a caustic will be “hit” by a trajectory with angle $\alpha$. This is proportional to the range of impact parameters intersecting the caustic, i.e. the projected size of the caustic perpendicular to the source trajectory. The concave structure of the caustic means that once the $N_c$ cusp positions are found, and rotated by an angle $-\alpha$, the vertical range then gives the projected cross-section of the caustic. Thus if the [@cassan09] prior ${{\pi}\left(\sin,\sout\right)}$ is normalised to 1 when integrated over the $(\sin,\sout)$ square, the full prior multiplies this by ${{\pi}_{\rm H}}(d,q,\alpha)$.

Automated modelling scheme
--------------------------

The flowchart in [Fig. \[fig:algorithm\_flow\]]{} summarises the main steps of our automated modelling scheme. In summary,

1. For each node in the $(d,q)$ grid, we construct the corresponding caustic curves.

2. For each caustic curve, we construct the ${{\pi}\left(\sin,\sout\right)}$ prior map, which divides into sub-boxes.

3. In each sub-box, we launch an MCMC run on the $\beta$ parameters to find the best fit and map out the posterior $P(\beta|D,d,q,{\rm box})$. Chains are kept confined to each sub-box by forcing MCMC steps to remain within its boundaries.

The results are then collected to construct the posterior probability $P(d,q|D)$, either by optimising the nuisance parameters or by integrating over the nuisance parameters in each sub-box, and then summing over the sub-boxes. Finally, we compute the corresponding “Badness-of-Fit” statistic BoF$(d,q)=-2\,\ln{P(d,q|D)}$.
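The projected cross-section entering ${{\pi}_{\rm H}}(d,q,\alpha)$ reduces, by the concavity argument above, to a computation over the cusp positions alone. A minimal sketch (our own illustration, with cusps represented as complex numbers in the lens plane):

```python
import cmath

def projected_cross_section(cusps, alpha):
    """Projected size of the caustic perpendicular to a source trajectory
    of angle alpha: rotate the cusp positions by -alpha and take their
    vertical range.  This suffices because the folds between cusps are
    concave, so the cusps alone set the extent; pi_H(d, q, alpha) is
    proportional to this number."""
    rotated = [c * cmath.exp(-1j * alpha) for c in cusps]
    ys = [z.imag for z in rotated]
    return max(ys) - min(ys)
```

For a toy "caustic" with cusps at $\pm1$ and $\pm i$, the cross-section is $2$ for $\alpha=0$ and $\sqrt{2}$ for $\alpha=\pi/4$, as expected for a square seen face-on versus corner-on.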
![image](fig/algoflow_hires.ps){width="11cm"} Our automated modelling scheme exploits the structure of the prior ${{\pi}\left(\sin,\sout\right)}$. For a caustic with $N_c$ cusps, the prior ${{\pi}\left(\sin,\sout\right)}$ has $N_c\,(N_c-1)$ local maxima, one in each of the sub-boxes. As the separation between these local maxima can be large, a single MCMC run, or whichever other parameter optimisation method is employed, may find it difficult to jump from one sub-box to another, and thus may fail to find the best solution. To avoid this, we launch an MCMC run in [*each*]{} of the sub-boxes. Thus we start $N_c (N_c-1)$ chains, each confined to a particular sub-box, to locate the best fit in each sub-box. We stop the chains using the convergence criterion of [@geweke92] once they are past a minimum number of iterations. Our method thus divides the binary-lens parameter space not only into a $(d,q)$ grid, but further into the required sub-boxes for each caustic for each $(d,q)$ pair. In each sub-box we start the MCMC run at the maximum of ${{\pi}\left(\sin,\sout\right)}$. Instead of using $\chi^2$ as the sole criterion for acceptance or rejection of proposed MCMC steps, the ratio of the priors is also taken into account. To incorporate non-uniform priors ${{\pi}\left(\theta\right)}$ in the MCMC algorithm, we simply modify the criterion for accepting a proposed step. Rather than only the likelihood $P(D|\theta)$, we consider the full posterior $P(\theta|D)\propto\,P(D|\theta)\,{{\pi}\left(\theta\right)}$. 
A proposal to take a random step from $\theta$ to $\theta'$ is always accepted if $\theta'$ increases the posterior, and the acceptance probability when $\theta'$ diminishes the posterior is the ratio of posterior probabilities $$\label{eq:acceptance} \frac{P(\theta'|D)}{P(\theta|D)} = \exp{\left(-\frac{1}{2}\Delta\chi^2\right)}\, \frac{{{\pi}\left(\theta'\right)}}{{{\pi}\left(\theta\right)}} \ ,$$ where $\Delta\chi^2=\chi^2(\theta')-\chi^2(\theta)$ is the difference in $\chi^2$ across the proposed step. For an MCMC run over the full parameter set $\theta$, the relevant prior is $$\label{eqn:priortheta} {{\pi}\left(\theta\right)} = {{\pi}\left(d,q\right)}\, {{\pi}\left(\beta|d,q\right)} \ .$$ For MCMC runs over the nuisance parameters $\beta$, with fixed $d,q$, the relevant prior is $$\label{eqn:priorbeta} {{\pi}\left(\beta|d,q\right)} \propto {{\pi}\left(\sin,\sout\right)}\,{{\pi}_{\rm H}}(d,q,\alpha) \ .$$

Implementation
--------------

We implemented the algorithm by using a cluster of desktop computers, each one running one of the MCMC chains to map out the posterior $P(\beta|D,d,q,{\rm box})$ for a grid of $(d,q)$ values and for all the corresponding sub-boxes. The results are then collected to construct the posterior probability map $P(d,q|D)$, integrating over the nuisance parameters $\beta$, and summing over the sub-boxes. $P(d,q|D,{\rm box})$ is evaluated from each MCMC chain using the best-fit parameters $\hat{\beta}$: $$P(d,q|D,{\rm box}) \propto P(D|d,q,{\rm box},\hat{\beta})\, {{\pi}\left(\hat{\beta}|d,q,{\rm box}\right)} \ .$$ Because each sub-box has its own MCMC chain, we must weight the chain averages by the prior probability of each sub-box: $${{\pi}\left({\rm box}|d,q\right)} = \int\!\!\!\int {{\pi}\left(\sin,\sout\right)}\, {{\pi}_{\rm H}}\, d\sin\, d\sout \ ,$$ where the integration limits cover the sub-box.
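The acceptance rule of Eq. \[eq:acceptance\], together with the sub-box confinement, can be sketched as follows. This is a minimal illustration under our own naming conventions, not the paper's actual implementation.

```python
import math
import random

def accept_step(chi2_old, chi2_new, prior_old, prior_new, rng=random.random):
    """Metropolis acceptance including the prior ratio: always accept a
    step that increases the posterior, otherwise accept with probability
    exp(-0.5 * dchi2) * prior_new / prior_old, as in Eq. [eq:acceptance]."""
    ratio = math.exp(-0.5 * (chi2_new - chi2_old)) * (prior_new / prior_old)
    return True if ratio >= 1.0 else (rng() < ratio)

def in_subbox(s_in, s_out, box):
    """Confinement of a chain to its sub-box: a proposed (s_in, s_out)
    outside the box boundaries is rejected outright."""
    (lo_in, hi_in), (lo_out, hi_out) = box
    return lo_in <= s_in <= hi_in and lo_out <= s_out <= hi_out
```

Note that a step with equal $\chi^2$ but a larger prior is always accepted, which is how the non-uniform priors steer the chains.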
The weighted sum of chain averages then gives the posterior $(d,q)$ map, $$P(d,q|D) = \displaystyle\sum\limits_{\rm box} {{\pi}\left({\rm box}|d,q\right)}\, P\left(d,q|D,{\rm box}\right) \ .$$ Normally one sub-box dominates the sum, but sometimes two or more can contribute.

Results
=======

Badness-of-Fit Criteria
-----------------------

We consider and compare results for four alternative “Badness-of-Fit” criteria, corresponding to maximum likelihood (ML), maximum a-posteriori (MAP), and the Bayesian Information Criterion (BIC), as well as a Bayes statistic that integrates the posterior probability over the nuisance parameters. In each case the best-fit parameters $(d,q)$ minimise a “Badness-of-Fit” statistic, BoF$(d,q)$, and the corresponding posterior probability is $P(d,q|D)\propto\exp{\left\{-\frac{1}{2}\,{\rm BoF}(d,q) \right\}}$. Figs \[fig:chi2map\]-\[fig:bayemap\] display the BoF$(d,q)$ maps obtained for the four cases: $$\begin{aligned} & \mathrm{\bf ML:} & \mathrm{BoF} = \chi^2(\hat{\beta})\\ & \mathrm{\bf MAP:} & \mathrm{BoF} = \chi^2(\hat{\beta}) - 2\, \ln{ {\pi}(\hat{\beta}) }\\ & \mathrm{\bf BIC:} & \mathrm{BoF} = \chi^2(\hat{\beta}) - 2\, \ln{ {\pi}(\hat{\beta}) } + \ln{ \left( N_{D} \right)}\, N_{\rm eff}\\ & \mathrm{\bf Bayes:} & \mathrm{BoF} = \chi^2 - 2\, \ln({{\pi}\left({\beta}\right)\,d^m{\beta}}) \ .\end{aligned}$$ Here the prior ${\pi}$ is from Eqn. \[eqn:priortheta\] or Eqn. \[eqn:priorbeta\], depending on the context. We briefly elaborate on the four options below before discussing the results.

- [**ML**]{}: The [*maximum likelihood*]{} (ML) parameters maximise the likelihood $L(d,q)=P(D|d,q)$, equivalent to minimising BoF$=-2\ln(L)=\chi^2$. Thus we determine the best-fit value of $\chi^2$ for each $(d,q)$ pair, and let $P(d,q|D)\propto \exp{\left(-\chi^2/2\right)}$. This approach emphasises the fit to the data while disregarding priors on the parameters.
- [**MAP**]{}: The [*maximum a-posteriori*]{} (MAP) parameters maximise the posterior probability density $P(d,q|D)$, equivalent to minimising BoF$(d,q)=-2\ln{P(d,q|D)}$.

- [**BIC**]{}: The [*Bayesian Information Criterion*]{} (BIC) applies an “Occam” penalty that gives priority to “simpler” models that employ fewer parameters to achieve their fit. Each $(d,q)$ grid point is regarded as a competing model with equal prior probability and $N_{\rm eff}$ effective parameters that have been optimised. The [*Akaike Information Criterion*]{} (AIC) uses an Occam penalty $2\,N_{\rm eff}$ [@akaike74], while the BIC uses a stronger penalty $\ln(N_D)\,N_{\rm eff}$, with $N_{D}$ the number of data points [@schwarz78]. Our tests with fitting of polynomial models suggest that the BIC may be more reliable than the AIC for model selection. We use the MCMC samples to estimate the “effective number” of nuisance parameters, $$N_{\rm eff} \approx \left<D(\theta)\right> - D(\left<\theta\right>) \ ,$$ where $\left<x\right>$ denotes the expectation value of $x$ under the posterior, simply calculated by taking an unweighted average over the MCMC samples, and the ‘deviance’ is $$D(\theta) \equiv \chi^2 - 2\,\ln{{\pi}} \ ,$$ as used to compute the acceptance probability of each step in the MCMC algorithm, as per [Eq. (\[eq:acceptance\])]{}. Here $D(\left<\theta\right>)$ estimates the deviance at the minimum, while $\left<D(\theta)\right>$ measures the typical value, which should rise by 1 for each dimension of the parameter space explored by the MCMC samples. This definition of $N_{\rm eff}$ is designed to avoid double-counting when two parameters are highly correlated, and is found in the [*Deviance Information Criterion*]{} (DIC, @spiegelhalter02, see also @ando07).
- [**Bayes**]{}: A fully Bayesian approach integrates the posterior probability density over the $m$ nuisance parameters $\beta$, rather than just finding the maximum likelihood (ML) or maximum posterior probability density (MAP). Thus if two models have the same MAP statistic, the one that achieves that good fit over a wider range of parameters has a correspondingly higher probability. We can also write out the Bayes statistic as $$\mathrm{BoF} = \chi^2(\hat{\beta}) - 2 \ln{ {\pi}( \hat{\beta} ) } - \sum_{i=1}^m \ln{(2\,\pi\,\lambda_i( \hat{\beta} )) }\, ,$$ where the $2\,\pi$ factor here refers to the constant $\pi=3.141592...$ rather than the prior ${\pi}(\beta)$, and $\lambda_i( \hat{\beta} )$ are the $m$ eigenvalues of the parameter-parameter covariance matrix evaluated at $\hat{\beta}$, their product being the $m$-dimensional parameter volume admitted by the data around the best-fit value $\hat{\beta}$. We approximate the integral over the $m$ nuisance parameters by the method of steepest descents, $$\begin{aligned} \int e^{-\chi^2(\beta)/2}\,{\pi}\left(\beta\right)\,d^m\beta \approx e^{-\chi^2(\hat{\beta})/2}\,{\pi}(\hat{\beta})\,d^m\beta \\ \approx e^{-\chi^2(\hat{\beta})/2}\,{\pi}(\hat{\beta})\, \prod_{i=1}^m \left(2\,\pi\lambda_i(\hat{\beta})\right)^{1/2}\, ,\end{aligned}$$ where the $2\,\pi$ factor here again refers to the constant rather than the prior. This is just the MAP statistic multiplied by a parameter-space volume. We evaluate the parameter-space volume $d^m\beta$ as the square root of the determinant of the parameter-parameter covariance matrix derived from the MCMC chain.

Fits to synthetic data
----------------------

For the synthetic event there is a single narrow, well-defined minimum. Table \[tab:888\_parameters\] summarises the parameters of the best-fit model found with a $14\times14$ ($d, q$) grid, evenly spaced in $\log{d}$ and $\log{q}$. The posterior distribution $P(d,q|D)$ found using the MAP option is plotted in [Fig.
\[fig:888\_bof2map\]]{}; the ML and BIC posterior maps are almost indistinguishable. The BoF minimum is so tightly defined that the choice of BoF statistic has little effect on the best-fit parameters or the shape of the posterior map. The recovered fit is located at the grid point closest to the true model, as can be seen by comparing [Fig. \[fig:888\_bof2map\]]{} to [Fig. \[fig:888\_truelc\]]{}, and the best-fit parameters given in [Table \[tab:888\_parameters\]]{} to [Table \[tab:888\_trueparameters\]]{}. The true parameters are not exactly recovered, as they do not match our grid points, but a further modelling run could be conducted without keeping $d$ and $q$ fixed, using the best model at each grid point as a starting point for new MCMC runs.

![image](fig/OB010888-bofmap_c_chi2.ps){width="8cm"} ![image](fig/OB010888-bofmap_p_chi2.ps){width="8cm"} ![image](fig/OB010888-model1.ps){width="6cm"} ![image](fig/pmap-synth.ps){width="6cm"}

  Parameter     Value                               Units
  ------------- ----------------------------------- -------
  $d$ (grid)    $1.237$                             $-$
  $q$ (grid)    $0.059$                             $-$
  $g=\fb/\fs$   $5.81 \pm 0.09$                     $-$
  $\chi^2$      $202.9$                             $-$
  “Standard”
  $\tz$         $5503.62 \pm 0.014$                 MHJD
  $\te$         $37.52 \pm 0.04$                    days
  $\alpha$      $1.692 \pm 0.005$                   rad
  $\uz$         $0.056 \pm 0.003$                   $-$
  $\rhostar$    $(2.44 \pm 0.26) \times 10^{-3}$    $-$
  “Caustic”
  $\tin$        $5500.394 \pm 0.009$                MHJD
  $\tout$       $5507.342 \pm 0.004$                MHJD
  $\sin$        $1.273 \pm 0.002$                   $-$
  $\sout$       $0.706 \pm 0.002$                   $-$
  $\dtcc$       $0.092 \pm 0.010$                   days
  ------------- ----------------------------------- -------

  : Best-fit parameters (from $d, q$ grid exploration) for the synthetic event. \[tab:888\_parameters\]

Fits to OGLE-2007-BLG-472 data
------------------------------

Our fits to the OGLE-2007-BLG-472 data are presented in Figs \[fig:chi2map\]-\[fig:bayemap\].
The contour levels are set at $\Delta$BoF=2.3, 6.17, 11.8, 20, 50, 100, 250 and 500, relative to the global minimum, the first three thus corresponding to 1, 2, and 3-$\sigma$ confidence regions if the posterior is well approximated by a 2-parameter Gaussian. The best-fit values and uncertainties of additional parameters are summarised in Table \[tab:par\_472\]. Fig. \[fig:chi2map\] exhibits an extended region of low $\chi^2$ around the minimum at $d=0.51$ and $q=2\times10^{-4}$. The width in $d$ is unresolved by the rather coarse $(d,q)$ grid, and the extension in $\log{q}$ is around 1 dex. The best-fit model has the source crossing a very small planetary caustic, requiring a very long event timescale, $\te\sim2000$ d, to match the observed crossings at $\tin$ and $\tout$ separated by 3 d. Thus, as was also found in [@kains09], the lowest-$\chi^2$ model for this event is not very well constrained, and has an implausibly long $\te$. There are no significantly different competing local minima with $\Delta\chi^2<20$; the first competitive model for which the configuration (source trajectory and location of the caustic crossings) is significantly different has $\Delta\chi^2 \sim 22$. Changing the BoF statistic has a significant effect on the posterior map: the penalties introduced by the prior move the best-fit model “up” in $q$, towards models with smaller $\te$ and configurations where the source crosses a central, rather than planetary, caustic. The MAP, BIC and Bayes fits (Fig. \[fig:bicmap\] - \[fig:bayemap\]) all favour a model with $q\sim0.12$, $d\sim 0.61$. The $\chi^2$ increases by $\Delta\chi^2=43$ relative to the global $\chi^2$ minimum, but the priors compensate since $\te$ and $\rho_*$ both move toward more plausible values, and the larger caustic is easier to hit. [Fig. \[fig:priorcontour\]]{} shows the location of the lowest-$\chi^2$ and best Bayesian models with respect to the ${\pi}(\rho_*, \te)$ contour.
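The quoted contour levels for the first three contours follow from the $\Delta\chi^2$ thresholds of a two-parameter Gaussian posterior, and can be checked directly; the function below is our own illustration.

```python
import math

def delta_chi2_2dof(n_sigma):
    """Delta chi^2 threshold enclosing the n-sigma (1D-equivalent)
    probability content for two jointly estimated parameters with a
    Gaussian posterior: p = erf(n / sqrt(2)), Delta chi^2 = -2 ln(1 - p)."""
    p = math.erf(n_sigma / math.sqrt(2.0))
    return -2.0 * math.log(1.0 - p)
```

This gives $\Delta\chi^2 \simeq 2.30$, $6.18$ and $11.83$ for the 1, 2 and 3-$\sigma$ contours, matching the quoted levels to within rounding.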
The $\chi^2$ model is in the wings of the prior distribution, whereas the best-BIC model is near its peak, meaning that the $\chi^2$ model is more strongly penalised by the prior. We used the best-fit Bayesian parameters (Table \[tab:par\_472\]) and the algorithm of [@dominik06] to derive probability distributions for the lens mass and distance as shown in [Fig. \[fig:lensprop\]]{}. With no parallax signal detected in this event, we used only the constraint from $\te$ and $\rho_*$. We find lens component masses of $0.78^{+3.43}_{-0.47}\,\msun$ and $0.11^{+0.47}_{-0.06}\,\msun$ at a distance of $5.88^{+1.49}_{-2.68}$ kpc. The best Bayesian model has a large blend/source flux ratio $F_B/F_S\sim200$. There is no obvious star blended with the lens on the images, and the blending could plausibly come from the binary-star lens system, or from a third body.

![image](fig/OB070472-sumbof_c_chi2.ps){width="8cm"} ![image](fig/OB070472-sumbof_p_chi2.ps){width="8cm"} ![image](fig/OB070472-model1.ps){width="6cm"} ![image](fig/OB070472-model1-pmap-chains.ps){width="6cm"} ![image](fig/OB070472-sumbof_c_baye.ps){width="8cm"} ![image](fig/OB070472-sumbof_p_baye.ps){width="8cm"} ![image](fig/OB070472-model2.ps){width="6cm"} ![image](fig/OB070472-model2-pmap-chains.ps){width="6cm"} ![image](fig/OB070472-sumbof_c_map.ps){width="8cm"} ![image](fig/OB070472-sumbof_p_map.ps){width="8cm"} ![image](fig/OB070472-sumbof_c_bic.ps){width="8cm"} ![image](fig/OB070472-sumbof_p_bic.ps){width="8cm"}

  Parameter       ML (Fig. \[fig:chi2map\])          MAP/BIC/Bayes (Fig. \[fig:bicmap\])   Units
  --------------- ---------------------------------- ------------------------------------- ------------
  ML ($\chi^2$)   915                                958                                    $-$
  MAP             1017                               968                                    $-$
  BIC             1046                               986                                    $-$
  Bayes           1061                               1003                                   $-$
  $d$ (grid)      0.51                               0.61                                   $-$
  $q$ (grid)      $2.03\times10^{-4}$                $0.119$                                $-$
  $g=\fb/\fs$     $7.59 \pm 0.08$                    $114.59 \pm 9.62$                      $-$ (OGLE)
  $N_{\rm eff}$   4.20                               4.97                                   $-$
  “Standard”
  $\tz$           $7121.28 \pm 113.61$               $4332.41 \pm 0.25$                     MHJD
  $\te$           $1939.35 \pm 80.92$                $73.37 \pm 5.45$                       day
  $\alpha$        $3.134 \pm 0.044$                  $3.050 \pm 0.020$                      rad
  $\uz$           $-0.181 \pm 0.029$                 $-0.052 \pm 0.003$                     $-$
  $\rhostar$      $(3.09 \pm 0.37) \times 10^{-5}$   $(5.66 \pm 0.51) \times 10^{-4}$       $-$
  “Caustic”
  $\tin$          $31.379 \pm 0.012$                 $31.325 \pm 0.016$                     MHJD-4300
  $\tout$         $34.078 \pm 0.002$                 $34.077 \pm 0.002$                     MHJD-4300
  $\sin$          $1.785 \pm 0.012$                  $0.807 \pm 0.005$                      $-$
  $\sout$         $1.011 \pm 0.013$                  $0.423 \pm 0.011$                      $-$
  $\dtcc$         $0.073 \pm 0.003$                  $0.072 \pm 0.004$                      day
  $I_s$           17.95                              20.77                                  mag
  $I_b$           15.74                              15.62                                  mag
  $\theta_*$      1.15                               0.53                                   $\mu$as
  --------------- ---------------------------------- ------------------------------------- ------------

![image](fig/pm.OB070472.model2.ps){width="8cm"} ![image](fig/px.OB070472.model2.ps){width="8cm"}

Discussion and conclusions {#sec:conclusion}
==========================

The modelling results for the two datasets presented here indicate that our algorithm is successful in locating minima throughout the parameter space, and the subdivision of the prior maps ensures that all possible source trajectories through the caustics are explored. Furthermore, the use of Bayesian priors allows us to incorporate information on the event timescale distribution, as well as geometrical information on the concavity of caustics.
Although the sampling rate for our synthetic lightcurve data is not particularly high compared to what can now be achieved by survey and follow-up teams, our algorithm located a well-defined minimum near the true minimum, with a grid search of the $(d,q)$ parameter space and MCMC runs to sample the posterior probability in the region around each local minimum. In our re-analysis of the OGLE-2007-BLG-472 data, we improve upon the posterior map calculated in [@kains09], because we now use an MCMC run for each prior sub-box separately rather than just a single one per $(d,q)$ grid point. We find that changing the badness-of-fit statistic leads to important changes in the posterior $P(d,q|D)$ maps. In particular, the model with lowest $\chi^2$ has a planetary mass ratio and an implausibly long $\te\sim2000$ d. Adding priors dramatically shifts the location of the best-fit model, lowering the timescale to $\te\sim70$ d. Using a Bayesian approach to penalise models with improbable parameters leads to best-fit parameters corresponding to a binary star lens with $0.78$ and 0.12 $\msun$ components at a distance of $\sim5.9$ kpc, and a more typical event timescale $\te\sim 70$ d. The only remarkable parameter is a rather high blending fraction, which could arise from either the lens itself or a closely blended third star. The new model is very different from that found by [@kains09], which characterised the lens as a binary star with component masses of $1.50$ and $0.12 \msun$ at a distance of 1 kpc. The development of automated algorithms for real-time modelling such as that presented here allows observers to receive feedback on ongoing anomalous microlensing events, and ensures that important features predicted by real-time modelling are not missed. This makes it possible to assess the nature of the lensing system more rapidly and to allocate observing time to targets more effectively.
When observational coverage is not complete, or when the $\chi^2$ alone is not sufficient as a criterion for badness-of-fit, statistics like those we use in this paper could help to assess alternative models reliably. Furthermore, provided that the chosen priors are appropriate, comparing the posterior maps resulting from different statistics allows for a useful test of a given model’s robustness.

Acknowledgments {#acknowledgments .unnumbered}
===============

NK is supported by an ESO Fellowship. The research leading to these results has received funding from the European Community’s Seventh Framework Programme (/FP7/2007-2013/) under grant agreement No 229517. KH and MH are supported by The Qatar Foundation QNRF grant NPRP-09-476-1-87.

\[lastpage\]

[^1]: email:[email protected]
---
author:
- 'Tom Vrancx[^1]'
- Jan Ryckebusch
- Jannes Nys
title: 'Complete sets of observables in pseudoscalar-meson photoproduction'
---

Introduction {#sec:introduction}
============

Quantum mechanics dictates that observables can be expressed as bilinear combinations of a certain number of complex amplitudes. An example of pseudoscalar-meson photoproduction is the $\gamma p \to K^+ \Lambda$ reaction. In meson photoproduction, two kinematic variables are involved: for example the invariant mass $W$ and the meson scattering angle in the center-of-mass frame ${\theta_{\textrm{c.m.}}}$. Since the photon, the target and the recoil particle each have two possible spin states, four independent complex amplitudes can be distinguished, taking into account the conservation of angular momentum. Since quantum states are only determined up to a constant phase factor, all information about the reaction is contained in four moduli and three relative phases. A set containing a minimum number of observables from which the moduli and relative phases of the complex amplitudes can be determined (unambiguously) is dubbed a *complete set*. In Ref. [@Chiang:1996em] it was pointed out that theoretical complete sets consist of eight well-chosen observables, contrary to the nine observables suggested in Ref. [@Barker:1975bp]. Obviously, observables cannot be determined with infinite precision. The question is now whether or not eight observables still suffice to reach complete knowledge of the four complex amplitudes.

Complete sets in the transversity basis {#sec:transversity_basis}
=======================================

The transversity amplitudes in pseudoscalar-meson photoproduction are defined as $b_1 = {}_y{\langle + |} J_y {| + \rangle}_y$, $b_2 = {}_y{\langle - |} J_y {| - \rangle}_y$, $b_3 = {}_y{\langle + |} J_x {| - \rangle}_y$, and $b_4 = {}_y{\langle - |} J_x {| + \rangle}_y$.
Here, ${}_y{\langle \pm |}$ and ${| \pm \rangle}_y$ represent the spinors for the recoil and the target particle, and $J_{x,y}$ is the photon current. The subscripts indicate the polarization direction of the particle in question. The normalized transversity amplitudes are defined as $a_j = b_j/\sqrt{\sum_{i = 1}^4|b_i|^2}\equiv r_j e^{i\alpha_j}$, with $r_j$ and $\alpha_j$ the modulus and the phase of the amplitude $a_j$. In order to determine the normalized transversity amplitudes $a_i$, information about the unpolarized differential cross section is no longer needed and complete sets are reduced from eight to seven observables. In Table I of Ref. [@Vrancx:2013pza], the expressions for the three single and twelve double asymmetries in the transversity representation are listed. The moduli $r_i$ can be readily expressed in terms of the single asymmetries ${\varSigma}$, $T$, and $P$: $$\begin{aligned} r_1 &= \tfrac{1}{2}\sqrt{1 + {\varSigma}+ T + P}, \hspace{-75pt}& r_2 &= \tfrac{1}{2}\sqrt{1 + {\varSigma}- T - P}\nonumber\\ r_3 &= \tfrac{1}{2}\sqrt{1 - {\varSigma}- T + P}, \hspace{-75pt}& r_4 &= \tfrac{1}{2}\sqrt{1 - {\varSigma}+ T - P}.\label{eq:transversity_moduli}\end{aligned}$$ This means that the four moduli of the normalized transversity amplitudes can be determined unambiguously from measurements of the three single asymmetries. Note that only three of these moduli are independent, as $r_1^2 + r_2^2 + r_3^2 + r_4^2 = 1$. There are six possible combinations for the relative phases of the transversity amplitudes, namely $\alpha_{ij} = \alpha_i - \alpha_j$ $(i \neq j)$. However, only three of these are independent. By fixing a certain reference phase $\alpha_l$, the three independent phases are denoted by $\delta_i = \alpha_i - \alpha_l$ ($i \neq l)$ and the dependent phases by $\Delta_{ij} = \delta_i - \delta_j$ ($i \neq j$).
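Eq. \[eq:transversity\_moduli\] translates directly into code. A minimal sketch (our own illustration), returning `None` where a square-root argument goes negative, which anticipates the imaginary solutions discussed later in the text:

```python
import math

def transversity_moduli(sigma, t, p):
    """Moduli r_1..r_4 of the normalized transversity amplitudes from
    the single asymmetries (Sigma, T, P), Eq. [eq:transversity_moduli].
    A negative square-root argument (possible for noisy data) yields
    None, corresponding to an 'imaginary solution'."""
    args = (1 + sigma + t + p,
            1 + sigma - t - p,
            1 - sigma - t + p,
            1 - sigma + t - p)
    return [0.5 * math.sqrt(a) if a >= 0 else None for a in args]
```

Since the four square-root arguments sum to 4, the moduli automatically satisfy $r_1^2+r_2^2+r_3^2+r_4^2=1$ whenever all four are real.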
A specific complete set, consisting of three single and four well-chosen double asymmetries, gives access to a specific set of two independent and two dependent phases: $\{\delta_i, \delta_j, \Delta_{ik}, \Delta_{jk}\}$ ($i \neq j \neq k$). Two kinds of complete sets can be distinguished: complete sets of the first kind, which have four different solutions to the set $\{\delta_i, \delta_j, \Delta_{ik}, \Delta_{jk}\}$, and complete sets of the second kind, which have eight different solutions. The actual solution satisfies the trivial relation $\delta_i - \Delta_{ik} - \delta_j + \Delta_{jk} = 0$.

Results {#sec:results}
=======

Figure \[fig:moduli\_GRAAL\_RPR\] shows the transversity moduli $r_i$ extracted from the GRAAL data for the $\{{\varSigma}, T, P\}$ of the $\gamma p \to K^{+} \Lambda$ reaction [@Lleres2007; @Lleres2009], along with the corresponding predictions of the RPR-2011 model [@Corthals2006; @DeCruz2012a; @DeCruz2012b]. The results of Fig. \[fig:moduli\_GRAAL\_RPR\] indicate that with current experimental accuracies, knowledge of the three single asymmetries allows one to determine the moduli $r_i$. It is seen that in a few kinematic situations one or more extracted moduli are missing. This is due to finite experimental resolution resulting in a negative argument of one of the square roots in Eq. \[eq:transversity\_moduli\]. To this day, there is not a single pseudoscalar-meson photoproduction reaction for which a complete data set has been published. The presented GRAAL data constitute the sole data set which allows extracting the kinematic dependence of the moduli of the normalized transversity amplitudes. In order to study the feasibility of extracting the relative phases from data, one has to resort to simulations of complete sets of observables. Here, the complete set $\{{\varSigma},T,P;C_x,O_x,E,F\}$, which is of the first kind, will be considered for the $\gamma p \to K^{+} \Lambda$ reaction.
From this set the phases $\{\delta_1, \delta_2, \Delta_{13}, \Delta_{23}\}$ (with $\alpha_4$ as the reference phase) can be obtained [@Vrancx:2013pza]. Measured asymmetries are simulated by generating events from a Gaussian distribution with the RPR-2011 prediction as the mean value and a standard deviation given by a certain experimental resolution ${\sigma_{\textrm{exp}}}$. When experimental uncertainty is involved, in general none of the four solutions to $\{\delta_1, \delta_2, \Delta_{13}, \Delta_{23}\}$ will satisfy the constraint $\delta_1 - \Delta_{13} - \delta_2 + \Delta_{23} = 0$ exactly. The most likely actual solution is the one for which the absolute value of the ratio of the evaluated constraint to its corresponding error is the smallest. However, it is possible that the most likely solution does not coincide with the actual solution. This can be readily verified by comparing the most likely solution with the RPR-2011 predictions that generated the simulated data. A solution that is identified as the most likely one but is not the actual solution is dubbed an *incorrect solution*. Another possibility is the occurrence of imaginary solutions for one or more of the moduli and/or phases, as was apparent from Fig. \[fig:moduli\_GRAAL\_RPR\] for example. This type of solution is referred to as an *imaginary solution*. The *insolvability* $\eta(W, \cos{\theta_{\textrm{c.m.}}})$ at a specific kinematic point is introduced as the fraction of simulated complete data sets that are solved incorrectly or have imaginary solutions: $\eta = \eta_{\textrm{incorrect}} + \eta_{\textrm{imaginary}}$. Figure \[fig:insolvability\_maps\] shows the $\{{\varSigma}, T, P; C_x,O_x,E,F\}$ insolvabilities $\eta$ and $\eta_{\textrm{incorrect}}$ for ${\sigma_{\textrm{exp}}}= 0.1$ and ${\sigma_{\textrm{exp}}}= 0.01$.
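The imaginary-solution part of the insolvability can be estimated with a simple Monte Carlo over simulated data sets. The sketch below is our own minimal version: it uses only the single asymmetries, so it probes $\eta_{\textrm{imaginary}}$ for the moduli rather than the full seven-observable sets analysed in the text.

```python
import random

def imaginary_fraction(sigma, t, p, sigma_exp, n_sets=10000, seed=0):
    """Fraction of simulated {Sigma, T, P} measurements (Gaussian noise
    of width sigma_exp around the model values) for which at least one
    square-root argument in Eq. [eq:transversity_moduli] is negative,
    i.e. at least one modulus comes out imaginary."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(n_sets):
        s = rng.gauss(sigma, sigma_exp)
        tt = rng.gauss(t, sigma_exp)
        pp = rng.gauss(p, sigma_exp)
        args = (1 + s + tt + pp, 1 + s - tt - pp,
                1 - s - tt + pp, 1 - s + tt - pp)
        if min(args) < 0:
            bad += 1
    return bad / n_sets
```

Deep inside the physical region the fraction vanishes, while near the boundary (for example $T\to1$ with ${\varSigma}=P=0$) a large fraction of the noisy data sets becomes unsolvable, in line with the behaviour seen in Fig. \[fig:insolvability\_maps\].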
It is seen that for ${\sigma_{\textrm{exp}}}= 0.1$ the insolvability can become quite substantial and that the largest contribution is attributed to imaginary solutions. Improving the experimental resolution by a factor of ten clearly improves the overall solvability. Although incorrect solutions make up the smaller contribution to $\eta$, they can never really be identified in an analysis of real data without invoking a model. Incorrect solutions originate from assigning the most likely solution as the actual solution, which, however, is not a statistically sound procedure. A more conservative approach would consist of imposing a tolerance level on the confidence interval of the most likely solution. Then, the most likely solution would only be ‘accepted’ as the actual solution when it has a certain minimum statistical significance. As discussed at the end of Sec. IV C 2 in Ref. [@Vrancx:2013pza], however, imposing a tolerance confidence level would not be effective, as the complete elimination of the incorrect solutions would lead to a rejection of the lion’s share of the correct solutions. This would result in an almost $100\%$ insolvability. Summarizing, using real data it was illustrated that measurements of the single asymmetries allow one to map the moduli of the normalized transversity amplitudes fairly well. Extracting the phases also requires double asymmetries. For infinite precision, four of those are required to form a complete set. For finite experimental resolution, it was shown that in many situations the phases cannot be determined unambiguously. Therefore, theoretical completeness does not necessarily imply experimental completeness. It remains to be investigated whether “overcomplete” sets, containing additional double asymmetries, could help resolve the spurious phase ambiguities.

This work is supported by the Research Council of Ghent University and the Flemish Research Foundation (FWO Vlaanderen).

Wen-Tai Chiang and F. Tabakin, Phys. Rev. C **55**, 2054 (1997).

I. S. Barker, A. Donnachie, and J. K. Storrow, Nucl. Phys. B **95**, 347 (1975).

T. Vrancx, J. Ryckebusch, T. Van Cuyck, and P. Vancraeyveld, Phys. Rev. C **87**, 055205 (2013).

A. Lleres *et al.* (GRAAL Collaboration), Eur. Phys. J. A **31**, 79 (2007).

A. Lleres *et al.* (GRAAL Collaboration), Eur. Phys. J. A **39**, 149 (2009).

T. Corthals, J. Ryckebusch, and T. Van Cauteren, Phys. Rev. C **73**, 045207 (2006).

L. De Cruz, T. Vrancx, P. Vancraeyveld, and J. Ryckebusch, Phys. Rev. Lett. **108**, 182002 (2012).

L. De Cruz, J. Ryckebusch, T. Vrancx, and P. Vancraeyveld, Phys. Rev. C **86**, 015212 (2012).
--- abstract: 'We present a novel event recognition approach called Spatially-preserved Doubly-injected Object Detection CNN (S-DOD-CNN), which incorporates *spatially preserved* object detection information in both a direct and an indirect way. Indirect injection is carried out by simply sharing the weights between the object detection modules and the event recognition module. Meanwhile, our novelty lies in the fact that we preserve the spatial information for the direct injection. Once multiple regions-of-interest (RoIs) are acquired, their feature maps are computed and then projected onto a spatially-preserving combined feature map using one of the four *RoI Projection* approaches we present. In our architecture, combined feature maps are generated for object detection and are directly injected into the event recognition module. Our method provides state-of-the-art accuracy for malicious event recognition.' address: '$^{\star}$US Army Research Laboratory                        $^{\dagger}$Booz Allen Hamilton Inc.[^1] ' bibliography: - 'refs.bib' title: 'S-DOD-CNN: Doubly Injecting Spatially-Preserved Object Information for Event Recognition' ---

IOD-CNN, DOD-CNN, malicious crowd dataset, malicious event classification, multi-task CNN

Introduction {#sec:intro}
============

Object information provides crucial evidence for identifying the events shown in still images. There have been several attempts which make use of object information to improve event recognition performance. Most methods perform event recognition with the aid of object detection results via feature-level fusion [@LWangCVPRW2015; @LWangICCVW2015] or score-level fusion [@LLiICCV2007; @TAlthoffACMMM2012; @RRobinsonIROS2015; @MJainCVPR2015; @HLeeIROS2016; @HLeeDCS2016; @HLeeWACV2016; @HLeeICIP2017; @YCaoICIP2017; @HLeeICASSP2018; @HLeeTPAMI2019]. Recently, Lee et al.
[@HLeeICASSP2019] introduced Doubly-injected Object Detection CNN (DOD-CNN) that incorporates the use of object detection information in a direct and an indirect way within a CNN architecture for the task of event recognition. DOD-CNN consists of three connected networks responsible for event recognition, rigid object detection, and non-rigid object detection. Three networks are co-trained while object detection information is indirectly passed onto event recognition via the shared portion of the architecture. DOD-CNN achieves further performance improvement by directly passing intermediate output of the rigid and non-rigid object detection onto the event recognition module. More specifically, each of the two feature maps from rigid and non-rigid object detection is generated by pooling multiple per-RoI feature maps (i.e., feature maps for each region-of-interest) via batch pooling. The two feature maps are then directly injected into the event recognition module at the end of the last convolutional layer. Note that the batch pooling simply aggregates multiple feature maps along the batch direction without considering their spatial coordinates in the original image. ![[ , , and arrows indicate the computational flow responsible for event recognition ($e$), rigid object detection ($r$), and non-rigid object detection ($n$), respectively. For rigid and non-rigid object detection, a combined feature map is constructed by combining per-RoI feature maps while preserving the spatial locations of the RoIs within the original image.]{}[]{data-label="fig:architecture"}](architecture.pdf){width="0.85\linewidth"} In this paper, we present an approach to generate a single combined feature map which safely preserves the original spatial location of the per-RoI feature maps provided by the object detection process. 
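For contrast with the spatially-preserving scheme introduced here, the batch pooling of DOD-CNN described above can be sketched in a few lines (array shapes are assumed for illustration): the per-RoI maps are simply max-pooled along the batch axis, so the coordinates of each RoI within the image are discarded.

```python
import numpy as np

def batch_pool(per_roi_maps):
    # Element-wise max over the batch axis, as we read DOD-CNN's batch
    # pooling.  per_roi_maps: (num_rois, channels, h, w).  The position of
    # each RoI in the original image plays no role, which is why the
    # result is not spatially preserved.
    return per_roi_maps.max(axis=0)

maps = np.random.rand(5, 256, 6, 6)   # 5 per-RoI C7 maps (assumed 256x6x6)
pooled = batch_pool(maps)             # -> (256, 6, 6)
```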
Per-RoI feature maps are first projected onto separate projected feature maps using a novel *RoI Projection*, and these are then aggregated into a single combined feature map. In the RoI projection, each per-RoI feature map is weighted by its object detection probability. Although our approach follows the spirit of DOD-CNN by incorporating the object detection information in two ways (i.e., double injection), the rigid and non-rigid object detection information is used in a different way by preserving the spatial context of each per-RoI feature map. Therefore, we call our new architecture *Spatially-Preserved and Doubly-injected Object Detection CNN (S-DOD-CNN)*, which is depicted in Figure \[fig:architecture\]. When projecting the per-RoI feature maps into one single projected feature map, we adopt two interpolation methods: *MAX interpolation* and *Linear interpolation*. In *MAX interpolation*, the maximum value among the input points is projected into the output point. In *Linear interpolation*, a linearly interpolated value of the four nearest input points is projected into the output. These interpolation methods can be applied with either *class-specific* or *class-agnostic* RoI selection. While class-specific selection carries out the RoI projection for each object class, class-agnostic selection considers only a small subset of RoIs among all the RoIs, disregarding the object classes. Therefore, the RoI projection can be conducted in four different combinations. In order to prove the effectiveness of using spatially-preserved object detection feature maps for event recognition, we conducted several experiments on malicious event classification [@HLeeICASSP2018]. We have validated that all four combinations of the novel RoI projection within S-DOD-CNN provide higher accuracy than all the baselines.
S-DOD-CNN {#sec:ourapproach}
=========

Architecture {#ssec:architecture}
------------

[**DOD-CNN.**]{} DOD-CNN [@HLeeICASSP2019] consists of five shared convolutional layers ($C_1,\cdots,C_5$), one RoI pooling layer, and three separate modules, each responsible for event recognition, rigid object detection, and non-rigid object detection, respectively. Each module consists of two convolutional layers ($C_6,~C_7$), one average pooling layer ($AVG$), and one fully connected layer ($FC$), where the output dimension of the last layer is set to match the number of events or objects. DOD-CNN takes one image and multiple RoIs (approximately 2000 for rigid objects and 5 for non-rigid objects per image) as input. Selective search [@JUijlingsIJCV2013] and multi-scale sliding windows [@PViolaCVPR2001; @NDalalCVPR2005; @PFelzenszwalbTPAMI2010; @HLeeACCV2012] are used to generate the RoIs for rigid and non-rigid objects, respectively. For each RoI, a per-RoI feature map is computed via RoI pooling and then fed into its corresponding task-specific module. For rigid or non-rigid object detection, the output of the last convolutional layer (denoted as *per-RoI $C_7$ feature map*) is pooled into a single map along the batch direction, which is referred to as *batch pooling*. The two single feature maps are then concatenated with the output of the last convolutional layer of the event recognition module. The concatenated map is fed into the remaining event recognition layers, which are the average pooling and fully connected layers. Batch pooling does not preserve the spatial information of the feature maps, since these maps are aligned and pooled without consideration of their spatial coordinates in the original input image. For instance, consider selecting feature points at the same location from two different feature maps which are aligned for batch pooling.
These points do not necessarily correspond to the same location in the input image, as each feature map is tied to a different RoI.

[**S-DOD-CNN.**]{} We introduce a novel method that aggregates multiple feature maps coming from different regions of the input image while preserving the spatial information. The spatial information of each per-RoI $C_7$ feature map is preserved by projecting the map onto a location on a *projected feature map* that corresponds to its original spatial location within the input image. Figure \[fig:map\_building\] illustrates how per-RoI $C_7$ feature maps are processed through RoI Projection (*RoIProj*) to generate the corresponding projected feature maps. Note that before the per-RoI $C_7$ feature maps are fed into RoIProj, they are multiplied by their detection probabilities to incorporate the reliability of each detection result. The projected feature maps are then max-pooled to build a *combined feature map*. In our experiment, the five per-RoI $C_7$ feature maps with the highest probability values are chosen to build the combined feature map. ![[ The combined feature map is max-pooled with multiple projected feature maps that are projected from original feature maps (2$\times$2 bins in this example) w.r.t. their original spatial coordinates in the image.]{}[]{data-label="fig:map_building"}](map_building.pdf){width="0.8\linewidth"} ![[]{data-label="fig:RoIProj"}](RoIProj_a.pdf){width="0.8\linewidth"}        ![[]{data-label="fig:RoIProj"}](RoIProj_b.pdf){width="0.8\linewidth"} ![image](training.pdf){width="0.7\linewidth"} Our network generates two separate combined feature maps, one for rigid and another for non-rigid object detection. These two combined feature maps are concatenated with the event recognition feature map as in Figure \[fig:architecture\]. The two combined feature maps share the same-sized and aligned receptive field with the event recognition feature map, and thus they are ‘spatially-preserved’.
The event recognition feature map is the output of the $C_5$ layer right before RoI pooling. The event recognition module takes this concatenated map as input to compute the event recognition probability. As we construct our network based on DOD-CNN, but with ‘spatially-preserved’ object detection information for event recognition, we call it Spatially-preserved DOD-CNN (S-DOD-CNN).

[**RoI Projection.**]{} When projecting the per-RoI $C_7$ feature maps into one projected feature map (denoted as RoIProj in Figure \[fig:map\_building\]), we adopt one of two interpolation methods: *MAX interpolation* or *Linear interpolation*. Examples of the two interpolations are shown in Figure \[fig:RoIProj\]. When multiple points on an input map are projected onto a single point on an output map, the point is filled with the maximum (*MAX*) or a linearly interpolated value of the four nearest input points (*Linear*). The RoI projection can be performed in two different ways, which differ in how the subset of RoIs actually used for projection is selected from the overall set of RoIs. Both selection methods utilize the $N$ probability scores generated for each RoI after AVG & FC (see Figure \[fig:map\_building\]), where $N$ is the number of classes. For *class-specific* selection, the 5 RoIs with the highest probability scores are chosen for each class. For *class-agnostic* selection, the 5 RoIs with the highest probability scores are chosen from all the RoIs, without regard to which classes they come from. Therefore, the RoI projection is executed either $N$ times or just once, depending on which selection method is chosen. In addition, if a per-RoI $C_7$ feature map has $k$ channels, the channel dimension of the combined map under class-specific selection becomes $k\times N$, while it remains $k$ under class-agnostic selection.
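A minimal sketch of the projection step, under simplifying assumptions of our own (nearest-neighbour placement instead of the paper's MAX/Linear interpolation, and RoI boxes already expressed in combined-map coordinates):

```python
import numpy as np

def project_roi(feat, box, prob, out_hw):
    # Project one per-RoI C7 feature map onto an empty map of the combined
    # map's size, at the RoI's own location (nearest-neighbour placement;
    # the paper uses MAX or Linear interpolation instead).
    C, h, w = feat.shape
    H, W = out_hw
    x0, y0, x1, y1 = box                 # RoI box in combined-map coordinates
    out = np.zeros((C, H, W))
    for i in range(y0, y1):
        for j in range(x0, x1):
            ii = min((i - y0) * h // max(y1 - y0, 1), h - 1)
            jj = min((j - x0) * w // max(x1 - x0, 1), w - 1)
            out[:, i, j] = prob * feat[:, ii, jj]   # weighted by detection prob.
    return out

def combined_map(feats, boxes, probs, out_hw):
    # Max-pool the projected maps of the selected top-scoring RoIs.
    projected = [project_roi(f, b, p, out_hw)
                 for f, b, p in zip(feats, boxes, probs)]
    return np.max(np.stack(projected), axis=0)
```

Because each projected map carries its RoI's content at the RoI's own coordinates, the max over projected maps keeps an aligned receptive field with the event recognition feature map.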
Overall, the RoI projection can be performed in one of four combinations, as there are two different interpolation methods (MAX/Linear) and two different RoI selection methods (class-specific/agnostic).

Training {#ssec:training}
--------

S-DOD-CNN is trained using a mini-batch stochastic gradient descent (SGD) optimization approach. The event recognition and rigid object detection modules are optimized by minimizing their softmax loss, while a cross entropy loss is used for non-rigid object detection optimization. Each batch contains two images, one malicious and one benign. For event recognition and non-rigid object detection, 1 and 5 RoIs are generated for each image, respectively. For rigid object detection, a batch takes 64 RoIs randomly selected from the approximately 2000 RoIs generated by selective search. Accordingly, we need to prepare a large number of batches to cover the entire RoI set for training rigid object detection. A batch (which contains 2 images) consists of 2, 128, and 10 RoIs for event recognition, rigid object detection, and non-rigid object detection, respectively. In preparing the positive and negative samples for training, we have used 0.5 and 0.1 as the rigid and non-rigid object detection thresholds for the intersection-over-union (IOU) metric, respectively. Any RoI whose IOU with respect to the ground truth bounding box is larger than the threshold is treated as a positive example. RoIs whose IOU is lower than 0.1 are treated as negative examples.
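The IOU-based labelling just described can be sketched as follows (the box format and helper names are our own; treating RoIs between the two thresholds as unused is an assumption of this sketch):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x0, y0, x1, y1).
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def label_rois(rois, gt_boxes, pos_thr):
    # pos_thr = 0.5 for rigid and 0.1 for non-rigid objects; IOU < 0.1 is
    # negative; anything in between is left unused (assumption of sketch).
    labels = []
    for r in rois:
        best = max((iou(r, g) for g in gt_boxes), default=0.0)
        labels.append(1 if best > pos_thr else (0 if best < 0.1 else -1))
    return labels
```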
The weights in $C_1,\cdots,C_5$ are initially inherited from AlexNet [@AKrizhevskyNIPS2012] pre-trained on the large-scale Places dataset [@BZhouNIPS2014], and the remaining layers ($C_6$, $C_7$ and $FC$ layers for all three modules) are initialized according to a Gaussian distribution with 0 mean and 0.01 standard deviation.

[**Two-stage Cascaded Optimization.**]{} To allow more batches for training rigid object detection, we use a two-stage cascaded optimization strategy. In the first stage, only the layers used to perform rigid object detection are trained. Then, in the second stage, all three tasks are jointly optimized in an end-to-end fashion. Figure \[fig:training\] shows the second stage of the training process. For each training iteration in the second stage, two processes ((a) and (b) in Figure \[fig:training\]) are executed in order. In process (a), all the layers of the two object detection modules are optimized with a batch containing 128 RoIs of rigid objects and 10 RoIs of non-rigid objects. After process (a) is done, the full set of RoIs (i.e., approximately 4000 RoIs for rigid objects and 10 RoIs for non-rigid objects) is fed into the object detection modules. The resulting combined feature maps are injected into the event recognition module for optimization. We use a learning rate of 0.001, 50k iterations, and a step size of 30k for the first stage, and a learning rate of 0.0001, 20k iterations, and a step size of 12k for the second stage.

Experiments {#sec:exp}
===========

Dataset {#ssec:dataset}
-------

The Malicious Crowd Dataset [@HLeeICASSP2018; @SEumDCS2018] is selected as it provides the appropriate components to evaluate the effects of using object information for event recognition. It contains 1133 images, equally divided into [*malicious*]{} and [*benign*]{} classes. Half of the dataset is used for training and the rest is used for testing.
In addition to the label of the event class, bounding box annotations of three rigid objects ([*police*]{}, [*helmet*]{}, [*car*]{}) and two non-rigid objects ([*fire*]{}, [*smoke*]{}) are provided. [@HLeeICASSP2018] provides details on how these objects were selected.

Performance Evaluation {#ssec:performeval}
----------------------

To demonstrate the effectiveness of our approach, we compare the event recognition accuracy of S-DOD-CNN with two baselines: DOD-CNN without direct injection, and the full DOD-CNN incorporating both direct and indirect injection. The accuracy is measured in average precision (AP), as shown in Table \[tab:performance\]. S-DOD-CNN, which adopts one of the four RoI projections, provides at least 1.1% higher accuracy than both of the baselines. This verifies the effectiveness of using object detection information spatially preserved via RoI projection. RoI projection using linear interpolation and class-specific RoI selection shows the highest accuracy among all the methods, but the differences are marginal.

| Method | MAX/Linear | RoI Selection | AP (%) |
|---|---|---|---|
| No Direct Inject. [@HLeeICASSP2019] | $\cdot$ | $\cdot$ | 90.7 |
| DOD-CNN [@HLeeICASSP2019] | $\cdot$ | $\cdot$ | 94.6 |
| S-DOD-CNN | MAX | Class-agnostic | 95.7 |
| S-DOD-CNN | MAX | Class-specific | 95.8 |
| S-DOD-CNN | Linear | Class-agnostic | 95.8 |
| S-DOD-CNN | Linear | Class-specific | [**95.9**]{} |

In Table \[tab:sigle\_vs\_multi\], we also analyze how each task performs when optimized individually (Single-task) or co-optimized. For the No Direct Injection and DOD-CNN cases, non-rigid object detection performs better when optimized simultaneously with the other tasks. However, in S-DOD-CNN, the performance of the two sub-tasks (rigid and non-rigid object detection) is degraded. This indicates that the two tasks are sacrificed to improve event recognition performance.
| Task | Single-task | No Direct Injection | DOD-CNN | S-DOD-CNN |
|---|---|---|---|---|
| E | 89.9 | 90.7 | 94.6 | 95.8 |
| R | 8.1 | 7.8 | 7.8 | 7.7 |
| N | 30.4 | 37.2 | 37.2 | 22.5 |

Ablation Study: Location of Building and Injecting Combined Feature Map {#ssec:ablation}
-----------------------------------------------------------------------

Applying a convolutional layer after the concatenation may not be effective if the combined feature maps (coming from object detection) are not aligned properly with the event recognition feature map. One advantage of constructing combined feature maps with our approach is that the map can be injected at any position in the event recognition module. Table \[tab:pool\_concat\_position\] shows how the performance varies according to the location of building and injecting the combined feature map. DOD-CNN, which loses the RoIs’ spatial information while building a feature map, shows performance degradation when the injection location is placed before any convolutional layer (i.e., $C_6$ in Table \[tab:pool\_concat\_position\]). In contrast, S-DOD-CNN does not lose any performance regardless of the injection position. The performance of S-DOD-CNN depends greatly on the building location of the combined feature map. The best accuracy is achieved when it is constructed after $C_7$. Letting the input image go through more convolutional layers before building the combined feature maps may have provided a richer representation.

| Method | Building location | Injection at $C_5$ | $C_6$ | $C_7$ |
|---|---|---|---|---|
| DOD-CNN [@HLeeICASSP2019] | $C_7$ | $\cdot$ | 91.4 | 94.6 |
| S-DOD-CNN | RoIPool | 90.5 | 90.6 | 90.5 |
| S-DOD-CNN | $C_6$ | 94.8 | 94.8 | 94.7 |
| S-DOD-CNN | $C_7$ | [**95.8**]{} | 95.7 | 95.5 |

Conclusion {#sec:concl}
==========

We have devised an event recognition approach referred to as S-DOD-CNN, in which object detection is exploited while preserving the spatial information.
Multiple per-RoI feature maps within an object detection module are projected onto a combined feature map using one of the newly presented RoI Projections, preserving the spatial location of each RoI with respect to the original image. These maps are then injected into the event recognition module. Our approach provides state-of-the-art accuracy for malicious event recognition. [^1]: Copyright 2020 IEEE. Published in the IEEE 2020 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), scheduled for 4-9 May, 2020, in Barcelona, Spain. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE Service Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966.
--- abstract: 'This progress report covers recent developments in the area of quantum randomness, which is an extraordinarily interdisciplinary area that belongs not only to physics, but also to philosophy, mathematics, computer science, and technology. For this reason the article contains three parts that will be essentially devoted to different aspects of quantum randomness, and even directed, although not restricted, to various audiences: a philosophical part, a physical part, and a technological part. For these reasons the article is written on an elementary level, combining simple and non-technical descriptions with a concise review of more advanced results. In this way readers of various provenances will be able to benefit from reading the article.' author: - Manabendra Nath Bera - Antonio Acín - Marek Kuś - 'Morgan W. Mitchell' - Maciej Lewenstein title: 'Randomness in Quantum Mechanics: Philosophy, Physics and Technology' ---

Introduction
============

Randomness is a very important concept finding many applications in modern science and technologies. At the same time it is also quite controversial, and may have different meanings depending on the field of science it concerns. In this short introduction to our report we explain, in a very general manner, why randomness plays such an important role in various branches of science and technology. In particular we elaborate the concept of [*“apparent randomness”*]{}, to contrast it with what we understand under the name [*“intrinsic randomness”*]{}. [*Apparent randomness*]{} as an element of a more efficient description of nature is used practically in all sciences, and in physics in particular, cf. [@Penrose79; @Schrodinger89; @Khinchin14; @Tolman10; @Halmos13]. This kind of randomness expresses our lack of full knowledge of the considered system. A paradigmatic example concerns classical mechanics of many-body systems that are simply too complex to be considered in full detail.
The complexity of the dynamics of systems consisting of many interacting constituents makes predictions, even assuming perfect knowledge of initial conditions, practically impossible. This fact motivates the development of statistical mechanics and thermodynamics. Descriptions that employ probability distributions and statistical ensembles, or the even more reduced thermodynamic description, are more adequate and useful. Another paradigmatic example concerns chaotic systems. In deterministic chaos theory, cf. [@Bricmont95; @Ivancevic08; @Gleick08], even for small systems involving few degrees of freedom, the inherent lack of precision in our knowledge of initial conditions leads to the impossibility of making long-time predictions. This is due to an exponential separation of trajectories, where small initial differences lead to large final effects. Also here, intrinsic ergodicity allows one to use the tools of statistical ensembles. In quantum mechanics [*apparent (a.k.a. epistemic) randomness*]{} also plays an important role and reflects our lack of full knowledge of the state of a system. A state of a system in quantum mechanics corresponds to a [*vector*]{} in a Hilbert space, and is described by the projector operator on that vector. Such states and the corresponding projectors of rank one are termed [*pure states*]{}. In general, we never know the actual (pure) state of the system precisely. Such a situation may be caused by our own imperfection in determining the state in question; it may also arise from measurements that result in statistical ensembles of many pure states. The appropriate way of describing such states is by [*a density matrix*]{}, i.e. the probabilistic mixture of the projectors on the pure states. Pure states are simply represented by those density matrices that are rank-one projectors.
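The distinction between pure states (rank-one projectors) and mixtures recalled above can be checked numerically through the purity $\mathrm{Tr}\,\rho^2$, which equals 1 exactly for pure states and is strictly smaller for proper mixtures; a small sketch for a qubit:

```python
import numpy as np

# Projector onto the pure qubit state |psi> = (|0> + |1>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)
pure = np.outer(psi, psi.conj())

# Probabilistic (50/50) mixture of the projectors on |0> and |1>
p0 = np.outer([1.0, 0.0], [1.0, 0.0])
p1 = np.outer([0.0, 1.0], [0.0, 1.0])
mixed = 0.5 * p0 + 0.5 * p1

def purity(rho):
    # Tr(rho^2): equal to 1 iff rho is a rank-one projector (pure state)
    return np.trace(rho @ rho).real
```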
In fact, describing quantum systems about which we lack full knowledge of the state constitutes the main reason for the introduction of the density matrix formalism [@Messiah14; @Tannoudji91]. However, in quantum physics there is a new form of randomness, which is rather [*intrinsic*]{} or [*inherent*]{} to the theory. Namely, even if the state of the system is pure and we know it exactly, the predictions of quantum mechanics could be [*intrinsically probabilistic and random*]{}! Accepting quantum mechanics, that is assuming that the previous sentence is true, we should consequently accept that quantum mechanics could be intrinsically random. We adopt this position in this paper. To summarize the above discussion let us define:

[**Def. 1 – Apparent (a.k.a. epistemic) randomness.**]{} Apparent randomness is the randomness that results exclusively from a lack of full knowledge about the state of the system in consideration. Had we known the initial state of the system exactly, we could have predicted its future evolution exactly. Probabilities and stochastic processes are used here as an [*efficient tool*]{} to describe at least partial knowledge about the system and its features. Apparent randomness implies and requires the existence of an underlying, so-called [*hidden variable theory*]{}. It is the lack of knowledge of hidden variables that causes apparent randomness. Had we known them, we could have made predictions with certainty.

[**Def. 2 – Intrinsic (a.k.a. inherent or ontic) randomness.**]{} Intrinsic randomness is the randomness that persists even if we have full knowledge about the state of the system in consideration. Even exact knowledge of the initial state does not allow one to predict the future evolution exactly: we can only make probabilistic predictions. Probabilities and stochastic processes are used here as a [*necessary and inevitable tool*]{} to describe our knowledge about the system and its behavior.
Of course, intrinsic randomness might coexist with the apparent one – for instance, in quantum mechanics when we have only partial knowledge about the state of the system expressed by the density matrix, the two causes of randomness are present. Moreover, intrinsic randomness [*does not*]{} exclude the existence of effective hidden variable theories that could allow for partial predictions of the evolution of the system with certainty. As we shall see, in quantum mechanics of composite systems, effective [*local*]{} hidden variable theories in general cannot be used to make predictions about local measurements, and the local outcomes are intrinsically random.

Having defined the main concepts, we present here brief summaries of the subsequent parts of the report, where our focus will be mostly on [*quantum randomness*]{}:

- [**Quantum Randomness and Philosophy.**]{} Inquiries concerning the nature of randomness accompany European philosophy from its beginnings. We give a short review of classical philosophical attitudes to the problem and their motivations. Our aim is to relate them to contemporary physics and science in general. This is intimately connected to the discussion of various concepts of determinism and its understanding in classical mechanics, commonly treated as an exemplary deterministic theory, where chance has only an epistemic status and leaves room for indeterminism only in the form of a statistical-physics description of the world. In this context, we briefly discuss another kind of indeterminism in classical mechanics caused by the non-uniqueness of solutions of Newton’s equations, requiring supplementing the theory with additional unknown laws. We argue that this situation shares similarities with that of quantum mechanics, where quantum measurement theory *à la* von Neumann provides such laws. This brings us to the heart of the problem of intrinsic randomness of quantum mechanics from the philosophical point of view.
We discuss it in two quantum aspects: contextuality and nonlocality, paying special attention to the question: can quantum randomness be certified in any sense?

- [**Quantum Randomness and Physics.**]{} Unlike in classical mechanics, randomness is considered to be inherent in the quantum domain. From a scientific point of view, we raise arguments as to whether this randomness is intrinsic or not. We start by briefly reviewing the standard approach to randomness in quantum theory. We briefly recall the postulates of quantum mechanics and the relation between quantum measurement theory and randomness. Nonlocality, as an important ingredient of the contemporary physical approach to randomness, is then discussed. We then proceed with a more recent approach to randomness generation based on the so called “device independent” scenario, in which one talks exclusively about correlations and probabilities to characterize randomness and nonlocality. We then describe several problems of classical computer science that have recently found elegant quantum mechanical solutions, employing the nonlocality of quantum mechanics. Before doing this we devote a subsection to describe a contemporary information theoretic approach to the characterization of randomness and random bit sources. In continuation, we discuss the idea of protocols for Bell certified randomness generation (i.e. protocols based on Bell inequalities to generate and certify randomness in the device independent scenario), such as quantum randomness expansion (i.e. generating a larger number of random bits from a shorter seed of random bits) and quantum randomness amplification (i.e. transforming weakly random sequences of, say, bits into “perfectly” random ones). It should be noted that certification, expansion and amplification of randomness are classically not possible or require extra assumptions in comparison with what quantum mechanics offers.
Our goal is to review the recent state-of-the-art results in this area, and their relations and applications to device independent quantum secret key distribution. We also review briefly and analyze critically the “special status” of quantum mechanics among the so called no-signaling theories. These are the theories in which the choice of the observable measured by, say, Bob does not influence the outcomes of measurements of Alice and all other parties (for a precise definition in terms of conditional probabilities for an arbitrary number of parties, observables and outcomes see Eq. (\[eq:no-sig\])). While quantum mechanical correlations fulfill the no-signaling conditions, correlations resulting from no-signaling theories form a strictly larger set. No-signaling correlations were first considered in relation to quantum mechanical ones by Popescu and Rohrlich [@Popescu92]. In many situations, it is the no-signaling assumption and Bell non-locality that permit certification of randomness and perhaps novel possibilities of achieving security in communication protocols.

- [**Quantum Randomness and Technology.**]{} We start this part by briefly reminding the readers why random numbers are useful in technology and what they are used for. The drawbacks of classical random number generation, based on classical computer science ideas, are also mentioned. We describe proof-of-principle experiments in which certified randomness was generated using nonlocality. We then focus on describing existing “conventional” methods of quantum random number generation and certification. We also discuss the current status of detecting non-locality and Bell violations. We will then review the current status of commercial implementations of quantum protocols for random number generation, and the first steps toward device independent or at least partially device independent implementations. A complementary review of quantum random generators may be found in Ref.
[@HerreroARX2016]. - [**Quantum Randomness and Future.**]{} In the conclusions we outline new interesting perspectives for the foundations of quantum mechanics that are stimulated by the studies of quantum randomness: What is the relation between randomness, entanglement and non-locality? What are the ultimate limits for randomness generation using quantum resources? How does quantum physics compare to other theories in terms of randomness generation? What is the maximum amount of randomness that can be generated in a theory restricted only by the no-signaling principle? Randomness in physics has been a subject of extensive studies and our report has neither the ambition nor the objective to cover all existing literature on this subject. We stress that there are of course various, highly recommended reviews on randomness in physics, such as for instance the excellent articles by J. Bricmont [@Bricmont95], or the recent book by Juan C. Vallejo and Miguel A.F. Sanjuan [@Sanjuan17]. The main novelty of our report lies in the incorporation of the contemporary approach to quantum randomness and its relation to quantum nonlocality and quantum correlations, and to the emerging device independent quantum information processing and quantum technologies. In fact, our report focuses on certain aspects of randomness that have become particularly relevant in view of the recent technical (i.e. qualitative and quantitative, theoretical and experimental) developments in quantum physics and quantum information science: quantum randomness certification, amplification and expansion are paradigmatic examples of these developments. The technological progress in constructing publicly or even commercially available, highly efficient quantum random number generators is another important aspect: it has in particular led to the first experimental proof of quantum nonlocality, i.e. a loophole-free violation of Bell inequalities [@Hensen15; @Giustina15; @Shalm15].
In particular: - In the philosophical part we concentrate on the distinction between apparent (epistemic) and intrinsic (inherent or ontic) randomness, and on the question whether the intrinsic randomness of quantum mechanics can be certified in a certain sense. We devote considerable attention to the recent discussion of non-deterministic models in classical physics, in which (in contrast to the standard Newtonian-Laplacian mechanics) similar questions may be posed. Based on the recently proposed protocols, we argue that the observation of nonlocality of quantum correlations can be directly used to certify randomness; moreover, this can be achieved in a secure device independent way. Similarly, contextuality of quantum mechanics, i.e. the fact that results of measurements depend on the context in which they are performed, or, more precisely, on which compatible quantities are simultaneously measured, can be used to certify randomness, although not in a device independent way. - In the physical part we concentrate on a more detailed presentation of the recent protocols of randomness certification, amplification and expansion. - In the technological part we first discuss the certified randomness generation [@Pironio10], accessible as an open source NIST Beacon [@NISTBeacon]. Then we concentrate on the recent technological developments that have led to the first loophole free detection of nonlocality, and are triggering important commercial applications. Here we limit ourselves to the contemporary, but traditional approach to quantum mechanics and its interpretation, as explicated in the books of Messiah or Cohen-Tannoudji [@Messiah14; @Tannoudji91]. In this sense this review is not complete, and important relevant philosophical aspects are not discussed. Thus, we do not describe other interpretations and approaches, such as the pilot wave theory of Bohm [@Bohm51] or the many-worlds interpretation (MWI) of Everett [@Everett57], as they are far beyond the scope of this report.
Of course, the meanings of randomness and non-locality are completely different in these approaches. For instance, one can consider [*de Broglie–Bohm’s*]{} interpretation of quantum theory. This is also known as the [*pilot-wave theory, Bohmian mechanics, the Bohm (or Bohm’s) interpretation*]{}, and [*the causal interpretation*]{} of quantum mechanics. There a wave function, defined on the space of all possible configurations, not only captures the epistemic knowledge of the system’s state but also carries a “hidden variable” encoding its ontic information, and this “hidden variable” may not be accessible or observable. In addition to the wave function, the Bohmian particle positions also carry information. Thus, Bohmian QM has two ontological ingredients: the wave function and the positions of particles. As we explain below, the theory is non-local and that is why we do not discuss it in the present review in detail. The time evolution of the system (say, the positions of all particles or the configuration of all fields) is guided by Schrödinger’s equation. By construction, the theory is deterministic [@Bohm52] and explicitly non-local. In other words, the velocity of one particle relies on the value of the guiding equation, which depends on the configuration of the system given by its wave function. The latter is constrained by the boundary conditions of the system, which could, in principle, be the entire universe. Thus, as explicitly stated by D. Bohm [@Bohm52]: “In contrast to the usual interpretation, this alternative interpretation permits us to conceive of each individual system as being in a precisely definable state, whose changes with time are determined by definite laws, analogous to (but not identical with) the classical equations of motion.
Quantum-mechanical probabilities are regarded (like their counterparts in classical statistical mechanics) as only a practical necessity and not as an inherent lack of complete determination in the properties of matter at the quantum level.” So Bohm’s theory has to be regarded as a non-local hidden variable theory and therefore it does not allow intrinsic randomness; similarly, the many-worlds interpretation (MWI) suggests that intrinsic randomness is an illusion [@Vaidman14]. MWI asserts the objective reality of the “universal” wave function and denies any possibility of wave function collapse. MWI implies that all possible pasts and futures are elements of reality, each representing an objective “world” (or “universe”). In simpler words, the interpretation states that there is a very large number of universes, and everything that could possibly have occurred in our past, but did not, has occurred in the past of some other universe or universes. Therefore, MWI indeed does not leave much space for any kind of probability or randomness, since formally all outcomes take place with certainty. This is already a sufficient reason not to consider the MWI in the present review. But, obviously, the whole problem is whether one can speak about probabilities within MWI or not. This problem has been extensively discussed by several authors, e.g. [@Saunders1998; @Saunders2010; @Papineau2010; @Albert2010]. We stress that we adopt in this review the “traditional” interpretation, in which quantum mechanics is intrinsically random, but nonlocal. This adoption is the result of our free choice. Other readers may freely, or better to say deterministically, but nonlocally, adopt the Bohmian point of view.
Quantum Randomness and Philosophy
=================================

Epistemic and ontic character of probability
--------------------------------------------

Randomness is a fundamental resource indispensable in numerous scientific and practical applications like Monte-Carlo simulations, taking opinion polls, cryptography etc. In each case one has to generate a “random sample”, or simply a random sequence of digits. A variety of methods to extract such a random sequence from physical phenomena were proposed and, in general successfully, applied in practice. But how do we know that a sequence is “truly random”? Or, at least, “random enough” for all practical purposes? Such problems become particularly acute for cryptography, where provably unbreakable security systems are based on the possibility to produce a string of perfectly random, uncorrelated digits used later to encode data. Such a random sequence must be unpredictable for an adversary wanting to break the code, and here we touch a fundamental question concerning the nature of randomness. If all physical processes are uniquely determined by their initial conditions, and the only cause of unpredictability is our inability to determine them with arbitrary precision, or a lack of detailed knowledge of the actual conditions that can influence their time evolution, then the security can be compromised if an adversary finds finer methods to predict outcomes. On the other hand, if there are processes that are “intrinsically” random, i.e. random by their nature and not due to gaps in our knowledge, unconditionally secure coding schemes are conceivable. The two attitudes concerning the nature of randomness mentioned above can be dubbed epistemic and ontic. Both agree that we observe randomness (indeterminacy, unpredictability) in nature, but differ in identifying the source of the phenomenon.
The first claims that the world is basically deterministic, and the only way in which a random behavior demanding probabilistic description appears is due to a lack of knowledge of the actual state of the observed system or of the details of its interaction with the rest of the universe. In contrast, according to the second, the world is nondeterministic: randomness is its intrinsic property, independent of our knowledge and resistant to attempts aiming at circumventing its consequences by improving the precision of our observations. In other words, “intrinsic” means that this kind of randomness cannot be understood in terms of a deterministic “hidden variable” model. The debate on the epistemic and ontic nature of randomness can be traced back to the pre-Socratic beginnings of European philosophy. For the early atomists, Leucippus[^1] and Democritus[^2], the world was perfectly deterministic. Any occurrence of chance is a consequence of our limited abilities[^3]. One century later Epicurus took the opposite side. To accommodate an objective chance, the deterministic motion of atoms must be interrupted, without a cause, by “swerves”. Such an indeterminacy then propagates to the macroscopic world. The main motivation was to explain, or at least to give room for, human free will, hardly imaginable in a perfectly deterministic world[^4]. It should be clear, however, that a purely random nature of human actions is as far from free will as the latter is from a completely deterministic process. A common feature of both extreme cases, of pure randomness and of strict determinism, is the lack of any possibility to control or influence the course of events. Such a possibility is definitely an indispensable component of free will. The ontological status of randomness is thus here irrelevant, and the discussion whether “truly random” theories (as quantum mechanics supposedly is) can “explain the phenomenon of free will” is pointless.
It does not mean that free will and intrinsic randomness problems are not intertwined. On one side, as we explain later, the assumption that we may perform experiments in which we can freely choose what we measure is an important ingredient in arguing that violation of Bell-like inequalities implies “intrinsic randomness” of quantum mechanics. On the other side, as strict determinism in fact precludes free will, intrinsic randomness seems to be a necessary condition for its existence. But we need more to produce a condition that is sufficient. An interesting recent discussion of connections between free will and quantum mechanics may be found in Part I of [@suarezbook13]. In [@gisin13] and [@brassard13] the many-worlds interpretation of quantum mechanics, which is sometimes treated as a cure against the odds of orthodox quantum mechanics, is either dismissed as a theory that could accommodate free will [@gisin13] or, when taken seriously [@brassard13], admits the possibility that free will might be a mere illusion. In any case it is clear that one needs much more than any kind of randomness to understand how free will appears. In [@suarez13] the most radical attitude to the problem (apparently present also in [@gisin13]) is that “not all that matters for physical phenomena is contained in space-time”.

Randomness in classical physics
-------------------------------

A seemingly clear distinction between the two possible sources of randomness outlined in the previous section becomes less obvious if we try to make the notion of determinism more precise. Historically, its definition usually had a strong epistemic flavor.
Probably the most famous characterization of determinism is that of Pierre Simon de Laplace [@Laplace14]: ‘*Une intelligence qui, pour un instant donné, connaîtrait toutes les forces dont la nature est animée, et la situation respective des êtres qui la composent, si d’ailleurs elle était assez vaste pour soumettre ces données à l’analyse, embrasserait dans la même formule les mouvemens des plus grands corps de l’univers et ceux du plus léger atome : rien ne serait incertain pour elle, et l’avenir comme le passé, serait présent à ses yeux.[^5]*’ Two hundred years later Karl Raimund Popper writes ‘We can ... define ‘scientific’ determinism as follows: The doctrine of ‘scientific’ determinism is the doctrine that the state of any closed physical system at any given future instant of time can be predicted, even from within the system, with any specified degree of precision, by deducing the prediction from theories, in conjunction with initial conditions whose required degree of precision can always be calculated (in accordance with the principle of accountability) if the prediction task is given’ [@Popper82]. By contraposition thus, unpredictability implies indeterminism. If we now equate indeterminism with the existence of randomness, we see that a sufficient condition for the latter is unpredictability. But unpredictable events can equally well be those about which we do not have enough information and those that are “random by themselves”. Consequently, as should have been obvious, Laplacean-like descriptions of determinism are of no help when we look for sources of randomness. Let us thus simply say that a course of events is deterministic if there is only one future way for it to develop. Usually we may also assume that its past history is unique. In such cases the only kind of randomness is the epistemic one. As an exemplary theory describing such situations one usually invokes classical mechanics.
Arnol’d in his treatise on ordinary differential equations, after adopting the very definition of determinism advocated above[^6], writes: “Thus for example, classical mechanics considers the motion of systems whose past and future are uniquely determined by the initial positions and velocities of all points of the system”[^7]. The same can be found in his treatise on classical mechanics[^8]. He also gives a kind of justification, “It is hard to doubt this fact, since we learn it very early”[^9]. But what he really means is that the state of a mechanical system is uniquely determined by the positions and momenta of its constituents: “one can imagine a world, in which to determine the future of a system one must also know the acceleration at the initial moment, but experience shows us that our world is not like this”[^10]. This is clearly exposed in another classical mechanics textbook, Landau and Lifschitz’s *Mechanics*: “If all the co-ordinates and velocities are simultaneously specified, it is known from experience that the state of the system is completely determined and that its subsequent motion can, in principle, be calculated. Mathematically, this means that, if all the co-ordinates $q$ and velocities $\dot{q}$ are given at some instant, the accelerations $\ddot{q}$ at that instant are uniquely defined”[^11]. Apparently, also here the “experience” concerns only the observation that positions and velocities, and not higher time-derivatives of them, are sufficient to determine the future. In such a theory there are no random processes. Everything is in fact completely determined and can be predicted with the desired accuracy once we improve our measuring and computing devices. Statistical physics, which is based on classical mechanics, is a perfect example of an indeterministic theory where measurable quantities like pressure or temperature are determined by mean values of microscopic ‘hidden variables’, for example the positions and momenta of gas particles.
These hidden variables, however, are completely determined at each instant of time by the laws of classical mechanics, and with an appropriate effort can be, in principle, measured and determined. What makes the theory ‘indeterministic’ is the practical impossibility to follow the trajectories of individual particles because of their number and/or the sensitiveness to changes of initial conditions. In fact such a sensitiveness was pointed to as a source of chance by Poincaré[^12] and Smoluchowski[^13] soon after modern statistical physics was born, but it is hard to argue that this gives chance an ontological status. It is, however, worth mentioning that Poincaré was aware that randomness might have not only an epistemic character. In the above cited Introduction to his [*Calcul des probabilités*]{} he states ‘*Il faut donc bien que le hasard soit autre chose que le nom que nous donnons à notre ignorance*’[^14], (‘So it must be well that chance is something other than the name we give our ignorance’[^15]). Still, the very existence of deterministic chaos implies that classical mechanics is in general unpredictable in any practical sense. The technical question of how important this unpredictability can be is, actually, the subject of intensive studies in the last decades (for recent monographs see [@Sanjuan17; @Sanjuan16]). It is commonly believed (and consistent with the above cited descriptions of determinism in mechanical systems) that on the mathematical level the deterministic character of classical mechanics takes the form of Newton’s Second Law $$\label{N2} m\frac{d^2\mathbf{x}(t)}{dt^2}=\mathbf{F}(\mathbf{x}(t),t),$$ where the second derivatives of the positions, $\mathbf{x}(t)$, are given in terms of some (known) forces $\mathbf{F}(\mathbf{x}(t),t)$. But, to be able to determine uniquely the fate of the system we need something more than merely the initial positions $\mathbf{x}(0)$ and velocities $d\mathbf{x}(t)/dt|_{t=0}$.
To guarantee the uniqueness of the solutions of Newton’s equation (\[N2\]), we need some additional assumptions about the forces $\mathbf{F}(\mathbf{x}(t),t)$. According to the Picard Theorem[^16] [@Coddington55], an additional technical condition that is sufficient for the uniqueness is the Lipschitz condition, limiting the variability of the forces with respect to the positions. Breaking it opens the possibility of initial positions and velocities that do not determine uniquely the future trajectory. A world in which there are systems governed by equations admitting non-unique solutions is not deterministic according to our definition. We can either defend determinism in classical mechanics by showing that such pathologies never occur in our world, or agree that classical mechanics admits, at least in some cases, a nondeterministic evolution. Each choice is hard to defend. In fact it is relatively easy to construct examples of more or less realistic mechanical systems for which the uniqueness is not guaranteed. Norton [@Norton07] (see also [@Norton08]) provided a model of a point particle sliding on a curved surface under the gravitational force, for which the Newton equation reduces to $\frac{d^2r}{dt^2}=\sqrt{r}$. For the initial conditions $r(0)=0, \frac{dr}{dt}|_{t=0}=0$, the equation has not only the obvious solution $r(t)=0$ but, in addition, a one-parameter family given by $$r(t)=\left\{ \begin{array}{cl} 0, & \mathrm{for\ } t\le T \\ \frac{1}{144}(t-T)^4, & \mathrm{for\ } t\ge T \end{array} \right.$$ where $T$ is an arbitrary parameter. For a given $T$ the solution describes the particle staying at rest at $r=0$ until $T$ and starting to accelerate at $T$. Since $T$ is arbitrary we cannot predict when the change from the state of rest to the one with a non-zero velocity takes place.
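This non-uniqueness can be checked directly. The following sketch (plain Python; the function names are ours, for illustration only) verifies numerically that every member of the one-parameter family above, for any choice of the waiting time $T$, satisfies $\ddot r=\sqrt{r}$ with the same initial data $r(0)=\dot r(0)=0$:

```python
import math

# Sketch: Norton's equation r'' = sqrt(r) with initial data r(0) = 0,
# r'(0) = 0 is solved by a whole family of trajectories, one for each
# "waiting time" T >= 0.

def r(t, T):
    """Delayed solution: at rest until time T, then (t-T)^4/144."""
    return 0.0 if t <= T else (t - T) ** 4 / 144.0

def r_dot(t, T):
    return 0.0 if t <= T else (t - T) ** 3 / 36.0

def r_ddot(t, T):
    return 0.0 if t <= T else (t - T) ** 2 / 12.0

def residual(t, T):
    """|r'' - sqrt(r)|, which vanishes when r solves the equation."""
    return abs(r_ddot(t, T) - math.sqrt(r(t, T)))

# Every T gives a genuine solution with the SAME initial conditions:
for T in (0.0, 1.0, 2.5, 10.0):
    assert r(0.0, T) == 0.0 and r_dot(0.0, T) == 0.0
    assert all(residual(0.1 * k, T) < 1e-12 for k in range(200))
```

The failure of uniqueness is possible precisely because $\sqrt{r}$ violates the Lipschitz condition at $r=0$, so the Picard Theorem does not apply there.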
The example triggered discussions [@Korolev07; @Korolev07a; @Kosyakov08; @Malament08; @Roberts09; @Wilson09; @Zinkernagel10; @Fletcher12; @Laraudogoitia13], raising several problems, in particular its physical relevance in connection with the simplifications and idealizations made to construct it. However, they do not seem to be different from those commonly adopted in descriptions of similar mechanical situations, where the answers given by classical mechanics are treated as perfectly adequate. At this point classical mechanics is not a complete theory of the part of physical reality it aspires to describe. We are confronted with the necessity to supplement it by additional laws dealing with situations where Newton’s equations do not possess unique solutions. The explicit assumption of the incompleteness of classical mechanics has its history, astonishingly longer than one would expect. Possible consequences of the non-uniqueness of solutions attracted the attention of Boussinesq, who in his *Mémoire* for the French Academy of Moral and Political Sciences writes: ‘*...les phénomènes de mouvement doivent se diviser en deux grandes classes. La première comprendra ceux où les lois mécaniques exprimées par les équations différentielles détermineront à elles seules la suite des états par lesquels passera le système, et où, par conséquent, les forces physico-chimiques ne laisseront aucun rôle disponible à des causes d’une autre nature.
Dans la seconde classe se rangeront, au contraire, les mouvements dont les équations admettront des intégrales singulières, et dans lesquels il faudra qu’une cause distincte des forces physico-chimiques intervienne, de temps en temps ou d’une manière continue, sans d’ailleurs apporter aucune part d’action mécanique, mais simplement pour diriger le système à chaque bifurcation d’intégrales qui se présentera.*’[^17] Boussinesq does not introduce any probabilistic ingredient into the reasoning, but definitely, there is room to go from mere indeterminism to the awaited ‘intrinsic randomness’. To this end, however, we need to postulate an additional law supplementing classical mechanics by attributing probabilities to different solutions of non-Lipschitzian equations[^18]. It is hard to see how to discover (or even look for) such a law, and how to check its validity. What we find interesting is the explicit introduction into the theory of a second kind of motion. It is strikingly similar to what we encounter in quantum mechanics, where to explain all observed phenomena one has to introduce two kinds of kinematics: a perfectly deterministic Schrödinger evolution and indeterministic state reductions during measurements. The similarity consists in the fact that deterministic (Schrödinger, Newton) equations are not sufficient to describe the full evolution: they have to be completed, for instance by a probabilistic description of the results of measurements in quantum mechanics, or by a probabilistic choice among non-unique solutions in Norton’s example[^19]. It is interesting to note that the ideas of Boussinesq have in fact been a subject of intensive discussion in recent years in the philosophy of science within the so called “second Boussinesq debate”. The first Boussinesq debate took place in France between 1874 and 1880. As stated by T.M.
Mueller [@Mueller15]: “In 1877, a young mathematician named Joseph Boussinesq presented a *mémoire* to the *Académie des Sciences*, which demonstrated that some differential equations may have more than one solution. Boussinesq linked this fact to indeterminism and to a possible solution to the free will versus determinism debate.” The more recent debate discovered, in fact, that some hints of the Boussinesq ideas can also be found in the works of James Clerk Maxwell [@Maxwell]. The views of Maxwell, important in this debate and not well known among physicists, show that he was very much influenced by the work of Joseph Boussinesq and Adhémar Jean Claude Barré de Saint-Venant [@Mueller15]. What is also quite unknown to many scientists is that Maxwell learned statistical ideas from Adolphe Quetelet, a Belgian mathematician considered to be one of the founders of statistics. An excellent account of the concepts of determinism versus indeterminism, of the notion of uncertainty, also associated with the idea of randomness, as well as of the different meanings that randomness has for different audiences may be found in the set of blogs of Miguel A. F. Sanjuán [@blog1; @blog2; @blog3] and in the outstanding book [@Dahan92]. These references also cover many details of the first and the recent Boussinesq debates. A Polish text by Koleżyński [@Polish] discusses related quotations from Boussinesq, Maxwell and Poincaré in the philosophical context of determinism. Of course, to a great extent the Boussinesq debate was stimulated by the attempts toward the understanding of nonlinear dynamics and hydrodynamics in general, and the phenomenon of turbulence in particular. A nice review of various approaches and ideas up to the 1970s is presented in the lecture by Marie Farge [@Farge]. The contemporary approach to turbulence is very much related to the Boussinesq suggestions, and the use of non-Lipschitzian, i.e.
nondeterministic, hydrodynamics has been developed in recent years by Falkovich, Gawędzki, Vergassola and others (for outstanding reviews see [@Falkovich01; @Gawedz-rec]). The history of these works is nicely described in the presentation [@Gawedz1], while the most important particular articles include the series of papers by Gawędzki and collaborators [@Gawedz1; @Gawedz2], Vanden Eijnden [@Vanden1; @Vanden2], and Le Jan and Raimond [@Lejan1; @Lejan2; @Lejan3].

Randomness in quantum physics
-----------------------------

The chances of proving the existence of ‘intrinsic randomness’ in the world seem to be much higher when we switch to quantum mechanics. Born’s interpretation of the wave function implies that we can count only on a probabilistic description of reality; therefore quantum mechanics is inherently probabilistic. Obviously, one should ask what the source of randomness in quantum physics is. As pointed out by one of the referees: “In my view all the sources of randomness originate because of interaction of the system (and/or the measurement apparatus) with an environment. The randomness that affects pure states due to measurement is, in my view, due to the interaction of the measurement apparatus with an environment. The randomness that affects open systems (those that directly interact with an environment) is again due to environmental effects.” This point of view is, of course, as many physicists consider, parallel to the contemporary theory of quantum measurements and the collapse of the wave function [@WheelerZurek83; @Zurek03; @Zurek09]. Still, the end result of such an approach to randomness and quantum measurements is that Born’s rule and the traditional Copenhagen interpretation are not far from being rigorously correct. At the same time, quantum mechanics viewed from the device independent point of view, i.e.
by regarding only probabilities of outcomes of individual or correlated measurements, incorporates randomness which cannot be reduced to our lack of knowledge or the imperfectness of our measurements (this will be discussed in more detail below). In this sense, for the purpose of the present discussion, the detailed form of the major source of the randomness is not relevant, as long as this randomness leads to contextual results of measurements, or nonlocal correlations. Let us repeat: both the pure Born rule and the advanced theory of quantum measurement imply that the measurement outcomes (or the expectation value of an observable) may have some randomness. However, *a priori* there are no obvious reasons for leaving the Democritean ground and switching to the Epicurean view. It might be so that quantum mechanics, just as statistical physics, is an incomplete theory admitting deterministic hidden variables whose values are beyond our control. To be precise, one may ask how “intrinsic” this randomness is and whether it can be considered an epistemic one. To illustrate it further, we consider two different examples in the following.

### Contextuality and randomness

Let us consider the case of a spin-$s$ particle. If the spin of the particle is measured along the $z$-direction, there are $2s+1$ possible outcomes, each appearing with a certain probability. Say the outcomes are labeled by $m$, where $m \in \{-s,-s+1,\ldots,s-1,s\}$, and the corresponding probabilities by $\{p_m\}$. It means that, with many repetitions, the experimenter will observe an outcome $m$ with a frequency approaching $p_m$, as predicted by the Born rule of quantum mechanics. The outcomes contain some randomness as they appear probabilistically. Moreover, these probabilities are indistinguishable from classical probabilities.
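The measurement statistics just described can be sketched as a simple simulation. The spin-1 state below (its amplitudes) is an arbitrary assumed example, not taken from the text; the point is only that the observed frequencies of the outcomes $m$ approach the Born-rule probabilities $p_m=|c_m|^2$:

```python
import random
from collections import Counter

# Assumed example state of a spin-1 particle: amplitudes c_m for the
# S_z outcomes m = +1, 0, -1 (normalized so that sum |c_m|^2 = 1).
amplitudes = {+1: 0.6, 0: 0.0, -1: 0.8}
probs = {m: abs(c) ** 2 for m, c in amplitudes.items()}
assert abs(sum(probs.values()) - 1.0) < 1e-12   # normalization check

random.seed(0)
N = 100_000
outcomes = random.choices(list(probs), weights=list(probs.values()), k=N)
counts = Counter(outcomes)

# Observed frequencies approach the Born-rule probabilities p_m:
for m, p in probs.items():
    assert abs(counts[m] / N - p) < 0.01
```

As the text stresses, these frequencies alone are indistinguishable from those of a classical random process, which is why a single-observable experiment cannot rule out an underlying deterministic model.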
Therefore, the randomness here could be explained with the help of a deterministic hidden-variable model[^20], and it is simply a consequence of the ignorance of the hidden variable(s). But, as we stress in the definition in the Introduction, intrinsic randomness of quantum mechanics does not exclude the existence of hidden variable models that can describe the outcomes of measurements. Obviously, if the system is in the pure state corresponding to $m_0$, the outcome of the measurement of the $z$-component of the spin will be deterministic: $m_0$ with certainty. If we measured the $x$-component of the spin, however, the result would again be non-deterministic and described only probabilistically. In fact, this is an instance of the existence of the so called non-commuting observables in quantum mechanics, which cannot be measured simultaneously with certainty. The uncertainty of measurements of such non-commuting observables is quantitatively bounded from below by the generalized Heisenberg uncertainty principle [@Messiah14; @Tannoudji91]. One of the important consequences of the existence of non-commuting observables is the fact that quantum mechanics is [*contextual*]{}, as demonstrated in the famous Kochen-Specker theorem ([@Kochen67], for a philosophical discussion see [@Bub99; @Isham98]). The Kochen–Specker (KS) theorem [@Kochen67], also known as the Bell-Kochen-Specker theorem [@Bell66], is a “no go” theorem [@Bub99], proved by J.S. Bell in 1966 and by S.B. Kochen and E. Specker in 1967. The KS theorem places certain constraints on the permissible types of hidden variable theories, which try to explain the randomness of quantum mechanics as an apparent randomness resulting from a lack of knowledge of hidden variables in an underlying deterministic model. The version of the theorem proved by Kochen and Specker also gave an explicit example of this constraint in terms of a finite number of state vectors (cf. [@Peres95]).
The KS theorem deals with single quantum systems and is thus a complement to Bell’s theorem, which deals with composite systems. The KS theorem exhibits a contradiction between two basic assumptions of hidden variable theories intended to reproduce the results of quantum mechanics: that all hidden variables corresponding to quantum mechanical observables have definite values at any given time, and that the values of those variables are intrinsic and independent of the measurement device. An immediate contradiction can be caused by non-commuting observables, which are allowed by quantum mechanics. If the Hilbert space dimension is at least three, it turns out to be impossible to simultaneously embed all the non-commuting sub-algebras of the algebra of these observables in one commutative algebra, which is expected to represent the classical structure of the hidden variable theory[^21]. The Kochen–Specker theorem excludes hidden variable theories that require elements of physical reality to be non-contextual (i.e. independent of the measurement arrangement). As succinctly worded by Isham and Butterfield [@Isham98], the Kochen–Specker theorem “asserts the impossibility of assigning values to all physical quantities whilst, at the same time, preserving the functional relations between them.” In a more recent approach to contextuality, i.e. the dependence of measurement results on the context in which they are measured, one proves that non-contextual hidden variable theories lead to probabilities of measurement outcomes that fulfill certain inequalities [@Cabello08], similar to Bell’s inequalities for composite systems. More specifically, there are Bell-type inequalities for non-contextual theories that are violated by any quantum state.
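The impossibility of a non-contextual value assignment can be made concrete with the well-known Peres–Mermin “magic square”, a standard example we borrow here for illustration (it is not described above). Quantum mechanics arranges nine two-qubit observables in a $3\times 3$ grid so that the product of the three observables in each row is $+\mathbb{1}$ and in each column is $+\mathbb{1}$, except the last column, whose product is $-\mathbb{1}$. A brute-force sketch shows that no fixed assignment of values $\pm 1$ to the nine observables can satisfy all six product constraints, for any state:

```python
from itertools import product

# Sketch: a noncontextual hidden variable model assigns a fixed value
# v[i] in {+1, -1} to each of the nine observables of the Peres-Mermin
# square (indexed row by row).  Quantum constraints: each row multiplies
# to +1; columns multiply to +1, +1, -1.

def satisfies_all(v):
    rows_ok = all(v[3*r] * v[3*r + 1] * v[3*r + 2] == +1 for r in range(3))
    cols_ok = (v[0] * v[3] * v[6] == +1 and
               v[1] * v[4] * v[7] == +1 and
               v[2] * v[5] * v[8] == -1)
    return rows_ok and cols_ok

# Check all 2^9 = 512 possible value assignments:
assignments = list(product([+1, -1], repeat=9))
assert not any(satisfies_all(v) for v in assignments)
```

The contradiction is already visible by parity: multiplying the three row constraints gives the product of all nine values as $+1$, while multiplying the three column constraints gives $-1$; the brute-force check merely confirms it.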
Many of these inequalities between the correlations of compatible measurements are particularly suitable for testing this state-independent violation in an experiment, and indeed violations have been experimentally demonstrated [@Kirchmair09; @Bartosik09]. Quantifying and characterizing the contextuality of different physical theories is particularly elegant in a general graph-theoretic framework [@Cabello14; @Acin15a]. This novel approach to contextuality is on the one hand parallel to the earlier observation by N. Bohr [@Bohr35] that EPR-like paradoxes may occur in quantum systems without the need for entangled composite systems. On the other hand, it offers a way to certify the intrinsic randomness of quantum mechanics. If Cabello-like inequalities are violated in an experiment, it implies that there exists no non-contextual deterministic hidden variable theory that can reproduce the results of this experiment, [*ergo*]{} the results are intrinsically random. Unfortunately, this kind of randomness certification is not very secure, since it explicitly depends on the non-commuting observables that are measured, and in effect is not device independent. ### Nonlocality and randomness It is important to extend the situation beyond the one mentioned above to multi-party systems and local measurements. For example, consider a multi-particle system with each particle placed in a separate region. Now, instead of observing the system as a whole, one may be interested in observing only a part of it, i.e. performing local measurements. Given two important facts, that QM allows superposition and that no quantum system can be absolutely isolated, spatially separated quantum systems can be non-trivially correlated, beyond the classical correlations allowed by classical mechanics. In such a situation, the information contained in the whole system is certainly larger than the sum of that of the individual local systems. 
The information residing in the nonlocal correlations cannot be accessed by locally observing the individual particles of the system. It means that local descriptions cannot completely describe the total state of the system. Therefore, the outcomes of any local observation are bound to incorporate randomness in the presence of nonlocal correlations, as long as we do not have access to the global system or ignore the nonlocal correlations. In technical terms, the randomness appearing in the [*local*]{} measurement outcomes cannot be understood in terms of a deterministic [*local*]{} hidden variable model and a “true” local indeterminacy is present[^22]. Moreover, randomness on the local level appears even if the global state of the system is pure and completely known – the necessary condition for this is just entanglement of the pure state in question. That is typically referred to as “intrinsic” randomness in the literature, and that is the point of view we adopt in this report. ![\[fig:BellEx\] Schematic of a two-party Bell-like experiment. The experimenters Alice and Bob are separated and cannot communicate, as indicated by the black barrier. The measurement settings and outcomes of Alice and Bob are denoted by $x, \ y$ and $a, \ b$, respectively.](BellEx.pdf){width="35.00000%"} Before we move further in discussing quantum randomness in the presence of quantum correlations, let us make a short detour through the history of the foundations of quantum mechanics. The possibility of nonlocal correlations, also known as quantum entanglement, was initially put forward with the question whether quantum mechanics respects local realism, by Einstein, Podolsky and Rosen (EPR) [@EPR35]. According to EPR, the two main properties any reasonable physical theory should satisfy are realism and locality. 
The first one states that if a measurement outcome of a physical quantity, pertaining to some system, can be predicted with unit probability, then there must exist ‘an element of reality’ corresponding to this physical quantity, having a value equal to the predicted one at the moment of measurement. In other words, the values of observables, revealed in measurements, are intrinsic properties of the measured system. The second one, locality, demands that elements of reality pertaining to one system cannot be affected by measurements performed on another, sufficiently distant system. Based on these two essential ingredients, EPR studied the measurement correlations between two entangled particles and concluded that the wave function describing the quantum state “does not provide a complete description of physical reality”. Thereby they argued that quantum mechanics is an incomplete but effective theory and conjectured that a complete theory describing the physical reality is possible. In these discussions, one needs to clearly understand what locality and realism mean. In fact, they could be replaced with no-signaling and determinism, respectively. The no-signaling principle states that infinitely fast communication is impossible. The relativistic limitation of speed, by the velocity of light, is just a special case of the no-signaling principle. If two observers are separated and perform space-like separated measurements (as depicted in Fig. \[fig:BellEx\]), then the principle ascertains that the statistics seen by one observer, when measuring her particle, are completely independent of the measurement choice made on the space-like separated other particle. Clearly, if this were not the case, one observer could, by changing her measurement choice, make a noticeable change on the other side and thereby instantaneously communicate with infinite speed. 
Determinism, the other important ingredient, implies that correlations observed in an experiment can be decomposed as mixtures of deterministic ones, i.e., occurring in situations where all measurements have deterministic outcomes. A deterministic theory accepts the fact that the apparently random outcomes in an experiment, like in coin tossing, are only consequences of ignorance of the actual state of the system. Therefore, each run of the experiment does have an *a priori* definite result, but we only have access to averages. In 1964, Bell showed that all theories that satisfy locality and realism (in the sense of EPR) are incompatible with quantum mechanics [@Bell64; @Bell66]. In a simple experiment, mimicking Bell’s scenario, two correlated quantum particles are sent to two spatially separated measuring devices (see Fig. \[fig:BellEx\]), and each device can perform two different measurements with two possible outcomes. The measurement processes are space-like separated and no communication is possible while they are performed. With this configuration a local-realistic model gives bounds on the correlations between the outcomes observed in the two measurement devices, called Bell inequalities [@Bell64]. In other words, the impossibility of instantaneous communication (no-signaling) between spatially separated systems together with full local determinism imply that all correlations between measurement results must obey the Bell inequalities. Strikingly, these inequalities are violated with correlated (entangled) quantum particles, and therefore have no explanation in terms of deterministic local hidden variables. In fact, the correlations allowed by no-signaling and determinism are exactly the same as those predicted by the EPR model, and the two sets of assumptions are equivalent. 
The experimental violations of the Bell inequalities in 1972 [@FreedmanPRL1972], in 1981 [@Aspect81] and in 1982 [@Aspect82], along with the recent loophole-free Bell-inequality violations [@Hensen15; @Giustina15; @Shalm15], confirm that any local-realistic theory is unable to predict the correlations observed in quantum mechanics. It immediately implies that either no-signaling or local determinism has to be abandoned. For most physicists, it is preferable to abandon local determinism and save no-signaling. Assuming that nature respects the no-signaling principle, any violation of a Bell inequality thus implies that the outcomes could not be predetermined in advance. Thus, once the no-signaling principle is accepted to be true, the experimental outcomes, due to local measurements, cannot be deterministic and therefore are random. Of course, a valid alternative is to abandon the no-signaling principle, allow for non-local hidden variables, but save determinism, as is done for instance in Bohm’s theory [@Bohm51; @Bohm52]. In any case, some kind of non-locality is needed to explain Bell correlations. One can also abandon both no-signaling and local determinism: such a sacrifice is, however, hard to accept for the majority of physicists, and scientists in general. Another crucial assumption in Bell experiments is that the measurements performed with the local measurement devices have to be chosen “freely”. In other words, the measurement choices cannot, in principle, be predicted in advance. If the free-choice condition is relaxed and the chosen measurements could be predicted in advance, then it is easy to construct a no-signaling, but deterministic theory that leads to Bell violations. It has been shown in [@Hall10; @Koh12] that one does not have to give up measurement independence completely to violate Bell inequalities. 
Even when the free-choice condition is relaxed only to a certain degree, the Bell inequalities can be maximally violated by a no-signaling deterministic model [@Hall10]. However, in Bell-like experiment scenarios where the local observers are separated, it is very natural to assume that the choices of the experiments are completely free (this is often referred to as the free-will assumption). Therefore, the Bell-inequality violation in the quantum regime, together with the no-signaling principle, implies that local measurement outcomes are “intrinsically” random. The lesson that we should learn from the above discussion is that the question raised by Einstein, Podolsky and Rosen found its operational meaning in Bell’s theorem, which showed the incompatibility of hidden-variable theories with quantum mechanics [@Bell64], [@Bell66]. Experiments could now decide about the existence or non-existence of nonlocal correlations. Exhibiting non-local correlations in an experiment gave, under the assumption of no-signaling, a proof of the non-deterministic nature of quantum mechanical reality, and allowed certifying the existence of truly random processes. These experiments require, however, random adjustments of the measuring devices [@Bell64]. There must exist a truly random process controlling their choice. This, ironically, closes an unavoidable *circulus vitiosus*: we can check the indeterministic character of physical reality only by assuming that it is, in fact, indeterministic. Quantum Randomness and Physics ============================== In this section we consider randomness from the point of view of physics, or in particular, quantum physics. In doing so, we first briefly introduce quantum measurements, nonlocality and information-theoretic measures of randomness. Then we outline how a quantum feature such as nonlocality can be exploited not only to generate “true” randomness but also to certify, expand and amplify randomness. 
Quantum measurements -------------------- According to the standard textbook approach, quantum mechanics (QM) is an inherently probabilistic theory (cf. [@Messiah14; @Tannoudji91; @WheelerZurek83]): the predictions of QM concerning results of measurements are typically probabilistic. Only in very rare instances do measurements give deterministic outcomes – this happens when the system is in an eigenstate of the observable to be measured. Note that in general, even if we have full information about the quantum mechanical state of the system, the outcome of the measurements is in principle random. The paradigmatic example is provided by a $d$-state system (a qudit), whose space of states is spanned by the states $|1\rangle$, $|2\rangle$,..., $|d\rangle$. Suppose that we know the system is in the superposition state $|\phi\rangle=\sum_{j=1}^d\alpha_j|j\rangle$, where $\alpha_j$ are complex probability amplitudes and $\sum_{j=1}^d|\alpha_j|^2 =1$, and we ask whether it is in a state $|i\rangle$. To find out, we measure the observable $\hat P=|i\rangle \langle i|$ that projects on the state $|i\rangle$. The result of such a measurement will be one (yes, the system is in the state $|i\rangle$) with probability $|\alpha_i|^2$ and zero with probability $1-|\alpha_i|^2=\sum_{j \ne i}|\alpha_j|^2$. We do not want to enter here deeply into the subject of the foundations of QM, but we want to remind the readers of the “standard” approach to QM. ### Postulates of QM The postulates of QM for simple mechanical systems (single or many particle), as given in [@Tannoudji91], read: - [**First Postulate.**]{} At a fixed time $t_0$, the state of a physical system is defined by specifying a wave function $\psi(x; t_0)$, where $x$ represents the collection of parameters specifying the state. - [**Second Postulate.**]{} Every measurable physical quantity $Q$ is described by an operator $\hat Q$; this operator is called an observable. 
- [**Third Postulate.**]{} The only possible result of the measurement of a physical quantity $Q$ is one of the eigenvalues of the corresponding observable $\hat Q$. - [**Fourth Postulate (non-degenerate case).**]{} When the physical quantity $Q$ is measured on a system in the normalized state $\psi$, the probability $P(q_n)$ of obtaining the non-degenerate eigenvalue $q_n$ of the corresponding observable $\hat Q$ is $$P(q_n) = |\int dx \ \varphi_n^*(x)\psi(x)|^2,$$ where $ \varphi_n$ is the normalized eigenvector of $\hat Q$ associated with the eigenvalue $q_n$. - [**Fifth Postulate (collapse).**]{} If the measurement of the physical quantity $Q$ on the system in the state $\psi$ gives the result $q_n$, the state of the system immediately after the measurement is $\varphi_n$. - [**Sixth Postulate (time evolution).**]{} The time evolution of the wave function $\psi(x; t)$ is governed by the Schrödinger equation $$i\hbar \frac{\partial \psi}{\partial t}={\hat H}\psi,$$ where $\hat H$ is the observable associated with the total energy of the system. - [**Seventh Postulate (symmetrization).**]{} When a system includes several identical particles, only certain wave functions can describe its physical states (this leads to the concept of bosons and fermions). For electrons (which are fermions), the wave function must change sign whenever the coordinates of two electrons are interchanged. For hydrogen atoms (regarded as composite bosons) the wave function must not change whenever the coordinates of two bosons are interchanged. ### Measurement theory Evidently, the inherent randomness of QM is associated with the measurement processes (Fourth and Fifth Postulates). Quantum measurement theory has been a subject of intensive studies and a long debate, see e.g., [@WheelerZurek83]. 
In particular, the question of the physical meaning of the wave function collapse has been partially solved only in the last 30 years by analyzing the interactions of the measured system with the environment (reservoir) describing the measuring apparatus (see the seminal works of Zurek [@Zurek03; @Zurek09]). In the abstract formulation of the early days of QM, one considered von Neumann measurements [@Neumann55], defined in the following way. Let the observable $\hat Q$ have (possibly degenerate) eigenvalues $q_n$ and let $\hat E_n$ denote the projectors on the corresponding invariant subspaces (one dimensional for non-degenerate eigenvalues, $k$-dimensional for $k$-fold degenerate eigenvalues). Since the invariant subspaces are orthogonal, we have $\hat E_n\hat E_m=\delta_{nm}\hat E_n$, where $\delta_{mn}$ is the Kronecker delta. If $\hat P_\psi$ denotes the projector which describes the state of the system, the measurement outcome corresponding to the eigenvalue $q_n$ of the observable appears with probability $p_ n={\rm Tr}(\hat P_\psi\hat E_n)$, where ${\rm Tr}(.)$ denotes the matrix trace operation. Moreover, after the measurement the system is found in the state $\hat E_n\hat P_\psi\hat E_n/p_n$ with probability $p_n$. In contemporary quantum measurement theory the measurements are generalized beyond the von Neumann projective ones. To define the so-called positive-operator valued measures (POVMs), one still considers von Neumann measurements, but on the system plus an additional ancilla system [@Peres95]. 
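The projective measurement rule above can be sketched numerically. In this minimal illustration the qutrit state, the eigenvalue labels and the subspace split are assumed examples, not taken from the text; for a pure state, $p_n={\rm Tr}(\hat P_\psi\hat E_n)$ reduces to a sum of squared amplitudes over a subspace basis.

```python
# Hedged sketch of a von Neumann measurement: p_n = Tr(P_psi E_n).
# The qutrit state and the degenerate observable are assumed examples.
import math

# |psi> = (|1> + |2> + |3>)/sqrt(3) in an orthonormal basis {|n>}.
psi = [1 / math.sqrt(3)] * 3

# Observable Q with a non-degenerate eigenvalue q1 (subspace span{|1>})
# and a two-fold degenerate eigenvalue q2 (subspace span{|2>, |3>}).
subspaces = {"q1": [0], "q2": [1, 2]}

def born_probability(psi, basis_indices):
    # Tr(P_psi E_n) = sum_k |<k|psi>|^2 over the basis of the subspace.
    return sum(abs(psi[k]) ** 2 for k in basis_indices)

probs = {q: born_probability(psi, idx) for q, idx in subspaces.items()}
print(probs)  # the degenerate outcome q2 is twice as likely as q1
```

Note that the degeneracy enters only through the dimension of the invariant subspace, exactly as in the projector formula above.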
POVMs are defined by a set of Hermitian positive semidefinite operators $\{F_i\}$ on a Hilbert space $\mathcal{H}$ that sum to the identity operator, $$\sum_{i=1}^K F_i = \mathbb{I}_H.$$ This is a generalization of the decomposition of a (finite-dimensional) Hilbert space by a set of orthogonal projectors, $\{E_i\}$, defined for an orthogonal basis $\{\left|\phi_{i}\right\rangle\}$ by $$E_i=\left|\phi_{i}\right\rangle \left\langle\phi_{i}\right|,$$ hence, $$\sum_{i=1}^N E_i = \mathbb{I}_H, \quad E_i E_j = \delta_{i j} E_i$$ An important difference is that the elements of POVM are not necessarily orthogonal, with the consequence that the number $K$ of elements in the POVM can be larger than the dimension $N$ of the Hilbert space they act on. The post-measurement state depends on the way the system plus ancilla are measured. For instance, consider the case where the ancilla is initially a pure state $|0\rangle_B$. We entangle the ancilla with the system, taking $$|\psi\rangle_A |0\rangle_B \rightarrow \sum_i M_i |\psi\rangle_A |i\rangle_B,$$ and perform a projective measurement on the ancilla in the $\{|i\rangle_B\}$ basis. The operators of the resulting POVM are given by $$F_i = M_i ^\dagger M_i .$$ Since the $M_i$ are not required to be positive, there are an infinite number of solutions to this equation. This means that there are infinitely many different experimental apparatus giving the same probabilities for the outcomes. Since the post-measurement state of the system (expressed now as a density matrix) $$\rho_i = {M_i \rho M_i^\dagger \over {\rm tr}(M_i \rho M_i^\dagger)}$$ depends on the $M_i$, in general it cannot be inferred from the POVM alone. If we accept quantum mechanics and its inherent randomness, then it is possible in principle to implement measurements of an observable on copies of a state that is not an eigenstate of this observable, to generate a set of perfect random numbers. 
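The last remark can be made concrete with a toy simulation. This is a sketch under two stated assumptions: an ideal two-outcome projective measurement on fresh copies of the state, and a classical seeded pseudo-random generator standing in for the quantum process (a real device would of course not need the latter).

```python
# Hypothetical sketch: Born-rule sampling of an ideal "quantum coin".
# A qubit (|0> + |1>)/sqrt(2) measured in the computational basis gives
# each outcome with probability 1/2; `random` merely emulates nature here.
import math
import random

alpha0 = alpha1 = 1 / math.sqrt(2)   # amplitudes of (|0> + |1>)/sqrt(2)
p0 = abs(alpha0) ** 2                # Born rule: p(0) = |alpha0|^2

# Completeness of the projectors E0 = |0><0|, E1 = |1><1|:
assert abs(p0 + abs(alpha1) ** 2 - 1.0) < 1e-12

def quantum_coin(n, seed=0):
    """Simulate n projective measurements on fresh copies of the state."""
    rng = random.Random(seed)
    return [0 if rng.random() < p0 else 1 for _ in range(n)]

bits = quantum_coin(10_000)
print(sum(bits) / len(bits))  # close to 0.5, as for a fair coin
```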
Early experiments and commercial devices attempted to mimic a perfect coin with probability 1/2 of getting heads or tails. To this aim quantum two-level systems were used, for instance single photons of two orthogonal circular polarizations. If such photons are transmitted through a linear polarizer of arbitrary orientation, then they pass (do not pass) with probability 1/2. In practice, the generated numbers are never perfect, and randomness extraction is required to generate a good random output. The challenges of sufficiently approximating the ideal two-level scenario, and the complexity of detectors for single quantum systems, have motivated the development of other randomness generation strategies. In particular, continuous-variable techniques are now several orders of magnitude faster, and allow for randomness extraction based on known predictability bounds. See Section \[sec:RandTech\]. It is worth mentioning that the Heisenberg uncertainty relation [@Heisenberg27] and its generalized version, i.e., the Robertson–Schrödinger relation [@Robertson29; @Schrodinger30; @WheelerZurek83], often mentioned in the context of quantum measurements, signify how precisely two non-commuting observables can be measured on a quantum state. Quantitatively, for a given state $\rho$ and observables $X$ and $Y$, it gives a lower bound on the uncertainty when they are measured simultaneously, as $$\begin{aligned} \delta X^2 \delta Y^2 \geq \frac{1}{4} |\mbox{Tr} \rho \left[X,Y \right]|^2, \label{eq:RSUR}\end{aligned}$$ where $\delta X^2=\mbox{Tr}\rho X^2-(\mbox{Tr} \rho X)^2$ is the variance and $\left[X,Y \right]=XY-YX$ is the commutator. A non-vanishing $\delta X$ represents randomness in the measurement process, which may arise from non-commutativity (misalignment of the eigenbases) between state and observable, or may even appear due to classical uncertainty present in the state (i.e., not due to quantum superposition). In fact Eq. 
(\[eq:RSUR\]) does allow either $\delta X$ or $\delta Y$ to vanish, but not both simultaneously for a given state $\rho$ with $\left[X,Y \right]\neq 0$. However, when $\delta X$ vanishes, it is nontrivial to infer $\delta Y$, and vice versa. To overcome this drawback, the uncertainty relation has been extended to sum-uncertainty relations, both in terms of variances [@Maccone14] and of entropic quantities [@Beckner75; @Birula75; @Deutsch83; @Maassen88]. We refer to [@Coles15] for an excellent review on this subject. The entropic uncertainty relation was also considered in the presence of quantum memory [@Berta10]. It has been shown that, in the presence of quantum memory, any two observables can simultaneously be measured with arbitrary precision. Therefore the randomness appearing in the measurements can be compensated by the side information stored in the quantum memory. As we mentioned in the previous section, Heisenberg uncertainty relations are closely related to the contextuality of quantum mechanics at the level of single systems. Non-commuting observables are indeed responsible for the fact that there exist no non-contextual hidden variable theories that can explain all results of quantum mechanical measurements on a given system. The inherent randomness considered in this work stems from the Born rule of quantum mechanics, irrespective of whether more than one observable is being simultaneously measured or not. Furthermore, the existence of nonlocal correlations (and quantum correlations) in the quantum domain gives rise to the possibility of, in a sense, a new form of randomness. In the following we consider such randomness and its connection to nonlocal correlations. Before we do so, we shall discuss nonlocal correlations in more detail. Nonlocality ----------- ### Two-party nonlocality Let us now turn to nonlocality, i.e. the property of correlations that violate Bell inequalities [@Bell64; @Brunner14]. 
As we will see below, nonlocality is intimately connected to intrinsic quantum randomness. In the traditional scenario a Bell nonlocality test relies on two spatially separated observers, say Alice and Bob, who perform space-like separated measurements on a bipartite system possibly produced by a common source. For a schematic see Fig. \[fig:BellEx\]. Suppose Alice’s measurement choices are $x\in \mathcal{X}=\{1,\ldots,M_A\}$ and Bob’s choices $y\in \mathcal{Y}=\{1,\ldots,M_B\}$, with the corresponding outcomes $a\in \mathcal{A}=\{1,\ldots,m_A\}$ and $b\in \mathcal{B}=\{1,\ldots,m_B\}$ respectively. After repeating many times, Alice and Bob communicate their measurement settings and outcomes to each other and estimate the joint probability $p(a,b|x,y)=p(A=a,B=b|X=x,Y=y)$, where $X$, $Y$ are the random variables that govern the inputs and $A$, $B$ are the random variables that govern the outputs. The outcomes are considered to be correlated, for some $x,y,a,b$, if $$\begin{aligned} p(a,b|x,y)\neq p(a|x)p(b|y).\end{aligned}$$ Observing such correlations is not surprising as there are many classical sources and natural processes that lead to correlated statistics. These can be modeled with the help of another random variable $\Lambda$ with outcomes $\lambda$, which has a causal influence on both measurement outcomes and is inaccessible to the observers or ignored. In a [*local hidden-variable model*]{}, conditioned on any given cause $\lambda$, the joint probability factorizes as $$\begin{aligned} p(a,b|x,y,\lambda) = p(a|x, \lambda)p(b|y,\lambda ).\end{aligned}$$ One could thereby explain any observed correlation in accordance with the fact that Alice’s outcomes depend solely on her local measurement setting $x$ and on the common cause $\lambda$, and are independent of Bob’s measurement settings. Similarly, Bob’s outcomes are independent of Alice’s choices. 
This assumption, the no-signaling condition, is crucial: it is required by the theory of relativity, where nonlocal causal influence between space-like separated events is forbidden. Therefore, any joint probability, under the [*local hidden-variable model*]{}, becomes $$\begin{aligned} p(a,b|x,y) = \int_{\Lambda} d\lambda p(\lambda) p(a|x, \lambda)p(b|y,\lambda ), \label{eq:lhv}\end{aligned}$$ with the implicit assumption that the measurement settings $x$ and $y$ can be chosen independently of $\lambda$, i.e., $p(\lambda|x,y)=p(\lambda)$. Note that so far we have not assumed anything about the nature of the local measurements, whether they are deterministic or not. In a [*deterministic local hidden-variable model*]{}, Alice’s outcomes are completely determined by the choice $x$ and by $\lambda$. In other words, for an outcome $a$, given input $x$ and hidden cause $\lambda$, the probability $p(a|x,\lambda)$ is either $1$ or $0$, and similarly for Bob’s outcomes. Importantly, the [*deterministic local hidden-variable model*]{} has been shown to be fully equivalent to the [*local hidden-variable model*]{} [@Fine82]. Consequently, observed correlations that admit a joint probability distribution as in (\[eq:lhv\]) can have an explanation based on a [*deterministic local hidden-variable model*]{}. In 1964, Bell showed that any [*local hidden-variable model*]{} is bound to respect a set of linear inequalities, which are commonly known as Bell inequalities. In terms of joint probabilities they can be expressed as $$\begin{aligned} \sum_{a,b,x,y} \alpha^{xy}_{ab} \ p(a,b|x,y) \leq \mathcal{S}_L, \label{eq:bi}\end{aligned}$$ where $\alpha^{xy}_{ab}$ are some coefficients and $\mathcal{S}_L$ is the classical bound. Any violation of the Bell inequalities (\[eq:bi\]) implies the presence of correlations that cannot be explained by a [*local hidden-variable model*]{}, and therefore have a nonlocal character. 
Remarkably, there indeed exist correlations violating Bell inequalities that can be observed with certain choices of local measurements on a quantum system, and hence do not admit a [*deterministic local hidden-variable model*]{}. To understand this better, let us consider an example of the most studied two-party Bell inequality, also known as the Clauser-Horne-Shimony-Holt (CHSH) inequality, introduced in [@Clauser69]. Assume the simplest scenario (as in Fig. \[fig:BellEx\]) in which Alice and Bob each choose one of two local measurements $x,y \in \{0,1\}$ and obtain one of two measurement outcomes $a,b \in \{-1,1\}$. Let the expectation values of the local measurements be $\langle a_x b_y \rangle = \sum_{a,b} a\cdot b \cdot p(a,b|x,y)$; then the CHSH inequality reads: $$\begin{aligned} I_{CHSH}=\langle a_0 b_0 \rangle + \langle a_0 b_1 \rangle + \langle a_1 b_0 \rangle - \langle a_1 b_1 \rangle \leq 2. \label{eq:CHSHprob}\end{aligned}$$ One can try to maximize $I_{CHSH}$ using a local deterministic strategy, and to do so one needs to achieve the highest possible values of $\langle a_0 b_0 \rangle, \ \langle a_0 b_1 \rangle, \ \langle a_1 b_0 \rangle$ and the lowest possible value of $ \langle a_1 b_1 \rangle$. By choosing $p(1,1|0,0)=p(1,1|0,1)=p(1,1|1,0)=1$, the first three expectation values can be maximized. However, in that case also $p(1,1|1,1)=1$, so $I_{CHSH}$ can only be saturated at 2. Thus the inequality is respected. However, it can be violated in a quantum setting. For example, considering the quantum state $|\Psi^+\rangle=\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ and the measurement choices $A_0=\sigma_z$, $A_1=\sigma_x$, $B_0=\frac{1}{\sqrt{2}}(\sigma_z+\sigma_x)$, $B_1=\frac{1}{\sqrt{2}}(\sigma_z-\sigma_x)$, one can check that for the quantum expectation values $\langle a_\alpha b_\beta \rangle = \langle \Psi^+ | A_\alpha \otimes B_\beta |\Psi^+ \rangle $ we get $I_{CHSH}=2\sqrt{2}$. 
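The quantum value $I_{CHSH}=2\sqrt{2}$ quoted above can be checked numerically. The following sketch hard-codes the state and observables of the example (real matrices suffice, since all entries are real in the computational basis):

```python
# Numerical check of the CHSH example: with |Psi+> = (|00>+|11>)/sqrt(2)
# and the stated observables, the quantum value is 2*sqrt(2) > 2.
import math

def kron(A, B):
    # Kronecker product of two 2x2 matrices, giving a 4x4 matrix.
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def expval(psi, M):
    # <psi| M |psi> for a real 4-vector and real 4x4 matrix.
    Mpsi = [sum(M[i][j] * psi[j] for j in range(4)) for i in range(4)]
    return sum(psi[i] * Mpsi[i] for i in range(4))

s = 1 / math.sqrt(2)
sz, sx = [[1, 0], [0, -1]], [[0, 1], [1, 0]]       # Pauli matrices
A = [sz, sx]                                        # Alice: A0, A1
B = [[[s * (sz[i][j] + sx[i][j]) for j in range(2)] for i in range(2)],
     [[s * (sz[i][j] - sx[i][j]) for j in range(2)] for i in range(2)]]
psi = [s, 0, 0, s]                                  # (|00> + |11>)/sqrt(2)

E = lambda x, y: expval(psi, kron(A[x], B[y]))      # <a_x b_y>
I_chsh = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(I_chsh)  # 2.828... = 2*sqrt(2), violating the classical bound 2
```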
Here $\sigma_z$ and $\sigma_x$ are the Pauli spin matrices and $|0\rangle$ and $|1\rangle$ are the two eigenvectors of $\sigma_z$. Therefore the joint probability distribution $p(a,b|x,y)$ cannot be explained in terms of a local deterministic model. ### Multi-party nonlocality and device independent approach Bell-type inequalities can also be constructed in the multi-party scenario. Their violation signifies nonlocal correlations distributed over many parties. A detailed account may be found in [@Brunner14]. Here we introduce the concept of nonlocality using the contemporary language of the device independent approach (DIA) [@Brunner14]. Recent successful hacking attacks on quantum cryptographic devices [@Lydersen10] triggered this novel approach to quantum information theory, in which protocols are defined independently of the inner working of the devices used in the implementation. This has led to an avalanche of works in the field of device independent quantum information processing and technology [@Brunner14a; @Pironio15]. ![\[fig:Bell1\] Schematic representation of the device independent approach. In this approach several users have access to uncharacterized black boxes (shown as squares), possibly prepared by an adversary. The users are allowed to choose inputs $\left(x_1, \hdots, x_k, \hdots, x_n\right)$ for the boxes and acquire outputs $\left(a_1, \hdots, a_k, \hdots, a_n\right)$ as results. The joint probability with which the outputs appear is $p\left(a_1, \hdots, a_k, \hdots, a_n | x_1, \hdots, x_k, \hdots, x_n\right)$.](diqip1.pdf){width="25.00000%"} The idea of the DIA is schematically given in Fig. \[fig:Bell1\]. We consider here the following scenario, usually referred to as the *$(n,m,d)$ scenario*. Suppose there are $n$ spatially separated parties $A_1,\ldots,A_n$. Each of them possesses a black box with $m$ measurement choices (or observables) and $d$ measurement outcomes. 
Now, in each round of the experiment every party is allowed to perform one choice of measurement and acquires one outcome. The accessible information, after the measurements, is contained in a set of $(md)^n$ conditional probabilities $p(a_1,\ldots, a_n|x_1,\ldots, x_n)$ of obtaining outputs $a_1, a_2, \ldots, a_n$, provided observables $x_1, x_2, \ldots, x_n$ were measured. The set of all such probability distributions forms a convex set; in fact, it is a polytope in the probability manifold. From the physical point of view (causality, special relativity) the probabilities must fulfill the [*no-signaling condition*]{}, i.e., the choice of measurement by the $k$-th party, cannot be instantaneously signalled to the others. Mathematically it means that for any $k=1,\ldots,n$, the following condition $$\begin{aligned} \label{eq:no-sig} &\sum_{a_k}p(a_1, \ldots, a_k, \ldots, a_n|x_1,\ldots, x_k,\ldots, x_n)\nonumber\\ & = p(a_1,\ldots, a_{k-1}, a_{k+1}\ldots, a_n|x_1, \ldots, x_{k-1},x_{k+1}\ldots, x_n), \end{aligned}$$ is fulfilled. The *local correlations* are defined via the concept of a local hidden variable $\lambda$ with the associated probability $q_{\lambda}$. The correlations that the parties are able to establish in such case are of the form $$\begin{aligned} p(a_1, & \ldots, a_n|x_1, \ldots, x_n) \nonumber \\ & = \sum_{\lambda} q_\lambda D(a_1|x_1,\lambda) \ldots D(a_n|x_n,\lambda),\end{aligned}$$ where $D(a_k|x_k,\lambda)$ are deterministic probabilities, i.e., for any $\lambda$, $D(a_k|x_k,\lambda)$ equals one for some outcome, and zero for all others. What is important in this expression is that measurements of different parties are independent, so that the probability is a product of terms corresponding to different parties. 
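The no-signaling condition above is straightforward to verify for a concrete two-party box. In this sketch we use an assumed illustrative distribution, $p(a,b|x,y)=(1+a\,b\,E_{xy})/4$ with the CHSH correlators $E_{xy}=\pm 1/\sqrt{2}$ and uniform marginals:

```python
# Sketch: verifying the no-signaling condition for a two-party box.
# The box p(a,b|x,y) = (1 + a*b*E[x,y])/4 is an assumed example with
# uniform marginals and quantum CHSH correlators +/- 1/sqrt(2).
import math

s = 1 / math.sqrt(2)
E = {(0, 0): s, (0, 1): s, (1, 0): s, (1, 1): -s}  # correlators <a_x b_y>

def p(a, b, x, y):
    return (1 + a * b * E[(x, y)]) / 4

def no_signaling(p):
    # Alice's marginal must not depend on y, and Bob's must not depend on x.
    for x in (0, 1):
        for a in (-1, 1):
            if abs(sum(p(a, b, x, 0) for b in (-1, 1))
                   - sum(p(a, b, x, 1) for b in (-1, 1))) > 1e-12:
                return False
    for y in (0, 1):
        for b in (-1, 1):
            if abs(sum(p(a, b, 0, y) for a in (-1, 1))
                   - sum(p(a, b, 1, y) for a in (-1, 1))) > 1e-12:
                return False
    return True

print(no_signaling(p))  # True: marginals are independent of the remote input
```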
In this $n$-party scenario the local hidden variable model bounds the joint probabilities to follow the Bell inequalities, given as $$\begin{aligned} \sum_{a_1, \ldots, a_n,x_1, \ldots, x_n} \alpha_{a_1, \ldots, a_n}^{x_1, \ldots, x_n} \ p(a_1,\ldots, a_n|x_1, & \ldots, x_n) \nonumber \\ &\leq \mathcal{S}_L^n,\end{aligned}$$ where $\alpha_{a_1, \ldots, a_n}^{x_1, \ldots, x_n}$ are some coefficients and $\mathcal{S}_L^n$ is the classical bound. The probabilities that follow local (classical) correlations form a convex set that is also a polytope, denoted $\mathbbm{P}$ (cf. Fig. \[fig:zbiory\]). Its extremal points (or vertices) are given by $\prod_{i=1}^n D(a_i|x_i,\lambda)$ with fixed $\lambda$. The Bell theorem states that the quantum-mechanical probabilities, which also form a convex set $\mathcal{Q}$, may stick out of the classical polytope [@Bell64; @Fine82]. The quantum probabilities are given by the trace formula for the set of local measurements $$p(a_1, \ldots, a_n | x_1, \ldots, x_n)={\rm Tr}(\rho \otimes_{i=1}^n M_{a_i}^{x_i}),$$ where $\rho$ is some $n$-partite state and $M_{a_i}^{x_i}$ denote the measurement operators (POVMs) for any choice of the measurement $x_i$ and party $i$. As we do not impose any constraint on the local dimension, we can always choose the measurements to be projective, i.e., the measurement operators additionally satisfy $M_{a'_i}^{x_i}M_{a_i}^{x_i}=\delta_{a'_i,a_i} M_{a_i}^{x_i}$. This approach towards the Bell inequalities is explained in Fig. \[fig:zbiory\]. Any hyperplane in the space of probabilities that separates the classical polytope from the rest determines a Bell inequality: everything that is above the upper horizontal dashed line is obviously nonlocal. But the most useful are the [*tight Bell inequalities*]{} corresponding to the facets of the classical polytope, i.e. its walls of maximal dimensions (lower horizontal dashed line). 
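For the two-party, two-input, two-output case the vertex description of the classical polytope can be made fully explicit: a deterministic strategy is an assignment $a(x),b(y)\in\{-1,+1\}$, giving $2^{2n}=16$ vertices for $n=2$, and maximizing CHSH over them recovers the classical bound $\mathcal{S}_L=2$. A minimal sketch:

```python
# Enumerate all deterministic vertices of the (2,2,2) classical polytope
# and maximize the CHSH expression over them: the maximum is the bound 2.
from itertools import product

best = -float("inf")
for a0, a1, b0, b1 in product((-1, 1), repeat=4):   # 16 deterministic strategies
    chsh = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1    # local deterministic value
    best = max(best, chsh)
print(best)  # 2 -- the classical (tight) bound of the CHSH inequality
```

Since $a_0(b_0+b_1)+a_1(b_0-b_1)$ is $\pm 2$ for every sign assignment, no vertex, and hence no mixture of vertices, exceeds 2.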
In general $(n,m,d)$ scenarios, the complexity of characterizing the corresponding classical polytope is enormous. It is fairly easy to see that, even for $(n,2,2)$, the number of its vertices (extremal points) is equal to $2^{2n}$, hence it grows exponentially with $n$. Nevertheless, a considerable effort has been made in recent years to characterize multi-party nonlocality [@Brunner14; @Tura14a; @Tura14; @Liang15; @Tura15; @Rosicka16]. ![Schematic representation of different sets of correlations: classical (grey area) and quantum (the area bounded by the thick line). Clearly, the former is a subset of the latter and, as has been shown by Bell [@Bell64], they are not equal – there are quantum correlations that do not fall into the grey area. The black dots represent the vertices of the classical polytope $\mathbbm{P}$ – deterministic classical correlations – satisfying deterministic local hidden variable models. The dashed lines represent Bell inequalities. In particular, the black dashed line is tight and it corresponds to a facet of the classical set.\[fig:zbiory\]](BellPoly.png){width="48.00000%"} Among its many other device-independent applications, nonlocality appears to be a valuable resource in random number generation, certification, expansion and amplification, which we outline in the following subsections. In fact, it has been shown that Bell nonlocality is a genuine resource in the framework of a resource theory, where the allowed operations are restricted to device-independent local operations [@GallegoPRL2012; @Vicente14]. Randomness: information theoretic approach\[sec:Randomness\] ------------------------------------------------------------ Before turning to the quantum protocols involving randomness, we discuss in this section randomness from the information-theory standpoint. It is worth mentioning the role of randomness in various applications, beyond its fundamental implications.
In fact, randomness is a resource in many different areas – for a good overview see Refs. [@Motwani95; @Menezes96]. Random numbers play an important role in cryptographic applications, in numerical simulations of complex physical, chemical, economical, social and biological systems, not to mention gambling. That is why much effort has been put into (1) developing good, reliable sources of random numbers, and (2) designing reliable certification tests for a given source of random numbers. In general, there exist three types of random number generators (RNG): “true” RNGs, pseudo-RNGs and quantum RNGs. The true RNGs are based on some physical processes that are hard to predict, like noise in electrical circuits, thermal or atmospheric noise, radioactive decays, etc. The pseudo-RNGs rely on the output of a deterministic function with a shorter random seed, possibly generated by a true RNG. Finally, quantum RNGs use genuine quantum features to generate random bits. We consider here a finite sample space and denote it by the set $\Omega$. The notions of ideal and weak random strings describe distributions over $\Omega$ with certain properties. When a distribution is [*uniform*]{} over $\Omega$, we say that it has ideal randomness. A uniform distribution over $n$-bit strings is denoted by $U_n$. The uniform distributions are very natural to work with. However, when we are working with physical systems, the processes or measurements are usually biased. Then the bit strings resulting from such sources are not uniform. A string with a nonuniform distribution, due to some (possibly unknown) bias, is said to have [*weak*]{} randomness and the sources of such strings are termed weak sources. Consider random variables denoted by the letters $(X,Y,\ldots)$. Their values will be denoted by $(x,y,\ldots)$.
The probability of a random variable $X$ taking a value $x$ is denoted as $p(X=x)$ and, when the random variable in question is clear, we use the shorthand notation $p(x)$. Here we briefly introduce the operational approach to defining randomness of a random variable. In general, the degree of randomness or bias of a source is unknown and it is insufficient to define a weak source by a random variable $X$ with a probability distribution $P(X)$. Instead one needs to model the weak randomness by a random variable with an unknown probability distribution. In other words, one needs to characterize a set of probability distributions with desired properties. If we suppose that the probability distribution $P(X)$ of the variable $X$ comes from a set $\Omega$, then the degree of randomness is determined by the properties of the set, or more specifically, by the least random probability distribution(s) in the set. The types of weak randomness differ with the types of distribution $P(X)$ on $\Omega$ and the set $\Omega$ itself – they are determined by the allowed distributions motivated by a physical source. There are many ways to classify weak random sources, and the interested reader may consult Ref. [@Pivoluska14]. Here we shall consider two types of weak random sources, Santha-Vazirani (SV) and Min-Entropy (ME) sources, which will be sufficient for our later discussions. A Santha-Vazirani (SV) source [@Santha86] is defined as a sequence of binary random variables $(X_1, X_2,\ldots,X_n)$, such that $$\begin{aligned} \label{eq:SVsource} \frac{1}{2} - \epsilon \leq & p(x_i=1|x_1,\ldots,x_{i-1}) \leq \frac{1}{2} + \epsilon, \\ & \forall i \in \mathbb{N}, \forall x_1,\ldots,x_{i-1} \in \{0,1\} \nonumber,\end{aligned}$$ where the conditional probability $p(x_i=1|x_1,\ldots,x_{i-1})$ is the probability of the value $x_i=1$ conditioned on the values $x_1,\ldots,x_{i-1}$. The parameter $0 \leq \epsilon \leq \frac{1}{2}$ represents the bias of the source.
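The $\epsilon$-free condition is easy to check mechanically for any concrete source model. The sketch below uses a made-up, history-dependent bias model (the bias flips with the parity of the bits generated so far) purely to illustrate the definition.

```python
from itertools import product

# Toy check of the epsilon-free condition of Eq. (eq:SVsource) for a
# hypothetical source model; the model itself is invented for illustration.
EPS = 0.1

def p_one(history):
    # conditional probability p(x_i = 1 | x_1, ..., x_{i-1}); the bias
    # depends on previously generated bits, as SV sources allow
    return 0.5 + EPS * (-1) ** sum(history)

def is_epsilon_free(p_one, eps, n):
    """Check 1/2 - eps <= p(x_i=1|history) <= 1/2 + eps for all histories."""
    return all(0.5 - eps <= p_one(h) <= 0.5 + eps
               for i in range(n) for h in product([0, 1], repeat=i))

assert is_epsilon_free(p_one, 0.1, 6)       # the model is 0.1-free
assert not is_epsilon_free(p_one, 0.05, 6)  # ...but not 0.05-free
```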
For fixed $\epsilon$ and $n$, the SV-source represents a set of probability distributions over $n$-bit strings. If a random string satisfies (\[eq:SVsource\]), then we say that it is $\epsilon$-free. For $\epsilon=0$ the string is perfectly random – a uniformly distributed sequence of bits $U_n$. For $\epsilon=\frac{1}{2}$, nothing can be inferred about the string and it can even be deterministic. Note that in SV sources the bias not only can change for each bit $X_i$, but it can also depend on the previously generated bits. The definition requires that, whenever $\epsilon \neq \frac{1}{2}$, each produced bit carries some amount of randomness, even conditioned on the previous ones. To generalize further, one considers [*block*]{} sources [@Chor88], where the randomness is not guaranteed in every single bit, but rather for a block of $n$ bits. Here, in general, the randomness is quantified by the min-entropy, which is defined as: $$\begin{aligned} H_{\infty}(Y)=\mbox{min}_{y} [-\mbox{log}_2(p(Y=y))],\end{aligned}$$ for an $n$-bit random variable $Y$. For a block source, the randomness is guaranteed by the most probable $n$-bit string appearing in the outcome of the variable – simply by guessing the most probable element – provided that the probability is less than one. A [*block*]{} $(n,k)$ source can now be modeled by $n$-bit random variables $(X_1,X_2,...,X_n)$ such that $$\begin{aligned} H_{\infty}&(X_i|X_{i-1}=x_{i-1},...,X_1=x_1)\geq k, \\ & \forall i \in \mathbb{N}, \forall x_1,...,x_{i-1} \in \{0,1\}^n.\nonumber\end{aligned}$$ These block sources are generalizations of SV-sources; the latter are recovered with $n=1$ and $\epsilon = 2^{-H_{\infty}(X)}-\frac{1}{2}$. The block sources can be further generalized to sources of randomness of finite output size, where no internal structure is given, e.g., guaranteed randomness in every bit (SV-sources) or every block of certain size (block sources). The randomness is only guaranteed by the overall min-entropy.
Such sources are termed [*min-entropy*]{} sources [@Chor88] and are defined, for an $n$-bit random variable $X$, such that $$\begin{aligned} H_{\infty}(X)\geq k.\end{aligned}$$ Therefore, a min-entropy source represents a set of probability distributions where the randomness is bounded through the probability of the most probable element, as measured by the min-entropy. Let us now briefly outline [*randomness extraction (RE)*]{}, as it is one of the most common operations applied in the post-processing of weak random sources. Randomness extractors are algorithms that produce nearly perfect (ideal) randomness useful for potential applications. The aim of RE is to convert the randomness of a string from a weak source into a possibly shorter string of bits that is [*close*]{} to a perfectly random one. The closeness is defined as follows. The random variables $X$ and $Y$ over the same domain $\Omega$ are $\varepsilon$-close if: $$\begin{aligned} \Delta(X,Y)=\frac{1}{2}\sum_{x \in \Omega}| p(X=x)-p(Y=x)| \leq \varepsilon.\end{aligned}$$ With respect to RE, the weak sources can be divided into two classes – extractable sources and non-extractable sources. A perfectly random string can be extracted by a deterministic procedure only from extractable sources. Though there exist many non-trivial extractable sources (see for example [@Kamp11]), most of the natural sources, defined by entropies, are non-extractable, and in such cases non-deterministic ([*stochastic*]{}) randomness extractors are necessary. Deterministic extraction fails for random strings from SV-sources, but it is possible to post-process them with the help of an additional random string. As shown in [@Vazirani87], for any $\epsilon$ and two mutually independent $\epsilon$-free strings from SV-sources, it is possible to efficiently extract a single almost perfect bit ($\epsilon^\prime \rightarrow 0$).
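Both quantities introduced above, the min-entropy and the statistical distance $\Delta$, are one-liners to compute; the toy distributions below are invented for illustration.

```python
import math

# Min-entropy and statistical distance, as defined above, for distributions
# represented as dictionaries {outcome: probability}.
def min_entropy(p):
    return -math.log2(max(p.values()))

def stat_distance(p, q):
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

uniform3 = {b: 1 / 8 for b in range(8)}              # U_3 on 3-bit strings
biased = {b: (0.3 if b == 0 else 0.1) for b in range(8)}  # toy weak source

assert min_entropy(uniform3) == 3.0                  # maximal: log2(8) bits
assert abs(min_entropy(biased) - (-math.log2(0.3))) < 1e-12  # ~1.74 bits
assert abs(stat_distance(biased, uniform3) - 0.175) < 1e-12
```

So the biased source carries about 1.74 bits of min-entropy and is 0.175-close to $U_3$.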
For two $n$-bit independent strings $X=(X_1,...,X_n)$ and $Y=(Y_1,...Y_n)$, the post-processing function, [*Ex*]{}, has been defined as $$\begin{aligned} Ex(X,Y)=(X_1\cdot Y_1)\oplus(X_2 \cdot Y_2)\oplus \cdots \oplus (X_n \cdot Y_n),\end{aligned}$$ where $\oplus$ denotes the sum modulo 2. The function [*Ex*]{} is the inner product between the $n$-bit strings $X$ and $Y$ modulo 2. Randomness extraction from SV-sources is sometimes referred to as [*randomness amplification*]{}, as two $\epsilon$-free strings from SV-sources are converted into a single $\epsilon^\prime$-free bit with $\epsilon^\prime < \epsilon$. Deterministic extraction is also impossible for the min-entropy sources. Nevertheless, an extraction might be possible with the help of a [*seeded extractor*]{}, in which an extra resource of a uniformly distributed string, called the seed, is exploited. A function $Ex: \ \{0,1\}^n \ \times \ \{0,1\}^r \mapsto \{0,1\}^m $ is a seeded $(k,\varepsilon)$-extractor if, for every string from a block $(n,k)$-source of random variable $X$, $$\begin{aligned} \Delta (Ex(X,U_r),U_m) \leq \varepsilon.\end{aligned}$$ Here $U_r$ ($U_m$) is the uniformly distributed $r$-bit ($m$-bit) string. In fact, for a variable $X$, the min-entropy gives the upper bound on the number of extractable perfectly random bits [@Shaltiel02]. Randomness extraction is a well-developed area of research in classical information theory. There are many randomness extraction techniques using multiple strings [@Dodis04; @Raz05; @Barak10; @Nisan99; @Shaltiel02; @Gabizon08], such as the universal hashing extractor, Hadamard extractor, DEOR extractor, BPP extractor, etc., useful for different kinds of post-processing. Nonlocality, random number generation and certification ------------------------------------------------------- Here we link this new form of randomness, i.e., the presence of nonlocality (in terms of Bell violations) in the quantum regime, to random number generation and certification.
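The inner-product extractor above is directly implementable, and its bias-reducing effect can be computed exactly in the simplest i.i.d. special case of an SV-source (each bit independently equal to 1 with probability $1/2+\epsilon$). General SV-sources are adversarial, so this special case is only an illustration, not the general guarantee of [@Vazirani87].

```python
import math
from functools import reduce
from itertools import product

# The two-source inner-product extractor Ex defined above.
def Ex(X, Y):
    return reduce(lambda s, xy: s ^ (xy[0] & xy[1]), zip(X, Y), 0)

assert Ex([1, 0, 1], [1, 1, 1]) == 0   # (1 + 0 + 1) mod 2

# Bias of the extracted bit in the i.i.d. special case: every input bit
# equals 1 with probability 1/2 + eps, independently of all others.
def output_bias(eps, n):
    """Exact bias |p(Ex = 1) - 1/2| by enumeration over all input pairs."""
    weight = lambda bits: math.prod(0.5 + eps if b else 0.5 - eps
                                    for b in bits)
    p1 = sum(weight(X) * weight(Y)
             for X in product([0, 1], repeat=n)
             for Y in product([0, 1], repeat=n)
             if Ex(X, Y))
    return abs(p1 - 0.5)

# For eps = 0.2 the product bits are nearly unbiased and the XOR suppresses
# the residual bias far below the input bias.
assert output_bias(0.2, 3) < 0.01
```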
To do so, we outline how nonlocal correlations can be used to generate new types of random numbers, as has been experimentally demonstrated in [@Pironio10]. Consider the Bell-experiment scenario (Fig. \[fig:BellEx\]), as explained before. Two separated observers perform different measurements, labeled as $x$ and $y$, on two quantum particles in their possession and get measurement outcomes $a$ and $b$, respectively. With many repetitions they can estimate the joint probability, $p(a,b|x,y)$, for the outcomes $a$ and $b$ with the measurement choices $x$ and $y$. With the joint probabilities the observers can check if the Bell inequalities are respected. If a violation is observed, the outcomes are guaranteed to be random. The generation of these random numbers is independent of the working principles of the measurement devices. Hence, this is a device-independent random number generation. In fact, there is a quantitative relation between the amount of Bell-inequality violation and the observed randomness. Therefore, these random numbers could be (a) certifiable, (b) private, and (c) device independent [@Colbeck07; @Pironio10]. The resulting string of random bits, obtained by $N$ uses of the measurement devices, would be made up of $N$ pairs of outcomes, $(a_1,b_1,...,a_N,b_N)$, and their randomness could be guaranteed by the violation of Bell inequalities. There is however an important point to be noted. *A priori*, the observers do not know whether the measurement devices violate Bell inequalities or not. To confirm this they need to execute statistical tests, but such tests cannot be carried out in a predetermined way. Of course, if the measurement settings are known in advance, then an external agent could prepare devices that are completely deterministic, so that Bell-inequality violations could be mimicked even in the absence of nonlocal correlations.
Apparently there is a contradiction between the aim of making a random number generator and the requirement of random choices to test the nonlocal nature of the devices. However, it is natural to assume that the observers can make free choices when they are separated. Initially it was speculated that the more nonlocally correlated the particles are (in the sense of Bell violation), the stronger the observed randomness would be. However, this intuition is not entirely correct, as shown in [@Acin12] – a maximum production of random bits could be achieved with a non-maximal CHSH violation. To establish a quantitative relation between the nonlocal correlation and the generated randomness, let us assume that the devices follow quantum mechanics. There exist thus a quantum state $\rho$ and measurement operators (POVMs) of each device, $M^x_a$ and $M^y_b$, such that the joint probability distribution $P(a,b|x,y)$ can be expressed, through the Born rule, as $$P_Q(a,b|x,y)=\mbox{Tr}(\rho M^x_a \otimes M^y_b),$$ where the tensor product reflects the fact that the measurements are local, i.e., there are no interactions between the devices while the measurement takes place. The set of quantum correlations consists of all such probability distributions. Consider a linear combination of them, $$\sum_{x,y,a,b} \alpha^{xy}_{ab} P_Q(a,b|x,y) = \mathcal{S},$$ specified by real coefficients $\alpha^{xy}_{ab}$. For local hidden-variable models, with certain coefficients $\alpha^{xy}_{ab}$, the Bell inequalities can then be expressed as $$\mathcal{S} \leq \mathcal{S}_L. \label{eq:BI}$$ This bound can be violated ($\mathcal{S} > \mathcal{S}_L$) for some quantum states and measurements, indicating that the state contains nonlocal correlations. Let us consider the measure of randomness quantified by the min-entropy. For a $d$-dimensional probability distribution $P(X)$, describing a random variable $X$, the min-entropy is defined as $H_{\infty}(X)=-\mbox{log}_2\left[ \mbox{max}_x p(x) \right]$.
Clearly, for a perfectly deterministic distribution this maximum equals one and the min-entropy is zero. On the other hand, for a perfectly random (uniform) distribution, the entropy acquires the maximum value, $\mbox{log}_2 d$. In the Bell scenario, the randomness in the outcomes, generated by the pair of measurements $x$ and $y$, reads $H_{\infty}(A,B|x,y)=-\mbox{log}_2 c_{xy}$, where $c_{xy}=\mbox{max}_{ab}P_Q(a,b|x,y)$. For a given observed value $\mathcal{S} > \mathcal{S}_L$, violating a Bell inequality, one can find a quantum realization, i.e., the quantum states and sets of measurements, that minimizes the min-entropy of the outcomes $H_{\infty}(A,B|x,y)$ [@Navascues08]. Thus, for any violation of Bell inequalities, the randomness of a pair of outcomes satisfies $$H_{\infty}(A,B|x,y) \geq f(\mathcal{S}), \label{eq:RandBound1}$$ where $f$ is a convex function that vanishes in the case of no Bell-inequality violation, $\mathcal{S} \leq \mathcal{S}_L$. Hence, (\[eq:RandBound1\]) quantitatively states that [*a violation of Bell inequalities guarantees some amount of randomness*]{}. Intuitively, if the joint probabilities admit (\[eq:BI\]), then for each setting $x, \ y$ and a hidden cause $\lambda$, the outcomes $a$ and $b$ can be deterministically assigned. However, the violation of (\[eq:BI\]) rules out such a possibility. As a consequence, the observed correlation cannot be understood with a deterministic model and the outcomes are fundamentally undetermined at the local level. Although there are many different approaches to generate random numbers [@Marsaglia08; @Bassham10], the certification of randomness is highly non-trivial. However, this problem can be solved, in one stroke, if the random sequence shows a Bell violation, as it certifies a new form of “true” randomness that has no deterministic classical analogue.
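For the CHSH case, a closed form for $f$ often quoted from [@Pironio10] is $f(\mathcal{S}) = 1 - \log_2\bigl(1 + \sqrt{2 - \mathcal{S}^2/4}\bigr)$; we take this expression as an assumption for illustration. It vanishes at the classical bound $\mathcal{S}_L = 2$ and reaches one certified bit at Tsirelson's bound $\mathcal{S} = 2\sqrt{2}$.

```python
import math

# Illustrative randomness bound f(S) for CHSH; the closed form is assumed
# here following the form quoted from [Pironio10].
def f(S):
    if S <= 2:                    # no violation: no certified randomness
        return 0.0
    # max(..., 0) guards against tiny negative values from float rounding
    return 1 - math.log2(1 + math.sqrt(max(0.0, 2 - S**2 / 4)))

assert f(2) == 0.0                               # classical bound: 0 bits
assert abs(f(2 * math.sqrt(2)) - 1.0) < 1e-6     # Tsirelson bound: 1 bit
```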
Nonlocality and randomness expansion ------------------------------------ Nonlocal correlations can also be used for the construction of novel types of randomness expansion protocols. In these protocols, a user expands an initial random string, known as the seed, into a larger string of random bits. Here, we focus on protocols that achieve this expansion by using randomness certified by a Bell inequality violation. Since the first proposals in Refs. [@Colbeck07; @Pironio10], there have been several different works studying Bell-based randomness expansion protocols, see for instance [@Colbeck07; @Pironio10; @Colbeck10; @Vazirani12; @coudron_yuen; @miller_shi; @Chung14; @miller_shi2; @EATQKD]. It is beyond the scope of this section to review the contributions of all these works, which in any case should be interpreted as a representative but non-exhaustive list. However, most of them consider protocols that have the structure described in what follows. Note that the description aims at providing the main intuitive steps in a general randomness expansion protocol, and technicalities are deliberately omitted (for details see the references above). The general scenario consists of a user who is able to run a Bell test. He thus has access to $n\geq 2$ devices where he can implement $m$ local measurements of $d$ outputs. For simplicity, we restrict the description in what follows to protocols involving only two devices, which are also more practical from an implementation point of view. The initial seed is used to randomly choose the local measurement settings in the Bell experiment. The choice of settings does not need to be uniformly random. In fact, in many situations, there is a combination of settings in the Bell test that produces more randomness than the rest.
It is then convenient to bias the choice of measurement towards these settings so that (i) the amount of random bits consumed from the seed, denoted by $N_s$, is minimized and (ii) the amount of randomness produced during the Bell test is maximized. The choice of settings is then used to perform the Bell test. After $N$ repetitions of the Bell test, the user acquires enough statistics to have a proper estimation of the non-locality of the generated data. If not enough confidence about a Bell violation is obtained in this process, the protocol is aborted or more data are generated. From the observed Bell violation, it is possible to bound the amount of randomness in the generated bits. This is often done by means of the so-called min-entropy, $H_{\infty}$. In general, for a random variable $X$, the min-entropy is expressed in bits and is equal to $H_{\infty}=-\log_2\max_x P(X=x)$. The observed Bell violation is used to establish a lower bound on the min-entropy of the generated measurement outputs. Usually, after this process, the user concludes that with high confidence the $N_g\leq N$ generated bits have an entropy at least equal to $R\leq N_g$, that is, $H_{\infty}\geq R$. This type of bound is useful to run the last step in the protocol: the final randomness distillation using a randomness extractor [@randextr; @Nisan99]. This consists of classical post-processing of the measurement outcomes, in general assisted by some extra $N_e$ random bits from the seed, which maps the $N_g$ bits with entropy at least $R$ to $R$ bits with the same entropy, that is, $R$ random bits. Putting all the things together, the final expansion of the protocol is given by the ratio $R/(N_s+N_e)$. Every protocol comes with a security proof, which guarantees that the final list of $R$ bits is unpredictable to any possible observer, or eavesdropper, who could share correlated quantum information with the devices used in the Bell test.
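The accounting above can be made concrete in a few lines; every figure in the sketch below is hypothetical and chosen only to illustrate the bookkeeping.

```python
# Bookkeeping sketch of a randomness-expansion run (all numbers invented).
N    = 10**6      # Bell-test rounds
N_s  = 10**4      # seed bits spent on (biased) setting choices
N_g  = 10**6      # generated output bits, N_g <= N
R    = 2 * 10**5  # certified min-entropy from the observed Bell violation
N_e  = 10**3      # extra seed bits consumed by the randomness extractor

expansion = R / (N_s + N_e)   # final figure of merit of the protocol
assert R <= N_g and expansion > 1   # net expansion achieved
```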
Security proofs are also considered in the case of eavesdroppers who can go beyond the quantum formalism, yet without violating the no-signaling principle. All the works mentioned above represent important advances in the design of randomness expansion protocols. At the moment, it is for instance known that (i) any Bell violation is enough to run a randomness expansion protocol [@miller_shi] and (ii) in the simplest configuration presented above, there exist protocols attaining an exponential randomness expansion [@Vazirani12]. More complicated variants, where devices are nested, even attain an unbounded expansion [@coudron_yuen; @Chung14]. Before concluding, it is worth mentioning that another interesting scenario consists of the case in which a trusted source of public randomness is available. Even if public, this trusted randomness can safely be used to run the Bell test and does not need to be taken into account in the final rate. Nonlocality and randomness amplification ---------------------------------------- Here we discuss the usefulness of nonlocal correlations for randomness amplification, a task related but in a way complementary to randomness expansion. While in randomness expansion one assumes the existence of an initial list of perfect random bits and the goal is to generate a longer list, in randomness amplification the user has access to a list of imperfect randomness and the goal is to generate a source of higher, ideally arbitrarily good, quality. As above, the goal is to solve this information task by exploiting Bell violating correlations. ![(Color online.) \[fig:RandAmp\] Scheme of randomness amplification using four devices, as in [@Brandao16]. The devices are shielded from each other, as indicated with black barriers. The local measurement choices in each run are governed by a part of the SV-source, $x$, and the corresponding outputs form the random bits $a$. After $n$ runs a Bell test is performed (denoted by Test).
If the test violates Bell inequalities, then the outputs and the rest of the initial SV-source, $t$, are fed into an extractor (denoted by Extractor) in order to obtain the final outputs. If the test does not violate Bell inequalities, the protocol is aborted.](RandAmp.pdf){width="70"} Randomness amplification based on non-locality was introduced in [@Colbeck12]. There, the initial source of imperfect randomness consisted of an SV source. Recall that the amplification of SV sources is impossible by classical means. A protocol was constructed, based on the two-party chained Bell inequalities, that was able to map an SV source with parameter $\epsilon < 0.058$ into a new source with $\epsilon$ arbitrarily close to zero. This result is only valid in an asymptotic regime in which the user implements the chained Bell inequality with an infinite number of measurements. Soon after, a more complicated protocol attained full randomness amplification [@Gallego13], that is, it was able to map SV sources of arbitrarily weak randomness, $\epsilon<1/2$, to arbitrarily good sources of randomness, $\epsilon\rightarrow 0$. The final result was again asymptotic, in the sense that to attain full randomness amplification the user now requires an infinite number of devices. Randomness amplification protocols have been studied by several other works, see for instance [@Grudka14; @Mironowicz15; @Brandao16; @Bouda14; @Chung14; @coudron_yuen; @WBGHHHPR16; @Ravi16a]. As above, the scope of this section is not to provide a complete description of all the works studying the problem of randomness amplification, but rather to provide a general framework that encompasses most of them. In fact, randomness amplification protocols (see e.g., Fig. \[fig:RandAmp\]) have a structure similar to randomness expansion ones. The starting point of a protocol consists of a source of imperfect randomness.
This is often modelled by an SV source, although some works consider a weaker source of randomness, known as a min-entropy source, in which the user only knows a lower bound on the min-entropy of the symbols generated by the source [@Bouda14; @Chung14]. The bits of imperfect randomness generated by the user are used to perform $N$ repetitions of the Bell test. If the observed Bell violation is large enough, with enough statistical confidence, bits defining the final source are constructed from the measurement outputs, possibly assisted by new random bits from the imperfect source. Note that, contrary to the previous case, the extraction process cannot be assisted by a seed of perfect random numbers, as this seed could trivially be used to produce the final source. As in the case of expansion protocols, any protocol should be accompanied by a security proof showing that the final bits are unpredictable to any observer sharing a system correlated with the devices in the user’s hands. Quantum randomness and technology \[sec:RandTech\] ================================================== Random numbers have been a part of human technology since ancient times. If Julius Caesar indeed said “Alea iacta est” (“the die is cast”) when he crossed the Rubicon, he referred to a technology that had already been in use for thousands of years. Modern uses for random numbers include cryptography, computer simulations, dispute resolution, and gaming. The importance of random numbers in politics, social science and medicine should also not be underestimated; randomized polling and randomized trials are essential methodology in these areas. A major challenge for any modern randomness technology is quantification of the degree to which the output could be predicted or controlled by an adversary.
A common misconception is that the [*output*]{} of a random number generator can be tested for randomness, for example using statistical tests such as Diehard/Dieharder [@Marsaglia08; @BrownWEB2004], NIST SP800-22 [@RukhinNIST2010], or TestU01 [@LEcuyerACM2007]. While it is true that failing these tests indicates the presence of patterns in the output, and thus a degree of predictability, passing the tests does not indicate randomness. This becomes clear if you imagine a device that on its first run outputs a truly random sequence, perhaps from ideal measurements of radioactive decay, and on subsequent runs replays this same sequence from a recording it kept in memory. Any of these identical output sequences will pass the statistical tests, but only the first one is random; the others are completely predictable. We can summarize this situation with the words of John von Neumann: “there is no such thing as a random number – there are only methods to produce random numbers” [@VonNeumannAMS1951]. How can we know that a process does indeed produce random numbers? In light of the difficulties in determining the predictability of the apparent randomness seen in thermal fluctuations and other classical phenomena, using the intrinsic randomness of quantum processes is very attractive. One approach, described in earlier sections, is to use device-independent methods. In principle, device-independent randomness protocols can be implemented with any technology capable of a strong Bell-inequality violation, including ions [@Pironio10], photons [@Giustina15; @Shalm15], nitrogen-vacancy centres [@Hensen15], neutral atoms [@RosenfeldOS2011] and superconducting qubits [@JergerARX2016]. Device-independent randomness expansion based on Bell inequality violations was first demonstrated using a pair of Yb$^+$ ions held in spatially-separated traps [@Pironio10]. 
In this protocol, each ion is made to emit a photon which, due to the availability of multiple decay channels with orthogonal photon polarizations, emerges from the trap entangled with the internal state of the ion. When the two photons are combined on a beamsplitter, the Hong-Ou-Mandel effect causes a coalescence of the two photons into a single output channel, except in the case that the photons are in a polarization-antisymmetric Bell state. Detection of a photon pair, one at each beamsplitter output, thus accomplishes a projective measurement onto this antisymmetric Bell state, and this projection in turn causes an entanglement swapping that leaves the ions entangled. Their internal states can then be detected with high efficiency using fluorescence readout. This experiment strongly resembles a loophole-free Bell test, with the exception that the spatial separation of about one meter is too short to achieve space-like separation. Due to the low probability that both photons were collected and registered on the detectors, the experiment had a very low success rate, but this does not reduce the degree of Bell inequality violation or the quality of the randomness produced. The experiment generated 42 random bits in about one month of continuous running, or $1.6 \times 10^{-5}$ bits/s. A second experiment, in principle similar but using very different technologies, was performed with entangled photons and high-efficiency detectors [@ChristensenPRL2013] to achieve a randomness extraction rate of 0.4 bits/s. While further improvements in speed can be expected in the near future [@NISTBeacon], at present device-independent techniques are quite slow, and nearly all applications must still use traditional quantum randomness techniques. It is also worth noting that device-independent experiments consume a large quantity of random bits in choosing the measurement settings in the Bell test. Pironio et al. 
used publicly available randomness sources drawn from radioactive decay, atmospheric noise, and remote network activity. Christensen et al. used photon-arrival-time random number generators to choose the measurement settings. Although it has been argued that no additional physical randomness is needed in Bell tests [@PironioARX2015], there does not appear to be agreement on this point. At least in practice if not in principle, it seems likely that there will be a need for traditional quantum randomness technology also in device-independent protocols. If one does not stick to the device-independent approach, it is in fact fairly easy to obtain signals from quantum processes, and devices to harness the intrinsic randomness of quantum mechanics have existed since the 1950s. This began with devices to observe the timing of nuclear decay [@Isida1956], followed by a long list of quantum physical processes including electron shot noise in semiconductors, splitting of photons on beamsplitters, timing of photon arrivals, vacuum fluctuations, laser phase diffusion, amplified spontaneous emission, Raman scattering, atomic spin diffusion, and others. See [@HerreroARX2016] for a thorough review. While measurements on many physical processes can give signals that contain some intrinsic randomness, any real measurement will also be contaminated by other signal sources, which might be predictable or of unknown origin. For example, one could make a simple random number generator by counting the number of electrons that pass through a Zener diode in a given amount of time. Although electron shot noise will make an intrinsically-random contribution, there will also be an apparently-random contribution from thermal fluctuations (Johnson-Nyquist noise), and a quite non-random contribution due to technical noises from the environment. 
If the physical understanding of the device permits a description in terms of the conditional min-entropy (see Section \[sec:Randomness\]) $$H_{\infty}(X_i|h_i)\geq k, \quad \forall i \in \mathbb{N}, \forall h_i$$ where $X_i$ is the $i$’th output string and $h_i$ is the “history” of the device at that moment, including all fluctuating quantities not ascribable to intrinsic randomness, then randomness extraction techniques can be used to produce arbitrarily-good output bits from this source. Establishing this min-entropy level can be an important challenge, however. The prevalence of optical technologies in recent work on quantum random number generators is in part a response to this challenge. The high coherence and relative purity of optical phenomena allow experimental systems to closely approximate idealized quantum measurement scenarios. For example, fluorescence detection of the state of a single trapped atom is reasonably close to an ideal von Neumann projective measurement, with fidelity errors at the part-per-thousand level [@MyersonPRL2008]. Some statistical characterizations can also be carried out directly using the known statistical properties of quantum systems. For example, in linear optical systems shot noise can be distinguished from other noise sources based purely on scaling considerations, and provides a very direct calibration of the quantum versus thermal and technical contributions, without need for detailed modeling of the devices used. Considering an optical power measurement, the photocurrent $I_1$ that passes in unit time will obey $${{\rm var}}(I_1) = A + B \langle I_1 \rangle + C \langle I_1 \rangle^2$$ where $A$ is the “electronic noise” contribution, typically of thermal origin, $C \langle I_1 \rangle^2$ is the technical noise contribution, and $B \langle I_1 \rangle$ is the shot-noise contribution.
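This scaling argument can be sketched numerically: fitting a quadratic to variance-versus-mean calibration data separates the three contributions. The coefficient values below are purely illustrative assumptions, not measured ones.

```python
import numpy as np

def decompose_noise(mean_powers, variances):
    """Fit var(I) = A + B*<I> + C*<I>^2 and return (A, B, C):
    electronic/thermal, shot-noise, and technical contributions."""
    # np.polyfit returns the highest-order coefficient first: [C, B, A].
    C, B, A = np.polyfit(mean_powers, variances, deg=2)
    return A, B, C

# Synthetic calibration data with assumed (illustrative) contributions.
A_true, B_true, C_true = 0.5, 2.0, 0.01
mean_I = np.linspace(1.0, 100.0, 50)
var_I = A_true + B_true * mean_I + C_true * mean_I**2

A, B, C = decompose_noise(mean_I, var_I)
# The B*<I> term is the shot-noise power that enters the entropy estimate.
```

In practice the fit would be performed on measured $({\langle I_1 \rangle}, {\rm var}(I_1))$ pairs taken at several optical power levels.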
Measuring ${{\rm var}}(I_1)$ as a function of $\langle I_1 \rangle$ then provides a direct quantification of the noise contributed by each of these distinct sources. This methodology has been used to estimate entropies in continuous-wave phase diffusion random number generators [@XuOE2012]. To date, the fastest quantum random number generators are based on laser phase diffusion [@JofreOE2011b; @XuOE2012; @AbellanOE2014; @YuanAPL2014], with the record at the time of writing being 68 Gbits/second [@Nie2015]. These devices, illustrated in Fig. \[fig:PDQRNG\], work entirely with macroscopic optical signals (the output of lasers), which greatly enhances their speed and signal-to-noise ratios. It is perhaps surprising that intrinsic randomness can be observed in the macroscopic regime, but in fact laser phase diffusion (and before it maser phase diffusion) was one of the first predicted quantum-optical signals, described by Schawlow and Townes in 1958 [@SchawlowPR1958]. Because stimulated emission is always accompanied by spontaneous emission, the light inside a laser cavity experiences random phase-kicks due to spontaneous emission. The laser itself has no phase-restoring mechanism; its governing equations are phase-invariant, and the phase diffuses in a random walk. As the kicks from spontaneous emission accumulate, the phase distribution rapidly approaches a uniform distribution on $[0,2\pi)$, making the laser field a macroscopic variable with one degree of freedom fully randomized by intrinsic randomness. The phase diffusion accumulated in a given time can be detected simply by introducing an optical delay and interfering earlier output with later output in an unbalanced Mach-Zehnder interferometer. It is worth noting that the phase distribution is fully insensitive to technical and thermal contributions; it is irrelevant if the environment or an adversary introduces an additional phase shift if the phase, a cyclic variable, is already fully randomized, i.e.
uniformly distributed on $[0,2\pi)$. Considerable effort has gone into determining the min-entropy due to intrinsic randomness of laser phase-diffusion random number generators [@MitchellPRA2015], especially in the context of Bell tests [@AbellanPRL2015]. To date, laser phase diffusion random number generators have been used to choose the settings for all loophole-free Bell tests [@Hensen15; @Giustina15; @Shalm15]. Here we outline the modeling and measurement considerations used to bound the min-entropy of the output of these devices. Considering an interferometer with two paths, short (S) and long (L) with relative delay $\tau$, fed by the laser output $E(t) = |E(t)| \exp[i \phi(t)]$, the instantaneous power that reaches the detector is $$\label{eq:Interference} p_I(t) = {p_{\rm S}}(t) + {p_{\rm L}}(t) + 2{{\cal V}}\sqrt{{p_{\rm S}}(t) {p_{\rm L}}(t)} \cos \Delta \phi(t),$$ where ${p_{\rm S}}(t) \equiv \frac{1}{4} |E(t)|^2$, ${p_{\rm L}}(t) \equiv \frac{1}{4} |E(t-\tau)|^2$, $\Delta \phi(t) = \phi(t) - \phi(t-\tau)$ and ${{\cal V}}$ is the interference visibility. Assuming $\tau$ gives sufficient time for a full phase diffusion, $\Delta \phi(t)$ is uniformly distributed on $[0,2\pi)$ due to intrinsic quantum randomness. The contributions of ${p_{\rm S}}(t)$ and ${p_{\rm L}}(t)$, however, may reflect technical or thermal fluctuations, and constitute a contamination of the measurable signal $p_I(t)$. The process of detection converts this to a voltage $V(t)$, and in doing so adds other technical and thermal noises. Also, the necessarily finite speed of the detection system implies that $V(t)$ is a function not only of $p_I(t)$, but also, to a lesser extent, of prior values $p_I(t'), t'<t$. This “hangover,” which is predictable based on the previous values, must be accounted for so as not to overestimate the entropy in $p_I(t)$. Digitization is the conversion from the continuous signal $V$ to a digital value $d$.
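A minimal simulation of this interference measurement illustrates the mechanism. The arm powers and visibility below are assumed, illustrative values; a fully diffused phase yields unbiased bits after mid-level thresholding regardless of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters: equal, constant arm powers, visibility 0.9.
p_S, p_L, vis = 1.0, 1.0, 0.9
n = 200_000

# Delta phi fully randomized by phase diffusion: uniform on [0, 2*pi).
dphi = rng.uniform(0.0, 2.0 * np.pi, size=n)

# Eq. (Interference): detected power for each sample.
p_I = p_S + p_L + 2.0 * vis * np.sqrt(p_S * p_L) * np.cos(dphi)

# Binary digitization at the mid-level threshold.
V0 = p_S + p_L
d = (p_I >= V0).astype(int)

bias = abs(d.mean() - 0.5)  # ~0 up to sampling error
```

Noise on the arm powers, the detector, and the threshold perturbs this ideal picture, which is exactly what the min-entropy bound below accounts for.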
Considering only the simplest case of binary digitization, we have $$\begin{aligned} d_i & = & \left\{ \begin{array}{rl} 0 & V(t_i) < V_0 \\ 1 & V(t_i) \ge V_0 \end{array} \right.\end{aligned}$$ where $V_0$ is the threshold voltage, itself a random variable influenced by a combination of thermal and technical noises. We can now express the distribution of $d$ as a function of the total noise ${V_{{\rm noise}}}$ $$\begin{aligned} \label{eq:PredFromVc} P(d=1|{V_{{\rm noise}}}) &=& \frac{2}{\pi} \arcsin \sqrt{\frac{1}{2} + \frac{{V_{{\rm noise}}}}{2{\Delta V}}}\end{aligned}$$ where $2 {\Delta V}\propto 4 {{\cal V}}\sqrt{{p_{\rm S}}{p_{\rm L}}}$ is the peak-to-peak range of the signal due to the random $\Delta \phi$. This derives from the “arcsine” distribution that describes the cumulative distribution function of the cosine of a uniformly-random phase. The noise contributions can all be measured in ways that conservatively estimate their variation; for example, interrupting one or the other path of the interferometer we can measure the distributions of ${p_{\rm S}}$ and ${p_{\rm L}}$, and comparing digitizer input to output we can upper bound the variation in $V_0$. With the measured distributions in hand, we can assign probabilities to ${V_{{\rm noise}}}$ and thus to the min-entropy of $d$. For example, if the total noise ${V_{{\rm noise}}}$ is described by a normal distribution with zero mean and width $\sigma_{{{\rm noise}}} = $ 10 mV, and ${\Delta V}= 0.5$ V, a probability $P(d|{V_{{\rm noise}}}) > P(d=1| 8 \sigma_{{{\rm noise}}}) \approx \frac{1}{2} + 0.0511$ will occur as often as ${V_{{\rm noise}}}$ exceeds $8 \sigma_{{{\rm noise}}}$, which is to say with probability $\approx 6 \times 10^{-16}$. Since $P(d|{V_{{\rm noise}}}) \le \frac{1}{2} + 0.0511$ implies a single-bit min-entropy $H_{\infty}(d|{V_{{\rm noise}}}) > 0.86$, a randomness extraction based on this min-entropy can then be applied to give fully-random output strings.
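The numbers quoted above can be reproduced with a short calculation based on Eq. (\[eq:PredFromVc\]); the parameter values are the illustrative ones from the text.

```python
import numpy as np

def p_one_given_noise(V_noise, delta_V):
    """Eq. (PredFromVc): P(d=1 | V_noise) for an arcsine-distributed signal."""
    return (2.0 / np.pi) * np.arcsin(np.sqrt(0.5 + V_noise / (2.0 * delta_V)))

sigma_noise = 0.010   # 10 mV noise width
delta_V = 0.5         # half the peak-to-peak signal range, in volts

# Worst-case bias at an 8-sigma noise excursion.
p_max = p_one_given_noise(8 * sigma_noise, delta_V)
excess = p_max - 0.5          # ~0.0511, as quoted in the text
H_min = -np.log2(p_max)       # single-bit min-entropy bound, ~0.86 bits
```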
It is worth emphasizing that the characterizations used to guarantee randomness of this kind of source are not measurements of the digital output of the source, which as mentioned already can never demonstrate randomness. Rather, they are arguments based on physical principles, backed by measurements of the internal workings of the device. To summarize the argument, the trusted random variable $\Delta \phi(t)$ is known to be fully randomized by the intrinsic quantum randomness of spontaneous emission. This statement relies on general principles that concern laser physics, such as Einstein’s A and B coefficient argument linking spontaneous emission to stimulated emission and the fact that lasers have no preferred phase, due to the time-translation invariance of physical law. The next step of the guarantee follows from a model of the interference process, Eq. (\[eq:Interference\]), whose simplicity mirrors the simplicity of the experimental situation, in which single-mode devices (fibres) are used to ensure a low-dimensional field characterized only by the time variable. Finally, there is an experimental characterization of the noises and a simple computation to bound their effects on the distribution of outputs. The computation can and should be performed with worst-case assumptions, assuming for example that all noise contributions are maximally correlated, unless the contrary has been experimentally demonstrated. Quantum Randomness and Future ============================= Randomness is a fascinating concept that has absorbed human attention for centuries. Nowadays we are witnessing a novel situation, in which the theoretical and experimental developments of quantum physics allow us to investigate quantum randomness from completely new points of view. The present review provides an overview of the problem of quantum randomness, and covers the implications and new directions emerging in the studies of this problem.
From a philosophical and fundamental perspective, the recent results have significantly improved our understanding of what can and cannot be said about randomness in nature using quantum physics. While the presence of randomness cannot be proven without making some assumptions about the systems, these assumptions are being constantly weakened, and it is an interesting open research problem to identify the weakest set of assumptions sufficient to certify the presence of randomness. From a theoretical physics perspective, the recent results have provided a much better understanding of the relation between non-locality and randomness in quantum theory. Still, the exact relation between these two fundamental concepts is not fully understood. For instance, small amounts of non-locality, or even entanglement, sometimes suffice to certify the presence of maximal randomness in the measurement outputs of a Bell experiment [@Acin12]. The relation between non-locality and randomness can also be studied in the larger framework of no-signaling theories, that is, theories limited only by the no-signaling principle, which can go beyond quantum physics [@PR]. For instance, it is known that in these theories maximal randomness certification is impossible, while it is possible in quantum physics [@delatorre]. From a more applied perspective, quantum protocols for randomness generation follow different approaches and require different assumptions. Until very recently, all quantum protocols required a precise knowledge of the devices used in the protocol and certified the presence of randomness by means of standard statistical tests. The resulting protocols are cheap, feasible to implement in practice, including the development of commercial products, and lead to reasonably high randomness generation rates. Device-independent solutions provide a completely different approach, in which no modeling of the devices is needed and the certification comes from a Bell inequality violation.
Their implementation is, however, more challenging, and only a few, much slower, experimental realizations have been reported so far [^23]. Due to the importance of and need for random numbers in our information society, we expect important advances in all these approaches, resulting in a large variety of quantum-empowered solutions for randomness generation. Acknowledgements ================ We thank anonymous referees for constructive criticism and valuable suggestions. We are very grateful to Krzysztof Gawedzki, Alain Aspect, Philippe Grangier and Miguel A.F. Sanjuan for enlightening discussions about non-deterministic theories and unpredictability in classical physics. We acknowledge financial support from the John Templeton Foundation, the European Commission (FETPRO QUIC, STREP EQuaM and RAQUEL), the European Research Council (AdG OSYRIS, AdG IRQUAT, StG AQUMET, CoG QITBOX and PoC ERIDIAN), the AXA Chair in Quantum Information Science, the Spanish MINECO (Grants No. FIS2008-01236, No. FIS2013-40627-P, No. FIS2013-46768-P FOQUS, FIS2014-62181-EXP, FIS2015-68039-P, FIS2016-80773-P, and Severo Ochoa Excellence Grant SEV-2015-0522) with the support of FEDER funds, the Generalitat de Catalunya (Grants No. 2014-SGR-874, No. 2014-SGR-875, No. 2014-SGR-966 and No. 2014-SGR-1295 and CERCA Program), and Fundació Privada Cellex.
[^1]: ‘Nothing happens at random; everything happens out of reason and by necessity’, from the lost work [Perí noũ]{} *On Mind*, see [@Diels06], p. 350, [@Freeman48], p. 140, fr. 2. [^2]: ‘All things happen by virtue of necessity’, [@Laertius25], IX, 45. [^3]: ‘Men have fashioned an image of [*chance*]{} as an excuse for their own stupidity’, [@Diels06], p. 407, [@Freeman48], p. 158, fr. 119. [^4]: ‘Epicurus saw that if the atoms traveled downwards by their own weight, we should have no freedom of the will, since the motion of the atoms would be determined by necessity.
He therefore invented a device to escape from determinism (the point had apparently escaped the notice of Democritus): he said that the atom while traveling vertically downward by the force of gravity makes a very slight swerve to one side’ [@Cicero33], I, XXV. [^5]: “We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” [@Laplace51] p. 4 [^6]: “A process is said to be *deterministic* if its entire future course and its entire past are uniquely determined by its state at the present instant of time”, [@Arnold73], p. 1 [^7]: *ibid.* [^8]: ”The initial state of a mechanical system (the totality of positions and velocities of its points at some moment of time) uniquely determines all of its motion”, [@Arnold89], p. 4 [^9]: *ibid* [^10]: *ibid.* [^11]: [@Landau60], p. 1. [^12]: “Le premier exemple que nous allons choisir est celui de l’équilibre instable; si un cône repose sur sa pointe, nous savons bien qu’il va tomber, mais nous ne savons pas de quel côté; il nous semble que le hasard seul va en décider.”, [@Poincare12] page 4, (“The first example we select is that of unstable equilibrium; if a cone rests upon its apex, we know well that it will fall, but we do not know toward what side; it seems to us chance alone will decide.” [@Newman56], vol. 2, p. 1382) [^13]: “...ein ganz wesentliches Merkmal desjenigen, was man im gew[ö]{}hnlichen Leben oder in unserer Wissenschaft als Zufall bezeichnet ... l[ä]{}[ß]{}t sich ... 
kurz in die Worte fassen: [*kleine Ursache – gro[ß]{}e Wirkung”*]{}, [@Smoluchowski18] (“...fundamental feature of what one calls chance in everyday life or in science allows a short formulation: [*small cause – big effect*]{}”) [^14]: [@Poincare12] p. 3. [^15]: [@Newman56], vol. 2, p. 1381 [^16]: In the mathematics of differential equations, Picard’s existence theorem (also known as the Cauchy–Lipschitz theorem) is important to ensure existence and uniqueness of solutions to first-order equations with given initial conditions. Consider an initial value problem, say, $y'(t)=f(t,y(t))$ with $y(t_0)=y_0$. Also assume $f(.,.)$ is uniformly Lipschitz continuous in $y$ (i.e., the Lipschitz constant can be taken independent of $t$) and continuous in $t$. Then for some value of $\varepsilon >0$, there exists a unique solution $y(t)$, given the initial condition, in the interval $[t_0 - \varepsilon , t_0 + \varepsilon]$. [^17]: [@Boussinesq78], p. 39. “The movement phenomena should be divided into two major classes. The first one comprises those for which the laws of mechanics expressed as differential equations will determine by themselves the sequence of states through which the system will go and, consequently, the physico-chemical forces will not admit causes of different nature to play a role. On the other hand, to the second class we will assign movements for which the equations will admit singular integrals, and for which one will need a cause distinct from physico-chemical forces to intervene, from time to time, or continuously, without using any mechanical action, but simply to direct the system at each bifurcation point which will appear.” The “singular integrals” mentioned by Boussinesq are the additional trajectories coexisting with “regular” ones when conditions guaranteeing uniqueness of solutions are broken.
[^18]: Thus in Norton’s model, the new law of nature should, in particular, ascribe a probability $p(T)$ to the event that the point staying at rest at $r=0$ starts to move at time $T$. [^19]: Similar things seem to happen also in so-called “general no-signaling theories” where, in comparison with quantum mechanics, the only physical assumption concerning the behavior of a system is the impossibility of transmitting information with an infinite velocity, see [@Tylec15]. [^20]: Note, here we do not impose any constraint on the hidden variables and these could be even [*nonlocal*]{}. In fact, the quantum theory becomes deterministic if one assumes the hidden variables to be [*nonlocal*]{} [@Gudder70]. [^21]: In fact, it was A. Gleason [@Gleason75] who first pointed out that quantum contextuality may exist in dimensions greater than two. For a single qubit, i.e. for the especially simple case of a two-dimensional Hilbert space, one can explicitly construct non-contextual hidden variable models that describe all measurements (cf. [@Wodkiewicz85; @Wodkiewicz95; @Scully89]). In this sense, a single qubit does not exhibit intrinsic randomness. For consistency of approach, we should thus consider that intrinsic randomness could appear in all of quantum mechanics, with the exception of the quantum mechanics of single qubits. In this report we will neglect this subtlety, and talk about intrinsic randomness for the whole of quantum mechanics without exceptions, remembering, however, Gleason’s result. [^22]: Of course one could argue that such randomness appears only to be “intrinsic”, since it is essentially epistemic in nature and arises due to the inaccessibility or ignorance of the information that resides in the nonlocal correlations. In other words, this kind of randomness on the local level is caused by our lack of knowledge of the global state, and further, it can be explained using deterministic [*nonlocal*]{} hidden variable models.
[^23]: An important discussion of the commercial and practical aspects of quantum random number generation, and of cryptography based on device-dependent and device-independent protocols, can be found in lecture No. 7 by Alain Aspect and Michel Brune [@moon].
--- abstract: | We apply Generative Adversarial Networks (GANs) to the domain of digital pathology. Current machine learning research for digital pathology focuses on diagnosis, but we suggest a different approach and advocate that generative models could drive forward the understanding of morphological characteristics of cancer tissue. In this paper, we develop a framework which allows GANs to capture key tissue features and uses these characteristics to give structure to its latent space. To this end, we trained our model on $249$K H&E breast cancer tissue images, extracted from 576 TMA images of patients from the Netherlands Cancer Institute (NKI) and Vancouver General Hospital (VGH) cohorts. We show that our model generates high quality images, with a Fréchet Inception Distance (FID) of 16.65. We further assess the quality of the images with cancer tissue characteristics (e.g. counts of cancer cells, lymphocytes, or stromal cells), using quantitative information to calculate the FID and showing consistent performance of 9.86. Additionally, the latent space of our model shows an interpretable structure and allows semantic vector operations that translate into tissue feature transformations. Furthermore, ratings from two expert pathologists found no significant difference between our generated tissue images and real ones. The code, generated images, and pretrained model are available at <https://github.com/AdalbertoCq/Pathology-GAN> bibliography: - 'quiros20.bib' title: 'PathologyGAN: Learning deep representations of cancer tissue' --- Generative Adversarial Networks, Digital Pathology. Introduction ============ Cancer is a disease with extensive heterogeneity, where malignant cells interact with immune cells, stromal cells, surrounding tissues and blood vessels. Histological images, such as haematoxylin and eosin (H&E) stained tissue microarrays (TMAs), are a high-throughput imaging technology used to study such diversity.
Despite its growing popularity, computational analysis of TMAs is often not combined with analysis of other omics data of the same cohort. Consequently, cellular behaviours and the tumour microenvironment recorded in TMAs remain poorly understood. ![(a): Images ($224\times224$) from PathologyGAN trained on H&E breast cancer tissue. (b): Real images, Inception-V1 closest neighbor to the generated above.[]{data-label="fig:hand_picked_samples"}](images/hand_picked_grid.jpg) The motivation for our work is to develop methods that could lead to a better understanding of phenotype diversity between/within tumors. We hypothesize that this diversity could be quite substantial given the highly diverse genomic and transcriptomic landscapes observed in large scale molecular profiling of tumors across multiple cancer types [@Ciriello2013]. We argue that representation learning with GAN-based models is the most promising approach to achieve our goal, for the following two reasons: 1. By being able to generate high fidelity images, a GAN could learn the most relevant descriptions of tissue phenotype. 2. The continuous latent representation learned by GANs could help us quantify differences in tissue architectures free from supervised information. In this paper, we propose to use Generative Adversarial Networks (GANs) to learn representations of entire tissue architectures and define an interpretable latent space (e.g. colour, texture, spatial features of cancer and normal cells, and their interaction). To this end, we present the following contributions: 1. We propose PathologyGANs to generate high fidelity cancer tissue images from a structured latent space. The model combines BigGAN [@DBLP:journals/corr/abs-1809-11096], StyleGAN [@Karras2018ASG] and the Relativistic Average Discriminator [@DBLP:journals/corr/abs-1807-00734]. 2.
We assess the quality of the generated images through two different methods: convolutional Inception-V1 features and prognostic features of the cancer tissue (such as counts and densities of different cell types [@beck/systematic_analysis/2011; @yuan/quantitative_cellular/2012]). Both features are benchmarked with the Fréchet Inception Distance (FID). The results show that the model captures pathologically meaningful representations, and when evaluated by expert pathologists, generated tissue images are not distinct from real tissue images. 3. We show that our model induces an ordered latent space based on tissue characteristics, which allows us to perform linear vector operations that translate into high level tissue image changes. Background ========== GANs [@DBLP:journals/corr/GoodfellowPMXWOCB14] are generative models that are able to learn high fidelity and diverse data representations from a target distribution. This is done with a generator, $G(z)$, that maps random noise, $\boldsymbol{z} \sim p_{\boldsymbol{z}}(z)$, to samples that resemble the target data, $\boldsymbol{x} \sim p_{\text { data }}(\boldsymbol{x})$, and a discriminator, $D(x)$, whose goal is to distinguish between real and generated samples. The goal of a GAN is to find the equilibrium in the min-max problem: $$\min _{G} \max _{D} V(D, G)=\mathbb{E}_{\boldsymbol{x} \sim p_{\text { data }}(\boldsymbol{x})}[\log D(\boldsymbol{x})]+\mathbb{E}_{\boldsymbol{z} \sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z})))].$$ Since its introduction, modeling distributions of images has become the mainstream application for GANs, first proposed by [@DBLP:journals/corr/RadfordMC15].
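In practice the min-max value function above is split into two cross-entropy losses, with the generator typically maximizing $\log D(G(z))$ (the common non-saturating variant, also noted by Goodfellow et al.) instead of minimizing $\log(1-D(G(z)))$. A minimal numpy sketch, purely for illustration:

```python
import numpy as np

def bce_discriminator_loss(d_real, d_fake):
    """Discriminator side of the GAN value function:
    maximize log D(x) + log(1 - D(G(z))) over a real and a fake batch."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def bce_generator_loss(d_fake):
    """Non-saturating generator loss used in practice:
    maximize log D(G(z)) rather than minimizing log(1 - D(G(z)))."""
    return -np.mean(np.log(d_fake))

# A discriminator that separates well drives its loss toward 0;
# a fully fooled one (D = 0.5 everywhere) sits at 2*log(2).
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.1, 0.2])
print(bce_discriminator_loss(d_real, d_fake))
print(bce_generator_loss(d_fake))
```

At the equilibrium of the min-max game, $D \equiv 0.5$ and both sides of the discriminator loss contribute $\log 2$.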
State-of-the-art GANs such as BigGAN [@DBLP:journals/corr/abs-1809-11096] and StyleGAN [@Karras2018ASG] have recently produced impressive high-resolution images, building on techniques such as Spectral Normalization [@DBLP:conf/iclr/MiyatoKKY18] and Self-Attention [@DBLP:journals/corr/abs-1805-08318]; BigGAN in particular achieved high diversity images on data sets like ImageNet [@imagenet_cvpr09], with 14 million images and 20 thousand different classes. At the same time, evaluating these models has been a challenging task. Many different metrics such as Inception Score (IS) [@DBLP:conf/nips/SalimansGZCRCC16], Fréchet Inception Distance (FID) [@DBLP:conf/nips/HeuselRUNH17], Maximum Mean Discrepancy (MMD) [@DBLP:journals/jmlr/GrettonBRSS12], Kernel Inception Distance (KID) [@DBLP:conf/iclr/BinkowskiSAG18], and 1-Nearest Neighbor classifier (1-NN) [@DBLP:conf/iclr/Lopez-PazO17] have been proposed to do so, and thorough empirical studies [@DBLP:journals/corr/abs-1806-07755; @DBLP:journals/corr/abs-1801-01973] have shed some light on the advantages and disadvantages of each of them. However, the selection of a feature space is crucial for using these metrics. Currently, machine learning approaches to digital pathology have focused on building classifiers to achieve pathologist-level diagnosis [@Esteva2017; @DBLP:journals/corr/abs-1901-11489; @sci_reports/breast_cancer_multi/2017], and on assisting in the decision process through computer-human interaction [@DBLP:conf/chi/CaiRHHKSWVCST19]. Recently, there has been an increasing interest in applying GANs to solve a range of specific tasks in digital pathology, including staining normalization [@Zanjani2018], staining transformation [@DBLP:conf/icmla/RanaYLS18; @Xu2019], and nuclei segmentation [@deep_ad_multi_organ]. Together with deep learning-based classification frameworks [@Esteva2017; @Ardila2019], these advances offer hope for better disease diagnosis than standard pathology [@Niazi2019].
Deep learning approaches lack interpretability, which is a major limiting factor for making a real impact in clinical practice. For breast cancer, traditional computer vision approaches such as [@beck/systematic_analysis/2011] and [@yuan/quantitative_cellular/2012] have identified correlations between morphological features of cells and patient survival. Based on these findings, we propose PathologyGAN as an approach to learn clinically/pathologically meaningful representations within the cancer tissue images. ![High level architecture of PathologyGAN. We include details of each module’s architecture in the Appendix \[appendix:model\_arch\][]{data-label="fig:pathologygan_model"}](images/pathologyGAN_model.jpg) PathologyGAN ============ We use BigGAN from [@DBLP:journals/corr/abs-1809-11096] as a baseline architecture and introduce changes that empirically improve the Fréchet Inception Distance (FID) and the structure of the latent space. We follow the same architecture as BigGAN, employing Spectral Normalization and self-attention layers in both generator and discriminator, and we also use orthogonal initialization and regularization as mentioned in the original paper. We make use of the Relativistic Average Discriminator [@DBLP:journals/corr/abs-1807-00734], where the discriminator’s goal is to estimate the probability of the real data being more realistic than the fake. We take this approach instead of following the Hinge loss [@DBLP:journals/corr/LimY17] as the GAN objective. We find that this change makes the model converge faster and produce higher quality images. Images using the Hinge loss did not capture the morphological structure of the tissue (we provide examples of these results in the Appendix \[appendix:hinge\]).
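Of the stabilization techniques mentioned above, Spectral Normalization is simple to sketch: each weight matrix is divided by an estimate of its largest singular value, obtained via power iteration. The sketch below is a standalone numpy illustration (during training, Miyato et al. run only one power-iteration step per update and reuse the vectors; here we iterate until convergence for a self-contained demo):

```python
import numpy as np

def spectral_normalize(W, n_iters=50, eps=1e-12):
    """Return W divided by its top singular value, estimated by power
    iteration, as in Spectral Normalization (Miyato et al.)."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma

W = np.diag([3.0, 1.0, 0.5])
W_sn = spectral_normalize(W)
# The normalized weight matrix has spectral norm ~1.
print(np.linalg.svd(W_sn, compute_uv=False)[0])
```

Constraining every layer to spectral norm 1 bounds the Lipschitz constant of the discriminator, which is what stabilizes GAN training.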
The discriminator and generator loss functions are formulated as in Equations 2 and 3, where $\mathbb{P}$ is the distribution of real data, $\mathbb{Q}$ is the distribution for the fake data, and $C(x)$ is the non-transformed discriminator output or critic: $$\begin{aligned} L_{Dis}=-\mathbb{E}_{x_{r} \sim \mathbb{P}}&\left[\log \left(\tilde{D}\left(x_{r}\right)\right)\right]-\mathbb{E}_{x_{f} \sim \mathbb{Q}}\left[\log \left(1-\tilde{D}\left(x_{f}\right)\right)\right], \\ L_{Gen}=-\mathbb{E}_{x_{f} \sim \mathbb{Q}}& \left[\log\left(\tilde{D}\left(x_{f}\right)\right)\right]-\mathbb{E}_{x_{r} \sim\mathbb{P}}\left[\log \left(1-\tilde{D}\left(x_{r}\right)\right)\right], \\ \quad \quad \tilde{D}\left(x_{r}\right) &= \text{sigmoid}\left(C\left(x_{r}\right)-\mathbb{E}_{x_{f} \sim \mathbb{Q}} C\left(x_{f}\right)\right), \\ \quad \quad \tilde{D}\left(x_{f}\right) &= \text{sigmoid}\left(C\left(x_{f}\right)-\mathbb{E}_{x_{r} \sim \mathbb{P}} C\left(x_{r}\right)\right). \label{eqn:disc_loss} \end{aligned}$$ Additionally, we introduce two elements from StyleGAN [@Karras2018ASG] with the purpose of allowing the generator to freely optimize the latent space and find high-level features of the cancer tissue. First, a mapping network $M$ composed of four dense ResNet layers [@He2015DeepRL], placed after the latent vector $z \sim \mathcal{N}(0, I)$, with the purpose of allowing the generator to find the latent space $w \sim M(z)$ that better disentangles the latent factors of variation. Second, style mixing regularization, where two different latent vectors $z_1$ and $z_2$ are run through the mapping network and fed at the same time to the generator, randomly choosing a layer in the generator and providing $w_1$ and $w_2$ to the different halves of the generator (e.g. for a generator of ten layers with layer six randomly selected, $w_1$ would feed layers one to six and $w_2$ layers seven to ten).
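Equations 2 and 3 can be computed directly from the critic outputs on a real and a fake batch. A minimal numpy sketch of the relativistic average losses:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rel_avg_losses(c_real, c_fake):
    """Relativistic average discriminator/generator losses (Eqs. 2-3),
    given raw critic outputs C(x) on a batch of real and fake images."""
    d_real = sigmoid(c_real - c_fake.mean())  # is real more realistic than the average fake?
    d_fake = sigmoid(c_fake - c_real.mean())  # is fake more realistic than the average real?
    l_dis = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
    l_gen = -np.mean(np.log(d_fake)) - np.mean(np.log(1.0 - d_real))
    return l_dis, l_gen

# When the critic cannot tell real from fake, both losses equal 2*log(2).
l_dis, l_gen = rel_avg_losses(np.zeros(4), np.zeros(4))
print(l_dis, l_gen)
```

Note that, unlike the standard GAN loss, the generator loss also contains a term over real samples, so both batches contribute gradients to both networks.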
Style mixing regularization encourages the generator to localize the high level features of the images in the latent space. We also use adaptive instance normalization (AdaIN) in our models, feeding it the entire latent vector. We use the Adam optimizer [@DBLP:journals/corr/KingmaB14] with $\beta_{1}=0.5$ and the same learning rate of $0.0001$ for both generator and discriminator; the discriminator takes 5 steps for each generator step. Each model was trained on an NVIDIA Titan RTX 24 GB for approximately 72 hours. To train our model, we utilize two H&E breast cancer databases from the Netherlands Cancer Institute (NKI) cohort and the Vancouver General Hospital (VGH) cohort with 248 and 328 patients respectively  [@beck/systematic_analysis/2011]. Each of them includes TMA images, along with clinical patient data such as survival time, and estrogen-receptor (ER) status. The original TMA images all have a resolution of $1128 \times 720$ pixels, and we split each of the images into smaller patches of $224 \times 224$, and allow them to overlap by 50%. We also perform data augmentation on these images, rotations of $90^{\circ}$ and $180^{\circ}$, and vertical and horizontal inversion. We filter out images in which the tissue covers less than 70% of the area. In total this yields a training set of 249K images and a test set of 62K. Results ======= Image quality analysis ---------------------- ![CRImage identifies different cell types in our generated images. Cancer cells are highlighted with a green color, while lymphocytes and stromal cells are highlighted in yellow.[]{data-label="fig:CRImage_SVM_example"}](images/generated_crimage_1.jpg "fig:") ![CRImage identifies different cell types in our generated images.
Cancer cells are highlighted with a green color, while lymphocytes and stromal cells are highlighted in yellow.[]{data-label="fig:CRImage_SVM_example"}](images/generated_crimage_2.jpg "fig:")

  Model          Inception FID    CRImage FID
  -------------- ---------------- ---------------
  PathologyGAN   16.65$\pm$2.5    9.86$\pm$0.4

  : Evaluation of PathologyGANs. Mean and standard deviations are computed over three different random initializations. The low FID scores in both feature spaces suggest consistent and accurate representations.[]{data-label="GAN_results-table"}

We evaluate our models using the Fréchet Inception Distance (FID); we follow the usual procedure of using convolutional features of an Inception-V1 network. As an additional assessment, we use cellular information of the tissue image to calculate the FID score; the motivation behind this approach is to ensure that our models capture meaningful and faithful representations of the tissue. The CRImage tool [@yuan/quantitative_cellular/2012] uses an SVM classifier to provide quantitative information about tumor cellular characteristics in cancer tissue. This approach allows us to gather pathological information in the images, namely the number of cancer cells, the number of other types of cells (such as stromal or lymphocytes), and the ratio of tumorous cells per area. We use this information as features to calculate the FID metric. Figure \[fig:CRImage\_SVM\_example\] displays an example of how the tool captures the different cells in the generated images, such as cancer cells, stromal cells, and lymphocytes. We evaluate our model with the FID score, generating 5000 fake images, and randomly selecting 5000 real images. We also use both approaches for feature space selection, using the CRImage cell classifier and the convolutional features of an Inception-V1. Table \[GAN\_results-table\] shows that our model is able to achieve an accurate characterization of the cancer tissue.
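The FID used above has a closed form between the two Gaussians fitted to the feature sets, and it is agnostic to the feature space: the same formula applies to Inception-V1 activations and to CRImage cell statistics. A numpy sketch (using the symmetric form of the matrix square root so a plain eigendecomposition suffices):

```python
import numpy as np

def _sym_sqrt(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets
    (rows = samples): ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^1/2)."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((C_a^1/2 C_b C_a^1/2)^1/2) equals Tr((C_a C_b)^1/2) but stays symmetric.
    s = _sym_sqrt(cov_a)
    covmean = _sym_sqrt(s @ cov_b @ s)
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a) + np.trace(cov_b) - 2.0 * np.trace(covmean))

rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 8))
print(fid(x, x))        # ~0 for identical feature sets
print(fid(x, x + 1.0))  # ~8: squared mean shift of 1 in each of 8 dimensions
```

Since FID only compares the first two moments of the feature distributions, the choice of feature space (convolutional vs. cellular) determines what "close to real" actually means.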
Using the Inception feature space, FID shows a stable representation for all models with values similar to ImageNet models such as BigGAN [@DBLP:journals/corr/abs-1809-11096] and SAGAN [@DBLP:journals/corr/abs-1805-08318], with FIDs of 7.4 and 18.65 respectively, or StyleGAN [@Karras2018ASG] trained on FFHQ with an FID of 4.40. Using the CRImage cellular information as feature space, FID again shows representations close to real tissue. Analysis of latent representations ---------------------------------- In this section we focus on the PathologyGAN’s latent space, exploring the impact of introducing a mapping network in the generator and using style mixing regularization. We include a complete comparison on using these two features in the Appendix \[appendix:map\_style\]. Here we will provide examples of its impact on linear interpolations and vector operations on the latent space $z$, as well as visualizations on the latent space $w$. In Figure \[fig:latent\_space\] we capture how the latent space $w$ has a structure that shows a direct relationship with the number of cancer cells in the tissue. We generated $10,000$ images and used CRImage to extract the count of cancer cells in the tissue; using this information we created $8$ different classes that account for counts of cancer cells in the tissue image, and subsequently labeled each image with the corresponding class. Along with each tissue image we have the corresponding latent vector $w$, which we projected to a two-dimensional space using UMAP [@lel2018umap]. In each of the sub-figures (a-h) in Figure \[fig:latent\_space\], we provide a density plot of latent vectors $w$ for each class; (a) corresponds to generated images with the lowest count of cancer cells in the tissue and (h) to images with the largest. As we increase the number of cancer cells in the image, the density of the latent vectors $w$ moves from quadrant $IV$ to quadrant $II$ in the UMAP representation.
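The class construction described above can be sketched as follows. The paper states only that 8 classes of increasing cancer-cell counts were used; the quantile-based (equal-frequency) binning below is our assumption, chosen so every class receives a comparable number of images:

```python
import numpy as np

def cancer_count_classes(counts, n_classes=8):
    """Bin per-image cancer-cell counts (e.g. from CRImage) into
    n_classes ordinal labels using quantile edges. NOTE: the quantile
    scheme is an assumption made for this sketch; any monotone binning
    into 8 classes would serve the same purpose."""
    qs = np.linspace(0, 1, n_classes + 1)[1:-1]  # interior quantiles
    edges = np.quantile(counts, qs)
    return np.digitize(counts, edges)            # labels 0 .. n_classes-1

# Illustrative counts; real values would come from CRImage per image.
counts = np.array([0, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233])
labels = cancer_count_classes(counts)
print(labels.min(), labels.max())  # labels span 0 .. 7
```

Each labeled image's latent vector $w$ can then be projected with UMAP and plotted per class, as in Figure \[fig:latent\_space\].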
We also found that linear interpolations between two latent vectors $z$ have better feature transformations when the mapping network and style mixing regularization are introduced. Figure \[fig:linear\_interpolation\] shows linear interpolations in latent space $z$ between images with malignant tissue and benign tissue. (a) corresponds to a model with a mapping network and style mixing regularization and (b) to a model without those features. We can see that transitions in (a) include an increasing population of cancer cells rather than the fading effect observed in images of (b). This result indicates that (a) better translates interpolations in the latent space, as real cells do not fade away. Finally, we performed linear vector operations in $z$ that translated into semantic image feature transformations. In Figure \[fig:vector\_operations\] we provide examples of three vector operations that result in feature alterations in the images. This evidence further supports the relation between a structured latent space and tissue characteristics. ![Density plots of samples on the UMAP reduced representation of the latent space $w$. Each subfigure (a-h) belongs to samples of only one class, where each class represents a range of counts of cancer cells in the tissue image. (a) accounts for images with the lowest number of cancer cells and (h) corresponds to images with the largest count, subfigures from (a) to (h) belong to increasing number of cancer cells. This figure shows how increasing the number of cancer cells in the tissue image corresponds to moving the latent vectors from regions of quadrant $IV$ to quadrant $II$.[]{data-label="fig:latent_space"}](images/UMAP_w_latent_StylePathologyGAN_zdim_200_dimension_2_density.jpg) ![Linear interpolation in the latent space $z$ from a benign (less cancer cells, left end) to a malignant tissue (more cancer cells, right end).
(a) PathologyGAN model interpolations with a mapping network and style mixing regularization. (b) PathologyGAN model interpolations without a mapping network and style mixing regularization. (a) includes an increasing population of cancer cells rather than a fading effect from model (b); this shows that model (a) better translates high level features of the images from the latent space vectors.[]{data-label="fig:linear_interpolation"}](images/linear_interpolation_StylePathologyGAN_latent_z_vs_BigGAN_0.jpg) ![Linear vector operations on the latent space $z$ translate into image feature transformations. We gather latent vectors that generate images with different high level features and perform linear operations on the vectors before we feed the generator, resulting in semantic translations of the characteristics of the images. We perform the following operations: (a) Benign tissue and lymphocytes - benign tissue + tumorous tissue = cancer cells and lymphocytes, (b) Benign tissue with patches of cancer cells - tumorous = benign tissue, and (c) Tumorous tissue with lymphocytes - benign tissue with lymphocytes + benign tissue = tumorous or necrotic tissue. []{data-label="fig:vector_operations"}](images/vector_op/title/op_0_72.jpg "fig:") ![Linear vector operations on the latent space $z$ translate into image feature transformations. We gather latent vectors that generate images with different high level features and perform linear operations on the vectors before we feed the generator, resulting in semantic translations of the characteristics of the images. We perform the following operations: (a) Benign tissue and lymphocytes - benign tissue + tumorous tissue = cancer cells and lymphocytes, (b) Benign tissue with patches of cancer cells - tumorous = benign tissue, and (c) Tumorous tissue with lymphocytes - benign tissue with lymphocytes + benign tissue = tumorous or necrotic tissue.
[]{data-label="fig:vector_operations"}](images/vector_op/title/op_1_1.jpg "fig:") ![Linear vector operations on the latent space $z$ translate into image feature transformations. We gather latent vectors that generate images with different high level features and perform linear operations on the vectors before we feed the generator, resulting in semantic translations of the characteristics of the images. We perform the following operations: (a) Benign tissue and lymphocytes - benign tissue + tumorous tissue = cancer cells and lymphocytes, (b) Benign tissue with patches of cancer cells - tumorous = benign tissue, and (c) Tumorous tissue with lymphocytes - benign tissue with lymphocytes + benign tissue = tumorous or necrotic tissue. []{data-label="fig:vector_operations"}](images/vector_op/title/op_2_76.jpg "fig:") Pathologists’ results --------------------- To demonstrate that the generated images can sustain the scrutiny of clinical examination, we asked two expert pathologists to take two different tests, set up as follows: - Test I: 25 Sets of 8 images - Pathologists were asked to find the only fake image in each set. - Test II: 50 Individual images - Pathologists were asked to rate all individual images from 1 to 5, where 5 meant the image appeared the most real. In total, each of the pathologists classified 50 individual images and 25 sets of 8 images. We chose fake images in two ways, with half of them hand-selected and the other half being fake images that had the smallest Euclidean distance to real images in the convolutional feature space (Inception-V1). All the real images are randomly selected among the three closest neighbors of the fake images. On Test I, pathologists 1 and 2 were able to find the fake image in only 2 of the 25 sets, and 3 of the 25, respectively.
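The 1-to-5 realness ratings from Test II can be turned into an ROC curve by sweeping a threshold over the ratings and marking images rated at or above it as "real". A minimal sketch with purely illustrative ratings (not the pathologists' actual scores):

```python
import numpy as np

def roc_points(ratings, is_real):
    """ROC points for ordinal realness ratings: each threshold t marks
    images rated >= t as 'real'. A curve near the diagonal means the
    rater cannot separate generated from real images."""
    ratings = np.asarray(ratings)
    is_real = np.asarray(is_real, dtype=bool)
    fpr, tpr = [1.0], [1.0]
    for t in range(1, 7):  # thresholds sweep from all-positive to no-positive
        pred_real = ratings >= t
        tpr.append(np.mean(pred_real[is_real]))
        fpr.append(np.mean(pred_real[~is_real]))
    return np.array(fpr), np.array(tpr)

# A rater scoring real and fake images identically sits exactly on the diagonal.
ratings = [3, 3, 3, 3, 3, 3]
is_real = [1, 1, 1, 0, 0, 0]
fpr, tpr = roc_points(ratings, is_real)
print(np.allclose(fpr, tpr))  # True
```

The curves in Figure \[fig:roc\_curve\] are of this near-diagonal type.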
This is indicative of the images’ quality, since we argue that finding a fake image amongst real ones should be the easier task for a pathologist: having other references to compare with facilitates detection. Figure \[fig:roc\_curve\] shows Test II in terms of false positive rate vs true positive rate, and we can see that pathologist classification is close to random. The pathologists mentioned that the usual procedure is to work with larger images at higher resolution, but that the generated fake images were of such quality that, at the $224\times224$ size used in this work, they were not able to differentiate between real and fake tissue. ![ROC curve of Pathologists’ classification for images in Test II. The near random classification performance from both expert pathologists suggests that generated tissue images do not present artifacts that give away the tissue as generated.[]{data-label="fig:roc_curve"}](images/roc_curve.jpg) Conclusion ========== We presented a new approach to the use of machine learning in digital pathology, using GANs to learn cancer tissue representations. We assessed the quality of the generated images through the FID metric, using the convolutional features of an Inception-V1 network and quantitative cellular information of the tissue; both showed consistent state-of-the-art values of $16.65$ and $9.86$. We showed that our model allows high level interpretation of its latent space, even performing linear operations that translate into tissue feature transformations. Finally, we demonstrated that the quality of the generated images does not allow pathologists to reliably find differences between real and generated images. As future work, this model could be extended to achieve higher resolutions, such as $1024\times1024$ as in TMAs, with further experiments on pathologist and non-expert classification of generated tissue, as well as studies of borderline cases (e.g. atypia) between generated images and pathologists’ interpretations.
We are most interested in extending our model so it can provide novel understanding of the complex tumor microenvironment recorded in the WSIs, and this is where we think the tissue representation learning properties of our model are key. A natural next step is to introduce an encoder into the model, allowing us to project real tissue patches from WSIs onto the GAN’s latent space. Code ==== We provide the code at this [location](https://github.com/AdalbertoCq/Pathology-GAN). Hinge vs Relativistic Average Discriminator {#appendix:hinge} =========================================== In this section we show corresponding generated images and loss function plots for the Relativistic Average Discriminator model and the Hinge loss model. ![Left grid images correspond to the Relativistic Average Discriminator model; right grid images to the Hinge loss model. We can see that the Relativistic Average model is able to reproduce cancer tissue characteristics, while the Hinge loss model does not.[]{data-label="fig:generated_hinge_vs_rad"}](images/appendix/hinge_vs_rad/rad.jpg "fig:") ![Left grid images correspond to the Relativistic Average Discriminator model; right grid images to the Hinge loss model. We can see that the Relativistic Average model is able to reproduce cancer tissue characteristics, while the Hinge loss model does not.[]{data-label="fig:generated_hinge_vs_rad"}](images/appendix/hinge_vs_rad/hinge.jpg "fig:") ![(a) Generator and Discriminator loss functions of the Relativistic Average Discriminator model, (b) Generator and Discriminator loss functions from the Hinge loss model.
Here we show the loss functions corresponding to the images in Figure  \[fig:generated\_hinge\_vs\_rad\]; both of them converge, but only the Relativistic Average Discriminator produces meaningful images.[]{data-label="fig:loss_functions"}](images/appendix/hinge_vs_rad/loss_comp_hinge_rel.jpg) Mapping Network and Style Mixing Regularization Comparison {#appendix:map_style} ========================================================== To measure the impact of introducing a mapping network and using style mixing regularization during training, we provide different figures of the latent space $w$ for two PathologyGANs, one using these features and another one without them. Figures \[fig:latent\_space\_comp\_all\], \[fig:latent\_space\_comp\_point\_label\], and \[fig:latent\_space\_comp\_density\_label\] capture the clear difference in the latent space ordering with respect to the counts of cancer cells in the tissue image. Without a mapping network and style mixing regularization, the latent space $w$ shows a random placement of the vectors with respect to the tissue images; when these two elements are introduced, moving a vector in the latent space from quadrant $IV$ to quadrant $II$ results in an increasing number of cancer cells in the tissue. ![Latent space $w \in \mathbb{R}^{200}$ visualization of $10,000$ vector samples on a UMAP reduced space of 2 dimensions; each vector’s generated image was labeled using CRImage and annotated with the class corresponding to the count of cancer cells in the image. Class $0$ accounts for images with the lowest count of cancer cells, while at the other extreme Class $8$ accounts for images with the largest count. (a) corresponds to latent space of a PathologyGAN with a mapping network and style mixing regularization, and (b) to a PathologyGAN without these two features.
We show that in model (b) vector samples are randomly placed in the latent space $w$, whereas in (a) vector samples increasingly concentrate in quadrant $II$ as we increase the number of cancer cells in the tissue images.[]{data-label="fig:latent_space_comp_all"}](images/appendix/map_style/UMAP_StylePathologyGAN_latent_space_zdim_200_dimension_2.jpg "fig:") ![Latent space $w \in \mathbb{R}^{200}$ visualization of $10,000$ vector samples on a UMAP reduced space of 2 dimensions; each vector’s generated image was labeled using CRImage and annotated with the class corresponding to the count of cancer cells in the image. Class $0$ accounts for images with the lowest count of cancer cells, while at the other extreme Class $8$ accounts for images with the largest count. (a) corresponds to latent space of a PathologyGAN with a mapping network and style mixing regularization, and (b) to a PathologyGAN without these two features. We show that in model (b) vector samples are randomly placed in the latent space $w$, whereas in (a) vector samples increasingly concentrate in quadrant $II$ as we increase the number of cancer cells in the tissue images.[]{data-label="fig:latent_space_comp_all"}](images/appendix/map_style/UMAP_PathologyGAN_latent_space_zdim_200_dimension_2.jpg "fig:") ![Comparison of the latent space $w$ for two different PathologyGAN models, (a-h) include a mapping network and style mixing regularization, and (i-p) do not include them. Each sub-figure shows datapoints related to only one of the classes, and each class is subject to the count of cancer cells in the tissue image; (a) and (i) \[class $0$\] are associated with images with the lowest number of cancer cells, (h) and (p) \[class $8$\] with the largest.
In model (a-h), images with an increasing number of cancer cells correspond to proportionally moving to quadrant $II$ in the two-dimensional space, whereas (i-p) are randomly placed.[]{data-label="fig:latent_space_comp_point_label"}](images/appendix/map_style/UMAP_w_latent_StylePathologyGAN_zdim_200_dimension_2.jpg "fig:") ![Comparison of the latent space $w$ for two different PathologyGAN models, (a-h) include a mapping network and style mixing regularization, and (i-p) do not include them. Each sub-figure shows datapoints related to only one of the classes, and each class is subject to the count of cancer cells in the tissue image; (a) and (i) \[class $0$\] are associated with images with the lowest number of cancer cells, (h) and (p) \[class $8$\] with the largest. In model (a-h), images with an increasing number of cancer cells correspond to proportionally moving to quadrant $II$ in the two-dimensional space, whereas (i-p) are randomly placed.[]{data-label="fig:latent_space_comp_point_label"}](images/appendix/map_style/UMAP_w_latent_PathologyGAN_zdim_200_dimension_2.jpg "fig:") ![Comparison of the latent space $w$ for two different PathologyGAN models, (a-h) include a mapping network and style mixing regularization, and (i-p) do not include them. Each sub-figure shows the density of datapoints related to only one of the classes, and each class is subject to the count of cancer cells in the tissue image; (a) and (i) \[class $0$\] are associated with images with the lowest number of cancer cells, (h) and (p) \[class $8$\] with the largest.
In model (a-h), images with an increasing number of cancer cells correspond to proportionally moving to quadrant $II$ in the two-dimensional space, whereas (i-p) are randomly placed.[]{data-label="fig:latent_space_comp_density_label"}](images/appendix/map_style/UMAP_w_latent_StylePathologyGAN_zdim_200_dimension_2_density.jpg "fig:") ![Comparison of the latent space $w$ for two different PathologyGAN models, (a-h) include a mapping network and style mixing regularization, and (i-p) do not include them. Each sub-figure shows the density of datapoints related to only one of the classes, and each class is subject to the count of cancer cells in the tissue image; (a) and (i) \[class $0$\] are associated with images with the lowest number of cancer cells, (h) and (p) \[class $8$\] with the largest. In model (a-h), images with an increasing number of cancer cells correspond to proportionally moving to quadrant $II$ in the two-dimensional space, whereas (i-p) are randomly placed.[]{data-label="fig:latent_space_comp_density_label"}](images/appendix/map_style/UMAP_w_latent_PathologyGAN_zdim_200_dimension_2_density.jpg "fig:") Vector Operation Samples {#appendix:vector_op} ======================== ![Samples of vector operations with different images, all operations correspond to: Benign tissue and lymphocytes - benign tissue + tumorous tissue = cancer cells and lymphocytes.[]{data-label="fig:vector_op_0"}](images/vector_op/op_0_samples/op_0_12.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Benign tissue and lymphocytes - benign tissue + tumorous tissue = cancer cells and lymphocytes.[]{data-label="fig:vector_op_0"}](images/vector_op/op_0_samples/op_0_17.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Benign tissue and lymphocytes - benign tissue + tumorous tissue = cancer cells and lymphocytes.[]{data-label="fig:vector_op_0"}](images/vector_op/op_0_samples/op_0_43.jpg "fig:") ![Samples of vector
operations with different images, all operations correspond to: Benign tissue and lymphocytes - benign tissue + tumorous tissue = cancer cells and lymphocytes.[]{data-label="fig:vector_op_0"}](images/vector_op/op_0_samples/op_0_55.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Benign tissue and lymphocytes - benign tissue + tumorous tissue = cancer cells and lymphocytes.[]{data-label="fig:vector_op_0"}](images/vector_op/op_0_samples/op_0_58.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Benign tissue with patches of cancer cells - tumorous = benign tissue.[]{data-label="fig:vector_op_1"}](images/vector_op/op_1_samples/op_1_0.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Benign tissue with patches of cancer cells - tumorous = benign tissue.[]{data-label="fig:vector_op_1"}](images/vector_op/op_1_samples/op_1_3.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Benign tissue with patches of cancer cells - tumorous = benign tissue.[]{data-label="fig:vector_op_1"}](images/vector_op/op_1_samples/op_1_4.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Benign tissue with patches of cancer cells - tumorous = benign tissue.[]{data-label="fig:vector_op_1"}](images/vector_op/op_1_samples/op_1_5.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Benign tissue with patches of cancer cells - tumorous = benign tissue.[]{data-label="fig:vector_op_1"}](images/vector_op/op_1_samples/op_1_6.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Tumorous tissue with lymphocytes - benign tissue with lymphocytes + benign tissue = tumorous or necrotic tissue.[]{data-label="fig:vector_op_2"}](images/vector_op/op_2_samples/op_2_4.jpg "fig:") ![Samples of vector operations
with different images, all operations correspond to: Tumorous tissue with lymphocytes - benign tissue with lymphocytes + benign tissue = tumorous or necrotic tissue.[]{data-label="fig:vector_op_2"}](images/vector_op/op_2_samples/op_2_6.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Tumorous tissue with lymphocytes - benign tissue with lymphocytes + benign tissue = tumorous or necrotic tissue.[]{data-label="fig:vector_op_2"}](images/vector_op/op_2_samples/op_2_61.jpg "fig:") ![Samples of vector operations with different images, all operations correspond to: Tumorous tissue with lymphocytes - benign tissue with lymphocytes + benign tissue = tumorous or necrotic tissue.[]{data-label="fig:vector_op_2"}](images/vector_op/op_2_samples/op_2_78.jpg "fig:") PathologyGAN at 448x448 ======================= We include in this section experimental results of a $448\times448$ image resolution model. We trained this model for 90 epochs over approximately five days, using four NVIDIA Titan RTX $24$ GB. For one model, the Inception FID and CRImage FID were $29.53$ and $203$, respectively. We found that CRImage FID is highly sensitive to changes in the images since it looks for morphological shapes of cancer cells, lymphocytes, and stroma in the tissue; at this resolution the generated tissue images do not hold the same high quality as in the $224\times224$ case. As we note in the **Conclusion** section, this is an opportunity to improve the detail in the generated image at high resolutions. Figure \[fig:hand\_picked\_448\_samples\] shows three examples of comparisons between (a) PathologyGAN images and (b) real images. Additionally, the representation learning properties are still preserved in the latent space.
Figure \[fig:latent\_space\_448\] captures the density of cancer cells in the $448\times448$ tissue images as previously presented for the $224\times224$ case in **Appendix C.** ![(a): Images ($448\times448$) from PathologyGAN trained on H&E breast cancer tissue. (b): Real images.[]{data-label="fig:hand_picked_448_samples"}](images/448/Figure_samples_PathologyGAN_batch_0.jpg "fig:") ![(a): Images ($448\times448$) from PathologyGAN trained on H&E breast cancer tissue. (b): Real images.[]{data-label="fig:hand_picked_448_samples"}](images/448/Figure_samples_PathologyGAN_batch_1.jpg "fig:") ![(a): Images ($448\times448$) from PathologyGAN trained on H&E breast cancer tissue. (b): Real images.[]{data-label="fig:hand_picked_448_samples"}](images/448/Figure_samples_PathologyGAN_batch_2.jpg "fig:") ![Scatter and density plots of $448\times448$ samples on the UMAP reduced representation of the latent space $w$. Each subfigure (a-h) belongs to samples of only one class, where each class represents a range of counts of cancer cells in the tissue image. (a) accounts for images with the lowest number of cancer cells and (h) corresponds to images with the largest count, subfigures from (a) to (h) belong to increasing number of cancer cells.[]{data-label="fig:latent_space_448"}](images/448/UMAP_w_latent_PathologyGAN_zdim_200_dimension_2.jpg "fig:") ![Scatter and density plots of $448\times448$ samples on the UMAP reduced representation of the latent space $w$. Each subfigure (a-h) belongs to samples of only one class, where each class represents a range of counts of cancer cells in the tissue image. 
(a) accounts for images with the lowest number of cancer cells and (h) corresponds to images with the largest count, subfigures from (a) to (h) belong to increasing number of cancer cells.[]{data-label="fig:latent_space_448"}](images/448/UMAP_w_latent_PathologyGAN_zdim_200_dimension_2_density.jpg "fig:") GAN evaluation metrics for digital pathology ============================================ In this section, we investigate how relevant GAN evaluation metrics perform on distinguishing differences in cancer tissue distributions. We center our attention on metrics that are model agnostic and work with a set of generated images. We focus on Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and 1-Nearest Neighbor classifier (1-NN) as common metrics to evaluate GANs. We do not include Inception Score and Mode Score because they do not compare to real data directly, they would require a classification network trained on labels such as survival times or estrogen-receptor (ER) status, and they have also shown lower performance when evaluating GANs [@DBLP:journals/corr/abs-1801-01973; @DBLP:journals/corr/abs-1806-07755]. [@DBLP:journals/corr/abs-1806-07755] reported that the choice of feature space is critical for evaluation metrics, so we follow these results by using the ’pool\_3’ layer from an ImageNet-trained Inception-V1 as a convolutional feature space. We set up two experiments to test how the evaluation metrics capture: - Artificial contamination from different staining markers and cancer types. - Consistency when two sample distributions of the same database are compared. Detecting changes in markers and cancer tissue features ------------------------------------------------------- We used multiple cancer types and markers to account for alterations of color and shapes in the tissue. Markers highlight parts of the tissue with different colors, and cancer types have distinct tissue structures. Examples of these changes are displayed in Figure \[fig:cancer\_types\_markers\].
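The FID referenced above compares Gaussian fits to two feature distributions. A minimal sketch of that computation follows, assuming feature vectors have already been extracted (e.g. from the Inception ’pool\_3’ layer); this illustrates the Fréchet distance formula only, not the exact implementation used in the paper:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """FID between two feature sets, modelled as Gaussians:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary parts from numerical noise
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))
```

Identical feature sets give a distance near zero, and the distance grows as the two distributions separate.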
We constructed one reference image set with 5000 H&E breast cancer images from our data sets of NKI and VGH, and compared it against another set of 5000 H&E breast cancer images contaminated with other markers and cancer types. We used three types of marker-cancer combinations for contamination, all from the Stanford TMA Database [@stanford_tma]: H&E - Bladder cancer, Cathepsin-L - Breast cancer, and CD137 - Lymph/Colon/Liver cancer. ![Different cancer types and markers. (a) H&E Breast cancer, (b) H&E Bladder cancer, (c) Cathepsin-L Breast cancer, and (d) CD137 Bone marrow cancer. We can see different coloring per marker, and tissue architecture per cancer type.[]{data-label="fig:cancer_types_markers"}](images/cancer_marker_type_abcd.jpg) Each set of images was constructed by randomly sampling from the respective marker-cancer type data set, which is done to minimize the overlap between the clean and contaminated sets. Figure \[fig:contamination\_example\] shows how (a) FID, (b) KID, (c) 1-NN behave when the reference H&E breast cancer set is measured against multiple percentages of contaminated H&E breast cancer sets. Marker types have a large impact due to color change and all metrics capture this except for 1-NN. Cathepsin-L highlights parts of the tissue with brown colors and CD137 has similar color to necrotic tissue on H&E breast cancer, but still far from the characteristic pink color of H&E. Accordingly, H&E-Bladder has a better score in all metrics due to the color stain, again except for 1-NN. Cancer tissue type differences are captured by all the metrics, which shows a marker predominance, but we can see that on the H&E marker the differences between breast and bladder types are still captured. In this experiment, we find that FID and KID have a gradual response distinguishing between markers and cancer tissue types; however, 1-NN is not able to give a measure that clearly defines these changes.
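The construction of a contaminated comparison set at a given percentage can be sketched as follows. This is a minimal illustration of the sampling step only; the set size of 5000 and the non-overlap requirement follow the text, while the function and pool names are hypothetical:

```python
import random

def build_contaminated_set(clean_pool, contaminant_pool, n=5000,
                           contamination=0.1, seed=0):
    """Randomly sample n image IDs with the given contamination fraction.
    Sampling is without replacement within each pool, so repeated calls on
    disjoint pools yield non-overlapping sets."""
    rng = random.Random(seed)
    n_bad = round(n * contamination)
    picked = (rng.sample(contaminant_pool, n_bad)
              + rng.sample(clean_pool, n - n_bad))
    rng.shuffle(picked)
    return picked
```

The reference set is then simply `build_contaminated_set(..., contamination=0.0)` drawn from the clean pool alone.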
![Distinguishing a set of H&E breast cancer images against different contamination of markers and cancer types. For a metric to be optimal, the value should decrease along with the contamination. (a) corresponds to FID, (b) KID, (c) 1-NN. FID and KID gradually define changes in marker and tissue type; 1-NN does not provide a clear measure of the changes.[]{data-label="fig:contamination_example"}](images/score_contamination.jpg) Reliability on evaluation metrics --------------------------------- Another evaluation we performed was to study which metrics are consistent when two independent sample distributions with the same contamination percentage are compared. To construct this test, for each contamination percentage, we create two independent sample sets of 5000 images and compare them against each other. Again, we constructed these image sets by randomly selecting images from each of the marker-cancer databases. We do this to ensure there are no overlapping images between the distributions. In Figure \[fig:test\_train\_example\] we show that (a) FID has a stable performance, compared to (b) KID, and especially (c) 1-NN. The metrics should show a close to zero distance for each of the contamination rates since we are comparing two sample-distributions from the same data set. This shows that only FID has a close to zero constant behavior across different data sets when comparing the same tissue image distributions. Based on these two experiments, we argue that 1-NN does not clearly represent changes in the cancer types and marker, and both KID and 1-NN do not give a constant reliable measure across different markers and cancer types. Therefore, we focused on FID as the most promising evaluation metric. ![Consistency of metrics when two independent sets of images with the same contamination are compared. Consistent metrics should be close to zero for each of the contamination rates.
(a) FID, (b) KID, and (c) 1-NN; we can see that FID is the metric that shows a close to zero constant measure.[]{data-label="fig:test_train_example"}](images/train_test_score_verification.jpg) Model Architecture {#appendix:model_arch} ================== Mapping Network $M:z \rightarrow w$ ------------------------------------------------------- $z \in \mathbb{R}^{200}$, $z \sim \mathcal{N}(0, I)$ ResNet Dense Layer and ReLU, $200 \rightarrow 200$ ResNet Dense Layer and ReLU, $200 \rightarrow 200$ ResNet Dense Layer and ReLU, $200 \rightarrow 200$ ResNet Dense Layer and ReLU, $200 \rightarrow 200$ Dense Layer, $200 \rightarrow 200$ : Mapping Network Architecture details of PathologyGAN model.[]{data-label="mapping_network_arch"} Generator Network $G:w \rightarrow x$ ------------------------------------------------------------------------------- Dense Layer, adaptive instance normalization (AdaIN), and leakyReLU $200 \rightarrow 1024$ Dense Layer, AdaIN, and leakyReLU $1024 \rightarrow 12544$ Reshape $7\times7\times256$ ResNet Conv2D Layer, 3x3, stride 1, pad same, AdaIN, and leakyReLU $0.2$ $ 7\times7\times256 \rightarrow 7\times7\times256 $ ConvTranspose2D Layer, 2x2, stride 2, pad upscale, AdaIN, and leakyReLU $0.2$ $ 7\times7\times256 \rightarrow 14\times14\times512 $ ResNet Conv2D Layer, 3x3, stride 1, pad same, AdaIN, and leakyReLU $0.2$ $ 14\times14\times512 \rightarrow 14\times14\times512 $ ConvTranspose2D Layer, 2x2, stride 2, pad upscale, AdaIN, and leakyReLU $0.2$ $ 14\times14\times512 \rightarrow 28\times28\times256 $ ResNet Conv2D Layer, 3x3, stride 1, pad same, AdaIN, and leakyReLU $0.2$ $ 28\times28\times256 \rightarrow 28\times28\times256 $ Attention Layer at $28\times28\times256$ ConvTranspose2D Layer, 2x2, stride 2, pad upscale, AdaIN, and leakyReLU $0.2$ $ 28\times28\times256 \rightarrow 56\times56\times128 $ ResNet Conv2D Layer, 3x3, stride 1, pad same, AdaIN, and leakyReLU $0.2$ $ 56\times56\times128 \rightarrow 56\times56\times128 $
ConvTranspose2D Layer, 2x2, stride 2, pad upscale, AdaIN, and leakyReLU $0.2$ $ 56\times56\times128 \rightarrow 112\times112\times64 $ ResNet Conv2D Layer, 3x3, stride 1, pad same, AdaIN, and leakyReLU $0.2$ $ 112\times112\times64 \rightarrow 112\times112\times64 $ ConvTranspose2D Layer, 2x2, stride 2, pad upscale, AdaIN, and leakyReLU $0.2$ $ 112\times112\times64 \rightarrow 224\times224\times32 $ Conv2D Layer, 3x3, stride 1, pad same, $ 32 \rightarrow 3 $ $ 224\times224\times32 \rightarrow 224\times224\times3 $ Sigmoid : Generator Network Architecture details of PathologyGAN model.[]{data-label="generator_arch"} Discriminator Network $C:x \rightarrow d$ ------------------------------------------------------------------- $x \in \mathbb{R}^{224\times224\times3}$ ResNet Conv2D Layer, 3x3, stride 1, pad same, and leakyReLU $0.2$ $ 224\times224\times3 \rightarrow 224\times224\times3 $ Conv2D Layer, 2x2, stride 2, pad downscale, and leakyReLU $0.2$ $ 224\times224\times3 \rightarrow 112\times112\times32 $ ResNet Conv2D Layer, 3x3, stride 1, pad same, and leakyReLU $0.2$ $ 112\times112\times32 \rightarrow 112\times112\times32 $ Conv2D Layer, 2x2, stride 2, pad downscale, and leakyReLU $0.2$ $ 112\times112\times32 \rightarrow 56\times56\times64 $ ResNet Conv2D Layer, 3x3, stride 1, pad same, and leakyReLU $0.2$ $ 56\times56\times64 \rightarrow 56\times56\times64 $ Conv2D Layer, 2x2, stride 2, pad downscale, and leakyReLU $0.2$ $ 56\times56\times64 \rightarrow 28\times28\times128 $ ResNet Conv2D Layer, 3x3, stride 1, pad same, and leakyReLU $0.2$ $ 28\times28\times128 \rightarrow 28\times28\times128 $ Attention Layer at $28\times28\times128$ Conv2D Layer, 2x2, stride 2, pad downscale, and leakyReLU $0.2$ $ 28\times28\times128 \rightarrow 14\times14\times256 $ ResNet Conv2D Layer, 3x3, stride 1, pad same, and leakyReLU $0.2$ $ 14\times14\times256 \rightarrow 14\times14\times256 $ Conv2D Layer, 2x2, stride 2, pad downscale, and leakyReLU $0.2$ $ 14\times14\times256
\rightarrow 7\times7\times512 $ Flatten $ 7\times7\times512 \rightarrow 25088 $ Dense Layer and leakyReLU, $25088 \rightarrow 1024$ Dense Layer and leakyReLU, $1024 \rightarrow 1$ : Discriminator Network Architecture details of PathologyGAN model.[]{data-label="Discriminator_arch"} Pathologists Tests ================== We provide here examples of the tests taken by the pathologists: - Test I: Sets of 8 images - Pathologists were asked to find the only fake image in each set. - Test II: 10 individual images - Pathologists were asked to rate all individual images from 1 to 5, where 5 meant the image appeared the most real. Additionally, we include the complete tests and solutions. ![Example of Test I.[]{data-label="fig:test_i"}](images/appendix/path_tests/set_2.jpg) ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_0.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_1.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_2.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_3.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_4.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_5.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_6.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_7.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_8.jpg "fig:") ![Two examples of Test II.[]{data-label="fig:test_ii"}](images/appendix/path_tests/individual_9.jpg "fig:") Images ====== We show here two types of figures: - Hand-selected fake images with real Inception-V1 closest neighbors.
- Fake images with the smallest distance to real Inception-V1 closest neighbors. ![Hand-selected fake images: for each row, the first image is a generated one, the remaining seven images are close Inception-V1 neighbors of the fake image.[]{data-label="fig:uncond_hand_picked_uncond"}](images/appendix/uncond/neigbor_selected.jpg) ![Minimum distance fake images: for each row, the first image is a generated one, the remaining seven images are close Inception-V1 neighbors of the fake image.[]{data-label="fig:uncond_Min_distance_uncond"}](images/appendix/uncond/neighbors_min_dist.jpg)
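The neighbour retrieval shown in these figures — finding, for each generated image, the closest real images in Inception-V1 feature space — can be sketched with a plain Euclidean search. Feature extraction itself is assumed to have been done elsewhere; the function name and the choice of seven neighbours mirror the figure layout:

```python
import numpy as np

def closest_real_neighbors(fake_feats, real_feats, k=7):
    """For each fake feature vector, return indices of the k nearest real
    vectors by Euclidean distance (smallest distance first)."""
    # pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
    d2 = (np.sum(fake_feats ** 2, axis=1)[:, None]
          - 2.0 * fake_feats @ real_feats.T
          + np.sum(real_feats ** 2, axis=1)[None, :])
    return np.argsort(d2, axis=1)[:, :k]
```

The "minimum distance" figure then simply sorts the fake images by the distance to their first neighbour.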
--- abstract: 'Platooning of heavy-duty vehicles (HDVs) is a key component of smart and connected highways and is expected to bring remarkable fuel savings and emission reduction. In this paper, we study the coordination of HDV platooning on a highway section. We model the arrival of HDVs as a Poisson process. Multiple HDVs are merged into one platoon if their headways are below a given threshold. The merging is done by accelerating the following vehicles to catch up with the leading ones. We characterize the following random variables: (i) platoon size, (ii) headway between platoons, and (iii) travel time increment due to platoon formation. We formulate and solve an optimization problem to determine the headway threshold for platooning that leads to minimal cost (time plus fuel). We also compare our results with those from Simulation of Urban MObility (SUMO).' author: - 'Xi Xiong, Erdong Xiao, and Li Jin [^1][^2]' bibliography: - 'bib\_LJ.bib' title: '**Analysis of a Stochastic Model for Coordinated Platooning of Heavy-duty Vehicles**' --- [**Index terms**]{}: Automated highways, connected and autonomous vehicles, vehicle platooning, Poisson point process. [^1]: This work was supported in part by NYU Tandon School of Engineering and C2SMART Department of Transportation Center. The authors appreciate the discussion with Profs. Pravin Varaiya, Saurabh Amin, Karl H. Johansson, and Zhong-Ping Jiang. [^2]: X. Xiong and L. Jin are with the Department of Civil and Urban Engineering, and E. Xiao is with the Department of Mechanical and Aerospace Engineering, New York University Tandon School of Engineering, Brooklyn, NY, USA, emails: [email protected], [email protected], [email protected].
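The platoon-formation rule stated in the abstract — consecutive HDVs merge whenever their headway falls below a threshold, under Poisson arrivals — can be illustrated with a short simulation. The arrival rate, threshold, and function name here are illustrative placeholders, not values from the paper:

```python
import random

def platoon_sizes(arrival_rate, headway_threshold, n_vehicles, seed=0):
    """Simulate Poisson arrivals (exponential inter-arrival headways) and
    group consecutive vehicles into one platoon whenever the headway between
    them is below the threshold; return the list of platoon sizes."""
    rng = random.Random(seed)
    sizes = [1]
    for _ in range(n_vehicles - 1):
        headway = rng.expovariate(arrival_rate)
        if headway < headway_threshold:
            sizes[-1] += 1   # merge into the current platoon
        else:
            sizes.append(1)  # start a new platoon
    return sizes
```

With exponential headways, each vehicle independently joins the preceding platoon with probability $p = 1 - e^{-\lambda\tau}$, so platoon sizes are geometrically distributed with mean $e^{\lambda\tau}$ — consistent with the platoon-size random variable characterized in the paper.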
--- abstract: | An analysis of the environments around a sample of 28 3CR radio galaxies with redshifts $0.6 < z < 1.8$ is presented, based primarily upon K–band images down to $K \sim 20$ taken using the UK Infrared Telescope (UKIRT). A net overdensity of K–band galaxies is found in the fields of the radio galaxies, with the mean excess counts being comparable to that expected for clusters of Abell Class 0 richness. A sharp peak is found in the angular cross–correlation amplitude centred on the radio galaxies, which, for reasonable assumptions about the luminosity function of the galaxies, corresponds to a spatial cross–correlation amplitude between those determined for low redshift Abell Class 0 and Abell Class 1 clusters. These data are complemented by J–band images also from UKIRT, and by optical images from the Hubble Space Telescope. The fields of the lower redshift ($z \lta 0.9$) radio galaxies in the sample generally show well–defined near–infrared colour–magnitude relations with little scatter, indicating a significant number of galaxies at the redshift of the radio galaxy; the relations involving colours shortward of the 4000Å break show considerably greater scatter, suggesting that many of the cluster galaxies have low levels of recent or on–going star formation. At higher redshifts the colour–magnitude sequences are less prominent due to the increased field galaxy contribution at faint magnitudes, but there is a statistical excess of galaxies with the very red infrared colours ($J-K \gta 1.75$) expected of old cluster galaxies at these redshifts. Although these results are appropriate for the [*mean*]{} of all of the radio galaxy fields, there exist large field–to–field variations in the richness of the environments. Many, but certainly not all, powerful $z \sim 1$ radio galaxies lie in (proto–)cluster environments. author: - | P. N.
Best[^1][^2]\ Sterrewacht Leiden, Postbus 9513, 2300 RA Leiden, the Netherlands\ bibliography: - 'pnb.bib' nocite: - '[@bow98]' - '[@dre87a; @djo87b]' - '[@whi91]' - '[@but78]' - '[@sta97]' - '[@fan74]' - '[@yat89]' - '[@cra96b]' - '[@mcc95]' - '[@bes98d]' - '[@bar96a]' - '[@bes97c]' - '[@dic97a]' - '[@min98; @djo95; @mou97; @szo98; @gar93; @mcl95; @ber98]' - '[@gia98]' - '[@roc98a]' - '[@lon79b; @pre88]' - '[@kau98a]' - '[@gui90]' - '[@bes98d]' - '[@raw95; @eco95; @bes98d]' - '[@mcc87]' - '[@sta98]' - '[@fev87; @ben97a]' - '[@dok99b]' - '[@kor95]' title: 'The cluster environments of the $\mathbf{z \sim 1}$ 3CR radio galaxies' --- \#1[to 0pt[\#1]{}]{} 50[$H_0 = 50$kms$^{-1}$Mpc$^{-1}$]{} \[firstpage\] Galaxies: clustering — Galaxies: active Introduction {#intro} ============ Clusters of galaxies are the largest, most massive, collapsed structures in the Universe, and as such are of fundamental importance for many cosmological studies. They provide a unique probe of large–scale structure in the early Universe and, as the systems which separated from the Hubble flow at the earliest epochs, they contain the oldest galaxies known. These can strongly constrain the first epoch of the formation of ellipticals, and hence set a lower limit to the age of the Universe; the effectiveness of using such old galaxies at high redshifts towards this goal has been well demonstrated by Dunlop et al. . Further, because clusters contain large numbers of galaxies at the same distance, they are important testbeds for models of galaxy evolution. The cores of optically–selected clusters are dominated by a population of luminous early–type red galaxies which occupy a narrow locus in colour–magnitude relations. Stanford, Eisenhardt and Dickinson showed that, out to redshifts $z \sim 1$, the evolution of the colours of early–type cluster galaxies on these relations is completely consistent with passive evolution of an old stellar population formed at high redshift. 
The small intrinsic scatter of the galaxy colours, and particularly the fact that it remains small at redshifts $z \gta 0.5$ [@sta98], implies that the star formation of the ellipticals comprising a cluster must have been well synchronized, and sets tight limits on the amount of recent star formation that might have occurred (e.g. Bower, Kodama and Terlevich 1998). Cluster ellipticals also show a tight relationship between their effective radius, effective surface brightness, and central velocity dispersion (the ‘fundamental plane’; c.f Dressler  1987, Djorgovski and Davies 1987). The location of these ellipticals within the fundamental plane has been shown to evolve with redshift out to $z = 0.83$, in a manner which implies that the mass–to–light ratio of the galaxies evolves as $\Delta {\rm log} (M/L_{\rm r}) \sim -0.4 \Delta z$, roughly in accordance with passive evolution predictions (e.g. van Dokkum et al 1998 and references therein). These results are in qualitative agreement with ‘monolithic collapse’ models of galaxy formation, in which an elliptical galaxy forms the majority of its stars in a single short burst of star formation at an early cosmic epoch. On the other hand, the hierarchical galaxy formation models favoured by cold dark matter cosmologies (e.g. White and Frenk 1991) predict a later formation of galaxies with star formation on–going to lower redshifts. These are supported by the appearance of a population of bluer galaxies in many clusters at redshifts $z \gta 0.3$ (the Butcher–Oemler effect; Butcher and Oemler 1978) and by the high fraction of merging galaxies seen in the $z=0.83$ cluster MS1054$-$03 [@dok99a]. Since finding clusters at high redshift is strongly biased towards the very richest environments, in which early–type galaxies will have formed at the very highest redshifts, hierarchical models can still explain the apparent passive evolution and small scatter of the colour–magnitude relation out to redshifts $z \sim 1$. 
It is clearly important to extend cluster studies out to still higher redshifts, but the difficulty lies in the detection of clusters. At optical wavelengths, the contrast of a cluster above the background counts is minimal at these redshifts: the deep wide–area ESO Imaging Survey (EIS) has found 12 ‘good’ cluster candidates above redshift 0.8, but none above $z=1.2$ [@sco99]. Addition of near–infrared wavebands helps (e.g. Stanford  1997), but is still a relatively inefficient method. Selection using X–ray techniques is more reliable, but X–ray surveys are currently sensitivity limited: the ROSAT Deep Cluster Survey found about 30 clusters with $z \gta 0.5$, but none above redshift 0.9 [@ros98]. Chandra and XMM will make a big improvement here. An alternative approach is to use powerful radio galaxies as probes of distant clusters: these can be easily observed out to the highest redshifts, and there is growing evidence that at high redshifts they lie in rich environments. At low redshifts powerful double radio sources (FRII’s; Fanaroff & Riley 1974) are associated with giant elliptical galaxies that are typically the dominant members of galaxy groups; the only nearby radio source of comparable radio luminosity to the powerful high redshift radio galaxies is Cygnus A, and this source lies in a rich cluster [@owe97]. At a redshift $z \sim 0.5$, analysis of the galaxy cross–correlation function around FRII radio galaxies (Yates, Miller and Peacock 1989) and an Abell clustering classification [@hil91] have shown that about 40% of radio sources are located in clusters of Abell richness class 0 or greater. At $z \sim 1$, the circumstantial evidence that at least some powerful radio sources are located at the centres of clusters is overwhelming and includes the following: - Detections of luminous X–ray emission from the fields of the radio galaxies (e.g. 
Crawford and Fabian 1996), sometimes observed to be extended, indicating the presence of a relatively dense intracluster medium. - Large over-densities of galaxies in the fields of some distant radio sources, selected by infrared colour [@dic97a]. - Direct detections of companion galaxies with narrow–band imaging (McCarthy, Spinrad and van Breugel 1995) and spectroscopic studies [@dic97a]. - The observation that powerful radio galaxies are as luminous as brightest cluster galaxies at $z \sim 0.8$, and have radial light profiles which are well–matched by de Vaucouleurs law with large (10 to 15kpc) characteristic radii (Best, Longair and Röttgering 1998b). - The radio sources display large Faraday depolarisation and rotation measures (e.g. Best  ; see also Carilli for redshift $z > 2$ radio sources), requiring a dense, ionised surrounding medium. - Theoretical arguments that to produce such luminous radio sources requires not only a high AGN power, due to a very massive central black hole being fueled at close to the Eddington limit [@raw91b], but also a dense environment to confine the radio lobes and convert the jet kinetic energy efficiently into radiation (e.g. see Barthel and Arnoud 1996). At still higher redshifts, radio sources have been detected out to redshift $z=5.2$ [@bre99], and some well–studied sources are known to lie in cluster environments. For example, towards the radio galaxy 1138$-$215 ($z=2.2$) extended X–ray emission has been detected (Carilli  1998) and narrow–band imaging reveals over 30 nearby Ly-$\alpha$ emitters [@kur00b]. Radio sources may therefore offer a unique opportunity to study dense environments back to the earliest cosmic epochs. 
As yet, however, there have been no [*systematic*]{} studies of radio galaxy environments much beyond $z \sim 0.5$, and so it is important to investigate in detail the nature of the environments of the general population: do all powerful distant radio galaxies lie in cluster environments, or do only a minority, which have simply ‘grabbed the headlines’?; what are the properties of any clustering environments (richness, radius, shape, etc.) surrounding these objects?; what is the nature of the constituent galaxies (morphological composition, segregation, etc.) of any detected cluster? In this paper, deep near–infrared observations of the fields of a sample of 28 powerful radio galaxies with $0.6 < z < 1.8$ are analysed, in conjunction with Hubble Space Telescope (HST) images of the same fields, to investigate the ubiquity and richness of the environments of the radio galaxies. The observations, data reduction, and source extraction are described in Section \[obssect\]. In Section \[galcounts\] the integrated galaxy counts are considered, in Section \[crosscor\] the angular and spatial cross–correlation amplitudes are derived, and in Section \[colmagsect\] an investigation of the colour–magnitude and colour–colour relations is carried out. The implications of the results are discussed in Section \[discuss\]. Throughout the paper values for the cosmological parameters of $\Omega = 1$ and $H_0 = 50$kms$^{-1}$Mpc$^{-1}$ are assumed. Observations and Data Reduction {#obssect} =============================== The dataset {#dataset} ----------- The data used for this research were presented and described by Best, Longair and R[ö]{}ttgering . In short, the sample consists of 28 radio galaxies with redshifts $0.6 < z < 1.8$ drawn from the revised 3CR sample of Laing, Riley and Longair . The fields of these radio galaxies were observed at optical wavelengths using the Wide-Field Planetary Camera II (WFPC2) on the HST generally for one orbit in each of two different wavebands.
They were also observed at near–infrared wavelengths using IRCAM3 on the UK Infrared Telescope (UKIRT) in the K–band for approximately 54 minutes, and in 20 of the 28 cases also in the J–band. For 5 sources (3C13, 3C41, 3C49, 3C65, 3C340) a further 3 to 4 hours of K–band observations have subsequently been taken using IRCAM3 in September 1998 (Best , in preparation), and these were combined with the original data to provide much deeper images. These further data were taken in the same manner as the original IRCAM3 data except for the use of the tip-tilt system that was available on UKIRT then, but was not available for the original runs; use of this system provided a significant reduction in the effective seeing. The HST data were reduced according to the standard Space Telescope Science Institute (STScI) calibration pipeline [@lau89], following the description of Best  . The UKIRT data were reduced using IRAF and mosaicked using the [^3] package, again following the general procedure outlined by Best  . During the mosaicking process, the individual galaxy frames were block replicated by a factor of 4 in each dimension to allow more accurate alignment. The final images were then block–averaged in 2 by 2 pixels, resulting in frames with a pixel size of 0.142 arcseconds. The seeing varied from image to image, from about 0.7 to 1.2 arcsec. The mosaicking package also produces an output exposure map indicating the exposure for each pixel on the combined image; this is useful for weighting the different regions of the images, which reach different limiting sensitivities owing to the dithering process used in taking the data (see Best  1997). This dataset has a number of advantages and disadvantages for use in studies of clustering around distant radio galaxies.
On the negative side, since these data were originally taken with the goal of studying the host galaxies of the radio sources, no comparison fields were taken with which the data can be compared to statistically remove background counts. Further, IRCAM3 is only a 256 by 256 array with a field of view per frame of just over 70 by 70 arcseconds. After mosaicking this provided a final image of nearly 100 by 100 arcsec, which corresponds to a little over 800kpc at a redshift of one for the adopted cosmology, but the highest sensitivity is only obtained in the central 50 by 50 arcsec region. This is a significantly smaller size than a cluster would be expected to have at this redshift, and so only those galaxies in the central regions of any prospective clusters will be investigated (cf. the much larger extent of the structure around the radio galaxy 3C324 at redshift 1.206, as demonstrated in Figure 2 of Dickinson , although just over half of the associated cluster members found there would lie within the IRCAM3 field of view). Although these two factors limit the usefulness of the dataset, they are far outweighed by the benefits. First, the large size and essentially complete nature of the sample make it ideal for surveying the average environments of these sources. Second, the multi–colour nature of the dataset provides important information on any cluster membership through the creation of colour–magnitude diagrams and the determination of photometric redshifts. Third, the availability of near–infrared data is essential for such studies at these high redshifts since it continues to sample the old stellar populations longwards of the 4000Å break; in the K–waveband the K and evolutionary corrections are small and relatively independent of morphological type, even at redshifts $z \gta 1$, in contrast to the situation at optical wavelengths. 
Fourth, the availability of the high resolution HST data allows accurate star–galaxy separation down to the faintest magnitudes studied. For these reasons, this dataset is well–suited to investigating the environments of $z \sim 1$ radio galaxies. K–band image detection and photometry {#sextract} ------------------------------------- Throughout these analyses, the K–band data were used as the primary dataset. Image detection and photometry on the K–band frames were carried out using SExtractor version 2.1 [@ber96]. As a result of the dithering technique used to obtain near–infrared data, the exposure time varies with position across the image meaning that the rms background noise level also varies. SExtractor allows the supply of an input weight map by which the local detection threshold is adjusted as a function of position across the image to compensate for the varying noise levels, thus avoiding missing objects in the most sensitive regions of the image or detecting large numbers of spurious features in the noisiest regions. The output exposure map of each field produced by the mosaicking process was used as the weight map for SExtractor. The source extraction parameters were set such that, to be detected, an object must have a flux in excess of 1.5 times the local background noise level over at least $N$ connected pixels, where $N$ was varied a little according to the seeing conditions, but in general was about 20 (equivalent to 5 pixels prior to the block replication and averaging during the mosaicking procedure). To test the validity of this extraction method, a search for negative holes was carried out using the same extraction parameters, and resulted in a total of only 43 negative detections throughout the 28 images, that is, about 1.5 negative detections per frame. The fluxes associated with these negative detections all correspond to positive features below the 50% completeness limit and so it is expected that there are essentially no false positive detections above that limit.
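The detection criterion described above — flux above 1.5 times the local noise over at least $\sim$20 connected pixels — can be sketched with a simple connected-component pass. This illustrates the thresholding logic only; it is not the actual source-extraction implementation, and the per-pixel rms map stands in for the weight-map machinery described in the text:

```python
import numpy as np
from scipy import ndimage

def detect_objects(image, rms_map, thresh=1.5, min_pixels=20):
    """Label connected groups of pixels exceeding thresh * local rms and
    keep those with at least min_pixels members; returns a list of
    flat pixel-index arrays, one per detected object."""
    mask = image > thresh * rms_map
    labels, n = ndimage.label(mask)
    objects = []
    for i in range(1, n + 1):
        pix = np.flatnonzero(labels == i)
        if pix.size >= min_pixels:
            objects.append(pix)
    return objects
```

Running the same pass on the negated image gives the "negative holes" test used above to gauge the false-positive rate.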
The output from SExtractor was examined carefully, with minor (typically 1–2 per field) modifications being made to the catalogue if necessary: objects coinciding with the spikes of bright stars were removed, the data for occasional entries that had been separated into two objects by SExtractor but were clearly (on comparison with data at other wavebands) a single object were combined, and very occasionally it was necessary to add to the catalogue an obvious object which had been missed due to its proximity to a brighter object. Further, all objects that lay within 21 pixels (3 arcsec) of the edge of an image were removed from the catalogue in case their magnitudes were corrupted by aperture truncation. SExtractor provides a [flag]{} parameter to indicate the reliability of its measured magnitudes. After this trimming of objects close to the edges of the fields, less than 2% of all objects had high [flag]{} values indicative of truncated apertures, saturated pixels, or corrupted data. These objects were also removed from the catalogue. SExtractor’s [mag\_best]{} estimator was used to determine the magnitudes of the sources; this yields an estimate for the ‘total’ magnitude using Kron’s first–moment algorithm, except where there is a nearby companion that may bias the total magnitude estimate by more than 10%, in which case a corrected isophotal magnitude is used instead. The determined magnitudes were also corrected for galactic extinction, using the extinction maps of Burstein and Heiles. To investigate the accuracy of these total magnitudes, and the completeness level of the source extraction as a function of position on the image, Monte–Carlo simulations were carried out using the following 4–step process. [*Step 1:*]{} The point–spread function (PSF) of each K–band image was determined using objects which were unsaturated and which were unresolved on the HST images. [*Step 2:*]{} A series of model galaxies was made. 
30% of the galaxies were assumed to be ellipticals, with radial light profiles governed by de Vaucouleurs’ law, $I(r) \propto {\rm exp} \left [-7.67 \left((a/a_{\rm e})^2 + (b/b_{\rm e})^2\right )^{1/8}\right ]$, where $a$ and $b$ are the distances along the projected major and minor axes, and $a_{\rm e}$ and $b_{\rm e}$ are the characteristic scale lengths in those directions. Lambas, Maddox and Loveday provide a parameterization for the ellipticities of elliptical galaxies in the APM Bright Galaxy Survey in terms of the parameter $p = b_{\rm e}/a_{\rm e}$. An ellipticity was drawn at random for each galaxy from this distribution, and the position angle of the major axis was also chosen randomly. Songaila found that for galaxies with $18 < K < 19$ the median redshift is about 0.6, which means that for $K \gta 18$ (where detection and completeness are being tested) the majority of the galaxies will be distant enough that their angular sizes are relatively insensitive to redshift. High redshift ellipticals have typical characteristic sizes ranging from 2 to 10 kpc (e.g. Dickinson 1997), and so the apparent characteristic size, $r_{\rm e}$ ($= (a_{\rm e}b_{\rm e})^{1/2}$), of each elliptical was chosen randomly from the range 0.2 to 1.2 arcsec. The remaining 70% of the galaxies were built using a single exponential profile appropriate for galaxy disks, $I(r) \propto {\rm exp} [-r/r_{\rm d}]$, where $r_{\rm d}$ is the characteristic scale length of the disk. Mao, Mo and White show that the characteristic scale lengths of disks decrease with redshift approximately as $(1+z)^{-1}$, but Simard shows that a magnitude–size correlation somewhat counter–balances this: at a given apparent magnitude, higher redshift objects must be intrinsically brighter and so are larger. From their data, the typical scale length of galaxy disks at around our completeness limit will be 1–4 kpc. 
The disk scale length of each galaxy was therefore chosen at random from the range 0.1 to 0.5 arcsec. The disk inclination was chosen at random from 0 to 90 degrees, with inclinations greater than 75 degrees being replaced by 75 degrees to account for the non–zero thickness of the disk. The projected orientation of the galaxy was also chosen randomly. The model galaxies were then convolved with the stellar PSF derived in step 1. [*Step 3:*]{} Five stellar or galactic objects were added to each frame with a random position and scaled to a random magnitude in the range $15 < K < 22$. SExtractor was then run on the new image using the same input parameters as for the original source extraction, to see if the added objects were detected, and if so to determine the difference between the total magnitude measured by SExtractor and the true total magnitude. [*Step 4:*]{} Step 3 was repeated until 25000 stellar objects and 25000 model galaxies had been added to each image. From these results, the mean completeness fraction was determined as a function of magnitude for both stars and galaxies over the entire set of images. The results are shown in Figure \[compplot\]. The 50% completeness limits are $K=19.75$ for galaxies and $K=20.4$ for stars. As discussed in Section \[dataset\], new deeper observations are available for five fields, and considering these deeper fields alone, the 50% completeness limits are 20.2 and 20.9 for galaxies and stars respectively. The input and measured magnitudes of each detected object can also be compared. Typical results for an individual field are shown in Figure \[inoutmags\]. Note the generally low scatter, particularly at bright magnitudes, but also the occasional source with a large deviation. Such deviations are caused by the proximity of the model object to a bright source in the field; large errors such as these are avoided in the observed data by careful examination of the output SExtractor catalogues, as discussed above. 
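The model–galaxy construction of Step 2 can be sketched as follows. This is a minimal illustration using the two profile formulae given above, assuming a square pixel grid and omitting the PSF convolution and flux scaling; the function names and grid size are hypothetical.

```python
import numpy as np

def de_vaucouleurs(a, b, a_e, b_e):
    """de Vaucouleurs law: I ~ exp[-7.67 ((a/a_e)^2 + (b/b_e)^2)^(1/8)]."""
    return np.exp(-7.67 * ((a / a_e) ** 2 + (b / b_e) ** 2) ** 0.125)

def exponential_disk(r, r_d):
    """Exponential disk: I ~ exp(-r / r_d)."""
    return np.exp(-r / r_d)

def model_galaxy(shape, kind, scale_pix, axis_ratio, pa_deg):
    """Render a model galaxy image (before PSF convolution)."""
    ny, nx = shape
    y, x = np.mgrid[:ny, :nx]
    y = y - (ny - 1) / 2.0
    x = x - (nx - 1) / 2.0
    pa = np.deg2rad(pa_deg)
    # rotate pixel coordinates onto the projected major/minor axes
    a = x * np.cos(pa) + y * np.sin(pa)
    b = -x * np.sin(pa) + y * np.cos(pa)
    if kind == "elliptical":
        img = de_vaucouleurs(a, b, scale_pix, scale_pix * axis_ratio)
    else:  # disk: circular profile in the inclined disk plane
        r = np.hypot(a, b / axis_ratio)
        img = exponential_disk(r, scale_pix)
    return img / img.sum()  # unit total flux; scale to a chosen magnitude later

rng = np.random.default_rng(0)
gal = model_galaxy((64, 64), "elliptical", scale_pix=5.0,
                   axis_ratio=0.7, pa_deg=float(rng.uniform(0.0, 180.0)))
```

The rendered image would then be convolved with the measured PSF and added to the frame at a random position, as in Step 3.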
The mean difference between the input and measured magnitudes, and the scatter of the measured magnitudes around this mean, are shown as functions of input magnitude for both model stars and galaxies in Figure \[inmagplots\].

Multi–colour source extraction {#colsext}
------------------------------

The J–band frames were aligned with the K–band frames by using a number of objects that appeared unresolved on the HST images. This alignment often involved a small shift in position of the frames, but generally much less than a degree of frame rotation. SExtractor was then run in its double image mode using the J and K–band images. In double image mode SExtractor uses one image to detect objects and define the apertures to be used for flux determination, and then the fluxes and magnitudes are measured from the second image. In this way J–band magnitudes or upper limits were determined for the sources detected in the K–band, through exactly the same apertures. For the HST data, SExtractor was first run on the HST frames obtained at the end of the calibration procedure. Then, the four separate WFPC2 frames were overlaid individually with the K–band data, using between 3 and 15 unresolved objects visible at both wavelengths. In this process the HST data were re-pixelated to match the UKIRT data, which is essential for running SExtractor jointly on the K–band and HST images; the re-pixelated and aligned HST images were then convolved to the angular resolution of the K–band data using a Gaussian convolving function. SExtractor was run in double image mode on the convolved HST data, thus measuring accurate fluxes and magnitudes for the objects through the same apertures as the K–band data. The HST fluxes were corrected for the small differences in gain ratios between the different WFPC2 chips, and for charge transfer efficiency effects using a 4% linear correction ramp [@hol95]. They were also corrected for galactic extinction using the extinction maps of Burstein and Heiles. 
Finally, for each object the ‘stellaricity index’ (see Section \[stargalsep\]), used for the separation of stars and galaxies, was replaced by that calculated in the run of SExtractor on the original HST frame: using the highest angular resolution data provides a much more accurate determination of this parameter. For both the J–band and HST frames, only objects which were detected with fluxes of at least 3$\sigma$ were considered, where $\sigma$ is the uncertainty on the flux measurement provided by SExtractor; this flux error estimate includes the uncertainty due to the Poisson nature of the detected counts and that from the standard deviation of the background counts. An additional source of flux error arises from the uncertainty in the subtraction of the background count level as a function of position across the image. This value was estimated as the product of the area of the extraction aperture and the rms variation of the subtracted background flux across the image. This background subtraction error estimate was combined in quadrature with the flux error given by SExtractor to determine the uncertainties on the magnitudes of the extracted objects.

Star–galaxy separation {#stargalsep}
----------------------

SExtractor provides a ‘stellaricity index’ for each object, which is an indication of the likelihood of an object being a galaxy or a star, based on a neural network technique [@ber96]. In the ideal case a galaxy has a stellaricity index of 0.0 and a star has 1.0. In practice, at faint magnitudes, low signal–to–noise and galaxy sizes smaller than the seeing lead to an overlap in the calculated stellaricity indices for the two types of object. 
For sources which were present on the HST frames (typically only about 80% of objects, the precise percentage depending upon the sky rotation angle at which the HST frame was taken), the stellaricity index for each object determined from the un-convolved HST data was adopted: the high resolution of these images meant that, apart from the very reddest objects, star–galaxy separation[^4] was relatively unambiguous right down to the completeness limit of the K–band frames. For the objects for which no HST data were available, stellaricity indices were taken from the K–band data. At magnitudes $K \lta 17.5$, star–galaxy separation could be carried out directly from these stellaricity values. At fainter magnitudes it was still possible to provide a fairly accurate segregation of galaxies from stars by combining the stellaricity index with the $J-K$ (or F814W$-K$) colour of the object, as shown in Figure \[colstel\]: the bluer average colour of stars can be used to separate out the stars and galaxies in the ambiguous range of stellaricity indices from about 0.5 to 0.9. Figure \[starcnts\] shows the total star counts as a function of magnitude derived from all of the frames, fitted with the function $\log N(K) = 0.247 K - 0.830$. This good fit to the differential star counts using a single power–law distribution clearly demonstrates that the star–galaxy separation is working well.

Galaxy Counts {#galcounts}
=============

Using the differences between true and measured magnitudes determined for the simulated galaxies in Section \[sextract\] (and Figure 3), the measured ‘total’ magnitudes of the galaxies were converted to true ‘total’ magnitudes, and these were binned in 0.5 magnitude bins over the magnitude range $14 < K < 20$ to determine the raw galaxy counts, $n_{\rm raw}$. These were corrected for completeness using the observed mean completeness fraction as a function of magnitude from the simulations (Figure \[compplot\]). 
A possible source of systematic error in the number counts at faint magnitudes must also be taken into account: the increased scatter in the photometric measurements at faint magnitudes, and the fact that there are more faint galaxies than bright galaxies, mean that it is more likely for faint galaxies to be brightened above the $K=20$ limit (or be brightened and move up one magnitude bin) than for brighter galaxies to appear erroneously in a fainter bin or fall out of the catalogue completely. This leads to an apparent increase in the number counts at the faintest magnitudes. The results from the model galaxy simulations were used to correct for this effect: the required corrections were smaller than 10% in all cases. Combining these effects produced corrected galaxy counts ($n_{\rm c}$); scaling by the observed sky area then produced final counts per magnitude per square degree, $N_{\rm c}$. Number counts were also derived using the same technique considering just the five fields with deeper images (see Section \[dataset\]). This provided determinations of galaxy counts for $17.5 < K < 20.5$ which may be more reliable at the fainter magnitudes due to smaller completeness corrections. 
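The correction chain just described (raw binned counts, division by the mean completeness fraction, scaling to counts per magnitude per square degree) can be sketched as below; this is a minimal illustration with a toy completeness curve rather than the measured one, and all names are hypothetical.

```python
import numpy as np

def corrected_counts(mags, completeness, area_deg2, bins):
    """Bin 'total' magnitudes, divide each bin by its mean completeness
    fraction, and scale to counts per magnitude per square degree."""
    n_raw, edges = np.histogram(mags, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    n_c = n_raw / completeness(centres)       # completeness-corrected counts
    N_c = n_c / (area_deg2 * np.diff(edges))  # per mag per square degree
    return n_raw, n_c, N_c

# Toy completeness: 100% at bright magnitudes, declining beyond K = 18.
toy_completeness = lambda m: np.clip(1.0 - 0.1 * (m - 18.0), 0.05, 1.0)

mags = np.array([17.1, 17.2, 17.4, 18.6, 18.7, 19.8])
bins = np.arange(17.0, 20.1, 0.5)
n_raw, n_c, N_c = corrected_counts(mags, toy_completeness,
                                   area_deg2=0.01, bins=bins)
```

The Eddington-type bias correction described above would be applied as a further multiplicative adjustment to each bin, derived from the simulations.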
\begin{tabular}{crrrrr}
$K$ & $n_{\rm raw}$ & $n_{\rm c}$ & $N_{\rm c}$$^*$ & $\delta N_{\rm c}$$^*$ & $N_{\rm lit}$$^*$ \\
\multicolumn{6}{c}{All fields} \\
14.0 -- 14.5 & 1 & 1.0 & 101 & 101 & 48 \\
14.5 -- 15.0 & 2 & 2.0 & 202 & 143 & 85 \\
15.0 -- 15.5 & 3 & 3.0 & 304 & 175 & 218 \\
15.5 -- 16.0 & 10 & 10.1 & 1021 & 320 & 620 \\
16.0 -- 16.5 & 18 & 18.2 & 1840 & 430 & 960 \\
16.5 -- 17.0 & 36 & 36.4 & 3680 & 610 & 1530 \\
17.0 -- 17.5 & 63 & 64.3 & 6500 & 830 & 2530 \\
17.5 -- 18.0 & 93 & 90.8 & 9180 & 1010 & 5540 \\
18.0 -- 18.5 & 125 & 123.7 & 12500 & 1240 & 9470 \\
18.5 -- 19.0 & 189 & 203.0 & 20500 & 1770 & 10500 \\
19.0 -- 19.5 & 174 & 208.9 & 21100 & 2450 & 11600 \\
19.5 -- 20.0 & 168 & 306.7 & 31000 & 4700 & 17200 \\
\multicolumn{6}{c}{Deeper fields only} \\
17.5 -- 18.0 & 18 & 18.4 & 9340 & 2190 & 5540 \\
18.0 -- 18.5 & 23 & 21.6 & 10980 & 2510 & 9470 \\
18.5 -- 19.0 & 43 & 44.6 & 22700 & 3610 & 10500 \\
19.0 -- 19.5 & 46 & 47.9 & 24300 & 4080 & 11600 \\
19.5 -- 20.0 & 59 & 83.1 & 42200 & 6600 & 17200 \\
20.0 -- 20.5 & 54 & 114.6 & 58200 & 10900 & 22700 \\
\end{tabular}

These galaxy counts are tabulated in Table \[galcntstab\] and plotted in Figure \[galcntsplot\]. The error on the number counts in each bin was calculated from two factors: the Poissonian error on the raw galaxy counts, and an error in the completeness correction which is generously assumed to be 30% of the number of counts added in the correction. The number counts are compared on the plot with counts from various K–band field surveys in the literature. The final column in Table \[galcntstab\] shows the mean galaxy counts per magnitude bin determined from these literature field surveys. The galaxy counts derived in this paper show an excess of counts relative to the literature counts, which for magnitudes $K > 15.5$ is at greater than the $1\sigma$ significance level. For comparison, the K–magnitudes of the radio galaxies themselves span the range $15.4 < K < 18.0$. The integrated excess counts over the magnitude range $15.5 < K < 20$ correspond to, on average, 11 galaxies per field. 
Unfortunately, as described in Section \[dataset\], the original goal of these observations was to carry out detailed studies of the radio source host galaxies, and so no blank sky frames were taken. Therefore no comparison can be made between radio galaxy and blank sky frames to determine whether the excess counts found here are associated with structures surrounding the radio galaxies or whether there is some systematic offset between the counts calculated here and the literature counts. We believe that the majority of the excess counts are associated with the presence of the radio galaxies, for a number of reasons. First, Roche, Eales and Hippelein found a similar excess in the K–band galaxy counts in a study of the fields surrounding 6C radio galaxies at similar redshifts. Second, as reviewed in the introduction, there are numerous lines of evidence suggesting that there are clusters around at least some of these objects. Third, later in this paper it will be shown that cross–correlation analyses and colour–magnitude relations indicate an overdensity of galaxies comparable to that observed.

Cross–correlation analyses {#crosscor}
==========================

Angular cross-correlation estimators {#angcross}
------------------------------------

The clustering of galaxies around the radio galaxies can be investigated using the angular cross–correlation function $w(\theta)$, which is defined from the probability of finding two sources in areas $\delta\Omega_1$ and $\delta\Omega_2$ separated by a distance $\theta$: $$\delta P = N^2 [1 + w(\theta)] \delta\Omega_1 \delta\Omega_2$$ where $N$ is the mean surface density of sources on the sky. 
From this, it follows that for a given survey: $$DD(\theta) = \frac{1}{2} n_{\rm D} (n_{\rm D}-1) [1 + w(\theta)] \frac{\langle\delta\Omega_{\rm D}(\theta)\rangle}{\Omega}, \label{dddef}$$ where $DD(\theta)$ is the number of data–data pairs with angular separation between $\theta$ and $\theta + \delta\theta$, $n_{\rm D}$ is the total number of sources in the data catalogue, $\Omega$ is the total angular area of sky sampled and $\langle\delta\Omega_{\rm D}(\theta)\rangle$ is the mean angular area of sky accessible at a distance $\theta$ to $\theta + \delta\theta$ around the data points. Clearly $\langle\delta\Omega_{\rm D}(\theta)\rangle$ is extremely difficult to calculate due to boundary effects; various estimators for $w(\theta)$ have therefore been derived using comparisons between the data points and catalogues of randomly distributed points (e.g. see Cress 1996 for a discussion). For the estimator adopted in this paper, it is considered that if $n_{\rm R}$ random points are added to the image (where $n_{\rm R} \gg n_{\rm D}$ to minimise errors introduced by the random catalogue) then the number $DR(\theta)$ of data–random pairs between $\theta$ and $\theta + \delta\theta$ will be given by: $$DR(\theta) = n_{\rm D} n_{\rm R} \frac{\langle\delta\Omega_{\rm D}(\theta)\rangle}{\Omega}. \label{drdef}$$ Combining equations \[dddef\] and \[drdef\] gives the following estimator for $w(\theta)$: $$w(\theta) = \frac{2 n_{\rm R}}{(n_{\rm D}-1)} \frac{DD(\theta)}{DR(\theta)} - 1. \label{wthetadef}$$ In addition, it is possible to consider the angular cross–correlation function just around an individual source, in this case the radio galaxy. 
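Before turning to the single–source case, the estimator of equation \[wthetadef\] can be sketched directly from pair counts. A brute–force illustration (quadratic in the number of objects, so suitable only for small catalogues; all names hypothetical):

```python
import numpy as np

def pair_counts(xy1, xy2, bins, auto=False):
    """Histogram of pairwise separations. With auto=True, each unordered
    pair is counted once, as in the DD term of equation [dddef]."""
    d = np.hypot(xy1[:, None, 0] - xy2[None, :, 0],
                 xy1[:, None, 1] - xy2[None, :, 1])
    if auto:
        i, j = np.triu_indices(len(xy1), k=1)  # i < j: unique pairs only
        d = d[i, j]
    else:
        d = d.ravel()
    return np.histogram(d, bins=bins)[0]

def w_theta(data_xy, rand_xy, bins):
    """Equation [wthetadef]: w = 2 n_R / (n_D - 1) * DD/DR - 1."""
    n_d, n_r = len(data_xy), len(rand_xy)
    dd = pair_counts(data_xy, data_xy, bins, auto=True)
    dr = pair_counts(data_xy, rand_xy, bins)
    return 2.0 * n_r / (n_d - 1) * dd / dr - 1.0

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=(50, 2))    # unclustered toy 'data'
rand = rng.uniform(0.0, 1.0, size=(2000, 2))  # n_R >> n_D random points
w = w_theta(data, rand, bins=np.linspace(0.05, 1.0, 5))
```

For unclustered toy data, $w(\theta)$ scatters around zero; the correlated real catalogues produce the excesses discussed below.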
For this, the parameters become $RGD(\theta) = (n_{\rm D}-1) [1 + w_{\rm rg}(\theta)] \langle\delta\Omega_{\rm RG}(\theta)\rangle / \Omega$ and $RGR(\theta) = n_{\rm R} \langle\delta\Omega_{\rm RG}(\theta)\rangle / \Omega$, where $RGD(\theta)$ and $RGR(\theta)$ are respectively the number of radio galaxy–data and radio galaxy–random pairs between $\theta$ and $\theta + \delta\theta$, $w_{\rm rg}(\theta)$ is the angular cross–correlation function for galaxies around the radio galaxy, and $\langle\delta\Omega_{\rm RG}(\theta)\rangle$ is the angular area of sky accessible at a distance $\theta$ to $\theta + \delta\theta$ from the radio galaxy. These equations combine to give: $$w_{\rm rg}(\theta) = \frac{n_{\rm R}}{n_{\rm D}} \frac{RGD(\theta)}{RGR(\theta)} - 1. \label{wthetargdef}$$

Calculating $w(\theta)$ and $w_{\rm rg}(\theta)$ {#wthetacalc}
------------------------------------------------

One problem with estimating $w(\theta)$ and $w_{\rm rg}(\theta)$ from the current data is that, due to the K–band dithering technique, the detection limit is a function of position across each image. More objects are detected in the central regions of the image, which would produce a spurious peak in the angular cross–correlation statistics. This non-uniform noise level can be accounted for using the simulations which were discussed in Section \[sextract\]. From these simulations, for each field the completeness fraction can be calculated as a function of both magnitude and position, and then applied to the random catalogue in the following way. [*Step 1:*]{} Excluding those objects determined to be stars, $DD(\theta)$, $RGD(\theta)$, and $n_{\rm D}$ were determined for each frame down to a chosen limiting magnitude $K_{\rm lim}$, for a set of bins in $\theta$. [*Step 2:*]{} 25000 random objects were each assigned a random position on the image, and a magnitude $K \le K_{\rm lim}$ drawn at random from a distribution matching the observed number counts (Figure \[galcntsplot\]). 
[*Step 3:*]{} For each random object, the completeness fraction of objects of that magnitude and position (averaged in 50 by 50 pixel bins) was determined from the simulations. The object was accepted or rejected at random with a likelihood of acceptance based upon the derived completeness fraction. [*Step 4:*]{} For those objects which remained in the catalogue, $DR(\theta)$, $RGR(\theta)$, and $n_{\rm R}$ were calculated. Hence $w(\theta)$ and $w_{\rm rg}(\theta)$ were derived. This process was repeated for all images and for two different limiting magnitudes, $K_{\rm lim}=19$ and $K_{\rm lim}=20$. $K_{\rm lim}=19$ corresponds to the limit at which the data are still almost 100% complete, whilst $K_{\rm lim}=20$ requires a significant completeness correction but provides improved statistics for the galaxy counts. The results were averaged over all 28 fields; investigating the values for individual fields was not attempted since for these small fields of view the measured amplitudes of single images are too strongly affected by variations in the background (and foreground) counts to provide results with a high statistical significance; only by combining the data of the 28 fields is a sufficiently robust measurement obtained. An estimate of the uncertainty in the value of the combined $w(\theta)$ in each bin was provided by the scatter in the values between the different fields. The results are shown in Figure \[wthetaplot\]. $w(\theta)$ is usually assumed to have a power–law form: $w(\theta) = A (\theta / {\rm deg})^{-\delta}$. If so, the observed $w(\theta)$ will follow a form $w(\theta) = A (\theta / {\rm deg})^{-\delta} - C$, where $C$ is known as the integral constraint and arises from the finite size of the field of view. 
Its value can be estimated by integrating $w(\theta)$ over the area of each field, and for our data corresponds to $C_{\ggal} = 41.1 A_{\ggal}$ and $C_{\rggal} = 48.0 A_{\rggal}$ for, respectively, all galaxy-galaxy pairs and just radio galaxy-galaxy pairs. It is not possible to determine both the amplitude and the slope of the fit from the current data, and so the canonical value of $\delta = 0.8$ (which also seems to be appropriate at high redshifts; Giavalisco 1998) has been adopted. The observed data in Figure \[wthetaplot\] have been fitted with functions of this form, and the resulting angular cross–correlation amplitudes are provided in Table \[angres\].

\begin{tabular}{cccccc}
Sources & $K_{\rm lim}$ & $A$ & $\Delta A$ & $B$ & $\Delta B$ \\
All g-g pairs & $K<19$ & 0.0031 & 0.0012 & & \\
 & $K<20$ & 0.0015 & 0.0004 & & \\
Only rg-g pairs & $K<19$ & 0.0092 & 0.0037 & 600 & 240 \\
 & $K<20$ & 0.0093 & 0.0021 & 510 & 120 \\
\end{tabular}

The results at $K<19$ and $K<20$ are in approximate agreement. For the galaxy–galaxy pairs there may be a decrease of $A_{\ggal}$ as fainter magnitude limits are used (as expected, e.g. see Roche 1998), but the errors are too large to determine this with any degree of confidence. The galaxy–galaxy amplitudes are similar to those derived by Roche to the same magnitude limits in the fields of radio galaxies at $z \sim 0.75$. A more obvious feature is that at both magnitude limits the cross–correlation amplitude around the radio galaxy is significantly larger ($A_{\rggal} \gg A_{\ggal}$). Figure \[wthetaplot\] shows that much of the signal originates at small angular separations ($\theta \lta 10''$), and so this cannot be related to any problems with completeness in the outer regions of the frames; the similarity of the results for $K<19$ and $K<20$ also demonstrates this.

The spatial cross–correlation amplitude {#spatcorr}
---------------------------------------

As has been described by many authors (e.g. 
Longair & Seldner 1979, Prestage and Peacock 1988) it is possible to convert from an angular cross–correlation amplitude to a spatial cross–correlation amplitude if the galaxy luminosity function is known. The spatial cross–correlation function is usually assumed to have a power–law form: $$\xi(r) = B_{\rggal} \left(\frac{r}{{\rm Mpc}}\right)^{-\gamma}$$ where the power–law slope is related to the slope of the angular cross–correlation function by $\gamma = \delta + 1$ ($=$1.8 for the $\delta = 0.8$ adopted here). The spatial cross–correlation amplitude, $B_{\rggal}$, is then related to that of the angular cross-correlation by $A_{\rggal} = H(z) B_{\rggal}$ (see Longair & Seldner for a full derivation) where: $$H(z) = \frac{I_{\gamma}}{N_{\rm g}} \left(\frac{D(z)}{1+z}\right)^{3-\gamma} \phi(m_{\rm 0},z) \left(\pi / 180\right)^{-(\gamma-1)}$$ Here, $I_{\gamma}$ is a definite integral which for $\gamma = 1.8$ has a value of 3.8, $N_{\rm g}$ is the measured sky density of objects above the magnitude limit (per steradian), $D(z)$ is the proper distance to a source at redshift $z$, $\phi(m_{\rm 0},z)$ is the number of galaxies per unit comoving volume which at redshift $z$ are more luminous than apparent magnitude $m_{\rm 0}$, and the factor $\left(\pi / 180\right)^{-(\gamma-1)}$ is required to convert $A_{\rggal}$ from degrees to radians. The value of $H(z)$ is only a very weak function of $\gamma$ provided $\gamma$ is of order 2 [@pre88], and so the fixing of $\gamma$ at 1.8 will introduce no significant errors. $\phi(m_{\rm 0},z)$ requires assumptions for both the K–correction of galaxies and the local luminosity function, as well as being cosmology dependent, and so this conversion will always be somewhat uncertain. 
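The conversion factor above is a direct transcription of the formula for $H(z)$; a minimal sketch follows. The inputs $I_\gamma$, $N_{\rm g}$, $D(z)$ and $\phi(m_0,z)$ must be supplied from the adopted model, and the example values below are arbitrary rather than those used in the text.

```python
import numpy as np

def H_of_z(I_gamma, N_g, D_z, z, phi_m0_z, gamma=1.8):
    """H(z) = (I_gamma / N_g) * (D(z)/(1+z))^(3-gamma) * phi(m0,z)
              * (pi/180)^-(gamma-1),
    relating the angular amplitude A = H(z) * B to the spatial amplitude B."""
    return (I_gamma / N_g * (D_z / (1.0 + z)) ** (3.0 - gamma)
            * phi_m0_z * (np.pi / 180.0) ** (-(gamma - 1.0)))

# Arbitrary illustrative inputs (N_g in sr^-1, D_z in Mpc, phi in Mpc^-3):
H = H_of_z(I_gamma=3.8, N_g=9.2e7, D_z=3.4e3, z=1.0, phi_m0_z=1.0e-3)
```

Averaging $1/H(z)$ over the source redshifts, as done in the text, then converts a fitted $A_{\rggal}$ into $B_{\rggal}$.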
In this paper a Schechter form is adopted for the local luminosity function, that is: $$\phi(L) {\rm d}L = \phi^* \left(\frac{L}{L^*}\right)^{\alpha} \exp\left(-\frac{L}{L^*}\right){\rm d}\left(\frac{L}{L^*}\right)$$ where $\phi(L) {\rm d}L$ is the number of galaxies per comoving cubic Mpc with luminosity $L$ to $L + {\rm d}L$, $\phi^*$ is the density normalization factor, $L^*$ is the characteristic luminosity and $\alpha$ is the faint–end slope. A pure luminosity evolution model is also adopted; that is, values for $\phi^*$, $\alpha$ and $L^*(z=0)$ are taken, and the only evolution of this function is then evolution of the value of $L^*$ with redshift in accordance with passive evolution predictions. As discussed in the introduction, passive evolution models provide a good description of cluster galaxy properties back to $z \sim 1$, and they can also provide a reasonable fit to the observed K–band field number counts (see below). These models are, however, undoubtedly a simplification in view of hierarchical galaxy formation theories. In hierarchical models, fewer luminous galaxies are predicted to exist at high redshifts (e.g. see Figure 4 of Kauffmann and Charlot 1998), resulting in the values of $\phi(m_{\rm 0},z)$ and consequently $H(z)$ being lower, and hence $B_{\rggal}$ being increased. The pure passive luminosity evolution approach therefore provides a conservative lower estimate for the spatial cross–correlation amplitude around high redshift radio galaxies. For the passive evolutionary models, the galaxy population was split into four different galaxy types: ellipticals and S0’s (E’s), Sa and Sb types (Sab’s), Sc types (Sc’s) and Sd types and irregulars (Sdm’s). Galaxies of these types were built up using the Bruzual and Charlot stellar synthesis codes (1996 version), assuming a Scalo initial mass function, solar metallicity, a formation redshift $z_{\rm f} = 10$, and four different star formation histories (cf. Guiderdoni and Rocca–Volmerange 1990). 
The E’s were assumed to form their stars in a rapid early burst with the star formation decreasing exponentially on a $\tau_{\rm sfr} = 0.5$Gyr timescale. The Sab’s had 50% of their stars in a bulge component formed in the same manner as the E’s, and the remaining 50% in a disk–like component with a much longer star formation timescale ($\tau_{\rm sfr} = 6$Gyr). The Sc’s were modelled using just a single long star formation timescale ($\tau_{\rm sfr} = 8$Gyr), and the Sdm’s using a still longer timescale ($\tau_{\rm sfr} = 50$Gyr) to produce significant star formation at the current epoch. These star formation histories approximately reproduce the colours of the various morphological types at the current epoch. It was assumed that all four morphological types displayed the same luminosity function (that is, the same $M^*$ and $\alpha$), with the weightings of the different types (i.e. the relative contributions to $\phi^*$) being 30%, 30%, 30% and 10% respectively. In fact, at wavelengths as long as the K–band, the K and evolutionary corrections are small and relatively independent of morphological type, and so the results are not strongly dependent upon any of these assumptions. Changing the values adopted above by a sufficiently large amount that they produce unacceptable distributions of galaxy colours at the current epoch produces only 10–20% changes in the final K–band luminosity function. Local K–band luminosity functions have been derived by Gardner, by Mobasher, Sharples and Ellis, and by Loveday. All three of these datasets are approximately consistent with a characteristic absolute magnitude $M^*_{\rm K} = -25.1$ and a slope $\alpha = -1.0$. To investigate whether the combination of this luminosity function and the assumption of passive evolution models produces acceptable results, and to calculate the appropriate value of $\phi^*$, the K–band number counts as a function of apparent magnitude that would be expected in this model were derived. 
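The Schechter form adopted above, and the derived quantity $\phi(m_{\rm 0},z)$ (the comoving density of galaxies brighter than some luminosity limit), can be evaluated numerically. A minimal sketch with arbitrary parameter values, not those adopted in the text; the trapezoidal integration is written out to keep the example self-contained:

```python
import numpy as np

def schechter(x, phi_star, alpha):
    """Schechter function per unit x = L / L*: phi* x^alpha exp(-x)."""
    return phi_star * x ** alpha * np.exp(-x)

def n_brighter(x_min, phi_star, alpha, x_max=60.0, n=200001):
    """Comoving density of galaxies with L > x_min * L*, by trapezoidal
    integration of the Schechter function."""
    x = np.linspace(x_min, x_max, n)
    y = schechter(x, phi_star, alpha)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# For alpha = 0 the integral is analytic, phi* exp(-x_min), which can be
# used to check the numerical scheme.
val = n_brighter(0.5, phi_star=0.004, alpha=0.0)
```

Converting an apparent-magnitude limit $m_0$ at redshift $z$ into the corresponding `x_min` requires the distance modulus and the K and evolutionary corrections of the adopted model.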
The value of $\phi^*$ was adjusted to provide a good fit to the observed data points. The resulting fit, for a value $\phi^* \approx 0.004$, is shown as the solid line in Figure \[galcntsplot\], and demonstrates that this simple model can provide a reasonable fit to the observed number counts. With this model luminosity function, it is possible to calculate $\phi(m_{\rm 0},z)$ and, using the values of $N_{\rm g}$ from Table \[galcntstab\] ($N_{\rm g} = 9.2 \times 10^7 {\rm sr}^{-1}$ for $K<19$ and $2.0\times 10^8 {\rm sr}^{-1}$ for $K<20$), hence to determine $H(z)$. For $K<19$, $H(z)$ ranges from $15.4 \times 10^{-7}$ to $1.8 \times 10^{-7}$ over the redshift range 0.6 to 1.8. The mean value of $1 / H(z)$ averaged over all 28 radio source redshifts is $\overline{1 / H(z)} \approx 1.7 \times 10^6$, and using this value to convert from $A_{\rggal}$ to $B_{\rggal}$ gives $B_{\rggal} = 600 \pm 240$. For $K<20$ the corresponding range of $H(z)$ is from $12.2 \times 10^{-7}$ to $3.1 \times 10^{-7}$, with a mean value of $\overline{1 / H(z)} \approx 1.4\times 10^6$, corresponding to $B_{\rggal} = 510 \pm 120$. These values can be interpreted physically by comparing with the equivalent values for Abell clusters calculated between the central galaxy and the surrounding galaxies ($B_{\cgal}$). This has been calculated independently as a function of Abell cluster richness by a number of authors [@pre88; @hil91; @and94; @yee99]. Converting their values to $H_0 = 50$ km s$^{-1}$ Mpc$^{-1}$ and $\gamma = 1.8$, they are all approximately consistent with each other, and average to $B_{\cgal} \approx 350$ for Abell class 0 and $B_{\cgal} \approx 710$ for Abell class 1. The environments surrounding the redshift one radio galaxies are therefore comparable, on average, to those of clusters of between Abell Classes 0 and 1 richness. 
Hill and Lilly further showed that there is a correlation between the value of $B_{\cgal}$ and the parameter $N_{0.5}$, where $N_{0.5}$ is an Abell–type measurement defined as the net excess number of galaxies within a radius of 0.5 Mpc of the central galaxy with magnitudes between $m_1$ and $m_1 + 3$, $m_1$ being the magnitude of the central galaxy. Using a larger dataset, Wold calibrated this relation as $B_{\cgal} = (37.8 \pm 10.9) N_{0.5}$. The average $B_{\rggal}$ value for $K<20$ then implies that an average net excess of 13.5 galaxies should be found around each radio galaxy within a radius of 0.5 Mpc and with a magnitude down to three magnitudes fainter than the radio galaxy. The data presented in this paper cover about 80% of this sky area and, since the radio galaxies have typical magnitudes of $K \sim 17$, the galaxy counts to $K=20$ do sample approximately 3 magnitudes below the radio galaxy. Therefore, the net excess counts of 11 galaxies per field down to $K=20$ (see Section \[galcounts\]) are fully consistent with the determined value of $B_{\rggal}$.

Colour–magnitude and colour–colour relations {#colmagsect}
============================================

Near–infrared colour–magnitude relations
----------------------------------------

For each field the magnitudes of all galaxies were determined through all of the different filters, as described in Section \[colsext\]. After correction for galactic reddening, these were used to construct colour–magnitude relations for each field. The near–infrared $J-K$ vs $K$ relations for each field (if $J$–band data were not taken, the longest wavelength HST filter available was used instead) are shown in Figure \[colmagfigs\] in order of ascending redshift. The stars (open diamonds) and galaxies (asterisks) have been separated on these diagrams using the technique described in Section \[stargalsep\], and the radio galaxy is plotted using a large symbol. 
The uncertainties on the measured colours are not shown to avoid cluttering up the diagrams; at bright magnitudes ($K \lta 17$) they are dominated by calibration uncertainties ($\lta 0.1$ mags), but increase to about 0.3 magnitudes by $K \sim 20$. The colour–magnitude relation for the Coma cluster has been redshifted and evolved according to the passive evolutionary models for elliptical galaxies described in the previous section; this redshifted relation is over-plotted on the figures. Note that the radio galaxies generally have near–infrared colours similar to this theoretical line, indicating that these are old elliptical galaxies (e.g. Best 1998b), but not in all cases: some of the radio galaxies appear redder because a heavily reddened nuclear component contributes to the K–band emission (e.g. 3C22, 3C41; Economou 1995, Rawlings 1995, Best 1998), whilst for others the bluer filter is significantly contaminated by excess optical–UV emission induced by the radio source (the ‘alignment effect’, e.g. McCarthy 1987). A number of features are apparent from these near–infrared colour–magnitude diagrams. Considering initially those radio galaxies with redshifts $z \lta 0.9$, many of the sample show reasonably convincing evidence for associated clusters; here it should be borne in mind that for an Abell Class 1 cluster only 10 to 15 associated cluster galaxies are expected in the observed region of sky down to three magnitudes below the magnitude of the radio galaxy (see Section \[spatcorr\]). Some fields (e.g. 3C34, 3C337) clearly show at least this number (the ‘background’ counts at these K–magnitudes and colours are small, as can be seen by a comparison with the colour–magnitude relations of the higher redshift sources), whilst other fields (e.g. 3C217) show few if any associated galaxies. There are clearly large source–to–source variations in environmental richness.
Where a clear colour–magnitude relation is observed, the mean colour of this relation lies close to that calculated theoretically by just passively evolving the Coma colour–magnitude relations back in redshift, and the scatter around the colour–magnitude sequence is small. These results have been shown before for optical and X–ray selected clusters at redshifts out to $z \sim 0.8$ (e.g. Stanford 1998). There are some small deviations in the colour of the observed sequence from the passive evolution relation (e.g. see 3C265), but none much larger than a couple of tenths of a magnitude. The near–infrared colours of the galaxies in the radio galaxy fields are therefore consistent with those observed in other clusters at the same redshift, implying that the excess galaxy counts are associated with a structure at the radio galaxy redshift. This is important because of previous suggestions that powerful distant radio galaxies may be systematically amplified by foreground lensing structures (Le F[è]{}vre 1987; Ben[í]{}tez, Mart[í]{}nez–Gonz[á]{}les and Martin–Mirones 1997); were such structures to be present, they could account for both the excess K–band counts and the peak in the cross–correlation statistics, but the colour–magnitude relations argue against this. At higher redshifts the evidence for clear colour–magnitude relations is much poorer. This is mainly because the combination of the greatly increased contribution from field galaxies at these fainter magnitudes and the increased scatter in the colours due to photometric uncertainties results in any colour–magnitude sequence appearing much less prominent. The difficulty of selecting cluster candidates at these redshifts on the basis of a single colour can be gauged by examining 3C324; for this field, Dickinson has confirmed the presence of a poor cluster of galaxies, but this is barely apparent from its colour–magnitude relation.
Although no prominent colour–magnitude sequences are seen in the higher redshift fields, there remains a net excess of K–band counts: if the fields of the radio galaxies are divided into two redshift bins, no significant differences in the faint galaxy counts are seen between the high and low redshift fields. A simple analysis also shows that the excess in the fields of the higher redshift radio galaxies is composed of red ($J-K \ge 1.75$) galaxies. Table \[colfracs\] gives the mean number of galaxies per field with magnitudes $17 < K < 20$ and colours $J-K \ge 1.75$ or $1.25 < J-K < 1.75$, in the low and high redshift bins. There are more galaxies with bluer $J-K$ colours in the fields of the lower redshift radio galaxies than in those at higher redshifts, due to the associated cluster galaxies in the lower redshift fields which have colours $1.25 \lta J-K \lta 1.75$ (cf. Figure \[redcolfigs\]). On the other hand, there are more galaxies with $J - K \ge 1.75$ colours in the high redshift than low redshift fields. The excess K–band galaxy counts in the high redshift fields appear to be predominantly associated with red galaxies, with colours similar to those expected for old cluster galaxies at these redshifts. This again indicates that the excess number counts are associated with a structure at the redshift of the radio galaxy rather than a foreground structure.

  ----------- ------------ --------------------- ----------------
               No. fields   $1.25 < J-K < 1.75$   $J-K \ge 1.75$
  $z < 0.9$        9        $19.6 \pm 2.5$        $8.1 \pm 1.7$
  $z > 0.9$       11        $14.1 \pm 1.8$        $13.1 \pm 2.6$
  ----------- ------------ --------------------- ----------------

  : \[colfracs\] The mean number of galaxies per field with magnitudes $17 < K < 20$ and blue/red colours, as a function of redshift.
Multi–colour relations
----------------------

In Figure \[colmagfigs\] are plotted a complete set of colour–magnitude relations for six galaxies in the sample, chosen to be those which show amongst the best examples of near–infrared colour–magnitude relations for their redshift[^5]. From these it is apparent that, although the scatter around the near–infrared colour–magnitude sequence remains small even at these high redshifts, colours that reach shortward of the rest–frame 4000Å break show a dramatic increase in the scatter of the relation (e.g. compare the various relations for 3C34); these colours can be strongly influenced by small amounts of recent or on–going star formation, indicating that this may be common in these high redshift clusters. Whether such star formation is in some way connected with the presence of a powerful radio source in these clusters cannot be distinguished from these data. To properly investigate the nature of the galaxies in these fields, all of the colour information must be used simultaneously to derive photometric redshifts and investigate star formation activity. This is beyond the scope of the current paper but will be addressed later (Kodama & Best, in preparation). Here, the simultaneous use of all the multi–colour information is merely demonstrated in Figure \[colcolplots\] through colour–colour plots for these six fields. For each field, the near–infrared $J-K$ colour of each galaxy is plotted against its optical-infrared colour. These data are then compared against theoretical evolutionary tracks for the four different passively evolving galaxy models considered in Section \[spatcorr\] (E’s, Sab’s, Sc’s, Sdm’s). For the lower redshift radio galaxies there is clearly a large concentration of galaxies with colours very close to those of the model elliptical galaxy at the redshift of the radio source.
Further, there is a distribution of galaxies with colours between this and the colours of the model spiral galaxies at that redshift. In contrast, for the higher redshift radio galaxies, no strong concentration of galaxies is seen close to the elliptical galaxy prediction, and the number of cluster candidates lying between the locations of the elliptical and spiral model galaxies is smaller than that found in the lower redshift cases. Clearly, despite the excess $K$–counts and red $J-K$ galaxies in the fields of these high redshift radio sources, to accurately investigate cluster membership requires the construction of photometric redshifts using several colours measured with high photometric accuracy.

Discussion {#discuss}
==========

In this paper a number of pieces of evidence for clustering around distant 3CR radio galaxies have been presented. These can be summarised as follows:

- The K–band number counts show an overdensity of faint galaxies in the fields of the radio galaxies, with a mean value of 11 excess galaxies per field.

- This excess is comparable to the galaxy overdensity expected for a field of view of this size centred on a cluster of approximately Abell Class 0 richness.

- Cross–correlation analyses show a pronounced peak in the angular cross–correlation function around the radio galaxies.

- Assuming that the galaxy luminosity function undergoes pure passive luminosity evolution with redshift, the corresponding spatial cross–correlation amplitude lies between those determined for Abell Class 0 and Abell Class 1 clusters.

- The galaxies in the fields of most of the lower redshift radio galaxies in the sample show clear near–infrared colour–magnitude relations with only small scatter. The colours of these sequences are in agreement with those of other clusters at these redshifts, indicating that the excess number counts and cross–correlation peak are both associated with a structure at the redshift of the radio galaxy.
- There is considerably more scatter in the relations involving shorter wavelength colours, suggesting low levels of recent or on–going star formation in many of the galaxies.

- At higher redshifts the colour–magnitude relations are less prominent due to increased background contributions, but there is a clear excess of galaxies with very red infrared colours.

These features all provide strong evidence that distant radio galaxies tend to reside in rich environments. The number counts, the cross–correlation statistics, and the colour–magnitude relations all complement the previous results from X–ray imaging, narrow–band imaging, spectroscopic studies and radio polarisation studies discussed in Section \[intro\]. A coherent picture that most, but not all, high redshift radio sources live in poor to medium richness clusters has now been built. Taking the results at face value, the environmental richness around these $z \sim 1$ radio galaxies is higher than that around powerful radio galaxies at $z \sim 0.5$ calculated by Yates and by Hill and Lilly. This suggests that the increase between $z=0$ and $z=0.5$ in the mean richness of the environments surrounding FRII radio galaxies, found by those authors, continues to higher redshifts. Some notes of caution must be added to this conclusion. First, from the variations in richness of the colour–magnitude relation at any given redshift (e.g. compare 3C217 and 3C226), it is apparent that there is a wide spread in the density of the environments in these fields. Although most show some evidence of living in at least group environments, not all powerful distant radio galaxies lie in clusters. Second, a simple visual comparison with the extremely rich high redshift cluster MS1054$-$03 ($z=0.83$, van Dokkum 1999) is sufficient to demonstrate that even at high redshifts, powerful radio galaxies still avoid the most extreme richness clusters.
Further, although the galaxy count excesses and the cross–correlation amplitudes have been compared with those of Abell clusters at low redshifts, in hierarchical galaxy formation models such comparisons will always be somewhat ambiguous. On–going mergers between galaxies mean that more sub–clumps are seen at higher redshifts, whilst the general galaxy cross–correlation length also evolves with redshift in a manner dependent upon both cosmological parameters and the method of selecting the galaxy populations [@kau99]. Therefore, quantitative interpretations of either of these parameters at high redshift must be considered with some care. On the other hand, in hierarchical growth models, the structures in which the radio galaxies reside will also continue to grow and evolve into much richer structures by a redshift of zero, meaning that the qualitative result that the high redshift radio galaxies lie in rich environments for their redshift is secure. Finally, it is important to consider the consequences for our understanding of the onset and nature of powerful radio sources of a change in the environments of FRII radio galaxies from groups at low redshifts to clusters at high redshift. As discussed by Best , if this result holds then the standard interpretation of the tightness and slope of the Hubble $K-z$ relation, that of ‘closed–box passive evolution’ of radio galaxies at $z \gta 1$ into radio galaxies at $z \sim 0$, is no longer valid. It is not possible that the environments can become less rich with progressing cosmic time. Instead, Best  propose that powerful radio galaxies selected at high and low redshift have different evolutionary histories but must contain a similar mass of stars, a few times $10^{11} M_{\odot}$, so conspiring to produce the observed ‘passively evolving’ K$-z$ relation. 
In their model, powerful FRII radio sources are seldom formed in more massive galaxies (that is, in central cluster galaxies at low redshifts) because of the difficulties in supplying sufficient fueling gas to the black hole: in rich low redshift clusters the galaxies and gas have been virialised and take up equilibrium distributions within the cluster gravitational potential, the galaxies have high velocity dispersions greatly reducing the merger efficiency, and there is a dearth of gas–rich galaxies close to the centre of the clusters which might merge with, and fuel, the central galaxy. Thus, the formation of a powerful radio source in these environments is a rare event (but can still happen, e.g. Cygnus A). At high redshifts, radio galaxies can be found in (proto) cluster environments because these are not yet virialised, have frequent galaxy mergers, and have a plentiful supply of disturbed intracluster gas to fuel the central engine and confine the radio lobes. The central cluster galaxies will be amongst the most massive galaxies at these redshifts and so, from the correlation between black hole mass and bulge mass (e.g. Kormendy and Richstone 1995), will have the most massive black holes. The kinetic energy of the relativistic radio jets of distant 3CR radio galaxies corresponds to the Eddington luminosity of a black hole with $M \sim 10^8 -10^9\,M_{\odot}$ [@raw91b], implying that these sources are fueled close to the Eddington limit. Therefore the most powerful radio sources will tend to be powered by the most massive central engines and hence be hosted by the most massive galaxies, which tend to be found at the centres of forming rich clusters. The significant scatter in the black hole versus bulge mass correlation [@kor95] would, however, result in some scatter in the richness of the environments of the radio galaxies. The data presented in this paper are in full agreement with this model. 
Acknowledgements {#acknowledgements .unnumbered}
================

This work was supported in part by the Formation and Evolution of Galaxies network set up by the European Commission under contract ERB FMRX–CT96–086 of its TMR programme. The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the U.K. Particle Physics and Astronomy Research Council. This work is, in part, based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA Inc., under contract from NASA. I thank Huub R[ö]{}ttgering for useful discussions, and the referee for helpful comments.

\[lastpage\]

[^1]: Present address: Institute for Astronomy, Royal Observatory Edinburgh, Blackford Hill, Edinburgh, EH9 3HJ

[^2]: Email: [email protected]

[^3]:  is the ‘Deep Infrared Mosaicking Software’ package developed by Eisenhardt, Dickinson, Stanford and Ward.

[^4]: Of course, star–galaxy separation using the resolved or unresolved nature of sources means that any quasars will be placed into the star category, but their contribution is negligible.

[^5]: Equivalent plots for the other fields are available from the author on request, but are not reproduced here to save space.
---
abstract: 'Static spherically symmetric solutions for conformal gravity in three dimensions are found. Black holes and wormholes are included within this class. Asymptotically the black holes are spacetimes of arbitrary constant curvature, and they are conformally related to the matching of different solutions of constant curvature by means of an improper conformal transformation. The wormholes can be constructed from suitable identifications of a static universe of negative spatial curvature, and it is shown that they correspond to the conformal matching of two black hole solutions with the same mass.'
address:
- 'Centro de Estudios Científicos (CECS), Casilla 1469, Valdivia, Chile.'
- |
    Centro de Estudios Científicos (CECS), Casilla 1469, Valdivia, Chile.\
    Departamento de Física, Universidad de Concepción, Casilla, 160-C, Concepción, Chile.
- |
    Centro de Estudios Científicos (CECS), Casilla 1469, Valdivia, Chile.\
    Centro de Ingeniería de la Innovación del CECS (CIN), Valdivia, Chile.
author:
- Julio Oliva
- David Tempo
- Ricardo Troncoso
title: |
    Static spherically symmetric solutions\
    for conformal gravity in three dimensions
---

Introduction
============

The lack of propagating degrees of freedom for General Relativity (GR) in three dimensions stems from the fact that the solutions must be spacetimes of constant curvature (see e.g. Ref. ). Nonetheless, for negative cosmological constant, nontrivial solutions including black holes can be found from suitable identifications of anti-de Sitter (AdS) spacetime [@BHTZ]. Besides, little is known about conformal gravity in three dimensions, whose field equations correspond to the vanishing of the Cotton tensor, $$C_{\hspace{0.05in}\nu}^{\mu}=\epsilon^{\kappa\sigma\mu}\nabla_{\kappa}\left( R_{\sigma\nu}-\frac{1}{4}g_{\sigma\nu}R\right) =0,\label{fieldequations}%$$ which are fulfilled if and only if the spacetime metric is locally conformally flat. Exact solutions for this theory have been recently explored in Refs. .
The purpose of this paper is to show that, since the field equations (\[fieldequations\]) are conformally invariant, interesting nontrivial solutions can be found not only from suitable identifications, but also from improper conformal transformations of maximally symmetric spacetimes. It is worth pointing out that wormholes as well as asymptotically locally flat or de Sitter (dS) black hole solutions arise within this new set. This can be seen as follows: It is possible to choose the gauge such that the static spherically symmetric solution of (\[fieldequations\]) reads $$ds^{2}=-\left( ar^{2}+br+c\right) dt^{2}+\frac{dr^{2}}{ar^{2}+br+c}+r^{2}d\phi^{2},\label{final}%$$ where $a,b$ and $c$ are integration constants. These solutions are asymptotically of constant curvature $-a$, which by means of a trivial (proper) global conformal transformation can be rescaled to $\pm1$ or zero. For vanishing $b$ the metric (\[final\]) has constant curvature, and it reduces to the usual solution of standard GR. Thus, switching on the constant $b$ relaxes the asymptotic behavior of the metric as compared with the one of GR, enlarging the space of allowed solutions. Indeed, for $b\neq0$ the Ricci scalar is given by $R=-6a-2b r^{-1}$, which is singular at the origin. Depending on the values of the integration constants, this singularity could be surrounded by one or two horizons. In the case of vanishing $a$, for $b>0$ and $c<0$, the metric (\[final\]) describes an asymptotically locally flat black hole with a spacelike singularity at the origin surrounded by an event horizon located at $r=r_{+}:=-cb^{-1}$. Its causal structure coincides with the one of the Schwarzschild black hole (see Fig.1 C.1). The case $a=-1$ corresponds to an asymptotically dS black hole with a spacelike singularity at the origin enclosed by event and cosmological horizons located at $r_{+}$ and $r_{++}$, respectively, provided $b=r_{+}+r_{++}$ and $c=-r_{+}r_{++}$.
As shown in Fig.1 B.1, this black hole shares the same causal structure as the Schwarzschild-dS metric. It is worth remarking that static black holes cannot be obtained from three-dimensional GR with non-negative cosmological constant in vacuum. Asymptotically AdS black holes are obtained for the case $a=1$. For $c>0$ and $b<0$ the curvature singularity is timelike and it is surrounded by a Cauchy and an event horizon located at $r_{-}$ and $r_{+}$, respectively, provided $b=-(r_{-}+r_{+})$ and $c=r_{-}r_{+}$. In this case the causal structure corresponds to the one of the Reissner-Nordstrom-AdS black hole (see Fig.1 A.1). The extremal case is obtained for $r_{+}=r_{-}$. For negative $c$ ($r_{-}<0$) the black hole possesses a spacelike curvature singularity at the origin, surrounded by a single event horizon located at $r_{+}$, and its causal structure reduces to the one of the Schwarzschild-AdS black hole, as depicted in Fig.1 D.1. The “Schwarzschild gauge” is not the only option. Actually, a different gauge fixing leads to the following static spherically symmetric solution: $$ds_{w}^{2}=-dt^{2}+dz^{2}+l_{0}^{2}\cosh^{2}\left( z\right) d\phi^{2},% \label{wormholeansatz}%$$ describing a wormhole with a neck of radius $l_{0}$ located at $z=0$. This spacetime is the product of the real line with a hyperbolic space identified along a boost ($R\times H_{2}/ \Gamma$), so that it connects two static universes of negative spatial curvature with unit radii, located at $z\rightarrow\pm \infty$. The causal structure of the wormhole (\[wormholeansatz\]) coincides with the one of Minkowski spacetime in two dimensions, as depicted in Fig.1 E.1. It can be shown that the black holes (\[final\]) and the wormhole (\[wormholeansatz\]) correspond to the matching of different patches of constant curvature spacetimes *at spatial infinities*, by means of improper conformal transformations.
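The horizon structure enumerated above follows directly from the zeros of the lapse function $f(r)=ar^{2}+br+c$ and from the Ricci scalar $R=-6a-2b\,r^{-1}$. A minimal numerical sketch (the radii used below are illustrative values, not taken from the paper):

```python
import math

def horizons(a, b, c):
    """Positive roots of the lapse f(r) = a r^2 + b r + c (the horizon radii)."""
    if a == 0:
        return [-c / b] if b != 0 and -c / b > 0 else []
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    roots = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
    return sorted(r for r in roots if r > 0)

def ricci_scalar(a, b, r):
    """R = -6a - 2b/r, singular at r = 0 whenever b != 0."""
    return -6 * a - 2 * b / r

# Asymptotically flat case (a = 0, b > 0, c < 0): single horizon at r+ = -c/b.
print(horizons(0, 1.0, -2.0))        # [2.0]

# AdS case (a = 1) with b = -(r- + r+), c = r- r+: Cauchy and event horizons.
r_minus, r_plus = 1.0, 3.0
print(horizons(1, -(r_minus + r_plus), r_minus * r_plus))  # [1.0, 3.0]
```

With $a=1$, $b=-(r_{-}+r_{+})$ and $c=r_{-}r_{+}$ the two positive roots are the Cauchy and event horizons, and any $b\neq0$ makes $R$ diverge at $r=0$, reproducing the curvature singularity discussed above.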
It is worth pointing out that the matching of different spaces through the boundary of their corresponding conformal compactifications cannot be performed in GR, since the proper distance, $ds_{E}^{2}$, to pass from one patch to the other diverges. However, in conformal gravity this kind of matching can be carried out by means of an improper conformal transformation, i.e., a local rescaling $\Omega^{2}$ that vanishes at the matching surfaces. In this way, the proper distance required to pass through a pair of points located at each side of the matching surface, given by $ds^{2}=\Omega^{2}ds_{E}^{2}$, becomes finite. Note that this procedure works in a way that is analogous to the one required to obtain black holes in $1+1$ dimensions (see e.g. ). In the case of the wormhole (\[wormholeansatz\]), the metric is conformally related to the matching of two independent patches covering the exterior region of BTZ black holes, where the horizon radius is related to the radius of the neck according to $\rho_{+}=l_{0}$. This is shown in Fig.1 E.2. The class of black holes described by (\[final\]) is conformally related to the matching of different independent patches of static spherically symmetric Einstein spaces of mass $M=-c$ with a cosmological constant of sign $sgn(c)$ (See Fig.1 A-D). One may wonder about the possibility of generating new static spherically symmetric solutions of conformal gravity with an arbitrary number of horizons through the “conformal matching” procedure described above. It is possible to show that the most general solution admits at most two horizons, since the attempt of including a third one necessarily introduces an additional curvature singularity at a finite radius between the second and the third horizon. Hence, once the region inside this singular shell is excised, one necessarily recovers a static spherically symmetric black hole possessing one or two horizons only, described by the metric (\[final\]).
The solutions presented here can be extended to the rotating case [@OTT2]. The definition of suitable conserved charges and the black hole thermodynamics within this theory is an open problem.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank the organizers of the meeting for their kind hospitality. Special thanks to M. Katanaev and R.E. Troncoso for very useful comments. This work was partially funded by FONDECYT grants 1095098, 1061291, 1071125, 1085322, 3085043; D. Tempo thanks CONICYT and Escuela de Graduados (UdeC), for financial support. The Centro de Estudios Científicos (CECS) is funded by the Chilean Government through the Millennium Science Initiative and the Centers of Excellence Base Financing Program of CONICYT. CECS is also supported by a group of private companies which at present includes Antofagasta Minerals, Arauco, Empresas CMPC, Indura, Naviera Ultragas and Telefónica del Sur. CIN is funded by CONICYT and the Gobierno Regional de Los Ríos.

[0]{} S. Carlip, [*Quantum gravity in 2+1 dimensions*]{}, Cambridge University Press (1998).

M. Banados, M. Henneaux, C. Teitelboim and J. Zanelli, Phys. Rev. D [**48**]{}, 1506 (1993).

G. Guralnik, A. Iorio, R. Jackiw and S. Y. Pi, Annals Phys. [**308**]{}, 222 (2003).

D. Grumiller and W. Kummer, Annals Phys. [**308**]{}, 211 (2003).

E. Witten, Phys. Rev. D [**44**]{}, 314 (1991).

M. O. Katanaev, W. Kummer and H. Liebl, Phys. Rev. D [**53**]{}, 5609 (1996).

J. Oliva, D. Tempo and R. Troncoso, Preprint CECS-PHY-08/13.
---
abstract: 'Radio-bright regions near the solar poles are frequently observed in Nobeyama Radioheliograph (NoRH) maps at 17 GHz, and often in association with coronal holes. However, the origin of these polar brightenings has not been established yet. We propose that small magnetic loops are the source of these bright patches, and present modeling results that reproduce the main observational characteristics of the polar brightening within coronal holes at 17 GHz. The simulations were carried out by calculating the radio emission of the small loops, [ with several temperature and density profiles, within a 2D coronal hole atmospheric model. If located at high latitudes, the sizes of the simulated bright patches are much smaller than the beam size and they present the instrument beam size when observed. The larger bright patches can be generated by a great number of small magnetic loops unresolved by the NoRH beam. Loop models that reproduce bright patches contain denser and hotter plasma near the upper chromosphere and lower corona. On the other hand, loops with increased plasma density and temperature only in the corona do not contribute to the emission at 17 GHz. This could explain the absence of a one-to-one association between the 17 GHz bright patches and those observed in extreme ultraviolet. Moreover, the emission arising from small magnetic loops located close to the limb may merge with the usual limb brightening profile, increasing its brightness temperature and width.]{}'
author:
- 'Caius L. Selhorst'
- 'Paulo J. A. Simões'
- 'Alexandre J. Oliveira e Silva'
- 'C. G. Giménez de Castro'
- 'Joaquim E. R. Costa'
- Adriana Valio
title: Association of radio polar cap brightening with bright patches and coronal holes
---

Introduction
============

[Bright areas in the polar regions of the Sun have been frequently reported at radio [to infrared]{} frequencies, ranging from $\sim$15 GHz to 860 GHz [see @Selhorst2003 and references therein].]{} [Most of those observations]{} were obtained by [single-dish]{} [telescopes]{} with low spatial resolution, [posing challenges to draw firm conclusions about the physical origin of the increase in emission]{}. [@Efanov1980] observed the presence of [bright regions near the poles]{} [at 22 and 37 GHz]{} during the [period of]{} minimum solar activity, [and reported that such bright regions were not seen during the maximum of solar activity.]{} Similar [findings]{} were also reported [through]{} other [single-dish]{} observations [@Riehokainen1998; @Riehokainen2001], [also]{} suggesting that the polar brightening [could be associated with]{} regions in which the white-light polar faculae are observed and follows their cycle, i.e., anti-correlated with the solar cycle. [Great advances]{} in the study of [the polar]{} brightening were obtained due to interferometric solar observations at 17 GHz by the Nobeyama Radioheliograph [NoRH, @Nakajima1994], in operation since 1992. [@Shibasaki1998] concluded that these [polar cap bright regions]{} observed at 17 GHz were the sum of two components: [a limb brightening effect superposed on bright features intrinsic to the poles,]{} that can increase the brightening up to 40% above the quiet Sun temperature. The polar [cap]{} brightening at 17 GHz is characterised by the presence of small bright structures (bright patches) that appear in the regions close to the limb, with their sizes ranging from the NoRH beam size (about $15''$) up to $50''-55''$ [@Nindos1999].
Through synoptic limb charts, [@Oliveira2016] showed a good association between the presence of coronal holes and the 17 GHz polar brightening in the period of 2010-2015. Moreover, the authors attributed the enhancement of radio brightness in coronal holes to the presence of bright patches closely associated with the presence of intense unipolar magnetic fields. [ In Figure \[fig:obs\], we present an example of 17 GHz bright patches (top panel), with a good correspondence with small bright structures observed in extreme ultraviolet (EUV) emission, from images of the Atmospheric Imaging Assembly (AIA) instrument [@Lemen2012], on board the Solar Dynamics Observatory (SDO). Moreover, the EUV lines formed above the transition region (171, 193 and 211 Å) show that these bright structures are embedded in a coronal hole. Nevertheless, not all bright structures observed in EUV have a 17 GHz association, as reported before [@Nindos1999; @Riehokainen2001; @Nitta2014]. ]{} [![Comparison between the 17 GHz polar bright patches and the EUV images obtained by SDO/AIA. The 17 GHz contour curves correspond to 15% above the quiet Sun temperature.[]{data-label="fig:obs"}](Figure_01.eps "fig:"){width="9cm"}]{} Apart from being more frequently observed at the poles, the association between coronal holes and the presence of bright patches was also observed at lower latitudes [@Gopal1999; @Maksimov2006]. While the radio limb brightening is now relatively well understood [@Selhorst2005a], the origin of the intrinsic bright patches near the solar poles has not been identified yet. In this work, we propose a model to explain the observed radio bright patches within coronal holes near the solar poles. Using small magnetic loop models to represent the source of the bright patches, we were able to reproduce the typical brightness temperature and size of the small (around 10”) polar bright patches.
We suggest that larger regions ($\sim 50$”) are formed by a number of small loops, unresolved by NoRH.

Modeling coronal holes and polar bright patches
===============================================

In this section, we describe our proposed atmospheric model for coronal holes and the small magnetic structures to represent the origin of the polar bright patches.

The atmospheric model
---------------------

[@Selhorst2005a] proposed an atmospheric model [(hereafter referred to as the SSC model)]{} with the distributions of temperature and density (electron and proton) as a function of height, from the photosphere up to $40,000$ km in the corona. [ To calculate the 17 GHz limb brightening and verify the influence of spicules, the radiative transfer was performed through a 2D space, in order to account for the curvature of the Sun, from the disc center to the limb, and the SSC solar atmosphere. In this work, we follow the same procedure, with the appropriate atmospheric model for coronal holes (Section \[sec:ch\]) and inclusion of magnetic loops to represent the sources of radio bright patches (Section \[sec:pbp\]).]{} [![The SSC atmospheric model with the presence of spicules and three small magnetic loops located in the spicules-less region. The arrows indicate the direction of the radiative transfer integration at the polar region.[]{data-label="fig:model"}](Figure_02.eps "fig:"){width="9cm"}]{} Assuming that the NoRH maps have a spatial resolution of 10”, the SSC model without the inclusion of spicules showed a limb brightening of 36% above the quiet Sun values, which is compatible with the maximum values observed at the poles. The inclusion of spicules reduced the initial limb brightening to $\sim10\%$, which is close to the values observed at equatorial regions.
[ [@Selhorst2005b] explained the high polar brightening values by ]{}the presence of holes in the spicule forest caused by intense magnetic features (i.e., faculae)[ , hereafter referred to as bare regions]{}. For a large bare region located between $80.2-90.0^\circ$ heliographic angle, the simulation results showed a sharp and intense brightening, up to 40% above the quiet Sun. However, the intensity decreased to $21.4\%$ for a lower latitude bare region located between $76.2-81.4^\circ$. Thus, the authors concluded that a simple hole in the spicule forest is only able to reproduce the brightness temperature increase caused by large bright patches very close to the limb ($\gtrsim 80^\circ$). [However, since]{} high intensity bright patches at 17 GHz were also observed at lower heliographic angles, another physical source is necessary to explain their brightness temperature values. As first suggested in [@Selhorst2010], the simulations presented here include small magnetic loops in the regions without spicules, [ within]{} coronal holes. Coronal holes {#sec:ch} ------------- [ Using the SSC model as a starting point,]{} the presence of coronal holes was simulated by reducing the coronal temperature and density (electron and ion) distributions above 3650 km, in which the original values were multiplied by the constants $N_T$ and $N_{ne}$. Figure \[fig:SSC\] shows (a) temperature and (b) electron density [profiles]{} for the quiet Sun (black curves), coronal holes (red curves) and bright patches (blue curves, see Section \[sec:pbp\]). [![(a) Temperature and (b) electron density [profile]{} for the quiet Sun (black curves), coronal holes (red curves) and bright patches (blue curves).[]{data-label="fig:SSC"}](Figure_03.eps "fig:"){width="9cm"}]{} The temperature reduction in the coronal hole was set [ as $N_T=0.5$, whereas the densities were reduced by a factor $N_{ne}=0.5$.
These settings resulted in temperatures and densities of $0.49\times10^6$ K and $1.93\times10^8\rm~cm^{-3}$ at $10$ Mm above the surface, while at $40$ Mm the values were $0.71\times10^6$ K and $0.79\times10^8\rm~cm^{-3}$. The upper atmosphere temperature and density are compatible with the plume regions reported by [@Wilhelm2006]]{}. The simulation of a coronal hole located at $65^\circ$ of latitude with the characteristics above presents a reduced 17 GHz limb brightening (21%) when compared with the standard SSC simulation without spicules (36%); this comparison is shown in Figure \[fig:CH\]a. [ If the inter-plume temperature and density observed by [@Wilhelm2006] were adopted ($N_T=0.8$ and $N_{ne}=0.1$), the limb brightening was reduced to 16%.]{} All simulations in this work were convolved with a $10''$ Gaussian beam, which represents the NoRH best spatial resolution. [![a) Center-to-limb brightness temperature variation for the standard SSC model (black curve) and for a simulation with a coronal hole located at $65^\circ$ of latitude. [ The blue and red curves were obtained, respectively, for plume and inter-plume temperature and density distributions]{}. b) The influence of the inclusion of spicules in the simulation with a coronal hole [ (plume region)]{}. The black curve resulted from simulations with spicules distributed throughout the limb, while the blue one considered a spicule-less region above $70^\circ$ of latitude. []{data-label="fig:CH"}](Figure_04.eps "fig:"){width="9cm"}]{} Similar to the procedure used in previous works [@Selhorst2005a; @Selhorst2005b] to estimate the contribution of spicules in coronal holes, they were randomly distributed in the temperature and density matrices, covering about $10\%$ of the solar surface.
Except for the width, which was fixed at 500 km, all spicule physical parameters were randomly chosen, with temperatures ranging from 7,000 to 13,000 K, densities in the interval of $2-6\times 10^{10}\rm~cm^{-3}$, heights from 5,000 to 7,000 km and inclination angles from $30$ to $150^\circ$. [ These parameters are consistent with the values inferred from observations of optical lines, mainly $\rm H\alpha$ and $\rm Ca~II$ [@Sterling2000; @Tsiropoula2012]]{}. The brightness temperature was calculated every 100 km, instead of the 700 km used in previous works. To obtain a final mean profile, $N$ simulations were performed until a convergence criterion was satisfied, that is, the rms of the 400 points of the mean profile closest to disk center should differ by less than 0.0003 from the rms calculated in the previous simulation. Usually, it sufficed to perform $20-40$ simulations. As can be seen in Figure \[fig:CH\]b (black curve), the presence of spicules prevents the [ reduction of the]{} brightness temperature caused by the coronal hole at low latitude angles ($\lesssim 930''$ in Figure \[fig:CH\]b). Moreover, the limb brightening is completely absorbed by the presence of spicules, in agreement with [@Selhorst2005a; @Selhorst2005b]. As a result, although the inclusion of spicules in a plume region produces an emission at the limb 7% more intense than the quiet Sun, this brightening cannot be [ distinguished from the emission originating in their surroundings.]{} As has been proposed by [@Selhorst2005b], the intense limb brightening can be caused by magnetized regions that inhibit the presence of spicules, causing a hole in the spicule forest. Nevertheless, a polar bare region within a coronal hole cannot reproduce the high 17 GHz bright patch temperatures. The blue curve in Figure \[fig:CH\]b differs from the black one by the inclusion of a polar bare region above $70^\circ$ of latitude.
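The averaging procedure described earlier (repeat randomized spicule realizations and accumulate a running mean until the rms of the mean profile stabilizes) can be sketched as follows; the profile generator here is a hypothetical stand-in for a full simulation, not the SSC model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_profile(n_points=400):
    # stand-in for one full simulation with a random spicule distribution:
    # a smooth center-to-limb profile plus per-run noise (arbitrary units)
    x = np.linspace(0.0, 1.0, n_points)
    return 1.0 + 0.1 * x**2 + rng.normal(0.0, 0.01, n_points)

# average successive runs until the rms of the mean profile changes
# by less than 3e-4 between iterations (the criterion quoted in the text)
mean = simulate_profile()
prev_rms = np.sqrt(np.mean(mean**2))
n_runs = 1
while True:
    n_runs += 1
    mean += (simulate_profile() - mean) / n_runs   # running mean update
    rms = np.sqrt(np.mean(mean**2))
    if abs(rms - prev_rms) < 3.0e-4:
        break
    prev_rms = rms
```

Because the running-mean updates shrink as $1/N$, the rms change is guaranteed to fall below any fixed threshold after finitely many runs.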
[ The simulated spicules are optically thick at 17 GHz, and their adopted density range lies around the lowest density values reported [@Tsiropoula2012]. This absorption is caused by the optical thickness of the spicules at 17 GHz, which, in the SSC model, is formed ($\tau\sim1$) around a region 2,900 km above the solar surface, where the local density and temperature are $9.3 \times 10^9\rm cm^{-3}$ and $10,390$ K, respectively. Since this density is approximately half of the [minimum density value adopted for the]{} spicules, all the spicules reaching heights above 2,900 km are optically thick at 17 GHz.]{} [ Moreover, observationally, spicules are not easily identified in the chromosphere, being predominantly seen when reaching above chromospheric heights [@Pereira2014]. Recent simulations suggest that spicules do not maintain their structure in the chromosphere, but [*may become*]{} spicules only once the chromospheric material flows upwards along the magnetic field strands [@Martinez2017]. For these reasons, we designed our spicule model to focus on the main aspects affecting the formation/propagation of the radio emission.]{} Polar bright patches {#sec:pbp} -------------------- Since the spicule-less regions below $\sim80^\circ$ of latitude are not able to reproduce the intense bright patches observed in the NoRH maps [@Selhorst2005b] and the presence of coronal holes reduces the expected limb brightening at 17 GHz (Figure \[fig:CH\]), other solar features should be acting inside the coronal holes to increase their brightness temperature at 17 GHz [@Gopal1999; @Oliveira2016]. To simulate the observed 17 GHz bright patches, we introduced small magnetic loops inside coronal hole regions (Figure \[fig:model\]).
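The $\tau\sim1$ layer quoted above can be checked to order of magnitude with a standard free-free opacity approximation. Note that the coefficient below is Dulk's (1985) form for a thermal hydrogen plasma, which is an assumption of this sketch rather than the exact expression used in the SSC model:

```python
import numpy as np

def kappa_ff(n_e, T, freq):
    """Approximate thermal free-free absorption coefficient [cm^-1]
    (Dulk 1985 form, valid for T < 2e5 K); n_e in cm^-3, T in K, freq in Hz."""
    coulomb_log = 18.2 + 1.5 * np.log(T) - np.log(freq)
    return 9.78e-3 * n_e**2 / (freq**2 * T**1.5) * coulomb_log

# conditions quoted for the tau ~ 1 layer of the SSC model (~2,900 km)
k17 = kappa_ff(n_e=9.3e9, T=10390.0, freq=17e9)
tau1_path_km = 1.0 / k17 / 1.0e5   # homogeneous path length giving tau = 1
```

With these values the homogeneous path length reaching $\tau=1$ comes out at a few hundred km, i.e. a chromospheric-scale column, consistent with the 17 GHz emission forming near the upper chromosphere.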
[ In these simulations the coronal hole was set to have temperature and density distributions consistent with plume regions (red curves in Figure \[fig:SSC\]).]{} The simulated magnetic loops were set as half circumferences, perpendicular to the solar surface, with two possible external radii, of 5.0 Mm (6.9”) and 7.5 Mm (10.3”), and a fixed [cross-section]{} width of 2.5 Mm. Inside the magnetic loop the temperatures and densities varied as in an active region flux tube [@Selhorst2008], i.e., hotter and denser than the atmosphere surrounding it. The blue curves in Figure \[fig:SSC\] show an example of the assumed atmospheric variation in the magnetic loops, in which the chromospheric gradients of temperature and density were $\nabla T=7.08~K~km^{-1}$ and $\nabla n_e=-4.08\cdot 10^7~cm^{-3}~km^{-1}$, respectively; the transition region was considered to be at 3,000 km. Moreover, the coronal temperatures and densities were considered to be twice the quiet atmospheric values. Table \[table1\] lists 30 bright patch simulations, with distinct plasma compositions and locations. The reference simulation numbers are placed in the first column, with the different distributions of temperature ($\nabla T$ and $N_T$) and density ($\nabla n_e$ and $N_{n_e}$) within the flux tubes organised in the next four columns, followed by the magnetic loop size and position. The simulation outcomes are listed in the last four columns: the first two are the maximum brightness temperature, $T_{B_{max}}$, and the width at the point in which $T_B=0.5T_{B_{max}}$, obtained from the bright patch simulation without the convolution with the NoRH beam, whereas the last two represent the same quantities after the beam convolution. All small magnetic field loops were simulated inside the coronal hole limits.
The widths were measured at half power of the maximum brightness temperature of the bright patches, after subtracting the coronal hole brightness temperature profile (blue curve in Figure \[fig:CH\]a). Results ======= Since most of the 17 GHz emission is generated in the chromosphere, the size of the magnetic loop determines the position where the emission is produced. While in the smaller loops (5.0 Mm) the emission is formed at the loop top, the emission in the larger loops (7.5 Mm) comes from their footpoints, whereas their tops are optically thin. In the first six simulations presented in Table \[table1\] the loops are placed at the center of the solar disk. These results show the brightness temperature increase due to the larger gradients of the chromospheric temperatures and densities. In these simulations, when the plasma composition inside the loop is the same, the obtained $T_{B_{max}}$ is independent of the magnetic loop size before the beam convolution, [ as expected]{}. Nevertheless, the smaller loops [ produced a bright patch ($11''.3$) larger than the larger loops ($7''.9$)]{}. [ This is easily explained by the different sizes of their emitting areas: while for the smaller loops the loop top is bright, for the larger loops only the footpoints are brighter than the surroundings.]{} [ This can be visualized in]{} Figure \[fig:arc\], which shows the results of simulations using (a) 5.0 Mm and (b) 7.5 Mm loops. The dotted lines are the [ unconvolved]{} results with 100 km spatial resolution and the continuous lines are the result of the convolution with the NoRH beam ($\sim 10''$). Because the emitting area of the footpoints is smaller than the NoRH resolution, after the beam convolution $T_{B_{max}}$ decreases more significantly in the brighter loop (Figure \[fig:arc\]b). Moreover, the convolved brightness temperature profile still presents a double peak with $\sim 21''$ width, i.e. more than a $10''$ increase in width.
On the other hand, the small loop (Figure \[fig:arc\]a) shows a smaller reduction in $T_{B_{max}}$, a single peaked profile and less than a $2''$ increase in its width. [![Simulations of magnetic field loops located at the center of the solar disc (1–6 in Table \[table1\]). Dotted lines are the unconvolved results, while continuous lines are the convolved ones.[]{data-label="fig:arc"}](Figure_05.eps "fig:"){width="9cm"}]{} [![Simulations of bright patches located at a) $71.1^\circ$, b) $74.1^\circ$, c) $77.4^\circ$ and d) $81.4^\circ$. Different colours were used to refer to the distinct simulations listed in Table 1. In panels e) and f), two bright patches located at different longitudes were simulated at, respectively, $77.4^\circ$ and $81.4^\circ$.[]{data-label="fig:BP"}](Figure_06.eps "fig:"){width="9cm"}]{} Due to [ projection effects and the curvature of the Sun]{}, the size of the emitting magnetic loops is [strongly]{} reduced when they are simulated at high latitudes. The unconvolved width of the hotter small loops was reduced from $\sim 11''$ at disk center to $\sim 1''$ when they were placed at $81.4^\circ$ (Sim. 3 and 25 in Table \[table1\]). After the convolution, the resulting bright patches mimic the beam size ($\sim10''$). With respect to their brightness temperatures, $T_{B_{max}}$ is seen to increase with the angular position in the unconvolved values; however, due to the reduction in the emitting source size, the convolved values follow the opposite trend, with reduced $T_{B_{max}}$ values. The profiles of simulations 7 to 30 from Table \[table1\] are plotted in Figure \[fig:BP\]a, b, c and d. The continuous lines refer to the small magnetic loops and the dotted ones represent the larger loops; for the same plasma configuration (temperature and electron density) the same color is used. [ The emission from loops located at $71.1^\circ$ and $74.1^\circ$ can be identified apart from]{} the limb brightening (Figure \[fig:BP\]a and \[fig:BP\]b).
However, those located at $77.4^\circ$ and $81.4^\circ$ cannot be distinguished from the [ usual]{} limb brightening (Figure \[fig:BP\]c and \[fig:BP\]d). [ We also tested the effects of including two magnetic loops, using the physical conditions of simulations 20 and 27 (Figure \[fig:BP\]e) and simulations 21 and 27 (Figure \[fig:BP\]f).]{} [ Note that the $T_B$ increase caused by each loop cannot be resolved after the convolution with the NoRH beam. Although not shown in the figure, for the unconvolved results with 100 km resolution, the $T_B$ increase caused by each loop can be easily resolved]{}. [ Here]{}, $T_{B_{max}}$ in both simulations is the same as that of simulation 27; [ however,]{} the width increased to $\sim19''.7$ and $21''.1$ in the profiles plotted in Figure \[fig:BP\]e and \[fig:BP\]f, respectively. Discussion and conclusions ========================== The purpose of this work is to model the emission of the 17 GHz polar bright patches, which are frequently observed in the NoRH maps in association with coronal holes [@Gopal1999; @Selhorst2003; @Oliveira2016]. The simulations were based on the temperature and density distributions proposed in the SSC atmospheric model [@Selhorst2005a], [ with modifications to include a coronal hole atmospheric model and magnetic loops as the sources of the radio bright patches.]{} [ We have calculated the radio emission at 17 GHz from coronal holes, in comparison with typical quiet Sun regions. As expected, in a static atmosphere, the lower temperature and density (red profiles in Figure \[fig:SSC\]) inside a coronal hole resulted in lower brightness temperature values]{} (Figure \[fig:CH\]a). [ Our results show, however, that the presence of (spatially unresolved) spicules can produce brighter regions than what would be expected from coronal holes (Figure \[fig:CH\]b).]{} To simulate the bright patches, [ we have introduced small magnetic loops, with hotter and denser plasma than their surroundings.
We find that the radio emission from smaller loops (5.0 Mm of radius) comes from the top of the loop, while the emission from larger loops (7.5 Mm of radius) originates from the footpoints.]{} As a consequence, the size of the [ simulated bright patches originating from]{} small loops was larger than [ from]{} the larger loops, $\sim11''$ and $\sim8''$, respectively. However, after convolving the results with the NoRH beam, [ the larger loops produced]{} broader and colder bright patches in comparison with the results obtained for the small loops. [ The inclusion of magnetic loops in the model only affects the radio brightness at 17 GHz if their temperature and density properties are substantially different from the surrounding plasma at heights where $\tau \approx 1$, which happens near the upper chromosphere and lower corona. These results are in agreement with the findings of [@Brajsa2007]. Moreover, loop models with increased density and temperature only at coronal heights do not contribute significantly to the radio emission at 17 GHz. Such loops could be brighter at EUV wavelengths, and thus]{} this could explain the absence of a one-to-one correlation between the 17 GHz bright patches and those observed in EUV [@Nindos1999; @Riehokainen2001; @Nitta2014]. [ The maximum brightness temperature]{} $T_{B_{max}}$ in the simulations increased up to $\sim30\%$ [ by placing the loops at higher latitudes]{}; however, the size of these bright regions was reduced to $\sim1''$, [ much smaller than the NoRH beam]{}. As a consequence, after the beam convolution, the size of the bright patch corresponds to the beam size [ (as expected)]{}, in agreement with the minimum size of the bright patches observed in the NoRH maps [@Nindos1999].
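The statement that a compact source mimics the beam after convolution follows from Gaussian widths adding in quadrature; a short numerical illustration (the 10'' beam and the source sizes are taken from the text, the measurement itself is generic):

```python
import numpy as np

def convolved_fwhm(src_fwhm, beam_fwhm=10.0, dx=0.01):
    """FWHM [arcsec] of a Gaussian source after convolution with a
    Gaussian beam, measured directly on the 1D profile."""
    x = np.arange(-60.0, 60.0, dx)
    s = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    src = np.exp(-0.5 * (x / (src_fwhm * s))**2)
    beam = np.exp(-0.5 * (x / (beam_fwhm * s))**2)
    prof = np.convolve(src, beam, mode="same")
    above = x[prof >= 0.5 * prof.max()]
    return float(above[-1] - above[0])

# a 1" source observed with a 10" beam mimics the beam itself
w = convolved_fwhm(1.0)   # ~ sqrt(1^2 + 10^2) ~ 10.05"
```

An $11''$ source, by contrast, broadens to roughly $\sqrt{11^2+10^2}\approx14.9''$ for ideal Gaussians; the smaller broadening reported in Table \[table1\] reflects the non-Gaussian shape of the simulated patches.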
On the other hand, a single small loop located near the pole is not able to reproduce the larger bright patch sizes observed ($50''-55''$, [@Nindos1999]), which could instead be caused by the presence of a great number of small magnetic loops unresolved by the NoRH beam. As shown in Figure \[fig:BP\]e and \[fig:BP\]f, even loops separated by an angular distance of $3^\circ$ ($\sim 37$ Mm) will not be resolved in the NoRH maps. Moreover, the presence of small magnetic loops close to the limb can result in a merged limb brightness profile, increasing the observed limb brightening temperature and width. To improve our knowledge about these small bright structures inside coronal holes, high resolution observations at different wavelengths are necessary. Today, only solar observations with ALMA can achieve this spatial resolution [@Wedemeyer2016].

----  --------------  ------  -----------------------  -----------  -------------  ---------------  --------------------------------  -----------  --------------------------------  -----------
      $\nabla T$      $N_T$   $\nabla n_e$             $N_{n_e}$    Loop Radius    Loop Position    $T_{B_{max}}$ ($\times10^3$ K)    Width (”)    $T_{B_{max}}$ ($\times10^3$ K)    Width (”)
      (K km$^{-1}$)           (cm$^{-3}$ km$^{-1}$)                 (Mm)           ($^\circ$)       unconvolved                                    convolved
----  --------------  ------  -----------------------  -----------  -------------  ---------------  --------------------------------  -----------  --------------------------------  -----------
1     3.04            1.2     $-5.14\cdot 10^7$        1.2          5.0            0                12.2 (T)                          11.3         11.8                              13.1
2     4.56            1.5     $-5.01\cdot 10^7$        1.5          5.0            0                15.1 (T)                          11.3         14.2                              13.0
3     7.08            2.0     $-4.80\cdot 10^7$        2.0          5.0            0                20.0 (T)                          11.3         18.3                              13.0
4     3.04            1.2     $-5.14\cdot 10^7$        1.2          7.5            0                12.2 (F)                          7.9          11.0                              20.7
5     4.56            1.5     $-5.01\cdot 10^7$        1.5          7.5            0                15.1 (F)                          7.9          12.1                              20.7
6     7.08            2.0     $-4.80\cdot 10^7$        2.0          7.5            0                20.0 (F)                          7.7          13.9                              21.2
7     3.04            1.2     $-5.14\cdot 10^7$        1.2          5.0            71.1             13.3 (T)                          3.2          11.9                              9.9
8     4.56            1.5     $-5.01\cdot 10^7$        1.5          5.0            71.1             16.6 (T)                          3.2          12.9                              9.9
9     7.08            2.0     $-4.80\cdot 10^7$        2.0          5.0            71.1             22.5 (T)                          3.2          14.6                              9.9
10    3.04            1.2     $-5.14\cdot 10^7$        1.2          7.5            71.1             13.3 (F)                          1.9          11.6                              11.7
11    4.56            1.5     $-5.01\cdot 10^7$        1.5          7.5            71.1             16.6 (F)                          1.8          12.2                              11.7
12    7.08            2.0     $-4.80\cdot 10^7$        2.0          7.5            71.1             22.5 (F)                          1.9          13.2                              11.6
13    3.04            1.2     $-5.14\cdot 10^7$        1.2          5.0            74.1             13.8 (T)                          2.5          12.6                              9.8
14    4.56            1.5     $-5.01\cdot 10^7$        1.5          5.0            74.1             16.9 (T)                          2.6          12.8                              9.8
15    7.08            2.0     $-4.80\cdot 10^7$        2.0          5.0            74.1             22.8 (T)                          2.6          14.3                              9.8
16    3.04            1.2     $-5.14\cdot 10^7$        1.2          7.5            74.1             13.6 (F)                          1.4          12.6                              11.0
17    4.56            1.5     $-5.01\cdot 10^7$        1.5          7.5            74.1             16.9 (F)                          1.4          12.6                              11.0
18    7.08            2.0     $-4.80\cdot 10^7$        2.0          7.5            74.1             22.9 (F)                          1.4          13.1                              11.0
19    3.04            1.2     $-5.14\cdot 10^7$        1.2          5.0            77.4             14.1 (T)                          1.8          12.6                              9.8
20    4.56            1.5     $-5.01\cdot 10^7$        1.5          5.0            77.4             17.7 (T)                          1.8          12.9                              9.7
21    7.08            2.0     $-4.80\cdot 10^7$        2.0          5.0            77.4             23.8 (T)                          1.8          14.1                              9.8
22    3.04            1.2     $-5.14\cdot 10^7$        1.2          7.5            77.4             14.2 (F)                          0.8          12.6                              10.5
23    4.56            1.5     $-5.01\cdot 10^7$        1.5          7.5            77.4             17.8 (F)                          0.7          12.6                              10.5
24    7.08            2.0     $-4.80\cdot 10^7$        2.0          7.5            77.4             23.9 (F)                          0.7          13.1                              10.6
25    3.04            1.2     $-5.14\cdot 10^7$        1.2          5.0            81.4             21.5 (T)                          1.0          12.9                              9.8
26    4.56            1.5     $-5.01\cdot 10^7$        1.5          5.0            81.4             21.5 (T)                          1.1          13.2                              9.8
27    7.08            2.0     $-4.80\cdot 10^7$        2.0          5.0            81.4             25.3 (T)                          1.1          14.0                              9.7
28    3.04            1.2     $-5.14\cdot 10^7$        1.2          7.5            81.4             21.5 (F)                          0.4          12.6                              9.9
29    4.56            1.5     $-5.01\cdot 10^7$        1.5          7.5            81.4             21.5 (F)                          0.3          12.8                              10.5
30    7.08            2.0     $-4.80\cdot 10^7$        2.0          7.5            81.4             23.4 (F)                          1.3          13.2                              10.3
----  --------------  ------  -----------------------  -----------  -------------  ---------------  --------------------------------  -----------  --------------------------------  -----------

The (T)/(F) flags indicate whether the unconvolved emission peaks at the loop top or at the footpoints.

We would like to thank the Nobeyama Radioheliograph, which is operated by the NAOJ/Nobeyama Solar Radio Observatory. A.J.O.S. acknowledges the scholarship from CAPES. C.L.S. acknowledges financial support from the São Paulo Research Foundation (FAPESP), grant number 2014/10489-0. P.J.A.S. acknowledges support from grant ST/L000741/1 made by the UK’s Science and Technology Facilities Council and from the University of Glasgow’s Lord Kelvin Adam Smith Leadership Fellowship.

Brajša, R., Benz, A. O., Temmer, M., et al. 2007, Sol. Phys., 245, 167
Efanov, V. A., Moiseev, I. G., Nesterov, N. S., & Stewart, R. T. 1980, in IAU Symp. 86: Radio Physics of the Sun, 141–144
Gopalswamy, N., Shibasaki, K., Thompson, B. J., Gurman, J., & DeForest, C. 1999, J. Geophys. Res., 104, 9767
Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17
Maksimov, V. P., Prosovetsky, D. V., Grechnev, V. V., Krissinel, B. B., & Shibasaki, K. 2006, PASJ, 58, 1
Martínez-Sykora, J., De Pontieu, B., Carlsson, M., et al. 2017, ApJ, 847, 36
Nakajima, H., Nishio, M., Enome, S., et al. 1994, IEEE Proceedings, 82, 705
Nindos, A., Kundu, M. R., White, S. M., et al. 1999, ApJ, 527, 415
Nitta, N. V., Sun, X., Hoeksema, J. T., & DeRosa, M. L. 2014, ApJ, 780, L23
Oliveira e Silva, A. J., Selhorst, C. L., Simões, P. J. A., & Giménez de Castro, C. G. 2016, A&A, 592, A91
Pereira, T. M. D., De Pontieu, B., Carlsson, M., et al. 2014, ApJ, 792, L15
Riehokainen, A., Urpo, S., & Valtaoja, E. 1998, A&A, 333, 741
Riehokainen, A., Urpo, S., Valtaoja, E., et al. 2001, A&A, 366, 676
Selhorst, C. L., Giménez de Castro, C. G., Varela Saraiva, A. C., & Costa, J. E. R. 2010, A&A, 509, A51
Selhorst, C. L., Silva, A. V. R., & Costa, J. E. R. 2005a, A&A, 433, 365
—. 2005b, A&A, 440, 367
Selhorst, C. L., Silva, A. V. R., Costa, J. E. R., & Shibasaki, K. 2003, A&A, 401, 1143
Selhorst, C. L., Silva-Válio, A., & Costa, J. E. R. 2008, A&A, 488, 1079
Shibasaki, K. 1998, in Synoptic Solar Physics, ASP Conf. Ser., Vol. 140, 373
Sterling, A. C. 2000, Sol. Phys., 196, 79
Tsiropoula, G., Tziotziou, K., Kontogiannis, I., et al. 2012, Space Sci. Rev., 169, 181
Wedemeyer, S., Bastian, T., Brajša, R., et al. 2016, Space Sci. Rev., 200, 1
Wilhelm, K. 2006, A&A, 455, 697
[**A further $q$-analogue of Van Hamme’s (H.2) supercongruence for $p\equiv1\pmod{4}$ [^1]**]{} Chuanan Wei [School of Biomedical Information and Engineering,\ Hainan Medical University, Haikou 571199, China\ [[email protected] ]{} ]{} [**Abstract.**]{} Several years ago, Long and Ramakrishna \[Adv. Math. 290 (2016), 773–808\] extended Van Hamme’s (H.2) supercongruence to the modulus $p^3$ case. Recently, Guo \[Int. J. Number Theory, to appear\] found a $q$-analogue of the Long–Ramakrishna formula for $p\equiv 3\pmod 4$. In this note, a $q$-analogue of the Long–Ramakrishna formula for $p\equiv 1\pmod 4$ is derived through the $q$-Whipple formulas and the Chinese remainder theorem for coprime polynomials. [*Keywords*]{}: basic hypergeometric series; $q$-Whipple formula; $q$-supercongruence [*AMS Subject Classifications:*]{} 33D15; 11A07; 11B65 Introduction ============ For any complex variable $x$, define the shifted factorial to be $$(x)_{0}=1\quad \text{and}\quad (x)_{n} =x(x+1)\cdots(x+n-1)\quad \text{when}\quad n\in\mathbb{N}.$$ In 1997, Van Hamme [@Hamme (H.2)] conjectured that $$\label{eq:hamme} \sum_{k=0}^{(p-1)/2}\frac{(1/2)_k^3}{k!^3}\equiv \begin{cases} \displaystyle -\Gamma_p(1/4)^4 \pmod{p^2}, &\text{if $p\equiv 1\pmod 4$,}\\[10pt] 0\pmod{p^2}, &\text{if $p\equiv 3\pmod 4$.} \end{cases}$$ Here and throughout the paper, $p$ always denotes an odd prime and $\Gamma_p(x)$ is the $p$-adic Gamma function.
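Both cases of the supercongruence above can be checked numerically for small primes, since the truncated sum is a rational number whose denominator (a power of $2$) is invertible modulo $p^2$; for $p\equiv1\pmod 4$ we compare against $(1/2)_{(p-1)/4}^2/\big((p-1)/4\big)!^2$, which is congruent to $-\Gamma_p(1/4)^4$ modulo $p^2$. A short Python sketch:

```python
from fractions import Fraction
from math import comb

def vanhamme_sum(p):
    # truncated sum: (1/2)_k / k! = C(2k, k) / 4^k, so each term is rational
    return sum(Fraction(comb(2*k, k), 4**k)**3 for k in range((p - 1)//2 + 1))

def reduce_mod(frac, m):
    # image in Z/m of a rational whose denominator is coprime to m
    return frac.numerator * pow(frac.denominator, -1, m) % m

lhs5 = reduce_mod(vanhamme_sum(5), 5**2)    # p = 5 ≡ 1 (mod 4)
rhs5 = reduce_mod(Fraction(1, 2)**2, 5**2)  # ((1/2)_1 / 1!)^2 = 1/4
lhs7 = reduce_mod(vanhamme_sum(7), 7**2)    # p = 7 ≡ 3 (mod 4): expect 0
```

Here `lhs5 == rhs5` and `lhs7 == 0`, as predicted by the two cases of (H.2).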
In 2016, Long and Ramakrishna [@LR Theorem 3] gave the following extension of \[eq:hamme\]: $$\label{eq:long} \sum_{k=0}^{(p-1)/2}\frac{(1/2)_k^3}{k!^3}\equiv \begin{cases} \displaystyle -\Gamma_p(1/4)^4 \pmod{p^3}, &\text{if $p\equiv 1\pmod 4$,}\\[10pt] \displaystyle -\frac{p^2}{16}\Gamma_p(1/4)^4\pmod{p^3}, &\text{if $p\equiv 3\pmod 4$.} \end{cases}$$ For any complex numbers $x$ and $q$, define the $q$-shifted factorial as $$(x;q)_{0}=1\quad\text{and}\quad (x;q)_n=(1-x)(1-xq)\cdots(1-xq^{n-1})\quad \text{when}\quad n\in\mathbb{N}.$$ For simplicity, we also adopt the compact notation $$(x_1,x_2,\dots,x_m;q)_{n}=(x_1;q)_{n}(x_2;q)_{n}\cdots(x_m;q)_{n}.$$ Following Gasper and Rahman [@Gasper], define the basic hypergeometric series $_{r+1}\phi_{r}$ by $$_{r+1}\phi_{r}\left[\begin{array}{c} a_1,a_2,\ldots,a_{r+1}\\ b_1,b_2,\ldots,b_{r} \end{array};q,\, z \right] =\sum_{k=0}^{\infty}\frac{(a_1,a_2,\ldots, a_{r+1};q)_k} {(q,b_1,b_2,\ldots,b_{r};q)_k}z^k.$$ Then the $q$-Whipple formula due to Andrews [@Andrews] and the $q$-Whipple formula due to Jain [@Jain] can be stated as $$\begin{aligned} & _{4}\phi_{3}\!\left[\begin{array}{cccccccc} q^{-n}, q^{1+n}, b, -b \\ -q, c, b^2q/c \end{array};q,\, q \right] =q^{\binom{n+1}{2}}\frac{(b^2q^{1-n}/c, cq^{-n};q^2)_{n}} {(b^2q/c, c;q)_{n}}, \label{eq:q-whipple-a} \\[5pt] & _{4}\phi_{3}\!\left[\begin{array}{cccccccc} a, q/a, q^{-n}, -q^{-n} \\ c, q^{1-2n}/c, -q \end{array};q,\, q \right] =\frac{(ac, cq/a;q^2)_{n}} {(c;q)_{2n}}.
\label{eq:q-whipple-b}\end{aligned}$$ Recently, Guo and Zudilin [@GuoZu2 Theorem 2] displayed a $q$-analogue of \[eq:hamme\]: for any positive odd integer $n$, $$\begin{aligned} &\sum_{k=0}^{(n-1)/2}\frac{(q;q^2)_k^2(q^2;q^4)_k}{(q^2;q^2)_k^2(q^4;q^4)_k}q^{2k} \notag\\[5pt] &\equiv \begin{cases} \displaystyle q^{(n-1)/2}\frac{(q^2;q^4)_{(n-1)/4}^2}{(q^4;q^4)_{(n-1)/4}^2}\pmod{\Phi_n(q)^2}, &\text{if $n\equiv 1\pmod 4$,}\\[10pt] \displaystyle 0\pmod{\Phi_n(q)^2}, &\text{if $n\equiv 3\pmod 4$.} \end{cases} \label{eq:guo-a}\end{aligned}$$ Here and throughout the paper, $\Phi_n(q)$ stands for the $n$-th cyclotomic polynomial in $q$: $$\Phi_n(q)=\prod_{\substack{1\leqslant k\leqslant n\\ \gcd(k,n)=1}}(q-\zeta^k),$$ where $\zeta$ is an $n$-th primitive root of unity. Further, Guo [@Guo-new Theorem 1] provided the following partial $q$-analogue of \[eq:long\]: for any positive integer $n\equiv3\pmod{4}$, $$\begin{aligned} \sum_{k=0}^{(n-1)/2}\frac{(q;q^2)_k^2(q^2;q^4)_k}{(q^2;q^2)_k^2(q^4;q^4)_k}q^{2k} \equiv[n]\frac{(q^3;q^4)_{(n-1)/2}}{(q^5;q^4)_{(n-1)/2}}\pmod{\Phi_n(q)^3}. \label{eq:guo-b}\end{aligned}$$ For more $q$-analogues of supercongruences, we refer the reader to [@Guo-rima; @Guo-jmaa; @Guo-rama; @Guo-a2; @GS1; @GuoZu; @LP; @NP; @Tauraso; @WY-a; @Zu19]. Motivated by the work just mentioned, we shall establish the following result. \[thm-a\] Let $n\equiv 1\pmod 4$ be a positive integer. Then, modulo $\Phi_n(q)^3$, $$\begin{aligned} \sum_{k=0}^{(n-1)/2}\frac{(q;q^2)_k^2(q^2;q^4)_k}{(q^2;q^2)_k^2(q^4;q^4)_k}q^{2k} \equiv q^{(n-1)/2}\frac{(q^2;q^4)_{(n-1)/4}^2}{(q^4;q^4)_{(n-1)/4}^2}\bigg\{1+2[n]^2\sum_{i=1}^{(n-1)/4}\frac{q^{4i-2}}{[4i-2]^2}\bigg\}.\end{aligned}$$ Here $[m]=[m]_q=1+q+\cdots+q^{m-1}$ denotes the $q$-integer. Obviously, Theorem \[thm-a\] is an extension of \[eq:guo-a\] for $n\equiv1\pmod{4}$.
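Theorem \[thm-a\] can be verified symbolically for small $n$. Modulo $\Phi_n(q)^2$ the curly-bracket factor reduces to $1$, since $\Phi_n(q)^2$ divides $[n]^2$ while the remaining denominators are coprime to $\Phi_n(q)$; the following sympy sketch checks the resulting congruence for $n=5$:

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(a, base, n):
    # (a; base)_n = (1 - a)(1 - a*base)...(1 - a*base^(n-1))
    return sp.prod([1 - a * base**j for j in range(n)])

n = 5
lhs = sum(qpoch(q, q**2, k)**2 * qpoch(q**2, q**4, k) * q**(2*k)
          / (qpoch(q**2, q**2, k)**2 * qpoch(q**4, q**4, k))
          for k in range((n - 1)//2 + 1))
# modulo Phi_n(q)^2 the bracketed factor of the theorem is 1
rhs = q**((n - 1)//2) * qpoch(q**2, q**4, (n - 1)//4)**2 \
      / qpoch(q**4, q**4, (n - 1)//4)**2
num, den = sp.fraction(sp.together(lhs - rhs))
phi2 = sp.expand(sp.cyclotomic_poly(n, q)**2)
quotient, remainder = sp.div(sp.expand(num), phi2, q)   # remainder must vanish
```

Since the cleared denominator `den` is a product of factors $1-q^{2j}$ and $1+q^{2j}$ coprime to $\Phi_5(q)$, divisibility of `num` by $\Phi_5(q)^2$ is exactly the claimed congruence.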
Letting $n=p$ be a prime and taking $q\to 1$ in this theorem, we obtain the conclusion: $$\begin{aligned} \label{eq:wei} \sum_{k=0}^{(p-1)/2}\frac{(1/2)_k^3}{k!^3} \equiv \frac{(1/2)_{(p-1)/4}^2}{\big((p-1)/4\big)!^2}\bigg\{1+\frac{p^{2}}{2}H_{(p-1)/2}^{(2)}-\frac{p^{2}}{8}H_{(p-1)/4}^{(2)}\bigg\}\pmod{p^3},\end{aligned}$$ where the second-order harmonic numbers are given by $$H_{m}^{(2)} =\sum_{k=1}^m\frac{1}{k^{2}}.$$ Using the known formula (cf. [@Sun Page 7]): $$\begin{aligned} &H_{(p-1)/2}^{(2)}\equiv0\pmod{p}\quad\text{with}\quad p>3,\end{aligned}$$ we deduce the following supercongruence from \[eq:wei\]. \[cor-a\] Let $p\equiv 1\pmod 4$ be a prime. Then $$\begin{aligned} \label{eq:wei-a} \sum_{k=0}^{(p-1)/2}\frac{(1/2)_k^3}{k!^3} \equiv \frac{(1/2)_{(p-1)/4}^2}{\big((p-1)/4\big)!^2}\bigg\{1-\frac{p^{2}}{8}H_{(p-1)/4}^{(2)}\bigg\}\pmod{p^3}.\end{aligned}$$ For the sake of explaining the equivalence of \[eq:long\] for $p\equiv 1\pmod 4$ and \[eq:wei-a\], we need to verify the following relation. \[prop-a\] Let $p\equiv 1\pmod 4$ be an odd prime. Then $$\begin{aligned} \frac{(1/2)_{(p-1)/4}^2}{\big((p-1)/4\big)!^2}\bigg\{1-\frac{p^{2}}{8}H_{(p-1)/4}^{(2)}\bigg\}\equiv -\Gamma_p(1/4)^4\pmod{p^3}.\end{aligned}$$ The rest of the paper is arranged as follows. By means of the Chinese remainder theorem for coprime polynomials, a $q$-supercongruence modulo $(1-aq^n)(a-q^n)(b-q^n)$ will be derived in Section 2. Then it is utilized to provide a proof of Theorem \[thm-a\] in the same section. Finally, the proof of Proposition \[prop-a\] will be given in Section 3. Proof of Theorem \[thm-a\] ========================== In order to prove Theorem \[thm-a\], we need the following parameter extension of it. \[thm-b\] Let $n\equiv 1\pmod 4$ be a positive integer.
Then, modulo $(1-aq^n)(a-q^n)(b-q^n)$, $$\begin{aligned} \sum_{k=0}^{(n-1)/2}\frac{(aq,q/a,q/b,-q/b;q^2)_k}{(q^2,q^2,-q^2,q^2/b^2;q^2)_k}q^{2k} \equiv \Omega_n(a,b),\label{eq:wei-aa}\end{aligned}$$ where $$\begin{aligned} \Omega_n(a,b)&=\frac{(b-q^n)(ab-1-a^2+aq^n)}{(a-b)(1-ab)}\frac{(b/q)^{(1-n)/2}(q^2,b^2q^2;q^4)_{(n-1)/4}}{(q^4,q^4/b^2;q^4)_{(n-1)/4}} \\[5pt] &+\frac{(1-aq^n)(a-q^n)}{(a-b)(1-ab)}\frac{(aq^3,q^3/a;q^4)_{(n-1)/2}}{(q^2;q^2)_{n-1}}.\end{aligned}$$ When $a=q^{-n}$ or $a=q^n$, the left-hand side of \[eq:wei-aa\] is equal to $$\begin{aligned} \sum_{k=0}^{(n-1)/2}\frac{(q^{1-n},q^{1+n},q/b,-q/b;q^2)_k}{(q^2,q^2,-q^2,q^2/b^2;q^2)_k}q^{2k} = {_{4}\phi_{3}}\!\left[\begin{array}{cccccccc} q^{1-n}, q^{1+n}, q/b, -q/b \\ q^2, -q^2, q^2/b^2 \end{array};q^2,\, q^2 \right]. \label{eq:whipple-aa}\end{aligned}$$ According to \[eq:q-whipple-a\], the right-hand side of \[eq:whipple-aa\] can be expressed as $$\begin{aligned} (b/q)^{(1-n)/2}\frac{(q^2,b^2q^2;q^4)_{(n-1)/4}}{(q^4,q^4/b^2;q^4)_{(n-1)/4}}.\end{aligned}$$ Since $(1-aq^n)$ and $(a-q^n)$ are relatively prime polynomials, we get the following result: Modulo $(1-aq^n)(a-q^n)$, $$\begin{aligned} \sum_{k=0}^{(n-1)/2}\frac{(aq,q/a,q/b,-q/b;q^2)_k}{(q^2,q^2,-q^2,q^2/b^2;q^2)_k}q^{2k}\equiv (b/q)^{(1-n)/2}\frac{(q^2,b^2q^2;q^4)_{(n-1)/4}}{(q^4,q^4/b^2;q^4)_{(n-1)/4}}. \label{eq:wei-bb}\end{aligned}$$ When $b=q^{n}$, the left-hand side of \[eq:wei-aa\] is equal to $$\begin{aligned} \sum_{k=0}^{(n-1)/2}\frac{(aq,q/a,q^{1-n},-q^{1-n};q^2)_k}{(q^2,q^2,-q^2,q^{2-2n};q^2)_k}q^{2k} = {_{4}\phi_{3}}\!\left[\begin{array}{cccccccc} aq, q/a, q^{1-n}, -q^{1-n} \\ q^2, -q^2, q^{2-2n} \end{array};q^2,\, q^2 \right].
\label{eq:whipple-bb}\end{aligned}$$ In terms of \[eq:q-whipple-b\], the right-hand side of \[eq:whipple-bb\] can be written as $$\begin{aligned} \frac{(aq^3,q^3/a;q^4)_{(n-1)/2}}{(q^2;q^2)_{n-1}}.\end{aligned}$$ Therefore, we are led to the following conclusion: Modulo $(b-q^n)$, $$\begin{aligned} \sum_{k=0}^{(n-1)/2}\frac{(aq,q/a,q/b,-q/b;q^2)_k}{(q^2,q^2,-q^2,q^2/b^2;q^2)_k}q^{2k} \equiv\frac{(aq^3,q^3/a;q^4)_{(n-1)/2}}{(q^2;q^2)_{n-1}}.\label{eq:wei-cc}\end{aligned}$$ It is clear that the polynomials $(1-aq^n)(a-q^n)$ and $(b-q^n)$ are relatively prime. Noting the $q$-congruences $$\begin{aligned} &\frac{(b-q^n)(ab-1-a^2+aq^n)}{(a-b)(1-ab)}\equiv1\pmod{(1-aq^n)(a-q^n)}, \\[5pt] &\qquad\qquad\frac{(1-aq^n)(a-q^n)}{(a-b)(1-ab)}\equiv1\pmod{(b-q^n)}\end{aligned}$$ and employing the Chinese remainder theorem for coprime polynomials, we deduce Theorem \[thm-b\] from \[eq:wei-bb\] and \[eq:wei-cc\]. It is not difficult to see that $$\begin{aligned} (q^2;q^2)_{n-1}&=(q^2, q^{n+1}, q^4, q^{n+3};q^4)_{(n-1)/4} \\[5pt] &=q^{(n-1)(3n-1)/4}(q^2, q^4, q^{2-2n}, q^{4-2n};q^4)_{(n-1)/4}\\[5pt] &\equiv b^{n-1}q^{(1-n^2)/4}(q^{2},q^{4},q^2/b^2,q^4/b^2;q^4)_{(n-1)/4}\pmod{(b-q^n)}, \\[5pt] (aq^3;q^4)_{(n-1)/2}&=(aq^3;q^4)_{(n-1)/4}(aq^{n+2};q^4)_{(n-1)/4} \\[5pt] &\equiv(abq^{3-n};q^4)_{(n-1)/4}(abq^{2};q^4)_{(n-1)/4} \\[5pt] &=(-ab)^{(n-1)/4}q^{-(n-1)^2/8}(abq^2,q^2/ab;q^4)_{(n-1)/4}\pmod{(b-q^n)}, \\[5pt] (q^3/a;q^4)_{(n-1)/2}&=(-b/a)^{(n-1)/4}q^{-(n-1)^2/8}(bq^2/a,aq^2/b;q^4)_{(n-1)/4}\pmod{(b-q^n)}.\end{aligned}$$ Thus, the $q$-supercongruence \[eq:wei-aa\] may be rewritten as follows: Modulo $(1-aq^n)(a-q^n)(b-q^n)$, $$\begin{aligned} &\sum_{k=0}^{(n-1)/2}\frac{(aq,q/a,q/b,-q/b;q^2)_k}{(q^2,q^2,-q^2,q^2/b^2;q^2)_k}q^{2k} \\[5pt] &\quad\equiv \frac{(b-q^n)(ab-1-a^2+aq^n)}{(a-b)(1-ab)}\frac{(b/q)^{(1-n)/2}(q^2,b^2q^2;q^4)_{(n-1)/4}}{(q^4,q^4/b^2;q^4)_{(n-1)/4}} \\[5pt] &\quad\quad+\frac{(1-aq^n)(a-q^n)}{(a-b)(1-ab)}\frac{(b/q)^{(1-n)/2}(abq^2,bq^2/a,aq^2/b,q^2/ab;q^4)_{(n-1)/4}}{(q^2,q^4,q^2/b^2,q^4/b^2;q^4)_{(n-1)/4}}.\end{aligned}$$ Letting $b\to
1$, we arrive at the following formula: Modulo $\Phi_n(q)(1-aq^n)(a-q^n)$, $$\begin{aligned} &\sum_{k=0}^{(n-1)/2}\frac{(aq,q/a;q^2)_k(q^2;q^4)_k}{(q^2;q^2)_k^2(q^4;q^4)_k}q^{2k} \notag\\[5pt] &\:\:\:\equiv q^{(n-1)/2}\frac{(q^2;q^4)_{(n-1)/4}^2}{(q^4;q^4)_{(n-1)/4}^2}+q^{(n-1)/2}\frac{(1-aq^n)(a-q^n)}{(1-a)^2} \notag\\[5pt] &\quad\:\:\times\bigg\{\frac{(q^2;q^4)_{(n-1)/4}^2}{(q^4;q^4)_{(n-1)/4}^2}-\frac{(aq^2,q^2/a;q^4)_{(n-1)/4}^2}{(q^2,q^4;q^4)_{(n-1)/4}^2}\bigg\}. \label{eq:wei-dd}\end{aligned}$$ By L’Hôpital’s rule, we have $$\begin{aligned} &\lim_{a\to1}\frac{(1-aq^n)(a-q^n)}{(1-a)^2}\bigg\{\frac{(q^2;q^4)_{(n-1)/4}^2}{(q^4;q^4)_{(n-1)/4}^2} -\frac{(aq^2,q^2/a;q^4)_{(n-1)/4}^2}{(q^2,q^4;q^4)_{(n-1)/4}^2}\bigg\}\\[5pt] &=2[n]^2\frac{(q^2;q^4)_{(n-1)/4}^2}{(q^4;q^4)_{(n-1)/4}^2}\sum_{i=1}^{(n-1)/4}\frac{q^{4i-2}}{[4i-2]^2}.\end{aligned}$$ Letting $a\to1$ in (\[eq:wei-dd\]) and utilizing the above limit, we complete the proof of Theorem \[thm-a\]. Proof of Proposition \[prop-a\] =============================== Via the congruence due to Wang and Pan [@Wang Page 6]: $$\begin{aligned} H_{(p-1)/4}^{(2)}\equiv\frac{\Gamma_p^{''}(1/4)}{\Gamma_p(1/4)}-\bigg\{\frac{\Gamma_p^{'}(1/4)}{\Gamma_p(1/4)}\bigg\}^2\pmod{p},\end{aligned}$$ where $\Gamma_p^{'}(x)$ and $\Gamma_p^{''}(x)$ are the first and second derivatives of $\Gamma_p(x)$, respectively, we obtain $$\begin{aligned} \label{eq:wei-b} 1-\frac{p^{2}}{8}H_{(p-1)/4}^{(2)}\equiv1-\frac{p^{2}}{8}\frac{\Gamma_p^{''}(1/4)}{\Gamma_p(1/4)}+ \frac{p^{2}}{8}\bigg\{\frac{\Gamma_p^{'}(1/4)}{\Gamma_p(1/4)}\bigg\}^2\pmod{p^3}.\end{aligned}$$ Using the properties of the $p$-adic Gamma function, we get $$\begin{aligned} \frac{(1/2)_{(p-1)/4}^2}{\big((p-1)/4\big)!^2}&=\bigg\{\frac{\Gamma_p((1+p)/4)\Gamma_p(1)}{\Gamma_p(1/2)\Gamma_p((3+p)/4)}\bigg\}^2 \notag\\[5pt] &=\bigg\{\frac{\Gamma_p((1+p)/4)\Gamma_p((1-p)/4)}{\Gamma_p(1/2)}\bigg\}^2 \notag\\[5pt] 
&\equiv-\bigg\{\Gamma_p(1/4)+\Gamma_p^{'}(1/4)\frac{p}{4}+\Gamma_p^{''}(1/4)\frac{p^2}{2\times4^2}\bigg\}^2 \notag\\[5pt] &\quad\times\bigg\{\Gamma_p(1/4)-\Gamma_p^{'}(1/4)\frac{p}{4}+\Gamma_p^{''}(1/4)\frac{p^2}{2\times4^2}\bigg\}^2\pmod{p^3}. \label{eq:wei-c}\end{aligned}$$ The combination of (\[eq:wei-b\]) and (\[eq:wei-c\]) produces $$\begin{aligned} &\frac{(1/2)_{(p-1)/4}^2}{\big((p-1)/4\big)!^2}\bigg\{1-\frac{p^{2}}{8}H_{(p-1)/4}^{(2)}\bigg\} \\[5pt] &\quad\equiv-\bigg\{\Gamma_p(1/4)+\Gamma_p^{'}(1/4)\frac{p}{4}+\Gamma_p^{''}(1/4)\frac{p^2}{2\times4^2}\bigg\}^2 \notag\\[5pt] &\qquad\times\bigg\{\Gamma_p(1/4)-\Gamma_p^{'}(1/4)\frac{p}{4}+\Gamma_p^{''}(1/4)\frac{p^2}{2\times4^2}\bigg\}^2 \\[5pt] &\qquad\times\bigg\{1-\frac{p^{2}}{8}\frac{\Gamma_p^{''}(1/4)}{\Gamma_p(1/4)}+ \frac{p^{2}}{8}\bigg\{\frac{\Gamma_p^{'}(1/4)}{\Gamma_p(1/4)}\bigg\}^2\bigg\} \\[5pt] &\quad\equiv -\Gamma_p(1/4)^4 \pmod{p^3}.\end{aligned}$$ [99]{} G.E. Andrews, On $q$-analogues of the Watson and Whipple summations, SIAM J. Math. Anal. 7 (1976), 332–336. G. Gasper, M. Rahman, Basic Hypergeometric Series (2nd edition), Cambridge University Press, Cambridge, 2004. V.J.W. Guo, Proof of some $q$-supercongruences modulo the fourth power of a cyclotomic polynomial, Results Math. 75 (2020), Art. 77. V.J.W. Guo, $q$-Analogues of Dwork-type supercongruences, J. Math. Anal. Appl. 487 (2020), Art. 124022. V.J.W. Guo, $q$-Analogues of three Ramanujan-type formulas for $1/\pi$, Ramanujan J. 52 (2020), 123–132. V.J.W. Guo, A $q$-analogue of the (A.2) supercongruence of Van Hamme for primes $p\equiv 1\pmod 4$, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 114 (2020), Art. 123. V.J.W. Guo, A further $q$-analogue of Van Hamme’s (H.2) supercongruence for primes $p\equiv3\pmod{4}$, Int. J. Number Theory, to appear. V.J.W. Guo and M.J. Schlosser, A family of $q$-hypergeometric congruences modulo the fourth power of a cyclotomic polynomial, Israel J. Math. (to appear). V.J.W. Guo, W. 
Zudilin, A $q$-microscope for supercongruences, Adv. Math. 346 (2019), 329–358. V.J.W. Guo, W. Zudilin, On a $q$-deformation of modular forms, J. Math. Anal. Appl. 475 (2019), 1636–1646. V.K. Jain, Some transformations of basic hypergeometric functions II, SIAM J. Math. Anal. 12 (1981), 957–961. J.-C. Liu, F. Petrov, Congruences on sums of $q$-binomial coefficients, Adv. Appl. Math. 116 (2020), Art. 102003. L. Long, R. Ramakrishna, Some supercongruences occurring in truncated hypergeometric series, Adv. Math. 290 (2016), 773–808. H.-X. Ni, H. Pan, On a conjectured $q$-congruence of Guo and Zeng, Int. J. Number Theory 14 (2018), 1699–1707. Z.-W. Sun, A new series for $\pi^3$ and related congruences, Internat. J. Math. 26 (2015), no. 8, 1550055. R. Tauraso, $q$-Analogs of some congruences involving Catalan numbers, Adv. Appl. Math. 48 (2009), 603–614. L. Van Hamme, Some conjectures concerning partial sums of generalized hypergeometric series, in: p-Adic Functional Analysis (Nijmegen, 1996), Lecture Notes in Pure and Appl. Math. 192, Dekker, New York, 1997, pp. 223–236. C. Wang, H. Pan, Supercongruences concerning truncated hypergeometric series, preprint, 2018, arXiv:1806.02735v2. X. Wang, M. Yue, Some $q$-supercongruences from Watson’s $_8\phi_7$ transformation formula, Results Math. 75 (2020), Art. 71. W. Zudilin, Congruences for $q$-binomial coefficients, Ann. Combin. 23 (2019), 1123–1135. [^1]: This work is supported by the National Natural Science Foundation of China (No. 11661032).
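The $q$-Pochhammer factorizations used in the proof, as well as the $b=q^n$ specialization underlying (\[eq:wei-cc\]), can be spot-checked with exact rational arithmetic. The following sketch is ours (not part of the original derivation) and uses an arbitrary rational test point:

```python
from fractions import Fraction

def qpoch(x, q, k):
    """q-Pochhammer symbol (x; q)_k = prod_{j=0}^{k-1} (1 - x q^j), exact for Fractions."""
    out = Fraction(1)
    for j in range(k):
        out *= 1 - x * q**j
    return out

def lhs_sum(a, b, q, n):
    """Truncated sum on the left-hand side of (eq:wei-cc)."""
    total = Fraction(0)
    for k in range((n - 1) // 2 + 1):
        num = (qpoch(a * q, q**2, k) * qpoch(q / a, q**2, k)
               * qpoch(q / b, q**2, k) * qpoch(-q / b, q**2, k))
        den = (qpoch(q**2, q**2, k)**2 * qpoch(-q**2, q**2, k)
               * qpoch(q**2 / b**2, q**2, k))
        total += num / den * q**(2 * k)
    return total

# Arbitrary rational test point (any q, a avoiding zero denominators would do)
q, a = Fraction(1, 3), Fraction(2, 5)
```

Checking `(q^2;q^2)_{n-1} == (q^2, q^{n+1}, q^4, q^{n+3}; q^4)_{(n-1)/4}` for $n\equiv1\pmod 4$, and equality of the two sides of (\[eq:wei-cc\]) at $b=q^n$, then reduces to exact `Fraction` comparisons.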
--- abstract: | Hyperfine induced $2s2p~^3P_0 \rightarrow 2s^2~^1S_0$ transition rates in an external magnetic field for Be-like $^{47}$Ti were calculated based on the multiconfiguration Dirac-Fock method. It was found that the transition probability is dependent on the magnetic quantum number $M_F$ of the excited state, even in the weak field. The present investigation clarified that the difference of the hyperfine induced transition rate of Be-like Ti ions between experiment \[Schippers [*et al.*]{}, Phys Rev Lett [**98**]{}, (2007) 033001(4)\] and theory does not result from the influence of external magnetic field. PACS: 31.30.Gs, 32.60.+i Keywords: Hyperfine induced transition; Zeeman effect; MCDF method. author: - | Jiguang Li $^{1}$ [^1], Chenzhong Dong $^{1, 2}$ [^2] , Per Jönsson $^{3}$ and Gediminas Gaigalas $^{4, 5}$\ [$^1$ College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou 730070, China]{}\ [$^2$ Key Laboratory of Atomic and Molecular Physics & Functional Materials of Gansu Province, Lanzhou 730070, China]{}\ [$^3$ Center for Technology Studies, Malmö University, Malmö S-20506, Sweden]{}\ [$^4$ Department of Physics, Vilnius Pedagogical University, Studentu 39, Vilnius LT-08106, Lithuania]{}\ [$^5$ Institute of Theoretical Physics and Astronomy, A. Gostautǒ 12, Vilnius LT-01108, Lithuania]{} title: '$M_F$-dependent Hyperfine Induced Transition Rates in an External Magnetic Field for Be-like $^{47}$Ti$^{18+}$' --- Introduction ============ The hyperfine induced transition (HIT) rate of the $2s2p~^3P_0$ level for Be-like $^{47}$Ti ions has been measured with high accuracy by means of resonant electron-ion recombination in the heavy-ion storage-ring TSR of the Max-Planck Institute for Nuclear Physics, Heidelberg, Germany [@Schippers]. However, the measured transition rate $A_{HIT}=0.56(3)$ s$^{-1}$ differs from all present theoretical results $A_{HIT} \thickapprox 0.67$ s$^{-1}$ [@Cheng; @Andersson; @Li] by about 20%. 
In the theoretical calculations the major part of the electron correlation, which always causes the dominant uncertainty, has been taken into account very carefully. It is therefore desirable to look for other possible reasons for the difference. In this letter, we focus on the influence of the magnetic field present in the heavy-ion storage-ring on the HIT rate. The HIT rate in an external magnetic field depends on the magnetic quantum number $M_F$ of the excited state, even in a relatively weak field. This effect, combined with the non-statistical distribution of the magnetic sublevel population of the excited level, might lead to the difference in transition rate mentioned above. Theory ====== In the presence of the magnetic field, the Hamiltonian of an atom with non-zero nuclear spin $I$ is $$\label{H} H = H_{fs} + H_{hfs} + H_{m},$$ where $H_{fs}$ is the relativistic fine-structure Hamiltonian that includes the Breit interaction. $H_{hfs}$ is the hyperfine interaction Hamiltonian, which can be written as a multipole expansion $$H_{hfs} = \sum_{k \ge 1}{\bf T}^{(k)} \cdot {\bf M}^{(k)},$$ where ${\bf T}^{(k)}$ and ${\bf M}^{(k)}$ are spherical tensor operators in electronic and nuclear space, respectively [@Schwartz]. $H_{m}$ is the interaction Hamiltonian with the external homogeneous magnetic field [**B**]{}, $$H_{m} = ({\bf N}^{(1)} + \Delta {\bf N}^{(1)}) \cdot {\bf B},$$ where ${\bf N}^{(1)}$ is a first-order tensor with a form similar to ${\bf T}^{(1)}$, and $\Delta {\bf N}^{(1)}$ is the so-called Schwinger QED correction [@Cheng2]. We choose the magnetic field direction as the $z$-direction, in which case only $M_{F}$ remains a good quantum number. The wavefunction of the atomic system can thus be written as an expansion $$\label{AW} |\Upsilon \widetilde{\Gamma} I M_F \rangle = \sum_{\Gamma J F} d_{\Gamma J F} |\Upsilon \Gamma I J F M_F \rangle.$$ The total angular momentum $\textbf{F}$ is coupled from the nuclear $\textbf{I}$ and electronic $\textbf{J}$ angular momenta. 
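The expansion coefficients in Eq. (\[AW\]) come from diagonalizing the full interaction matrix (see below). As a minimal illustration of this kind of mixing — a toy two-level model with purely hypothetical energies and coupling, not the actual HFSZEEMAN computation — the admixture of a perturbing level can be obtained in closed form:

```python
from math import atan, cos, sin

def two_level_mixing(E0, E1, V):
    """Exactly diagonalize the symmetric 2x2 matrix [[E0, V], [V, E1]] (requires E0 != E1).
    Returns (eigenvalue, d0, d1) for the eigenstate dominated by the E0 basis state."""
    theta = 0.5 * atan(2.0 * V / (E0 - E1))  # Jacobi rotation angle, |theta| < pi/4
    d0, d1 = cos(theta), sin(theta)
    lam = E0 * d0**2 + 2.0 * V * d0 * d1 + E1 * d1**2
    return lam, d0, d1

# Hypothetical numbers: a large level separation and a weak off-diagonal
# (hyperfine + Zeeman) coupling; for |V| << |E0 - E1| the admixture is d1 ~ V/(E0 - E1).
lam, d0, d1 = two_level_mixing(0.0, 1.0e3, 0.1)
```

In the weak-coupling regime the exact diagonalization reproduces the first-order perturbative admixture, which is the limit relevant for the weak fields considered here.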
The $\Upsilon$ and $\Gamma$ are the other quantum numbers labeling the nuclear and electronic states, respectively. The coefficients $d_{\Gamma J F}$ in Eq. (\[AW\]) are obtained by solving the eigenvalue equation using the HFSZEEMAN package [@Andersson2] $${\bf H d} = E \bf{d},$$ where ${\bf H}$ is the interaction matrix with elements $$\label{HME} H_{\Gamma J F, \Gamma' J' F'} = \langle \Upsilon \Gamma I J F M_F | H_{fs} + H_{hfs} + H_{m}| \Upsilon \Gamma' I J' F' M_F \rangle.$$ The readers are referred to Refs. [@Cheng2; @Andersson2] for a detailed derivation of the different matrix elements. For the present problem, the wavefunction of the $^3P_0$ state can be written as $$\label{WF-1} |``2s2p~^3P_0 ~ I ~ M_F " \rangle = d_0 | 2s2p~^3P_0 ~ I ~ F(=I) ~ M_F \rangle + \sum_{S(=1,3); F'} d_{S;F'} |2s2p~^{S}P_1 ~ I ~ F' ~ M_F \rangle.$$ The quotation marks in the left-hand wave function emphasize the fact that the notation is just a label indicating the dominant character of the eigenvector. Remaining interactions between $2s2p~^3P_0$ and higher members of the Rydberg series can be neglected due to large energy separations and comparatively weak hyperfine couplings [@Brage]. Furthermore, those perturbative states with different total angular momentum $\textbf{F}$ can be neglected because of the relatively weak magnetic interaction. As a result, Eq. (\[WF-1\]) is simplified to $$\label{WF-2} |``2s2p~^3P_0 ~ I ~ M_F " \rangle = d_0 | 2s2p~^3P_0 ~ I ~ F(=I) ~ M_F \rangle + \sum_{S=1,3} d_S |2s2p~^{S}P_1 ~ I ~ F(=I) ~ M_F \rangle.$$ Similarly, the wavefunction of the ground state is approximately written as $$\label{WF-3} |``2s^2~^1S_0 ~ I ~ M_F " \rangle = | 2s^2~^1S_0 ~ I ~ F(=I) ~ M_F \rangle,$$ where all perturbative states were neglected for the same reasons as mentioned above. The one-photon $2s2p~^3P_0 \rightarrow 2s^2~^1S_0$ E1 transition becomes allowed via mixing with the perturbative states of $2s2p~^3P_1$ and $2s2p~^1P_1$ (see Eq. 
(\[WF-2\])) induced by both the off-diagonal hyperfine interaction and the interaction with the magnetic field. The decay rate $a(M^e_F)_{HIT}$ from the excited state $|``2s2p~^3P_0 ~ I ~ M^e_F " \rangle$ to the ground state $|``2s^2~^1S_0 ~ I ~ M^g_F " \rangle$ in s$^{-1}$ is given by $$\label{MHIT-1} a(M^e_F)_{HIT} = \frac{2.02613 \times 10^{18}} {\lambda^3} \sum_{q} |\langle ``2s^2~^1S_0 ~ I ~ M^g_F " | P^{(1)}_{q} | ``2s2p~^3P_0 ~ I ~ M^e_F " \rangle |^2.$$ Substituting Eqs. (\[WF-2\]) and (\[WF-3\]) into the above formula, we obtain $$\begin{aligned} \label{MHIT-2} a(M^e_F)_{HIT} &= \frac{2.02613 \times 10^{18}} {\lambda^3} \sum_{q} |\sum_{S} d_{S} \sqrt{2F^g(=I)+1}\sqrt{2F^e(=I)+1} \nonumber \\ & \times \left(\begin{array}{ccc} F^g(=I) & 1 & F^e(=I) \\ -M^g_{F(=I)} & q & M^e_{F^e(=I)} \end{array}\right ) \left \{ \begin{array}{ccc} J^g(=0) & F^g(=I) & I \\ F^e(=I)& J^e(=1) & 1 \end{array}\right \} \langle 2s^2~^1S_0 || P^{(1)} || 2s2p~^{S}P_1 \rangle |^2.\end{aligned}$$ Applying standard tensor algebra, Eq. (\[MHIT-2\]) is further simplified to $$\label{MHIT-3} a(M^e_F)_{HIT} = \frac{2.02613 \times 10^{18}} {3\lambda^3} (2I+1) \sum_{q} | \sum_{S} d_{S} \left ( \begin{array}{ccc} I & 1 & I \\ -M^g_I & q & M^e_I \end{array}\right ) \langle 2s^2~^1S_0 || P^{(1)} || 2s2p~^{S}P_1 \rangle |^2,$$ where $\lambda$ is the wavelength in [Å]{} for the transition and $\langle 2s^2~^1S_0 || P^{(1)} || 2s2p~^{S}P_1 \rangle$ the reduced electronic transition matrix element in a.u. From Eq. 
(\[MHIT-3\]) we can obtain the Einstein spontaneous emission transition probability [@Cowan] $$\begin{aligned} \label{TMHIT} A(M^e_F)_{HIT} &= \sum_{M^g_F} a(M^e_F)_{HIT} \nonumber \\ &= \frac{2.02613 \times 10^{18}} {3 \lambda^3} | \sum_{S} d_{S} \langle 2s^2~^1S_0 || P^{(1)} || 2s2p~^{S}P_1 \rangle|^2.\end{aligned}$$ It should be noticed that in the present weak-field approximation, i.e., neglecting those perturbative states with a different total angular quantum number $F$, the formula for the transition rate (see Eq. \[TMHIT\]) is similar to the one where the transition is induced by the hyperfine interaction alone [@Cheng; @Andersson]. However, a significant difference exists in the mixing coefficients $d_{S}$, by virtue of incorporating the magnetic interaction into the Hamiltonian in the present work. The electronic wavefunctions are computed using the GRASP2K program package [@grasp2K]. Here the wavefunction for a state labeled $\gamma J$ is approximated by an expansion over $jj$-coupled configuration state functions (CSFs) $$|\gamma J \rangle = \sum_i c_i \Phi(\gamma_i J).$$ In the multi-configuration self-consistent field (SCF) procedure both the radial parts of the orbitals and the expansion coefficients $c_i$ are optimized to self-consistency. In the present work a Dirac-Coulomb Hamiltonian is used, and the nucleus is described by an extended Fermi charge distribution [@Parpia]. The multi-configuration SCF calculations are followed by relativistic CI calculations including the Breit interaction and leading QED effects. In addition, a biorthogonal transformation technique introduced by Malmqvist [@Malmqvist; @Biotra] is used to compute reduced transition matrix elements where the even and odd parity wave functions are built from independently optimized orbital sets. Results and discussion ====================== As a starting point, SCF calculations were done for the configurations belonging to the even and odd complexes of $n=2$, respectively. 
Valence correlation was taken into account by including CSFs obtained by single (S) and double (D) excitations from the even and odd reference configurations to active sets of orbitals. The active sets were systematically increased up to $n \le 5$. The SCF calculations were followed by CI calculations in which core-valence and core-core correlations and the Breit interaction and QED effects were incorporated. Based on this correlation model, we calculated the hyperfine induced $2s2p~^3P_0 \rightarrow 2s^2~^1S_0$ E1 transition rate for Be-like $^{47}$Ti ions in the absence of the magnetic field to be $A_{HIT} = 0.66$ s$^{-1}$, where the experimental wavelength 346.99 Å [@NIST] was used to re-scale the rate.[^3] The value is in good agreement with the other theoretical results: $A_{HIT} = 0.67$ s$^{-1}$ by Cheng *et al.* [@Cheng] and $A_{HIT} = 0.677$ s$^{-1}$ by Andersson *et al.* [@Andersson]. Recent theoretical calculations are all in disagreement with the experimental measurement $A=0.56(3)$ s$^{-1}$ [@Schippers] by about 20%. It is hypothesized that the discrepancy results from the effect of the magnetic field present in the storage ring. In fact, the magnetic field effect has already been noticed and discussed in a previous experiment measuring the lifetime of the hyperfine state of the metastable level $5d~^4D_{7/2}$ of Xe$^+$ using the ion storage ring CRYRING at the Manne Siegbahn Laboratory (Stockholm) [@Mannervik]. Returning to the present problem, the experiment was conducted in the heavy-ion storage-ring TSR, where the rigidity of the ion beam is given as $B \times \rho = 0.8533$ T m [@Schippers] and the bending radius of the storage-ring dipole magnets is $\rho = 1.15$ m [@Baumann]. As a result, the magnetic field in the experiment was 0.742 T. 
Considering the actual experimental environment, we calculated the hyperfine induced $2s2p~^3P_0 \rightarrow 2s^2~^1S_0$ E1 transition rate of the Be-like $^{47}$Ti ion in external magnetic fields of B=0.5 T, B=0.742 T and B=1 T, respectively. With the assistance of Eqs. (\[MHIT-3\]) and (\[TMHIT\]), we obtained the transition rate $a(M^e_F)_{HIT}$ from the excited Zeeman state to the ground Zeeman state, the Einstein transition probability $A(M^e_F)_{HIT}$ of the excited state, and the corresponding lifetime $\tau$. Computational results are displayed in Table 1. As can be seen from this table, the transition rates $A(M^e_F)_{HIT}$ for each of the individual excited states $``2s2p~^3P_0 ~ I ~ M^e_F "$ are clearly different, because the mixing coefficients $d_S$ in Eq. (\[TMHIT\]) depend on the magnetic quantum number $M^e_F$ of the excited state. As can also be seen from Table 1, the lifetime of the $^3P_0$ level is nevertheless not sensitive to the sublevel-specific lifetimes if the magnetic sublevels are populated statistically (the averaged lifetimes are $\tau = \sum_{M^e_F} \tau(M^e_F)/ (2I+1) = 1.52$ s, $1.52$ s and $1.53$ s in external magnetic fields of B=0.5 T, 0.742 T and 1 T, respectively). In this case, the zero-field lifetime can be recovered within the experimental error, as was done in Ref. [@Schippers], by fitting a single exponential decay curve instead of 6 exponential decay curves with slightly different decay constants. By contrast, in the experiment measuring the HIT rate of the $2s2p~^3P_0$ level of the Be-like Ti ion, the level concerned was produced through beam-foil excitation [@Schippers2]. As is well known, the magnetic-sublevel cross sections for ion-atom collisions are different [@Mies; @Stoehlker], and the magnetic sublevel population is in general not statistically distributed. Combining this fact with the $M_F$-dependent HIT rate in an external field, the transition probability of the $^3P_0$ level cannot be obtained by a statistical average over all magnetic sublevels. 
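As a quick arithmetic cross-check of the statistically averaged lifetimes quoted above, averaging the six sublevel lifetimes $\tau(M^e_F)$ read off Table 1 reproduces the stated 1.52 s, 1.52 s and 1.53 s:

```python
# Sublevel lifetimes tau(M^e_F) in seconds, read off Table 1
# (ordered M^e_F = 5/2, 3/2, 1/2, -1/2, -3/2, -5/2), keyed by B in Tesla
tau = {
    0.5:   [1.64, 1.59, 1.54, 1.49, 1.44, 1.40],
    0.742: [1.71, 1.62, 1.55, 1.48, 1.41, 1.35],
    1.0:   [1.78, 1.67, 1.56, 1.47, 1.38, 1.30],
}
# Statistical average over the 2I+1 = 6 sublevels
avg = {B: sum(t) / len(t) for B, t in tau.items()}
```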
However, we also notice that an external magnetic field can lower the transition rate only for those magnetic sublevels with $M_{F} \ge 0$. In other words, only if these specific magnetic sublevels with $M_{F} \ge 0$ were populated would it be possible to explain, or at least reduce, the discrepancy between the measured and theoretical HIT rates for Be-like $^{47}$Ti. In fact, such an extreme orientation of the stored ions seems improbable by means of beam-foil excitation. Moreover, the experimental heavy-ion storage-ring was only partly covered with dipole magnets (this fraction amounts to 13%) [@Baumann]. This further reduces the influence of the magnetic field on the lifetime of the level. Therefore, we still cannot clarify the disagreement between the experimental measurement and the theoretical calculations at present, even though the influence of an external magnetic field was taken into account. Summary ======= To sum up, we have calculated the hyperfine induced $2s2p~^3P_0 \rightarrow 2s^2~^1S_0$ E1 transition rate in an external magnetic field for each of the magnetic sub-hyperfine levels of $^{47}$Ti$^{18+}$ ions based on the multiconfiguration Dirac-Fock method. It was found that the transition rate is dependent on the magnetic quantum number $M^e_F$ of the excited state, even in relatively weak magnetic fields. Even after considering the influence of an external magnetic field, we could not explain the difference in the HIT rate of the Be-like Ti ion between experiment and theory. Acknowledgment {#acknowledgment .unnumbered} ============== We would like to thank Prof. Stefan Schippers and Prof. Jianguo Wang for helpful discussions. The referees’ very valuable suggestions are also gratefully acknowledged. This work is supported by the National Natural Science Foundation of China (Grant Nos. 10774122 and 10876028), the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20070736001) and the Foundation of Northwest Normal University (NWNU-KJCXGC-03-21). 
Financial support by the Swedish Research Council is gratefully acknowledged. [99]{} S. Schippers *et al.*, Phys. Rev. Lett. [**98**]{} (2007) 033001(4). K. T. Cheng, M. H. Chen and W. R. Johnson, Phys. Rev. A [**77**]{} (2008) 052504(14). M. Andersson, Y. Zou, R. Hutton and T. Brage, Phys. Rev. A [**79**]{} (2009) 032501(15). J. G. Li and C. Z. Dong, Plasma Sci. Tech. **12** (2010) 364-368. C. Schwartz, Phys. Rev. **97** (1955) 380-395. K. T. Cheng and W. J. Childs, Phys. Rev. A **31** (1985) 2775-2784. M. Andersson and P. Jönsson, Comput. Phys. Commun. **178** (2008) 156-170. T. Brage, P. G. Judge, A. Aboussaïd, M. R. Godefroid, P. Jönsson, A. Ynnerman, C. Froese Fischer and D. S. Leckrone, ApJ [**500**]{} (1998) 507-521. R. D. Cowan, [*The Theory of Atomic Structure and Spectra*]{} (University of California Press, Berkeley, 1981). P. Jönsson, X. He, C. Froese Fischer and I.P. Grant, Comput. Phys. Commun. [**177**]{} (2007) 597-622. F. A. Parpia and A. K. Mohanty, Phys. Rev. A [**46**]{} (1992) 3735-3745. P. [Å]{}. Malmqvist, Int. J. Quant. Chem. [**30**]{} (1986) 479-494. J. Olsen, M. Godefroid, P. Jönsson, P.[Å]{}. Malmqvist and C. Froese Fischer, Phys. Rev. E [**52**]{} (1995) 4499-4508. Y. Ralchenko, A. E. Kramida, J. Reader and NIST ASD Team (2008). NIST Atomic Spectra Database (v 3.1.5) \[online\]. Available: `http://physics.nist.gov/asd3` \[2008, June 26\] National Institute of Standards and Technology, Gaithersburg, MD. N. J. Stone, At. Data Nucl. Data Tables [**90**]{} (2005) 75-176. S. Mannervik *et al.*, Phys. Rev. Lett. **76** (1996) 3675-3678. P. Baumann *et al.*, Nucl. Instrum. Methods A **268** (1988) 531-537. S. Schippers, (private communication). F. H. Mies, Phys. Rev. A **7** (1973) 942-957, 957-967. Th. Stoehlker *et al.*, Phys. Rev. A **57** (1998) 845-854. 
  M$^e_F$   M$^g_F$   $\Delta$M  |  B=0.5 T                 |  B=0.742 T               |  B=1 T
                                 |  $a$     $A$    $\tau$   |  $a$     $A$    $\tau$   |  $a$     $A$    $\tau$
  --------------------------------------------------------------------------------------------------------------
   5/2       5/2        0        |  0.44    0.61   1.64     |  0.42    0.59   1.71     |  0.40    0.56   1.78
             3/2       -1        |  0.17                    |  0.17                    |  0.16
   3/2       5/2        1        |  0.18    0.63   1.59     |  0.18    0.62   1.62     |  0.17    0.60   1.67
             3/2        0        |  0.16                    |  0.16                    |  0.15
             1/2       -1        |  0.29                    |  0.28                    |  0.27
   1/2       3/2        1        |  0.30    0.65   1.54     |  0.30    0.65   1.55     |  0.29    0.64   1.56
             1/2        0        |  0.02                    |  0.02                    |  0.02
            -1/2       -1        |  0.33                    |  0.33                    |  0.33
  -1/2       1/2        1        |  0.35    0.67   1.49     |  0.35    0.68   1.48     |  0.35    0.68   1.47
            -1/2        0        |  0.02                    |  0.02                    |  0.02
            -3/2       -1        |  0.31                    |  0.31                    |  0.31
  -3/2      -1/2        1        |  0.32    0.69   1.44     |  0.32    0.71   1.41     |  0.33    0.73   1.38
            -3/2        0        |  0.18                    |  0.18                    |  0.19
            -5/2       -1        |  0.20                    |  0.20                    |  0.21
  -5/2      -3/2        1        |  0.20    0.71   1.40     |  0.21    0.74   1.35     |  0.22    0.77   1.30
            -5/2        0        |  0.51                    |  0.53                    |  0.55
  --------------------------------------------------------------------------------------------------------------

  : Hyperfine induced $2s2p~^3P_0 \rightarrow 2s^2~^1S_0$ E1 transition rates in the presence of magnetic fields B=0.5 T, B=0.742 T and B=1 T for the Be-like $^{47}$Ti ion. $a$ (s$^{-1}$) represents the transition probability from the excited state $``2s2p~^3P_0 ~ I ~ M^e_F "$ to the ground state $``2s^2~^1S_0 ~ I ~ M^g_F "$, $A$ (s$^{-1}$) is the Einstein transition probability from the excited state $``2s2p~^3P_0 ~ I ~ M^e_F "$, and $\tau$ (s) is the lifetime of the excited state $``2s2p~^3P_0 ~ I ~ M^e_F "$. The experimental wavelength ($\lambda$) 346.99 Å [@NIST] was used in these calculations, where the influence of hyperfine interaction and magnetic field was neglected. 
[^2]: Corresponding author: [email protected] [^3]: The nucleus of $^{47}$Ti has nuclear spin $I=5/2$, nuclear dipole moment $\mu = -0.78848$ in $\mu_N$ and electric quadrupole moment $Q=0.3$ in barns [@Stone].
--- abstract: 'We identify 4 unusually bright (H$_{160,AB}<25.5$) galaxies from HST and Spitzer CANDELS data with probable redshifts *z*$\sim$7-9. These identifications include the brightest-known galaxies to date at $z\gtrsim7.5$. As $Y$-band observations are not available over the full CANDELS program to perform a standard Lyman-break selection of $z>7$ galaxies, we employ an alternate strategy using deep Spitzer/IRAC data. We identify z$\sim$7.1-9.1 galaxies by selecting *z*$\gtrsim$6 galaxies from the HST CANDELS data that show quite red IRAC \[3.6\]$-$\[4.5\] colors, indicating strong \[OIII\]+H$\beta$ lines in the 4.5$\mu$m band. This selection strategy was validated using a modest sample for which we have deep Y-band coverage, and subsequently used to select the brightest $z\geq7$ sources. Applying the IRAC criteria to all HST-selected optical-dropout galaxies over the full $\sim$900 arcmin$^{2}$ of the CANDELS survey revealed four unusually bright $z\sim7.1$, 7.6, 7.9 and 8.6 candidates. The median \[3.6\]$-$\[4.5\] color of our selected $z\sim7.1$-9.1 sample is consistent with rest-frame \[OIII\]+H$\beta$ EWs of $\sim$1500Å$\,$ in the \[4.5\] band. Keck/MOSFIRE spectroscopy has been independently reported for two of our selected sources, showing Ly$\alpha$ at redshifts of 7.7302$\pm$0.0006 and 8.683$_{-0.004}^{+0.001}$, respectively. We present similar Keck/MOSFIRE spectroscopy for a third selected galaxy with a probable 4.7$\sigma$ Ly$\alpha$ line at $z_{spec}=$7.4770$\pm$0.0008. All three have H$_{160}$-band magnitudes of $\sim$25 mag and are $\sim$0.5 mag more luminous ($M_{1600}\sim-22.0$) than any previously discovered *z*$\sim$8 galaxy, with important implications for the UV LF. Our 3 brightest, highest redshift $z>7$ galaxies all lie within the CANDELS EGS field, providing a dramatic illustration of the potential impact of field-to-field variance.' author: - 'G. W. Roberts-Borsani, R. J. Bouwens, P. A. Oesch, I. Labbe, R. Smit, G. D. 
Illingworth, P. van Dokkum, B. Holden, V. Gonzalez, M. Stefanon, B. Holwerda, S. Wilkins' title: '$z\gtrsim7$ Galaxies with Red Spitzer/IRAC \[3.6\]$-$\[4.5\] colors in the full CANDELS data set: The brightest-known galaxies at $z\sim7$-9 and a probable spectroscopic confirmation at $z=7.48$' --- Introduction {#sec:intro} ============ The first galaxies are believed to have formed within the first 300-400 Myr of the Universe and great strides have been made towards identifying objects within this era. Since the installation of the Wide Field Camera 3 (WFC3) instrument on the Hubble Space Telescope (HST), an increasing number of candidates have been identified by means of their photometric properties, with $\gtrsim$700 probable galaxies identified at $z\sim$7-8 (@bouwens15: see also @mclure2013; @schenker2013; @lorenzoni2013; @schmidt2014; @bradley14; @mason2015; @finkelstein15; @atek15) and another 10-15 candidates identified even further out at $z\sim$9-11 (e.g., @zheng12; @ellis13; @oesch14a [@oesch14b]; @bouwens15; @zitrin14; @zheng14; @ishigaki15; @mcleod14). One of the most interesting questions to investigate with these large samples is the build-up and evolution of galaxies. While these issues have long been explored in the context of fainter galaxies through the evolution of the $UV$ LF, less progress has been made in the study of the most luminous galaxies due to the large volumes that must be probed to effectively quantify their evolution. The entire enterprise of finding especially bright galaxies at $z\geq7$ has been limited by the availability of sufficiently deep, multi-wavelength near-infrared data over wide areas of the sky. The most noteworthy such data sets are the UKIDSS UDS program (Lawrence et al. 2007), the UltraVISTA program (McCracken et al. 2012), the 902-orbit CANDELS program from the Hubble Space Telescope (Grogin et al. 2011; Koekemoer et al. 2011), the BoRG/HIPPIES pure-parallel data set (Trenti et al. 2011; Yan et al. 
2011; Bradley et al. 2012; Schmidt et al. 2014; Trenti 2014), and the ZFOURGE data set (Tilvi et al. 2013; I. Labb[é]{} et al. 2016, in prep). Of these surveys, arguably the program with the best prospects for probing the bright end of the $z>7$ population would be the wide-area CANDELS program.[^1] The challenge with CANDELS has been that it is only covered with particularly deep near-infrared observations from 1.2$\mu$m to 1.6$\mu$m, but lacks HST-depth $Y$-band observations at 1.05$\mu$m over the majority of the area. Deep observations at 1.05$\mu$m are needed for the determination of photometric redshifts for galaxies in the redshift range $z\sim6.3$ to $z\sim8.5$. While this can be partially compensated for by the availability of moderate-depth $1.05\mu$m observations from various ground-based programs over the CANDELS program, e.g., HUGS [@fontana14], UltraVISTA [@mccracken12], and ZFOURGE (I. Labb[é]{} et al. 2016, *in prep*), such observations are not available over the entire program, making it difficult to consider a search for bright $z>7$ galaxies over the full area. Fortunately, there appears to be one attractive, alternate means for making use of the full CANDELS area to search for bright $z>7$ galaxies. This is to exploit the availability of uniformly deep Spitzer/IRAC observations over the full area (e.g., Ashby et al. 2013) and the redshift information present in the \[3.6\]$-$\[4.5\] colors of $z\sim5$-8 galaxies. As demonstrated by many authors (e.g., @labbe13; @smit14a [@smit14b]; @bowler14; @laporte14 [@laporte15]; @huang16), the \[3.6\]$-$\[4.5\] colors appear to depend on redshift in a particularly well-defined way, a dependence which appears to arise from very strong nebular emission lines, such as H$\alpha$ and \[OIII\]$\lambda$5007 Å, which pass through the IRAC bands at particular redshifts. 
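This redshift dependence is easy to reproduce with simple bookkeeping of where the redshifted \[OIII\]$\lambda$5007 and H$\beta$ lines land relative to the two IRAC bandpasses. The band edges below are approximate values we assume purely for illustration, not the exact instrument response limits:

```python
# Rest-frame line wavelengths (Angstroms) and assumed approximate IRAC band edges
OIII_5007, H_BETA = 5007.0, 4861.0
CH1 = (3.13e4, 3.96e4)   # [3.6] band, Angstroms (assumed edges)
CH2 = (3.92e4, 5.06e4)   # [4.5] band, Angstroms (assumed edges)

def bands_with_line(lam_rest, z):
    """Return which IRAC band(s) contain a line of rest wavelength lam_rest at redshift z."""
    lam_obs = lam_rest * (1.0 + z)
    hit = []
    if CH1[0] <= lam_obs <= CH1[1]:
        hit.append('[3.6]')
    if CH2[0] <= lam_obs <= CH2[1]:
        hit.append('[4.5]')
    return hit
```

Under these assumed edges, \[OIII\] still boosts the \[3.6\] band at $z\approx6.6$ (blue \[3.6\]$-$\[4.5\] color), while by $z\approx7$-9 it has moved into \[4.5\], reddening the color — the behavior the selection exploits.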
For example, while $z\sim$6.8 galaxies have very blue \[3.6\]$-$\[4.5\] colors likely due to contamination of the \[3.6\] filter by \[OIII\]+H$\beta$ lines (and no similar contamination of the \[4.5\] band), $z\geq$7 galaxies exhibit much redder \[3.6\]$-$\[4.5\] colors, as only the 4.5$\mu$m band is contaminated by the especially strong \[OIII\]+H$\beta$ lines (@labbe13; @wilkins13; @smit14a). Here, we make use of the redshift information in the Spitzer/IRAC observations and apply a consistent set of selection criteria to search for bright $z\sim8$ galaxies over all 5 CANDELS fields. A full analysis of the HST + ground-based observations is made in preselecting candidate $z\gtrsim6$ galaxies, for further consideration with the available Spitzer/IRAC data. The identification of such bright sources allows us to better map out the bright end of the UV luminosity function (LF) at $z>7$ and constrain quantities like the characteristic luminosity $M^*$ or the functional form of the LF at $z>7$. @bouwens15 only observe a modest ($\sim$0.6$\pm$0.3 mag) brightening in the characteristic luminosity $M^*$ – or bright end cut-off – from $z\sim8$ to $z\sim5$ taking advantage of the full CANDELS + XDF + HUDF09-Ps search area ($\sim$1000 arcmin$^2$). @bowler15 also report evidence for a limited evolution in the characteristic luminosity with cosmic time, based on a wider-area search for $z\sim6$-7 galaxies found over the $\sim$1.7 deg$^2$ UltraVISTA+UDS area. Limited evolution was also reported by Finkelstein et al. (2015) in subsequent work, but utilizing a $\sim$3-15$\times$ smaller area than Bouwens et al. (2015) or Bowler et al. (2015) had used. This paper is organised as follows: §2 presents our $z\sim$5-8 catalogs and data sets, as well as methodology for performing photometry. §3 describes the selection criteria we define for our samples and methodology. 
§4 presents the results of our investigation and discusses the constraints added by $Y$-band observations and Keck/MOSFIRE spectroscopy. In §5, we use the present search results to set a constraint on the bright end of the $z>7$ LF. Finally, §6 includes a summary of our paper and an outlook. Throughout this paper, we refer to the HST F606W, F814W, F105W, F125W, F140W, and F160W bands as *V*$_{606}$, *I*$_{814}$, *Y*$_{105}$, *J*$_{125}$, *JH*$_{140}$ and *H*$_{160}$, respectively, for simplicity. We also assume *H*$_{0}$ = 70 km/s/Mpc, $\Omega_{m} =$ 0.3, and $\Omega_{\Lambda} =$ 0.7. All magnitudes are in the AB system [@oke83].

[cccccc]{} Field & Area (arcmin$^2$) & $J_{125}$ & $H_{160}$ & \[3.6\] & \[4.5\]\
CANDELS GS DEEP & 64.5 & 27.8 & 27.5 & 26.1 & 25.9\
CANDELS GS WIDE & 34.2 & 27.1 & 26.8 & 26.1 & 25.9\
ERS & 40.5 & 27.6 & 27.4 & 26.1 & 25.9\
GS other & 31.8\
CANDELS GN DEEP & 62.9 & 27.7 & 27.5 & 26.1 & 25.9\
CANDELS GN WIDE & 60.9 & 26.8 & 26.7 & 26.1 & 25.9\
GN other & 34.0\
CANDELS UDS & 191.2 & 26.6 & 26.8 & 25.5 & 25.3\
CANDELS COSMOS & 183.9 & 26.6 & 26.8 & 25.4 & 25.2\
CANDELS EGS & 192.4 & 26.6 & 26.9 & 25.5 & 25.3\
Total & 896.3

Observational data sets, photometry and z$\sim$5-8 sample
=========================================================

HST + ground-based data set and photometry
------------------------------------------

The sample of $z\sim8$ galaxies we identify in this paper is based on HST + ground-based observations that were acquired over 5 CANDELS and ERS fields (@grogin11; @koe11; Windhorst et al. 2011). The near-IR HST observations over the CANDELS fields range in depth from $\sim$4 orbits over the $\sim$130 arcmin$^2$ CANDELS DEEP components in GOODS-North (GN) and GOODS-South (GS) to $\sim$1 orbit over the $\sim$550 arcmin$^2$ CANDELS WIDE component in the GN, GS, UDS, COSMOS, and EGS fields.
Over the GN and GS fields, the near-IR imaging observations are available in the $Y_{105}$, $J_{125}$, and $H_{160}$ bands, while in the UDS, COSMOS, and EGS fields, the near-IR observations are available in the $J_{125}$ and $H_{160}$ bands. These fields also feature observations at optical wavelengths with the HST ACS camera in the $B_{435}$, $V_{606}$, $i_{775}$, $I_{814}$ and $z_{850}$ bands for CANDELS-GN+GS (with 3-10+ orbits per band), as well as $V_{606}$ and $I_{814}$ observations ($\sim$2-orbit depth) for the CANDELS-UDS+COSMOS+EGS fields. In addition to the HST observations, these fields also have very deep ground-based observations from CFHT, Subaru Suprime-Cam, VLT HAWK-I, and VISTA/VIRCAM. Optical data are available in the CANDELS-COSMOS field in the *u*, *g*, *r*, *i*, *y* and *z* bands as part of the CFHT legacy survey, and also in the *B*, *g*, *V*, *r*, *i* and *z* bands from Subaru observations over the same field (@capak11). The CANDELS-EGS field is observed in the same bands as the COSMOS field, as part of the CFHT legacy survey, whilst the CANDELS-UDS field is observed by Subaru as part of the Subaru/XMM-Newton Deep Field (SXDF) program (Furusawa et al. 2008). For extended sources, these optical observations reach depths similar to or greater than the available HST data over these fields (i.e., 26 mag to 28 mag at $5\sigma$ in $1.2''$-diameter apertures: see Bouwens et al. 2015) and allow us to exclude potential lower redshift contaminants from our samples. Importantly, our ground-based observations also include moderately deep ($\sim$26 mag at $5\sigma$ \[1.2$''$-diameter apertures\]) $Y$-band observations which we use to constrain the nature of our selected $z>7$ candidates (where HST observations are unavailable). These observations are available over the CANDELS-UDS+COSMOS fields through HAWK-I and VISTA as part of the HUGS (@fontana14) and UltraVISTA (@mccracken12) programs, respectively.
A more detailed description of the observations we utilize in constructing our source catalogs, as well as our procedure for constructing these catalogs, is provided in @bouwens15 (see Table 1, Figure 2, and §3 from @bouwens15). HST photometry was performed by running the Source Extractor software [@bertin96] in dual-image mode, taking the detection images to be the square root of the $\chi^{2}$ image [@szalay99] and PSF-matching the observations to the H$_{160}$-band PSF. The colors and total magnitudes were measured with Kron-like apertures (Kron 1980), using Kron factors of 1.6 and 2.5, respectively. Photometry on sources in the ground-based data is performed after the contamination from foreground sources is removed, using an automated cleaning procedure (@labbe10a; @labbe10b). The positions and two-dimensional spatial profiles of the foreground sources are assumed to match those seen in the high-spatial-resolution HST images, after PSF-matching to the ground-based observations. The total flux in each source is then varied to obtain a good match to the light in the ground-based images. Light from the foreground sources is subsequently subtracted from the images, before doing photometry on the sources of interest. Flux measurements for individual sources are then performed in 1.2$''$-diameter circular apertures, since the objects are inherently unresolved in the ground-based observations. These flux measurements are then corrected to total, using the model flux profiles computed for individual sources and the observed PSFs. The procedure we employ here to derive fluxes is very similar to that employed in Skelton et al. (2014: see also Galametz et al. 2013 and Guo et al. 2013 who have adopted a similar procedure for their ground-based photometry).
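The aperture-to-total correction described above is simple arithmetic in magnitude space; a minimal sketch follows (the function name is ours, and the correction factor is whatever the model flux profiles yield for a given source):

```python
import math

def total_mag(aperture_mag, flux_correction):
    """Convert an aperture AB magnitude to a total magnitude given a
    multiplicative aperture-to-total flux correction (>1 brightens)."""
    return aperture_mag - 2.5 * math.log10(flux_correction)
```

For reference, the $\sim$2.2-2.4$\times$ flux corrections quoted for the IRAC photometry in the next section correspond to brightening a source by $\sim$0.86-0.95 mag.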
Spitzer/IRAC Data Set and Photometry
------------------------------------

The detailed information we have on $z\sim6$-9 galaxy candidates over the CANDELS fields from HST is nicely complemented in the mid-IR by the Spitzer Extended Deep Survey (SEDS, PI: Fazio) program [@ashby13], which ranges in depth from 12 hours to $>$100 hours per pointing, though 12 hours is the typical exposure time. The SEDS program provides us with flux information at $3.6\mu$m and $4.5\mu$m, which can be useful for probing $z\sim6$-9 galaxies in the rest-frame optical, quantifying the flux in various nebular emission lines, and estimating the redshift. Over the GOODS-North and GOODS-South fields, we make use of Spitzer/IRAC reductions which include essentially all the Spitzer/IRAC observations obtained to the present (Labb[é]{} et al. 2015: but see also Ashby et al. 2015), with 50-200 hours of observations per pixel in both bands (and typically $\sim$100 hours). Our procedure for performing photometry on the IRAC data is essentially identical to that used on the ground-based observations, except that we utilize $2''$-diameter circular apertures for measuring fluxes. These fluxes are then corrected to total based on the model profile of the individual sources + the PSF. Depending on the size of the source, these corrections range from $\sim$2.2$\times$ to 2.4$\times$. The median $5\sigma$ depths of these Spitzer/IRAC observations for a $\sim$26-mag source are 25.5 mag in the $3.6\mu$m band and 25.3 mag in the $4.5\mu$m band.

Sample selection
================

\[3.6\]-\[4.5\] IRAC color vs. redshift and HST detections {#subsec:method}
----------------------------------------------------------

Many recent studies (e.g., @schaerer09; @shim11; @smit14a; @labbe13; @stark13; @debarros14) have presented convincing evidence for strong nebular line contamination in photometric filters, particularly for the Spitzer/IRAC \[3.6\] and \[4.5\] bands.
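The contamination pattern follows from simple bookkeeping of where the strong rest-frame optical lines land at each redshift. The sketch below is illustrative only: the line list and the approximate IRAC band edges are our assumptions, not values from this paper.

```python
# Rest-frame wavelengths (microns) of the strongest optical lines,
# and approximate wavelength coverage of the two IRAC bands.
LINES = {"Hbeta": 0.4861, "OIII_5007": 0.5007, "Halpha": 0.6563}
BANDS = {"[3.6]": (3.2, 4.0), "[4.5]": (4.0, 5.0)}

def contaminated_bands(z):
    """Return {band: [lines observed inside it]} at redshift z."""
    out = {band: [] for band in BANDS}
    for name, rest_um in LINES.items():
        observed = rest_um * (1 + z)
        for band, (lo, hi) in BANDS.items():
            if lo < observed < hi:
                out[band].append(name)
    return out
```

At $z\sim6.8$ this places [OIII]+H$\beta$ in the [3.6] band (blue color), while at $z\sim7.5$ the same lines fall in the [4.5] band (red color), consistent with the behavior described above.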
The observed \[3.6\]$-$\[4.5\] IRAC color of galaxy candidates appears to be strongly impacted by the presence of these lines at different redshifts, in particular those of H$\alpha$ and \[OIII\]. Figure \[fig:color\] provides an illustration of the expected dependence of the Spitzer/IRAC $[3.6]-[4.5]$ color on redshift, assuming an \[OIII\]+H$\beta$ EW (rest-frame) of $\sim$2250Å, which is at the high end of what has been estimated for galaxies at $z\sim7$ (@labbe13; @smit14a; @smit14b). The significant change in the $[3.6]-[4.5]$ color of galaxies from $z\sim6$-7 to $z\geq7$ suggests this might be a promising way of segregating sources by redshift and, in particular, of identifying galaxies at $z\geq7$. Such information would be especially useful for search fields like CANDELS EGS, which lack deep observations in the $Y$-band at $\sim$1.1$\mu$m to estimate the redshifts directly from the position of the Lyman break. Smit et al. (2015) have shown that selecting sources with blue $[3.6]-[4.5]$ colors can effectively single out sources at $z\sim6.6$-6.9 over all CANDELS fields, even in the absence of $Y$-band coverage. Here we attempt to exploit this strong dependence of the $[3.6]-[4.5]$ color on redshift to identify some of the brightest $z\geq7$ galaxies over the CANDELS fields. In performing this selection, we start with the source catalogs derived by Bouwens et al. (2015) and Skelton et al. (2014) over a $\sim$900 arcmin$^2$ region from the five CANDELS fields. In general, we rely on the source catalogs from Bouwens et al. (2015) where they exist (covering a 750 arcmin$^2$ area or $\sim$83% of CANDELS).[^2] Otherwise, we rely on the Skelton et al. (2014) catalogs and photometry. We then apply color criteria to identify a base sample of Lyman-break galaxies at $z\sim6.3$-9.0.
In particular, over the CANDELS-UDS, COSMOS, and EGS fields, we use a $$\label{eq:1} \begin{split} & (I_{814}-J_{125} >2.2)\wedge (J_{125}-H_{160}<0.5) \wedge \\ & (I_{814}-J_{125} > 2(J_{125}-H_{160})+2.2) \end{split}$$ criterion. Over the CANDELS-GN and GS fields, we require that sources satisfy one of the two color criteria defined by Eq. \[eq:2\] or Eq. \[eq:3\]: $$\label{eq:2} \begin{split} & (z_{850}-Y_{105}>0.7)\wedge(J_{125}-H_{160}<0.45)\wedge \\ & (z_{850}-Y_{105}>0.8(J_{125}-H_{160})+0.7)\wedge \\ & ((I_{814}-J_{125}>1.0)\vee (SN(I_{814})<1.5)) \end{split}$$ $$\label{eq:3} \begin{split} & (Y_{105}-J_{125}>0.45)\wedge (J_{125}-H_{160}<0.5) \\ & \wedge(Y_{105}-J_{125}> 0.75(J_{125}-H_{160})+0.525) \end{split}$$ These color criteria are essentially identical to those from Bouwens et al. (2015) but allow for $J_{125}-H_{160}$ colors as red as 0.5 mag to match up with the color criteria of Oesch et al. (2014) and Bouwens et al. (2015) in searching for $z>8.5$ galaxies (i.e., $J_{125}-H_{160}>0.5$). In so doing, our goal was to maximize the completeness of our selection for bright $z=7$-9 galaxies within the CANDELS program.[^3] These color criteria are motivated in Figure 3 of Bouwens et al. (2015) and result in a very similar redshift segregation to that achieved using photometric redshifts. We require that sources have $[3.6]-[4.5]$ colors redder than 0.5 mag (see Figure \[fig:color\]). This color criterion was chosen (1) so as to require slightly redder colors than the average color measured by Labb[é]{} et al. (2013) for their faint $z\sim8$ sample from the HUDF (i.e., $\sim$0.4 mag) and (2) such that sources would not easily satisfy the criterion simply due to noise (requiring $>$2$\sigma$ deviations for the typical source). To be certain that the IRAC colors we measure are robust, we exclude any sources where the subtracted flux from neighboring sources exceeds 65% of the original flux in a $2''$-diameter aperture (before subtraction).
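The cuts of Eqs. 1-3 plus the IRAC color criterion translate directly into boolean tests on the measured magnitudes; a minimal sketch, where the function names and the `sn_I814` signal-to-noise argument are ours:

```python
def is_dropout_uds_cosmos_egs(I814, J125, H160):
    """Eq. 1: Lyman-break criterion for the UDS/COSMOS/EGS fields."""
    return ((I814 - J125 > 2.2) and (J125 - H160 < 0.5)
            and (I814 - J125 > 2 * (J125 - H160) + 2.2))

def is_dropout_gn_gs(z850, Y105, J125, H160, I814, sn_I814):
    """Eqs. 2-3: a GOODS-N/S source must satisfy either criterion."""
    eq2 = ((z850 - Y105 > 0.7) and (J125 - H160 < 0.45)
           and (z850 - Y105 > 0.8 * (J125 - H160) + 0.7)
           and ((I814 - J125 > 1.0) or (sn_I814 < 1.5)))
    eq3 = ((Y105 - J125 > 0.45) and (J125 - H160 < 0.5)
           and (Y105 - J125 > 0.75 * (J125 - H160) + 0.525))
    return eq2 or eq3

def is_irac_red(m36, m45):
    """IRAC color cut used to isolate z >~ 7: [3.6]-[4.5] > 0.5 mag."""
    return m36 - m45 > 0.5
```

For example, a source with $I_{814}=28.0$, $J_{125}=25.0$, $H_{160}=24.8$ passes Eq. 1, and a $[3.6]-[4.5]$ color of 0.7 mag passes the IRAC cut.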
To ensure our selection is free of $z<7$ galaxies, we required that sources show no statistically significant flux at optical wavelengths. Sources that show at least a $1.5\sigma$ detection in the inverse-variance-weighted mean of the $V_{606}$ and $I_{814}$ fluxes with HST were excluded. In addition, we also excluded sources detected at $>2.5\sigma$ in the deep optical imaging observations available over each field from the ground. We adopted a slightly less stringent threshold for detections in the ground-based observations, due to the impact of neighboring sources on the overall noise properties. Finally, we consider potential contamination by low-mass stars, particularly later $T$ and $Y$ dwarfs (T4 and later), where the $[3.6]-[4.5]$ color can become quite a bit redder than 0.5 mag (Kirkpatrick et al. 2011; Wilkins et al. 2014). To exclude such sources from our samples, we considered both the spatial information we had on each source from the SExtractor stellarity parameter and the total SED information. Sources with measured stellarities $>$0.9 were identified as probable stars (where 0 and 1 correspond to extended and point-like sources, respectively), as were sources with measured stellarity parameters $>$0.5 if the flux information we had available for sources was significantly better fit ($\Delta \chi^2 > 2$) with a low-mass stellar model from the SpeX prism library (Burgasser et al. 2004) than with the best-fit galaxy SED model, as derived by the Easy and Accurate Zphot from Yale (EAZY; @brammer08) software. Our SED fits with EAZY considered both the standard SED templates from EAZY and SED templates from the Galaxy Evolutionary Synthesis Models (GALEV; @kotulla09). Nebular emission lines, as described by @anders03, were added to the GALEV SED template models assuming a 0.2$Z_{\odot}$ metallicity. No sources were removed from our selection as probable low-mass stars.
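The inverse-variance-weighted mean flux test above is straightforward to implement; a generic sketch (the helper is ours, not the paper's code):

```python
def weighted_mean_sn(fluxes, errors):
    """Inverse-variance-weighted mean flux and its S/N.

    In the spirit of the optical non-detection test above, a candidate
    would be rejected if the combined V606+I814 S/N exceeds 1.5.
    """
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, mean / err
```

Note that two bands each individually at 1$\sigma$ combine to a $\sqrt{2}\sigma\simeq1.4\sigma$ detection, just under the 1.5$\sigma$ rejection threshold.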
The procedure we use here to exclude low-mass stars from our selection is identical to that utilized by @bouwens15.

[cccccccc]{} COSY-0237620370 & 10:00:23.76 & 02:20:37.00 & 25.06$\pm$0.06 & 1.03$\pm$0.15 & 7.14$^{+0.12}_{-0.12}$ & $-$0.13$\pm$0.66 & \[1\],\[2\],\[3\]\
EGS-zs8-1 & 14:20:34.89 & 53:00:15.35 & 25.03$\pm$0.05 & 0.53$\pm$0.09 & 7.92$^{+0.36}_{-0.36}$ & 1.00$\pm$0.60 & \[3\], \[4\]\
EGS-zs8-2 & 14:20:12.09 & 53:00:26.97 & 25.12$\pm$0.05 & 0.96$\pm$0.17 & 7.61$^{+0.26}_{-0.25}$ & 0.66$\pm$0.37 & \[3\]\
EGSY-2008532660 & 14:20:08.50 & 52:53:26.60 & 25.26$\pm$0.09 & 0.76$\pm$0.14 & 8.57$_{-0.43}^{+0.22}$ &

Validation of Selection Technique
---------------------------------

Before applying the selection criteria from §3.1 to the $\sim$900-arcmin$^2$ CANDELS + ERS search fields, it is useful to first test these criteria on those data sets which feature deep $z$- and $Y$-band observations. The availability of observations at these wavelengths, together with observations at both redder and bluer wavelengths with HST, allows for very accurate estimates of the redshifts of individual sources. There are five data sets that possess these observations: (1) CANDELS GOODS-S, (2) CANDELS GOODS-N, (3) ERS, (4) CANDELS UDS, and (5) CANDELS COSMOS. The first three feature these observations with HST and the latter two from ground-based telescopes. We apply the selection criteria from the previous section down to an $H_{160}$-band limiting magnitude of 26.7 mag for the first three fields and 26.5 mag for the latter two. Our decision to use these depths is partially guided by the sensitivity of the Spitzer/IRAC data over these fields. Applying the selection criteria from the previous section to the CANDELS GN+GS and ERS fields ($H_{160,AB}<26.7$), we find 7 sources that satisfy our selection criteria. For each of these sources, we estimate photometric redshifts with EAZY.
In fitting the observed photometry, we used the same standard EAZY SED templates as described in the previous section. We also applied the above selection criteria to the CANDELS-UDS and CANDELS-COSMOS fields, where it is also possible to estimate photometric redshifts, making use of the available HST observations and ground-based optical and near-IR $Y$- and $K$-band observations. Eight sources satisfy these criteria. All 15 of the sources selected using the criteria from the previous section are presented in Figure \[fig:validation\_of\_method\] and fall between $z=7.0$ and $z=8.3$, which is the expected range if a high-EW \[OIII\]+H$\beta$ line is responsible for the red $[3.6]-[4.5]$ colors in these galaxies. This suggests that the criteria we propose in the previous section can be effective in identifying a fraction of the $z\geq7$ galaxies that are present in fields with deep HST+Spitzer observations. The individual coordinates, colors, and estimated redshifts for individual sources from this validation sample can be found in Table \[tab:valid\_samp\] located in Appendix B. In recommending the use of the IRAC photometry to subdivide $z\sim6$-9 samples by redshift, we should emphasize that the most robust results will be obtained by making use of only those sources with the smallest confusion corrections. While we took care in the selection of both our primary sample (and the sample we used to validate the technique) to avoid such sources, such sources were not excluded in making Figure 1 of Smit et al. (2015: resulting in a few $z>7$ sources with anomalously blue Spitzer/IRAC colors). Despite this issue with Figure 1 of Smit et al. (2015), we emphasize that it is not a major concern for sources in their $z=6.6$-6.9 sample. Only 2 of the 15 sources in the latter sample were subject to a $\sim$3$\times$ correction for flux from neighboring sources, and those 2 sources (GSD-2504846559 and EGS-1350184593) are flagged as less reliable.
Search Results for Bright $H_{160,AB}<25.5$ Galaxies
----------------------------------------------------

Here we focus on the identification of only the brightest $H_{160,AB}<25.5$ $z\geq7$ galaxies using our Spitzer/IRAC color criteria. This is to keep the current selection small and to focus on sources whose surface density was particularly poorly defined by previous work. Prior to this work, the only study which identified such bright $z\sim8$ sources was Bouwens et al. (2015). Focusing on the brightest sources is also valuable, since it allows us to obtain very precise constraints on the SED shapes and Spitzer/IRAC colors of the sources, as well as providing opportunities for follow-up spectroscopy (see §4.2). Applying the selection criteria described in §\[subsec:method\] to the CANDELS-GS, CANDELS-GN, CANDELS-UDS, CANDELS-COSMOS and CANDELS-EGS fields, we identify a total of 4 especially bright ($H_{160,AB}<25.5$) candidate $z\geq7$ galaxies. Our 4 candidate $z\geq$7 galaxies are presented in Table \[tab:table\_details\] and in Figure \[fig:postage\_stamp\]. We see from Figure \[fig:postage\_stamp\] that each candidate is clearly visible in the HST *H*$_{160}$ and *J*$_{125}$ filters, as well as the IRAC 3.6 $\mu$m and 4.5 $\mu$m bands. As expected from our selection criteria, no significant detection is evident in the HST *V*$_{606}$ and *I*$_{814}$ bands for these sources. This suggests that these sources show a break in their spectrum somewhere between 0.9$\mu$m and $1.2\mu$m and therefore have redshifts between $z\sim6$ and $z\sim8.5$. \[We discuss the impact of information from the $Y$-band observations available over 3 of the 4 candidates in §4.1.\] 3 of these 4 bright sources are found in the CANDELS-EGS field. Sources from this field were not included in our earlier attempt to validate the present selection technique (§3.2), so only one of these new sources is in common with the 15 sources just discussed.
To derive constraints on the redshift of each bright source, we again made use of EAZY. The photometry provided to EAZY included fluxes from the HST filters, the IRAC 3.6 $\mu$m and 4.5 $\mu$m filters, and ground-based telescopes. Using EAZY allows us to generate a best-fit SED for each galaxy candidate as well as its redshift likelihood distribution ($P(z)$), which we present in Figure \[fig:sed\_pz\], with the observed galaxy flux points overplotted. From the SED plots, we observe a near-flat rest-frame optical continuum, as well as emission lines dominating at the location of the high flux points, highlighting the contribution of strong nebular emission lines to the instrument filters. One of our $z\geq7$ candidates, i.e., EGS-zs8-2, is sufficiently compact, as can be seen from Figure \[fig:postage\_stamp\], that we considered the possibility that it may correspond to a star. To test this possibility, we compare its SED to all the stellar SEDs in the SpeX prism library and find the best-fitting stellar SED. The $\chi^2$ goodness-of-fit for the stellar SED is an order of magnitude greater than for the galaxy SED. In addition, the SExtractor stellarity we measure for EGS-zs8-2 in the $J_{125}$ and $H_{160}$ bands is 0.60 and 0.33, respectively (where 0 and 1 correspond to an extended and point source, respectively), which significantly favors EGS-zs8-2 corresponding to an extended source. Bouwens et al. (2015) ran an extensive number of end-to-end simulations to test the possibility that point-like sources could scatter to such low measured stellarities. Stellarities of $\sim$0.60 are only found for $H_{160,AB}\sim25$-mag point-like sources in $<$2% of the simulations that Bouwens et al. (2015) ran. Therefore, on the basis of both the spatial and spectral information, we can be confident that the EGS-zs8-2 candidate is a $z\geq7$ galaxy and not a low-mass star.
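The two-tier star/galaxy screening described in §3.1 and applied again here can be restated compactly; a sketch in which the function and argument names are ours, with the thresholds taken from the text:

```python
def is_probable_star(stellarity, chi2_galaxy, chi2_star):
    """Flag a source as a probable low-mass star.

    Stellarity runs from 0 (extended) to 1 (point-like). A source is
    flagged if stellarity > 0.9, or if stellarity > 0.5 and the stellar
    SED fits significantly better than the galaxy SED (delta-chi^2 > 2).
    """
    if stellarity > 0.9:
        return True
    return stellarity > 0.5 and (chi2_galaxy - chi2_star > 2)
```

By these thresholds, EGS-zs8-2 (stellarities 0.60 and 0.33, with the stellar fit far worse than the galaxy fit) is not flagged.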
However, as we show in §4.2, perhaps the most convincing piece of evidence for this source corresponding to a $z>7$ galaxy is our discovery of a plausible 4.7$\sigma$ Ly$\alpha$ line in the spectrum of this source at 1.031$\mu$m (Figure \[fig:pascal\_lya\]). A second candidate from our selection, COSY-0237620370, is also very compact and could potentially also correspond to a low-mass star. However, like EGS-zs8-2, the photometry of the source is better fit with a galaxy SED than a stellar SED (with $\chi^2 (star) - \chi^2 (galaxy) = 17.2$) and the source shows evidence for spatial extension, with measured stellarities of 0.81 and 0.34 in the $J_{125}$ and $H_{160}$ bands, respectively. Stellarities even as high as 0.81 are only recovered in $\sim$5% of the end-to-end simulations Bouwens et al. (2015) ran at $H_{160,AB}$-band magnitudes of $\sim$25.0. Earlier, Tilvi et al. (2013) reached the same conclusion regarding this source based on medium-band observations of this candidate from the ZFOURGE program, where consistent fluxes are found in the near-infrared medium bands, strongly arguing against this source corresponding to a low-mass star. Bowler et al. (2014) also conclude this source is extended and not a low-mass star, based on its spatial profile (see Figure 6 from Bowler et al. 2014) and on its observed photometry, for which $\chi^2 (star) - \chi^2 (galaxy) = 13.0$. Flux information from HST, Spitzer/IRAC, and ground-based observations all have value in constraining the redshifts of the candidate $z\geq7$ galaxies we have identified in the present probe. While the HST flux information we have available for all three candidates in the $V_{606}I_{814}J_{125}JH_{140}H_{160}$ bands only allows us to place them in the redshift interval $z\sim6.5$-9.0 (*blue line* in Figure \[fig:pz\_subset\]), we can obtain improved constraints on the redshifts of the candidates by incorporating the flux information from Spitzer/IRAC and from deep ground-based observations.
Each of these three candidates appears to have a redshift robustly between $z\sim7.0$ and $z\sim8.6$ (*red and black lines* in Figure \[fig:pz\_subset\]). In addition, as we discuss in §4.1 and show in Figure \[fig:pz\_subset\], the availability of the $Y$/$Y_{105}$-band observations allows us to significantly improve our redshift constraints on all three candidates. We used the Bouwens et al. (2015) catalogs to search 83% of the total area of the CANDELS fields and the Skelton et al. (2014) photometric catalogs otherwise (in those regions over the WFC3/IR CANDELS fields which lack the deep HST/ACS data). As a check on the search results we obtained with the Bouwens et al. (2015) catalogs, we applied the same selection criteria to the Skelton et al. (2014) catalogs. Encouragingly, we identified 75% of our sample, with only one candidate missing due to its having a $[3.6]-[4.5]$ color of 0.47 mag in those catalogs. For all 4 candidates from our primary sample, we find that our derived \[3.6\]$-$\[4.5\] colors are almost identical to those quoted by Skelton et al. (2014), agreeing to $\leq$0.1 mag (and typically to 0.05 mag). We also identified 1 additional bright $(H_{160,AB}<25.5$) $z\geq7$ candidate in the CANDELS-EGS field not identified in our primary search (see Appendix A). It seems clear, examining its photometry, that this source is extremely likely to be at $z\sim7$-9 (and indeed it appears in the Bouwens et al. 2015 $z\sim8$ sample). However, since its measured $[3.6]-[4.5]$ color is 0.22$\pm$0.06 mag in our photometric catalog (0.3 mag bluer than in the Skelton et al. 2014 catalog), we did not include it in our primary sample. We remark that photometry for this source was more challenging due to its being located close to a bright neighbor and its being a two-component source.
Possible Evidence for Lensing Amplification of Selected $z>7$ Sources
---------------------------------------------------------------------

For very high redshift sources ($z\gg6$), it is expected that the sources with the brightest apparent magnitudes will benefit from gravitational lensing (Wyithe et al. 2011; Barone-Nugent et al. 2015; Mason et al. 2015; Fialkov & Loeb 2015), and indeed a small fraction of the brightest galaxies identified over the CANDELS program are found to be consistent with being boosted by gravitational lensing (Barone-Nugent et al. 2015). To investigate whether any of the bright $z\geq7$ galaxies identified in our search might be gravitationally lensed, we considered all sources within 5$''$ of our candidates in the Skelton et al. (2014) catalogs and used the estimated redshifts, stellar masses, and sizes from these catalogs to derive Einstein radii for the foreground sources assuming a singular isothermal sphere model. We then calculated the degree to which our bright $z\geq7$ galaxy candidates might be magnified by the foreground sources. In only one case was the expected magnification level $>$10%, and this was for our $z\sim8.6$ candidate EGSY-2008532660. In this case, we identified two foreground galaxies which could significantly magnify this candidate (Figure \[fig:lens\]). The first was a $10^{10.4}$ $M_{\odot}$ mass, $z\sim1.4$ galaxy (14:20:08.81, 52:53:27.2) with a separation of 2.8$''$ from our $z\sim8.6$ candidate. The second was a $10^{11.2}$ $M_{\odot}$ mass, $z\sim3.1$ galaxy (14:20:08.37, 52:53:29.1) with a separation of 2.7$''$ from our $z\sim8.6$ candidate. Using the measured sizes of the two sources, we derive velocity dispersions of $\sigma\sim170$ km/s and $\sigma\sim370$ km/s, respectively. We checked that these velocity dispersions are fairly similar to what the fitting formulae in Mason et al.
(2015) yield (i.e., using the relation in their Table 1 and applying the $H_{160}$-band or IRAC 3.6$\mu$m apparent magnitudes, depending on whether we are considering the $z\sim1.4$ or $z\sim3.1$ source). Based on the observed separations of these sources from our $z\sim8.6$ galaxy, we estimate a lensing magnification of 20% (a factor of 1.2) from the former foreground source and a factor of 1.8 from the latter. In computing these magnification factors, we assume that the mass profile of the galaxies is an isothermal sphere and take the magnification factor to be $1/(1-\theta_E/\theta)$, where $\theta$ is the separation from the neighboring source and $\theta_E$ is the Einstein radius. Looking at the morphology of EGSY-2008532660, we see no clear evidence to suggest that the galaxy is highly magnified, and there is no obvious counterimage. However, we clearly cannot rule out smaller lensing amplification factors, particularly if the intrinsic size of the source is small. As the inferred stellar or halo masses for the neighboring galaxies are not precisely known, this translates into a modest uncertainty in the actual luminosity of this source (as much as 0.3 dex). Given this fact, we consider it safest to exclude it from analyses of the UV LF.

Validation of Our $z\sim8$ Selection
====================================

Here we attempt to determine the nature of the $z>7$ candidates we selected using the HST+Spitzer/IRAC+ground-based observations, using some $Y$-band observations that became available over a few of our candidates and using the results of some follow-up spectroscopy that we performed (first reported in Oesch et al. 2015b).

$Y$-band Photometric Observations
---------------------------------

Deep observations at 1.05$\mu$m are particularly useful in ascertaining the nature of these candidates and also their redshift, due to the $Y$-band photometry providing constraints on the position of the Lyman break as it redshifts from 0.9$\mu$m to 1.2$\mu$m.
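The leverage the $Y$ band provides follows from simple arithmetic on the break position; a small illustrative helper (ours, not the paper's code):

```python
LYA_REST_UM = 0.121567  # Ly-alpha rest-frame wavelength in microns

def break_position_um(z):
    """Observed wavelength of the Lyman break at redshift z."""
    return LYA_REST_UM * (1.0 + z)
```

For redshifts between roughly $z\sim6.4$ and $z\sim8.9$, the break falls between 0.9$\mu$m and 1.2$\mu$m, squarely within the range that $Y$-band photometry constrains.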
Deep observations at $1.05\mu$m are available for 3 of the 4 $z\geq7$ candidates that we selected as part of our $H_{160,AB}<25.5$ sample. $Y$-band observations of the COSY-0237620370 candidate are available from the 3-year UltraVISTA observations (McCracken et al. 2012), while HST $Y_{105}$-band observations are available over 2 other candidates in our selection as a result of some recent observations from the z9-CANDELS follow-up program (Bouwens 2014; Bouwens et al. 2016).[^4] We make these redshift estimates in an identical way to what we did previously. Our redshift constraints, including the $Y$-band, are presented in Table \[tab:table\_details\]. Furthermore, we present the HST *Y*$_{105}$ filter images in Figure \[fig:postage\_stamp\], where we observe a clear detection in the *Y*$_{105}$ filter for EGS-zs8-2 and no detection in the *V*$_{606}$ or *I*$_{814}$ filters, indicating a *z*$\sim$7 Lyman-break galaxy. For EGS-zs8-1, however, we observe little to no detection in the *Y*$_{105}$ filter but a clear detection in the *J*$_{125}$ filter, which indicates this galaxy is observed at *z*$\sim$8. Figures \[fig:sed\_pz\] and \[fig:pz\_yband\] present the redshift likelihood distributions for our $z\geq7$ candidates, incorporating the $Y$-band observations from UltraVISTA and HST. It is evident from Figure \[fig:pz\_subset\] that the $Y$-band data greatly improve our constraints on the redshifts of the individual candidates in our selection. Together with the results in §3.2 and Figure \[fig:validation\_of\_method\], these results largely validate our selection technique.

Keck/MOSFIRE Spectroscopic Follow-up
------------------------------------

### Observations and Reduction

In addition to using photometric data in the $Y$-band to validate our method, we also tested this method by obtaining deep near-IR spectroscopy on 2 sources from the current selection. Oesch et al.
(2015b) already provided a first description of the observational set-up we utilized for half of our targets, so we keep the current discussion short. A total of 4 hours of good $Y$-band spectroscopy were obtained in the CANDELS-EGS field with the Multi-Object Spectrometer for Infra-Red Exploration (MOSFIRE: McLean et al. 2012) instrument on the Keck I telescope. Two masks [see Fig. 2 of @oesch15] were utilized and our spectra were taken with 180 s exposures at spectral resolutions of R=3500 and R=2850 (for a 0.7$''$ and 0.9$''$ slit, respectively) over 3 nights (April 18, April 23, and April 25, 2014; due to poor weather conditions, April 18 was effectively lost), with the aim of searching for Ly$\alpha$ emission in EGS-zs8-1 and EGS-zs8-2. Each mask contains a slitlet placed on a star, which we use for monitoring the sky transparency and observing conditions of each exposure. These observations were reduced using a modified version of the MOSFIRE data reduction pipeline (DRP; for details see Oesch et al. 2015b). The spectra complement the photometric data sets for these two galaxies and allow us to confirm their redshifts.

### Ly$\alpha$ Emission Lines

The observations carried out with Keck/MOSFIRE revealed candidate Ly$\alpha$ emission lines in the spectra of both EGS-zs8-1 and EGS-zs8-2. The detection of a Ly$\alpha$ line for EGS-zs8-1 appears to be robust (a 6.1$\sigma$ detection with a line flux of $f_{Ly\alpha}=1.7\pm0.3\times 10^{-17}$ erg s$^{-1}$ cm$^{-2}$) and places that source at $z_{Ly\alpha}$=7.7302$\pm$0.0006, as first reported by Oesch et al. (2015b).[^5] The 1D and 2D spectra for our other targeted $z>7$ candidate, EGS-zs8-2, are presented in Figure \[fig:pascal\_lya\] (see also Figure 2 in Oesch et al. 2015b for spectra of the confirmed $z=7.7302\pm0.0006$ candidate).
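The conversion from an observed Ly$\alpha$ wavelength to a redshift is direct; a minimal sketch (the function name is ours):

```python
LYA_REST_ANGSTROM = 1215.67  # vacuum rest-frame Ly-alpha wavelength

def z_from_lya(observed_um):
    """Redshift implied by an observed Ly-alpha wavelength in microns."""
    return observed_um * 1.0e4 / LYA_REST_ANGSTROM - 1.0
```

For example, a line centered near 1.031$\mu$m corresponds to $z\approx7.48$, consistent with the measurement discussed below.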
Using a simple Gaussian fit to determine the central wavelength of the observed line at 1.031$\mu$m (and ignoring asymmetry and other effects due to skylines surrounding this candidate Ly$\alpha$ line), we determine the spectroscopic redshift for the source to be $z_{Ly\alpha}=7.4770\pm0.0008$, with a detection significance of 4.7$\sigma$ for the line and a line flux of $f_{Ly\alpha}=1.6\pm0.3\times 10^{-17}$ erg s$^{-1}$ cm$^{-2}$. While this line is only detected at $4.7\sigma$ significance, its reality appears to be supported by subsequent near-infrared spectroscopy obtained on this source from independent observing efforts (D. Stark et al. 2016, in prep). In addition to the Ly$\alpha$-emission lines reported by Oesch et al. (2015b) and this work, Zitrin et al. (2015) report the detection of a 7.5$\sigma$ Ly$\alpha$ line for our EGSY-2008532660 candidate in new Keck/MOSFIRE observations (June 10-11, 2015). This redshift measurement sets a new high-redshift distance record for galaxies with spectroscopic confirmation. Our photometric selection therefore contains 3 of the 4 most distant, spectroscopically-confirmed galaxies to date. The $z_{Ly\alpha}=7.730$, $z_{Ly\alpha}=7.477$ and $z_{Ly\alpha}=8.683$ redshifts for EGS-zs8-1, EGS-zs8-2 and EGSY-2008532660, respectively, are in excellent agreement with the photometric redshifts derived for these galaxies using HST+IRAC+ground-based observations and our color criteria. The absolute magnitudes and redshifts of EGS-zs8-1, EGS-zs8-2, and EGSY-2008532660 are presented in the top panel of Figure \[fig:pascal\] in relation to other $z>6.5$ galaxies with clear redshift determinations from Ly$\alpha$. The current spectroscopy provides us with considerable reassurance that our proposed color technique is an effective method to search for bright, $z\geq7$ galaxies.[^6] Comparison with Previous Work ============================= Three of our four candidates were already identified as part of previous work. Tilvi et al.
(2013) identified COSY-0237620370 as a $z\sim7$ galaxy by applying Lyman-break-like criteria to the deep medium-band ZFOURGE data and estimated a redshift of 7.16$_{-0.19}^{+0.35}$. This source was also identified by Bowler et al. (2014) as a $z\sim7$ galaxy (211127 in the Bowler et al. 2014 catalog) using the deep near-IR observations from the UltraVISTA program; they derived a photometric redshift of 7.03$_{-0.11}^{+0.12}$ for the source (or 7.20 if the source exhibits prominent Ly$\alpha$ emission), similar to what we find here. Tilvi et al. (2013) derive a $[3.6]-[4.5]$ color of $1.96\pm0.54$ mag, while Bowler et al. (2014) find 0.7$\pm$0.3 mag, both of which are broadly consistent with what we find here. Bouwens et al. (2015) identified 3 of the 4 sources as part of their search for $z\sim7$-8 galaxies over the five CANDELS fields and segregated the sources into different redshift bins using the photometric redshift estimates. The full HST + Subaru Suprime-Cam $BgVriz$ + CFHT Megacam $ugriyz$ + UltraVISTA $YJHK_s$ photometry was used to estimate these redshifts for the candidate in the COSMOS field. Meanwhile, the HST + CFHT Megacam $ugriyz$ + WIRCam $K_s$ + Spitzer/IRAC $3.6\mu$m+$4.5\mu$m photometry was used in the case of the two EGS candidates. Bouwens et al. (2015) derived a photometric redshift of $z=7.00$ for COSY-0237620370 over the CANDELS-COSMOS field and derived photometric redshifts of 8.1 for the two sources over the CANDELS-EGS field (EGS-zs8-1 and EGS-zs8-2), so the latter two candidates were placed in the $z\sim8$ sample of Bouwens et al. (2015). There was, however, some uncertainty as to both the robustness and also the precise redshifts of the CANDELS-EGS candidates from Bouwens et al. (2015).
Prior to the present study, the use of the $[3.6]-[4.5]$ color had never been systematically demonstrated to work for the identification of galaxies with redshifts of $z>7$, despite there being $\sim$5 prominent examples of $z\geq7$ galaxies with particularly red $[3.6]-[4.5]$ colors (Bradley et al. 2008; Ono et al. 2012; Finkelstein et al. 2013; Tilvi et al. 2013; Laporte et al. 2014, 2015). Moreover, no $Y_{105}$-band observations were available over either $z\geq7$ candidate from the CANDELS-EGS field in the Bouwens et al. (2015) selection to validate potential $z\geq7$ galaxies (though such observations have fortuitously become available as a result of observations made from the z9-CANDELS follow-up program \[Bouwens et al. 2016\]). The apparent magnitudes of the $z=7.1$-8.5 galaxies identified as part of the current selection are much brighter than those of the typical galaxy at $z\sim8$, as is evident in both the upper and lower panels in Figure \[fig:pascal\]. In fact, 3 of the sources from our current IRAC-red $[3.6]-[4.5]>0.5$ selection appear to represent the brightest $z\gtrsim7.5$ galaxies known in the entire CANDELS program and constitute 3 of the 4 $z\sim8$ candidates shown in the lower panel of Figure \[fig:pascal\]. The only other especially bright $H_{160,AB}\sim25.0$ $z\sim8$ candidate shown in that lower panel is presented in the appendix (since it satisfies our $[3.6]-[4.5]>0.5$ selection criteria using an independent set of photometry, i.e., Skelton et al. 2014). Interestingly enough, all 4 of the brightest candidates shown in the lower panel of Figure \[fig:pascal\] are located in the CANDELS EGS field, providing a dramatic example of how substantial the field-to-field variations in the surface densities of bright sources might be (though we note that EGSY-2008532660 is likely gravitationally lensed). This appears to be a chance occurrence rather than physical clustering, as these candidates do not clearly lie in a common narrow redshift window.
The probability that the 4 brightest $z\sim8$ sources in the CANDELS program would be found in the same CANDELS field (even if one is gravitationally lensed) is $\sim$1%.[^7] Previously, this point had been strongly made by Bouwens et al. (2015) in discussing the number of bright sources over the different CANDELS fields (Figure 14, Appendices E and F from Bouwens et al. 2015) and also quite strikingly by Bowler et al. (2015) in comparing the number of bright $z\sim6$ galaxies over the UltraVISTA and UDS fields. Implications for the Bright End of the $z\sim8$ LF ================================================== In this section, we will examine the implications of our present search results for the volume density of luminous galaxies in the $z\sim7$-9 Universe. First, we estimate how complete we might expect our selection to be based on the $[3.6]-[4.5]$ color distribution in fields where the redshift can be constrained using deep $Y$-band data (§6.1). Second, we make use of our search results and our completeness estimates to set a constraint on the bright end of the $z\sim7$-9 LF (§6.2). 
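The $\sim$1% figure quoted above can be checked with a toy calculation: under the simplifying (and only approximate) assumption of five equal-area CANDELS fields, the probability that 4 sources land in a single field is $5\times(1/5)^4=0.8$%. A quick Monte Carlo version:

```python
import random

random.seed(0)

def prob_all_same_field(n_sources=4, n_fields=5, trials=200_000):
    """Monte Carlo estimate of the chance that all sources fall in one field."""
    hits = 0
    for _ in range(trials):
        first = random.randrange(n_fields)
        if all(random.randrange(n_fields) == first for _ in range(n_sources - 1)):
            hits += 1
    return hits / trials

p = prob_all_same_field()  # ~0.008, i.e. ~1%
```

In practice the CANDELS fields differ somewhat in area and depth, so this should be read as an order-of-magnitude check rather than a rigorous probability.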
[ccccccc]{} Object & R.A. & Decl. & $H_{160,AB}$ & Redshift & $[3.6]-[4.5]$ & Refs\ EGS-zs8-1 & 14:20:34.89 & 53:00:15.35 & 25.03$\pm$0.05 & 7.9$\pm$0.4 & 0.53$\pm$0.09 & \[1\], \[8\]\ & & & & ($z_{spec}$=7.7302$\pm$0.0006)\ EGS-zs8-2 & 14:20:12.09 & 53:00:26.97 & 25.12$\pm$0.05 & $7.6\pm0.3$ & 0.96$\pm$0.17 & \[1\]\ & & & & ($z_{spec}$=7.4770$\pm$0.0008)\ EGSY-2008532660 & 14:20:08.50 & 52:53:26.60 & 25.26$\pm$0.09 & 8.57$_{-0.43}^{+0.22}$ & 0.76$\pm$0.14 &\ & & & & ($z_{spec}$=8.683$_{-0.004}^{+0.001}$)\ GNDY-6379018085 & 12:36:37.90 & 62:18:08.50 & 25.44$\pm$0.04 & 7.508 & 0.88$\pm$0.11 & \[6\]\ BORGY-9469443552 & 04:39:46.94 & $-$52:43:55.20 & 25.56$\pm$0.20 & 8.29$_{-1.01}^{+0.34}$ & — & \[1\],\[2\],\[3\], \[7\]\ GSDY-2499348180 & 03:32:49.93 & $-$27:48:18.00 & 25.58$\pm$0.05 & 7.84$_{-0.29}^{+0.15}$ & 0.08$\pm$0.09 & \[1\],\[3\],\[4\],\[5\]\ BORGY-6504943342 & 14:36:50.49 & 50:43:34.20 & 25.69$\pm$0.08 & 7.49$_{-3.17}^{+0.13}$ & — & \[1\]\ COSY-0235624462 & 10:00:23.56 & 02:24:46.20 & 25.69$\pm$0.07 & 7.84$_{-0.18}^{+0.37}$ & 0.88$\pm$0.61 & \[1\]\ BORGY-2463351294 & 22:02:46.33 & 18:51:29.40 & 25.78$\pm$0.15 & 7.93$_{-0.21}^{+0.59}$ & — & \[1\]\ BORGY-2447150300 & 10:32:44.71 & 50:50:30.00 & 25.91$\pm$0.20 & 7.93$_{-0.19}^{+0.48}$ & — & \[1\]\ BORGY-5550543040 & 07:55:55.05 & 30:43:04.00 & 25.98$\pm$0.21 & 7.66$_{-5.63}^{+0.82}$ & — & \[1\]\ Median & & & & & 0.82$_{-0.20}^{+0.08}$ &\ \ \ EGSY-9597563148 & 14:19:59.75 & 52:56:31.40 & 25.03$\pm$0.10 & 8.19 & 0.22$\pm$0.06 & \[1\] \[3.6\]-\[4.5\] Color Distribution of $z>7$ Galaxies and the Implications for the Completeness of our red IRAC Criteria and the \[OIII\]+H$\beta$ EWs ----------------------------------------------------------------------------------------------------------------------------------------------------- In our attempts to identify bright $z>7$ galaxies, we only consider those sources with red $[3.6]-[4.5]>0.5$ Spitzer/IRAC colors to ensure that the sources we select are robustly at $z>7$ (see §3.2).
However, by making this requirement, we potentially exclude those $z>7$ galaxies which have bluer \[3.6\]-\[4.5\] colors, either due to lower-EW \[OIII\]+H$\beta$ lines or simply as a result of noise in the photometry. To determine how important this effect is, we look at the \[3.6\]-\[4.5\] color distribution of galaxies which we can robustly place at a redshift $z>7$ (where both lines in the \[OIII\] doublet fall in the \[4.5\] band). The most relevant sources are those bright galaxies we can place at $z>7$ based on the available HST+ground-based photometry and which include deep flux measurements at $1\mu$m. Such measurements are available for the CANDELS GOODS-S, GOODS-N, UDS, and COSMOS fields, and a small fraction of the CANDELS EGS field. For our fiducial results here, we only consider selected sources from those fields brightward of $H_{160,AB}=26$ and with redshift estimates greater than $z\gtrsim7.5$. This is to ensure that we only include bona-fide $z=7.1$-9.1 galaxies (where the \[OIII\]+H$\beta$ lines fall in the 4.5$\mu$m band) in our selection. Photometric redshift errors often have an approximate size of $\Delta$z$\sim$0.3 at this magnitude limit, and so to avoid $z<7$ sources scattering into our selection, we kept our cuts fairly conservative. The list of such sources at such bright magnitudes is still somewhat limited at present, with only the bright $\sim$25.6-mag galaxy in the CANDELS GOODS-South field from Yan et al. (2012) and Oesch et al. (2012), a bright $25.7$-mag galaxy in the CANDELS COSMOS field from Bouwens et al. (2015), a bright $\sim$25.5-mag source over the CANDELS GOODS-North field from Finkelstein et al. (2013), two bright sources over the CANDELS EGS field where $Y_{105}$-band photometry is available (EGS-zs8-1, EGS-zs8-2), and a third bright source over the CANDELS EGS field where the $J_{125}-H_{160}$ color allows us to place it at $z>8$ (EGSY-2008532660).
Of these sources, five out of six have \[3.6\]$-$\[4.5\] colors in excess of 0.5, and therefore for simplicity, we will assume that our IRAC-red selection is 83% complete, but we emphasize that the completeness correction we derive from this selection is uncertain and could be much larger (as indeed one would expect if the \[3.6\]$-$\[4.5\] color measurement derived by Labb[é]{} et al. 2013, i.e., $\sim$0.4 mag, for the average stacked $z\sim8$ galaxy is indicative). To investigate this possibility, we examined a slightly larger sample of objects over the four fields where we have photometric redshifts using $Y$-band imaging. Considering sources to an $H_{160}$-band magnitude limit of 26.2 over the CANDELS-UDS and COSMOS fields and 26.7 over the CANDELS GOODS-North and GOODS-South fields while extending the photometric redshift selection to $z>7.3$, 6 out of 9 sources satisfy the \[3.6\]$-$\[4.5\]$>$0.5 criterion. While this suggests the actual fraction of $z>7$ galaxies with such red IRAC colors may be less than 83%, this fainter sample is still consistent with our fiducial percentage. It is also reassuring that our suggested selection criteria would also apply to GN-108036, the bright $JH_{140}=25.17$ $z=7.213$ galaxy found by Ono et al. (2012), given its measured \[3.6\]$-$\[4.5\] color of 0.58$\pm$0.18 mag. We include a list of these and other sources from the Bouwens et al. (2015) catalog in Table \[tab:brightz8\], along with the bright $z=7.508$ galaxy from Finkelstein et al. (2013). In Figure \[fig:histo\_data\], we present the \[3.6\]-\[4.5\] color distribution for the brightest sources we know to robustly lie at $z\gtrsim7.5$ based on spectroscopy or from the available HST+ground-based photometry for those sources that lie in regions of CANDELS with $Y$-band observations or with $J_{125}-H_{160}$ colors red enough to confidently place the sources at $z>8$. The median \[3.6\]$-$\[4.5\] color that we measure is 0.82$_{-0.20}^{+0.08}$ mag.
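The conversion of this median color into a line EW (used in the next paragraph) and the selection-bias Monte Carlo can both be reproduced approximately in a few lines. This is a sketch only: it assumes a flat continuum, no line contribution to the \[3.6\] band, and an effective \[4.5\] bandwidth of $\sim$10000Å (a round number, not the exact IRAC bandpass), and the simple Gaussian bias model lands near, but not exactly on, the 0.24 mag bias computed in the text:

```python
import random
import statistics

def rest_frame_ew(color, z, bandwidth=10000.0):
    """Rest-frame EW (A) of lines producing a [3.6]-[4.5] color excess,
    assuming a flat continuum, no lines in [3.6], and an effective [4.5]
    bandwidth of ~10000 A (both simplifying assumptions)."""
    flux_ratio = 10.0 ** (0.4 * color)
    return (flux_ratio - 1.0) * bandwidth / (1.0 + z)

ew = rest_frame_ew(0.82, 7.7)  # ~1300 A, the quoted minimum [OIII]+Hbeta EW

random.seed(1)

def selection_bias(mu=0.6, sigma=0.4, cut=0.5, trials=200_000):
    """Median observed color of sources selected at color > cut, minus the
    intrinsic median: the bias introduced by selecting on a noisy color."""
    colors = (random.gauss(mu, sigma) for _ in range(trials))
    selected = [c for c in colors if c > cut]
    return statistics.median(selected) - mu

bias = selection_bias()  # ~0.2 mag, comparable to the 0.24 mag in the text
```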
Such a color implies a minimum EW of $\sim$1300Å$\,$ for the \[OIII\]+H$\beta$ lines, assuming a flat stellar continuum and no line contribution to the \[3.6\] band. However, we emphasize that if there is also a substantial line contribution to the \[3.6\] band, e.g., from H$\gamma$, H$\delta$, and \[OII\], then the implied EW of the \[OIII\]+H$\beta$ lines would be much larger. For example, adopting the line ratios from the 0.2$\,Z_{\odot}$ model of Anders & Fritze-v. Alvensleben (2003) would imply an EW of $\gtrsim$2100Å$\,\,$for the \[OIII\]+H$\beta$ lines. Some correction is required to the median \[3.6\]$-$\[4.5\] color measurement to account for the fact that the $z>7$ sources from the CANDELS EGS field were explicitly selected because of their red IRAC colors. If we assume that the intrinsic \[3.6\]$-$\[4.5\] color for sources over the CANDELS EGS field is $\sim$0.6 mag (which is the value we find from the 3 candidates over the other CANDELS fields: see Table \[tab:brightz8\]) and the noise + scatter is $\sim$0.4 mag (the value from the other fields), we compute a bias of 0.24 mag from a simple Monte-Carlo simulation. Accounting for such biases reduces the median \[3.6\]-\[4.5\] color of the population by 0.24 mag, which implies a median EW of $\sim$800Å$\,$(ignoring a possible nebular contribution to the \[3.6\] band) or $\sim$1500Å$\,$(accounting for it). Volume Density of Bright $z\sim8$ Galaxies ------------------------------------------ Here we use the search results from the previous section to set a constraint on the bright end of the $z\sim8$ LF. We begin this section by calculating the total selection volume in which we would expect to find bright $z\geq7$ galaxies with our selection criteria. We will estimate the selection volumes in a similar way to the methodology used by Bouwens et al. (2015) in deriving the LFs from the full CANDELS program.
In short, we create mock catalogs over each search field, with sources distributed over a range in both redshift ($z\sim6$-10) and apparent magnitude ($H_{160,AB}=24$ to 26). We then take the two-dimensional $i_{775}$-band images of similar-luminosity, randomly selected $z\sim4$ galaxies from the HUDF (Bouwens et al. 2007, 2011, 2015) and create mock images of the sources at higher redshift using the two-dimensional pixel-by-pixel profiles of the $z\sim4$ galaxies as a guide (see Bouwens et al. 1998, 2003), adopting random orientations relative to their orientation in the HUDF and scaling their physical sizes as $(1+z)^{-1.2}$, which is the approximate relationship that has been found comparing the mean size of galaxies at fixed luminosity as a function of redshift (Oesch et al. 2010; Grazian et al. 2012; Ono et al. 2013; Holwerda et al. 2015; Kawamata et al. 2015; Shibuya et al. 2015). Individual sources were assigned $UV$ colors based on their $UV$ luminosity, using the $\beta$-$M_{UV}$ relationship derived by Bouwens et al. (2014) and allowing for an intrinsic scatter $\sigma_{\beta}$ of 0.35 at high luminosities ($M_{UV,AB}=[-22,-20]$), as found by Bouwens et al. (2009, 2012), and systematically decreasing to 0.15 at lower luminosities, as found by Rogers et al. (2014). In addition to the HST images we created for individual sources, we also constructed simulated ground-based and Spitzer/IRAC images for these sources, which we added to the real ground-based + Spitzer/IRAC data. These simulated images were generated by taking the mock $H_{160}$-band images we constructed for individual sources and convolving them with the $H_{160}$-to-IRAC and $H_{160}$-to-ground kernels that <span style="font-variant:small-caps;">Mophongo</span> (Labb[é]{} et al. 2010) derived from the observations.
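The per-source assignments described above can be sketched as follows; the linear $\beta$-$M_{UV}$ coefficients and the faintward taper of the scatter below are illustrative placeholders, not the published Bouwens et al. (2014) and Rogers et al. (2014) fits, and only the scatter endpoints (0.35 and 0.15) and the $(1+z)^{-1.2}$ size scaling follow the text:

```python
import random

random.seed(2)

def mock_source(m_uv, z, r_half_z4):
    """Assign a UV slope beta and a half-light radius to a mock galaxy
    cloned from a z~4 template with half-light radius r_half_z4."""
    mean_beta = -1.9 - 0.1 * (m_uv + 19.5)  # placeholder linear relation
    # intrinsic scatter: 0.35 at high luminosity, tapering (placeholder
    # slope) toward 0.15 at lower luminosities
    sigma = 0.35 if m_uv <= -20.0 else max(0.15, 0.35 - 0.1 * (m_uv + 20.0))
    beta = random.gauss(mean_beta, sigma)
    # physical sizes at fixed luminosity scale as (1+z)^-1.2, relative to z=4
    r_half = r_half_z4 * ((1.0 + z) / 5.0) ** -1.2
    return beta, r_half

beta, r_half = mock_source(m_uv=-21.5, z=8.0, r_half_z4=1.0)
```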
In producing simulated IRAC images for the mock sources, we assume a rest-frame EW of 300Å$\,$for H$\alpha$+\[NII\] emission and 500Å$\,$for \[OIII\]+H$\beta$ emission over the entire range $z=4$-9, a flat rest-frame optical color, and an $H_{160}$-optical continuum color of 0.2-0.3 mag, to match the observational results of Shim et al. (2011), Stark et al. (2013), Gonz[á]{}lez et al. (2012, 2014), Labb[é]{} et al. (2013), Smit et al. (2014, 2015), and Oesch et al. (2013). We took the simulated images we created for individual sources and added them to the real HST, ground-based, and Spitzer/IRAC observations. These simulated images were, in turn, used to construct catalogs, and our selection criteria were applied to the derived catalogs in exactly the same way as we applied these criteria to the real observations (including the exclusion of sources which violated our confusion criteria). Summing the results over all five CANDELS fields, we compute a total selection volume of 1.6$\times$10$^{6}$ Mpc$^{3}$ per 1-mag interval for galaxies with $H_{160,AB}$ magnitudes brightward of 25.5. If we assume that the present selection of $z\sim8$ galaxies is complete, this would imply a volume density of $<$1.4$\times$10$^{-6}$ Mpc$^{-3}$ mag$^{-1}$ and $3.8_{-2.1}^{+3.7}$$\times$10$^{-6}$ Mpc$^{-3}$ mag$^{-1}$ for $H_{160,AB}\sim24.5$-25.0 and $H_{160,AB}\sim25.0$-25.5 galaxies, respectively. We ignore the contribution of the $z\sim8.6$ candidate galaxy EGSY-2008532660, given the evidence that it may be slightly magnified (§3.4). However, we cannot assume that the present selection of bright $z\sim8$ galaxies is complete, since not every $z\sim8$ galaxy exhibits such a red $[3.6]-[4.5]$ color. In the previous section, we found that only 5 out of the 6 bright ($H_{160,AB}<26$), secure $z\gtrsim7$ sources within CANDELS showed such red colors.
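The quoted densities follow from small-number Poisson statistics. A sketch, assuming three unlensed candidates in the $H_{160,AB}=25.0$-25.5 bin, the 1.6$\times$10$^{6}$ Mpc$^{3}$ mag$^{-1}$ selection volume, and the Gehrels (1986) approximations to the 1$\sigma$ Poisson bounds (the paper does not state its exact error prescription, and its completeness-corrected values additionally fold in the completeness uncertainty):

```python
import math

def poisson_1sigma(n):
    """Approximate 84%-confidence Poisson bounds on n counts (Gehrels 1986)."""
    upper = n + math.sqrt(n + 0.75) + 1.0
    lower = n * (1.0 - 1.0 / (9.0 * n) - 1.0 / (3.0 * math.sqrt(n))) ** 3 if n > 0 else 0.0
    return lower, upper

vol_bin = 1.6e6 * 0.5         # Mpc^3 x mag: per-mag volume times the 0.5-mag bin
n = 3                         # assumed unlensed candidates at H_160 = 25.0-25.5
lo, hi = poisson_1sigma(n)
density = n / vol_bin         # ~3.8e-6 Mpc^-3 mag^-1
err_lo = (n - lo) / vol_bin   # ~2.0e-6
err_hi = (hi - n) / vol_bin   # ~3.7e-6
corrected = density / 0.83    # completeness-corrected, ~4.5e-6
```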
Correcting the volume densities given in the previous paragraph to account for this empirically derived completeness (0.83$_{-0.16}^{+0.11}$), we estimate volume densities of $<$1.7$\times$10$^{-6}$ Mpc$^{-3}$ mag$^{-1}$ and $4.7_{-2.7}^{+4.6}$$\times$10$^{-6}$ Mpc$^{-3}$ mag$^{-1}$ for $H_{160,AB}\sim24.5$-25.0 and $H_{160,AB}\sim25.0$-25.5 galaxies, respectively. The uncertainty in the completeness estimate is included in the error we quote for the volume density. It is interesting to compare these constraints on the volume density of bright $z\sim7.1$-8.5 galaxies with other recent constraints available on the bright end of the $UV$ LFs at $z\sim4$, $z\sim6$, $z\sim8$, and $z\sim10$ from state-of-the-art studies (e.g., Bouwens et al. 2015). The results are shown in Figure \[fig:lf8\], and it is clear that our result lies somewhere midway between the $z\sim7$ and $z\sim8$ LFs, as one might expect given the redshift distribution of the sources that make up our $z=7.1$-8.5 sample. Summary ======= In this paper, we take advantage of the deep Spitzer/IRAC observations available over all five CANDELS fields in conjunction with the HST+ground-based data to conduct a search over 900 arcmin$^2$ to find bright $z\sim8$ galaxies. To identify galaxies at such high redshifts, we select those galaxies with especially red Spitzer/IRAC \[3.6\]$-$\[4.5\] colors (i.e., $>$0.5), in the hopes of identifying those $z\gtrsim7$ galaxies which show the presence of a strong \[OIII\]+H$\beta$ line in the 4.5$\mu$m band. Such a selection is useful for the CANDELS program, given the lack of uniformly deep $Y$-band observations over all five fields. Our selection yielded 4 $z\geq7$ candidates brighter than an $H_{160,AB}$ magnitude of 25.5.
Each of these four selected candidates was required to be undetected ($<$2.5$\sigma$) at optical wavelengths ($<$1$\mu$m), as defined by the inverse-variance-weighted mean flux measurement, to be undetected in the $V_{606}$-band ($<$1.5$\sigma$), and to show an $I_{814}-J_{125}$ color redward of 1.5. Fortuitously, 3 of our 4 selected $z\geq7$ candidates had deep $Y$-band observations available from either deep ground-based observations or from the new z9-CANDELS follow-up program (Bouwens 2014; Bouwens et al. 2016) with HST. The available $Y$-band observations provide clear confirmation of the $z\geq 7$ redshifts we estimate for three of the four candidates found in our search. The redshift estimates we obtain for three of our selected candidates lie significantly above $z\sim7$, with EGS-zs8-2 having a redshift estimate of 7.6$\pm$0.3, EGS-zs8-1 having a redshift estimate of 7.9$\pm$0.4, and EGSY-2008532660 having a redshift estimate of 8.6$_{-0.4}^{+0.2}$. We also obtained spectroscopic observations of two of our candidate $z>7$ galaxies in the near-IR and find probable Ly$\alpha$ lines in their spectra consistent with redshifts of 7.4770$\pm$0.0008 and 7.7302$\pm$0.0006. The detections of Ly$\alpha$ emission for these candidates are significant at the 4.7$\sigma$ and 6.1$\sigma$ levels, respectively. The second of these sources was featured in Oesch et al. (2015b). Remarkably enough, a third candidate from our list was spectroscopically confirmed to lie at $z=8.683$ by Zitrin et al. (2015). These sources represent the brightest $z\geq7.5$ candidates we identified over the entire CANDELS program and are 0.5 mag brighter than $z\geq7.5$ candidates identified anywhere else on the sky. Coincidentally enough, they all lie in the same CANDELS field, again suggesting large field-to-field variations for the brightest $z\geq7$ galaxies. See also the discussion in Bouwens et al. (2015) and Bowler et al. (2015).
Using these candidates, we estimate the volume density of bright ($H_{160,AB}<25.5$) $z\geq7$ galaxies in the early Universe based on our selected sample, estimating that 17$_{-11}^{+16}$% of $z>7$ galaxies do not show such red colors. The volume density estimate we derive lies midway between the volume density of luminous $z\sim7$ galaxies Bouwens et al. (2015) derive and the volume density of luminous $z\sim8$ galaxies. The median \[3.6\]$-$\[4.5\] color for our selection and other bright $z\gtrsim7.5$ galaxies from the literature is 0.82$_{-0.20}^{+0.08}$ mag (observed) and 0.58$_{-0.20}^{+0.08}$ mag (correcting for the approximate selection bias: see §6.1). This strongly points to the existence of extremely high-EW nebular emission lines in typical star-forming galaxies at $z>7$. Assuming no contribution from nebular line emission to the \[3.6\] band implies an \[OIII\]+H$\beta$ EW of $\sim$800Å. However, allowing for contamination of the \[3.6\] band in accordance with the expectations of Anders & Fritze-v. Alvensleben (2003) would imply a median EW of $\sim$1500Å. These results are in reasonable agreement with, though perhaps slightly higher than, what Smit et al. (2015) estimate for the IRAC-blue sources they selected at $z=6.6$-6.9; Smit et al. (2015) estimate a typical \[OIII\]+H$\beta$ EW of 1085Å$\,$for their selected sources. These estimates are similar to, albeit slightly higher than, those estimated by Labb[é]{} et al. (2013), Laporte et al. (2014, 2015), and Huang et al. (2016). In the near future, we would expect the brightest $z\sim8$ galaxies to be identified within the $\sim$1 deg$^2$ wide-area UltraVISTA field (McCracken et al. 2012) by combining the progressively deeper $YJHK_s$ observations with constraints from the optical Subaru+CFHT observations and Spitzer/IRAC observations from SPLASH (Capak et al. 2013) and SMUVS (Caputi et al. 2014).
Another significant source of bright $z\sim8$ candidates will be the new BoRG$_{[z910]}$ program (Trenti 2014), which uses a huge allotment of 500 orbits to cover a 500 arcmin$^2$ area to $\gtrsim26.5$ mag depth ($5\sigma$). We thank Robert Barone-Nugent, Daniel Schaerer and Dan Stark for valuable conversations. This work has benefited significantly from the public reductions of the SEDS program and hence the efforts of Matt Ashby, Giovanni Fazio, Steve Willner, and Jiasheng Huang. We are grateful to Dan Stark, Sirio Belli, and Richard Ellis for communicating with us with some unpublished results they also obtained on EGS-zs8-2 (April 2015) where they also find a $>$3$\sigma$ line (putatively Ly$\alpha$) at 1.031$\mu$m (to appear in D. Stark et al. 2016, in prep). We acknowledge the support of NASA grant NAG5-7697, NASA grant HST-GO-11563, and a NWO vrij competitie grant 600.065.140.11N211. Anders, P., & Fritze-v. Alvensleben, U. 2003, , 401, 1063 Ashby, M. L. N., Willner, S. P., Fazio, G. G., et al. 2013, , 769, 80 Atek, H., Richard, J., Jauzac, M., et al. 2015, , 814, 69 Barone-Nugent, R. L., Wyithe, J. S. B., Trenti, M., et al. 2015, , 450, 1224 Bertin, E., & Arnouts, S. 1996, , 117, 393 Bouwens, R., Broadhurst, T. and Silk, J. 1998, , 506, 557 Bouwens, R. J., Illingworth, G. D., Rosati, P., et al. 2003, , 595, 589 Bouwens, R. J., Illingworth, G. D., Franx, M., & Ford, H. 2007, , 670, 928 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2011, , 737, 90 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2012, , 754, 83 Bouwens, R. 2014, HST Proposal, 13792 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015, , 803, 34 Bouwens, R. J., Oesch, P. A., Labbe, I., et al. 2016, submitted, arXiv:1506.01035 Bowler, R. A. A., Dunlop, J. S., McLure, R. J., et al. 2014, , 440, 2810 Bowler, R. A. A., Dunlop, J. S., McLure, R. J., et al. 2015, , 452, 1817 Burgasser, A. J., McElwain, M. W., Kirkpatrick, J. D., et al. 2004, , 127, 2856 Bradley, L. 
D., Bouwens, R. J., Ford, H. C., et al. 2008, , 678, 647 Bradley, L. D., Trenti, M., Oesch, P. A., et al. 2012, , 760, 108 Bradley, L. D., Zitrin, A., Coe, D., et al. 2014, , 792, 76 Brammer, G. B., van Dokkum, P. G., & Coppi, P. 2008, , 686, 1503 Capak, P., Aussel, H., Ajiki, M., et al. 2007, , 172, 99 Capak, P., Aussel, H., Bundy, K., et al. 2013, Spitzer Proposal, 10042 Caputi, K., Ashby, M., Fazio, G., et al. 2014, Spitzer Proposal, 11016 de Barros, S., Schaerer, D., & Stark, D. P. 2014, , 563, A81 Ellis, R. S., McLure, R. J., Dunlop, J. S., et al. 2013, , 763, LL7 Fialkov, A., & Loeb, A. 2015, , 806, 256 Finkelstein, S. L., Papovich, C., Dickinson, M., et al. 2013, , 502, 524 Finkelstein, S. L., Ryan, R. E., Jr., Papovich, C., et al. 2014, ArXiv e-prints Furusawa, H., Kosugi, G., Akiyama, M., et al. 2008, , 176, 1 Fontana, A., Dunlop, J. S., Paris, D., et al. 2014, , 570, AA11 Galametz, A., Grazian, A., Fontana, A., et al. 2013, , 206, 10 Gonz[á]{}lez, V., Bouwens, R. J., Labb[é]{}, I., et al. 2012, , 755, 148 Gonz[á]{}lez, V., Bouwens, R., Illingworth, G., et al. 2014, , 781, 34 Grazian, A., Castellano, M., Fontana, A., et al. 2012, , 547, A51 Grogin, N. A., Kocevski, D. D., Faber, S. M., et al. 2011, , 197, 35 Guo, Y., Ferguson, H. C., Giavalisco, M., et al. 2013, , 207, 24 Holwerda, B. W., Bouwens, R., Oesch, P., et al. 2015, , 808, 6 Huang, K.-H., Brada[v c]{}, M., Lemaux, B. C., et al. 2016, , 817, 11 Ishigaki, M., Kawamata, R., Ouchi, M., et al. 2015, , 799, 12 Jiang, L., Egami, E., Mechtley, M., et al. 2013, , 772, 99 Kawamata, R., Ishigaki, M., Shimasaku, K., Oguri, M., & Ouchi, M. 2015, , 804, 103 Kirkpatrick, J. D., Cushing, M. C., Gelino, C. R., et al. 2011, , 197, 19 Koekemoer, A. M., Faber, S. M., Ferguson, H. C., et al. 2011, , 197, 36 Kotulla, R., Fritze, U., Weilbacher, P., & Anders, P. 2009, , 396, 462 Kriek, M., Shapley, A. E., Reddy, N. A., et al. 2015, , 218, 15 Kron, R. G. 1980, , 43, 305 Labb[é]{}, I., et al. 
2010a, , 708, L26 Labb[é]{}, I., et al. 2010b, , 716, L103 Labb[é]{}, I., Oesch, P. A., Bouwens, R. J., et al. 2013, , 777, LL19 Labb[é]{}, I., Oesch, P. A., Illingworth, G. D., et al. 2015, , 221, 23 Laporte, N., Streblyanska, A., Clement, B., et al. 2014, , 562, L8 Laporte, N., Streblyanska, A., Kim, S., et al. 2015, , 575, A92 Lorenzoni, S., Bunker, A. J., Wilkins, S. M., et al. 2013, , 429, 150 Mason, C. A., Treu, T., Schmidt, K. B., et al. 2015, , 805, 79 McLean, I. S., Steidel, C. C., Epps, H. W., et al. 2012, , 8446, 84460J McCracken, H. J., Milvang-Jensen, B., Dunlop, J., et al. 2012, , 544, A156 McLeod, D. J., McLure, R. J., Dunlop, J. S., et al. 2015, , 450, 3032 McLure, R. J., Dunlop, J. S., Bowler, R. A. A., et al. 2013, , 432, 2696 Oesch, P.A., et al. 2010a, , 709, L16 Oesch, P. A., Bouwens, R. J., Illingworth, G. D., et al. 2012, , 759, 135 Oesch, P. A., Labb[é]{}, I., Bouwens, R. J., et al. 2013, , 772, 136 Oesch, P. A., Bouwens, R. J., Illingworth, G. D., et al. 2014, , 786, 108 Oesch, P. A., Bouwens, R. J., Illingworth, G. D., et al. 2015a, , 808, 104 Oesch, P. A., van Dokkum, P. G., Illingworth, G. D., et al. 2015b, , 804, L30 Oke, J. B., & Gunn, J. E. 1983, , 266, 713 Ono, Y., Ouchi, M., Mobasher, B., et al. 2012, , 744, 83 Ono, Y., Ouchi, M., Curtis-Lake, E., et al. 2013, , 777, 155 Pentericci, L., Fontana, A., Vanzella, E., et al. 2011, , 743, 132 Rogers, A. B., McLure, R. J., Dunlop, J. S., et al. 2014, , 440, 3714 Schaerer, D., & de Barros, S. 2009, , 502, 423 Schenker, M. A., Stark, D. P., Ellis, R. S., et al. 2012, , 744, 179 Schenker, M. A., Robertson, B. E., Ellis, R. S., et al. 2013, , 768, 196 Schmidt, K. B., Treu, T., Trenti, M., et al. 2014, , 786, 57 Shibuya, T., Kashikawa, N., Ota, K., et al. 2012, , 752, 114 Shibuya, T., Ouchi, M., & Harikane, Y. 2015, , 219, 15 Shim, H., Chary, R.-R., Dickinson, M., et al. 2011, , 738, 69 Skelton, R. E., Whitaker, K. E., Momcheva, I. G., et al. 2014, , 214, 24 Smit, R., Bouwens, R. 
J., Labb[é]{}, I., et al. 2014, , 784, 58 Smit, R., Bouwens, R. J., Franx, M., et al. 2015, , 801, 122 Stark, D. P., Schenker, M. A., Ellis, R., et al. 2013, , 763, 129 Szalay, A. S., Connolly, A. J., & Szokoly, G. P. 1999, , 117, 68 Tilvi, V., Papovich, C., Tran, K.-V. H., et al. 2013, , 768, 56 Trenti, M., Bradley, L. D., Stiavelli, M., et al. 2011, , 727, L39 Treu, T., Schmidt, K. B., Trenti, M., Bradley, L. D., & Stiavelli, M. 2013, , 775, LL29 Trenti, M. 2014, HST Proposal, 13767 Vanzella, E., Pentericci, L., Fontana, A., et al. 2011, , 730, L35 Wilkins, S. M., Coulton, W., Caruana, J., et al. 2013, , 435, 2885 Wilkins, S. M., Stanway, E. R., & Bremer, M. N. 2014, , 439, 1038 Windhorst, R. A., Cohen, S. H., Hathi, N. P., et al. 2011, , 193, 27 Wyithe, J. S. B., Yan, H., Windhorst, R. A., & Mao, S. 2011, , 469, 181 Yan, H., Yan, L., Zamojski, M. A., et al. 2011, , 728, LL22 Yan, H., Finkelstein, S. L., Huang, K.-H., et al. 2012, , 761, 177 Zheng, W., Postman, M., Zitrin, A., et al. 2012, , 489, 406 (Z12) Zheng, W., Shu, X., Moustakas, J., et al. 2014, , 795, 93 Zitrin, A., Zheng, W., Broadhurst, T., et al. 2014, , 793, L12 A. Other Candidate $z\geq7$ Galaxies ==================================== In addition to applying our criteria to the catalogs Bouwens et al. (2015) compiled over a 750 arcmin$^2$ search area within CANDELS, we also made use of the catalogs from the 3D-HST team (Skelton et al. 2014) over the same region. Our rationale to do so was to maximize the completeness of our selection for bright $z\geq7$ galaxies. One additional $z\geq7$ galaxy candidate was found that did not make it into our fiducial selection using the Bouwens et al. (2015) catalogs (because it had a measured $[3.6]-[4.5]$ color of $\sim$0.2 mag).[^8] We tabulate its coordinates, $H_{160,AB}$-band magnitude, $[3.6]-[4.5]$ color, and estimated redshift in Table \[tab:table\_details2\]. Postage-stamp images of the $z\sim8$ candidate are presented in Figure \[fig:postage\_extra\].
Model fits to the photometry Skelton et al. (2014) provide for the source, as well as the inferred redshift likelihood distribution, are also presented in Figure \[fig:sed\_extra\]. [ccccccc]{} EGSY-9597563148 & 14:19:59.76 & 52:56:31.40 & 25.03$\pm$0.10 & 0.53$\pm$0.26 & 8.19$_{-0.87}^{+0.23}$ & \[1\]\ $~~~$component-a & 14:19:59.78 & 52:56:31.30 & 25.73$\pm$0.14\ $~~~$component-b & 14:19:59.73 & 52:56:31.70 & 25.83$\pm$0.13 B. Sources Used to Validate our Proposed $[3.6]-[4.5]>0.5$ Selection ==================================================================== In §3.2, we considered a selection of sources from the four CANDELS fields with deep $Y$-band observations to test the idea that we could use an IRAC color criterion, i.e., $[3.6]-[4.5]>0.5$, combined with an optical dropout criterion to identify galaxies at $z>7$ even in the absence of $Y$-band data. In Table \[tab:valid\_samp\], we provide a compilation of the 15 sources that we identified which satisfied the primary selection criteria from the paper but which are brighter than 26.7 mag in the $H_{160}$-band (and brighter than 26.5 mag over the CANDELS UDS and COSMOS fields).
[cccccc]{} GNDY-6487514332 & 12:36:48.752 & 62:14:33.29 & 26.4 & $0.6_{-0.6}^{+0.7}$ & 7.66\ GNDY-7048017191 & 12:37:04.805 & 62:17:19.14 & 26.2 & $1.1_{-0.2}^{+0.3}$ & 7.84\ GNWY-7379420231 & 12:37:37.941 & 62:20:23.14 & 26.5 & $0.5_{-0.5}^{+1.5}$ & 8.29\ GNWZ-7455218088 & 12:37:45.529 & 62:18:08.87 & 26.5 & $0.7_{-0.2}^{+0.2}$ & 7.16\ GSDZ-2468850074 & 03:32:46.889 & $-$27:50:07.45 & 26.0 & $1.1_{-0.1}^{+0.1}$ & 7.24\ GSWY-2249353259 & 03:32:24.934 & $-$27:53:25.94 & 26.1 & $0.6_{-0.3}^{+0.2}$ & 8.11\ GSDY-2209651370 & 03:32:20.964 & $-$27:51:37.02 & 26.3 & $1.0_{-0.8}^{+1.8}$ & 7.84\ COSY-0439027359 & 10:00:43.90 & 2:27:35.9 & 26.6 & $0.7_{-0.2}^{+0.2}$ & 7.33\ COSZ-0237620370 & 10:00:23.76 & 2:20:37.0 & 25.1 & $1.0_{-0.1}^{+0.2}$ & 7.14\ COSY-0235624462 & 10:00:23.56 & 2:24:46.2 & 25.7 & $0.9_{-0.1}^{+0.1}$ & 7.84\ UDSY-4133353345 & 02:17:41.333 & $-$5:15:33.45 & 25.8 & $0.5_{-0.2}^{+0.2}$ & 7.41\ UDSY-4308785165 & 02:17:43.087 & $-$5:08:51.65 & 26.3 & $0.7_{-0.5}^{+0.8}$ & 7.84\ UDSZ-4199355469 & 02:17:41.993 & $-$5:15:54.69 & 26.5 & $1.8_{-0.6}^{+2.6}$ & 7.08\ UDSY-1765825082 & 02:17:17.658 & $-$5:12:50.82 & 26.3 & $1.1_{-0.1}^{+0.1}$ & 7.93\ UDSY-5428621201 & 02:16:54.286 & $-$5:12:12.01 & 26.1 & $1.0_{-0.7}^{+1.1}$ & 7.49\ [^1]: In principle, the wide-area ($\sim$1 deg$^2$) UDS and UltraVISTA programs have great potential to find large numbers of bright $z\gtrsim6$ sources as demonstrated by the recent Bowler et al. (2014) results (see also Bowler et al. 2015), but may not yet probe deep enough to sample the $z\gtrsim8$ galaxy population. [^2]: Bouwens et al. (2015) only considered those regions in CANDELS where deep optical and near-IR observations are available from the CANDELS observations. [^3]: We remark that any contaminants in a particularly bright selection would be generally easy to identify given the depth of the HST, Spitzer, and supporting ground-based observations. 
[^4]: The purpose of the z9-CANDELS program was to determine the nature of high-probability but uncertain candidate $z\sim9$-10 galaxies over the CANDELS-UDS, COSMOS, and EGS fields. In some cases, bright candidate $z\sim8$ galaxies were located near bright $z\sim9$-10 candidates and could be readily observed in the same pointings. [^5]: The flux uncertainties that we derive for this candidate and EGS-zs8-2 are almost an order of magnitude larger than found in observations of similar $z>7$ galaxies (e.g., Finkelstein et al. 2013). This is in part due to the significantly poorer seeing conditions to which we were subject for the observations (1.00$''$ FWHM instead of 0.65$''$ for Finkelstein et al. 2013). Another potentially significant contributing factor is our relatively conservative accounting of the uncertainties in the line flux measurements, including uncertainties that arise from the sky subtraction. The uncertainties we derive are consistent with typical values reported by the MOSDEF program (Kriek et al. 2015). [^6]: Interestingly enough, D. Stark et al. (2016, in prep) have also spectroscopically confirmed that the fourth source (COSY-0237620370) from our sample lies at $z=7.15$. As such, Ly$\alpha$ emission has been found in all 4 galaxies that make up our selection. Our entire sample has therefore been spectroscopically confirmed to lie in the redshift range $z=7.1$-9.1, with the spectroscopic redshifts being in excellent agreement with our derived photometric redshifts. [^7]: There is only one source from our combined $z\sim8$ selection with Bouwens et al. (2015) which would be potentially easier to select as a $z\sim8$ galaxy over the CANDELS EGS field. It is presented in Appendix A. Its redshift is not well constrained (lying anywhere between $z\sim 7.1$ and 8.5), but it would be marginally easier to find over the CANDELS EGS field since the Bouwens et al. (2015) $z\sim8$ sample extends down to $z\sim7$ over that field while the Bouwens et al.
(2015) $z\sim8$ samples over the other fields only extend down to $z\sim7.3$. [^8]: While such differences might seem to be a concern, the \[3.6\]$-$\[4.5\] colors we measure for the 4 other sources in our selection agree to $<$0.1 mag with the Skelton et al. (2014) values ($\sim$0.05 mag differences are typical).
--- abstract: 'In this note we classify invariant star products with quantum momentum maps on symplectic manifolds by means of an equivariant characteristic class taking values in the equivariant cohomology. We establish a bijection between the equivalence classes and the formal series in the second equivariant cohomology, thereby giving a refined classification which takes into account the quantum momentum map as well.' author: - '**Thorsten Reichert**[^1], **Stefan Waldmann**[^2]\' date: July 2015 title: Classification of Equivariant Star Products on Symplectic Manifolds --- Introduction {#sec:Introduction} ============ The classification of formal star products [@bayen.et.al:1978a] up to equivalence is well-understood, both for the symplectic and the Poisson case, see e.g. the textbook [@waldmann:2007a] for more details on deformation quantization. While the general classification in the Poisson case is a by-product of the formality theorem of Kontsevich [@kontsevich:2003a; @kontsevich:1997:pre], the symplectic case can be obtained more easily by various different methods [@nest.tsygan:1995a; @bertelson.cahen.gutt:1997a; @deligne:1995a; @gutt.rawnsley:1999a; @weinstein.xu:1998a; @fedosov:1996a]. The result is that in the symplectic case there is an intrinsically defined *characteristic class* $$c(\star) \in \frac{[\omega]}{{\nu}} + \HdR^2(M, \mathbb{C}){\llbracket {\nu}\rrbracket}$$ for every star product $\star$; the class is a formal series in the second de Rham cohomology. By convention, one places the symplectic form as reference point in order $\nu^{-1}$. Then $\star$ and $\star'$ are equivalent iff $c(\star) = c(\star')$.
Moreover, if we denote by ${\mathrm{Def}}(M, \omega)$ the set of equivalence classes of star products quantizing $(M, \omega)$, the characteristic class induces a bijection $$\label{eq:cBijection} c\colon {\mathrm{Def}}(M, \omega) \ni [\star] \; \mapsto \; c(\star) \in \frac{[\omega]}{{\nu}} + \HdR^2(M, \mathbb{C}){\llbracket {\nu}\rrbracket}.$$ If one has in addition a symmetry in the form of a group action of a Lie group by symplectic or Poisson diffeomorphisms, one is interested in invariant star products. Again, the classification is known both in the symplectic and in the Poisson case, at least under certain assumptions on the action. The general Poisson case makes use of the equivariant formality theorem of Dolgushev [@dolgushev:2005a], which can be obtained whenever there is an invariant connection on the manifold. Such an invariant connection exists e.g. if the group action is proper, but also in far more general situations. In the easier symplectic situation one can make use of Fedosov’s construction of a star product [@fedosov:1994a] and obtain the *invariant characteristic class* $c{^{\mathrm{inv}}}$, now establishing a bijection $$\label{eq:InvariantCharClass} c{^{\mathrm{inv}}}\colon {\mathrm{Def}}{^{\mathrm{inv}}}(M, \omega) \ni [\star] \; \mapsto \; c{^{\mathrm{inv}}}(\star) \in \frac{[\omega]}{{\nu}} + \HdR^{\mathrm{inv,2}}(M, \mathbb{C}){\llbracket {\nu}\rrbracket},$$ such that $\star$ and $\star'$ are invariantly equivalent iff $c{^{\mathrm{inv}}}(\star) = c{^{\mathrm{inv}}}(\star')$, see [@bertelson.bieliavsky.gutt:1998a]. Here one uses the more refined notion of invariant equivalence of invariant star products, where the equivalence $S = \id + \sum_{r=1}^\infty {\nu}^r S_r$ with $S(f \star g) = Sf \star' Sg$ for $f, g \in \Cinfty(M){\llbracket {\nu}\rrbracket}$ is now required to be *invariant*. Moreover, $\HdR^{\mathrm{inv,2}}(M, \mathbb{C})$ denotes the invariant second de Rham cohomology, i.e.
invariant closed two-forms modulo differentials of invariant one-forms. Again, one needs an invariant connection for this to work. In symplectic and Poisson geometry, the presence of a symmetry group is typically not enough: one wants the fundamental vector fields of the action to be Hamiltonian by means of an $\Ad^*$-equivariant momentum map $J\colon M \longrightarrow \lie{g}^*$, where $\lie{g}$ is the Lie algebra of the group $G$ acting on $M$. The notion of a momentum map has been transferred to deformation quantization in various flavours, see e.g. the early work [@arnal.cortet.molin.pinczon:1983a]. The resulting general notion of a quantum momentum map is due to [@xu:1998a] but was already used in examples in e.g. [@bordemann.brischle.emmrich.waldmann:1996a]. We will follow essentially the conventions from [@mueller-bahns.neumaier:2004b; @mueller-bahns.neumaier:2004a], see also [@gutt.rawnsley:2003a; @hamachi:2002a]: a quantum momentum map is a formal series ${\mathbf{J}} \in C^1\left(\lie{g}, \Cinfty(M)\right){\llbracket {\nu}\rrbracket}$ such that for all $\xi \in \lie{g}$ the function ${\mathbf{J}}(\xi)$ generates the fundamental vector field by $\star$-commutators and such that one has the equivariance condition that $[{\mathbf{J}}(\xi), {\mathbf{J}}(\eta)]_\star = {\nu}{\mathbf{J}}([\xi, \eta])$ for $\xi, \eta \in \lie{g}$. Here the zeroth order is necessarily an equivariant momentum map in the classical sense. We assume the zeroth order to be fixed once and for all. The classification we are interested in is now for the pair of a $G$-invariant star product $\star$ and a corresponding quantum momentum map ${\mathbf{J}}$ with respect to the equivalence relation determined by *equivariant equivalence*: we say $(\star, {\mathbf{J}})$ and $(\star', {\mathbf{J}}')$ are equivariantly equivalent if there is a $G$-invariant equivalence transformation $S$ relating $\star$ to $\star'$ as before and $S{\mathbf{J}} = {\mathbf{J}}'$. 
In most interesting cases, the group $G$ is connected and hence invariance is equivalent to infinitesimal invariance under the Lie algebra action by fundamental vector fields. Thus it is reasonable to assume a Lie algebra action from the beginning, whether it actually comes from a corresponding Lie group action or not. The set of equivalence classes for this refined notion of equivalence will then be denoted by ${\mathrm{Def}}{_\lie{g}}(M,\omega)$. The main result of this work is then the following: \[theorem:MainTheorem\] Let $(M, \omega)$ be a connected symplectic manifold with a strongly Hamiltonian action $\lie{g} \ni \xi \mapsto X_\xi \in \Secinfty(TM)$ by a real finite-dimensional Lie algebra $\lie{g}$. Suppose there exists a $\lie{g}$-invariant connection. Then there exists a characteristic class $$\label{eq:TheClassIsCool} {c{_\lie{g}}}\colon {\mathrm{Def}}{_\lie{g}}(M,\omega) \longrightarrow \frac{[\omega - J_0]}{\nu} + {\mathrm{H}{_\lie{g}}}^2(M){\llbracket {\nu}\rrbracket}$$ establishing a bijection to the formal series in the second equivariant cohomology. Under the canonical map ${\mathrm{H}{_\lie{g}}}^2(M) \longrightarrow \HdR^{\mathrm{inv, 2}}(M)$ the class becomes the invariant characteristic class. The main idea is to base the construction of this class on the Fedosov construction. This is where we need the invariant connection in order to obtain invariant star products and quantum momentum maps. The crucial and new aspect compared to the existence and uniqueness statements obtained earlier is that we have to find an invariant equivalence transformation $S$ for which we can explicitly compute $S{\mathbf{J}}$ in order to compare it to ${\mathbf{J}}'$. Of course, the above theorem only deals with the symplectic situation which is substantially easier than the genuine Poisson case. Here one can expect similar theorems to hold, however, at the moment they seem out of reach. 
The difficulty is, in some sense, to compute the effect of invariant equivalence transformations on quantum momentum maps by means of some chosen equivariant formality, say the one of Dolgushev. On a more conceptual side, this can be seen as part of a much more profound equivariant formality conjecture stated by Nest and Tsygan [@tsygan:2010a; @nest:2013a]. From that point of view, our result supports their conjecture. One of our motivations to search for such a characteristic class comes from the classification result of $G$-invariant star products up to equivariant Morita equivalence [@jansen.neumaier.schaumann.waldmann:2012a], where something reminiscent of the equivariant class showed up in the condition for equivariant Morita equivalence. The paper is organized as follows: in Section \[sec:preliminaries\] we collect some preliminaries on invariant star products and the existence of quantum momentum maps. Section \[sec:Fedosov\] contains a brief reminder on those parts of Fedosov’s construction which we will need in the sequel. It also contains the key lemma to prove our main theorem. In Section \[sec:classification\] we establish a relative class which allows us to determine whether two given pairs of star products and corresponding quantum momentum maps are equivariantly equivalent. In the last Section \[sec:char\] we define the characteristic class and complete the proof of the main theorem. Preliminaries {#sec:preliminaries} ============= Throughout this paper let $(M,\omega)$ denote a connected, symplectic manifold, $\left\{\argument, \argument\right\}$ the corresponding Poisson bracket, $\Formen^\bullet(M)$ the differential forms on $M$, ${Z}^\bullet(M)$ the closed forms, $\lie{g}$ a real finite-dimensional Lie algebra, and $\nabla$ a torsion-free, symplectic connection on $M$.
Then every anti-homomorphism $\lie{g} \longrightarrow {\Secinfty_{\mathrm{sympl}}}(TM)\colon \xi \longmapsto X_\xi$ from $\lie{g}$ into the symplectic vector fields on $M$ gives rise to a representation of $\lie{g}$ on $\Cinfty(M)$ via $\xi \mapsto (f \mapsto - \Lie_{X_\xi} f)$ where $\Lie$ denotes the Lie derivation. For convenience, we will abbreviate $\Lie_{X_\xi}$ to $\Lie_\xi$ and analogously for the insertion $\ins_\xi$. In most cases, the Lie algebra action arises as the infinitesimal action of a Lie group action by a Lie group $G$ acting symplectically on $M$. In the case of a connected Lie group, we can reconstruct the action of $G$ as usual. However, we do not assume to have a Lie group action since in several cases of interest the vector fields $X_\xi$ might not have complete flows. Since the main focus here will be on the interaction between symmetries conveyed by $\lie{g}$ and formal star products on $M$, we shall briefly recall the relevant basic definitions, following notations and conventions from [@neumaier:2001a], see also [@waldmann:2007a Sect. 6.4] for a more detailed introduction. Let $\Cinfty(M){\llbracket {\nu}\rrbracket}$ be the space of formal power series in the formal parameter ${\nu}$ with coefficients in $\Cinfty(M)$. A star product on $(M,\omega)$ is a bilinear map $$\label{eq:StarProduct} \star\colon \Cinfty(M) \times \Cinfty(M) \longrightarrow \Cinfty(M){\llbracket {\nu}\rrbracket}\colon (f,g) \longmapsto f\star g = \sum\limits_{k=0}^\infty {\nu}^k C_k(f,g),$$ such that its ${\nu}$-bilinear extension to $\Cinfty(M){\llbracket {\nu}\rrbracket}\times \Cinfty(M){\llbracket {\nu}\rrbracket}$ is an associative product, $C_0(f,g) = fg$ and $C_1(f,g) - C_1(g,f) = \left\{f,g\right\}$ holds for all $f,g\in\Cinfty(M)$, and $C_k\colon \Cinfty(M) \times \Cinfty(M) \longrightarrow \Cinfty(M)$ is a bidifferential operator vanishing on constants for all $k\geq 1$. 
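To make the order-by-order conditions concrete, the following sketch (our illustration, not from the paper) implements the Weyl–Moyal star product on $(\mathbb{R}^2, {\mathrm{d}}q \wedge {\mathrm{d}}p)$, the local model behind the fiberwise product used later, for polynomials with exact rational arithmetic.

```python
from fractions import Fraction
from math import comb, factorial

# A polynomial in q, p and the formal parameter nu is a dict
# {(a, b, c): coeff} representing  sum coeff * q^a p^b nu^c.

def pmul(f, g):
    """Product of two polynomials."""
    h = {}
    for (a, b, c), x in f.items():
        for (d, e, k), y in g.items():
            key = (a + d, b + e, c + k)
            h[key] = h.get(key, 0) + x * y
    return {m: v for m, v in h.items() if v != 0}

def padd(f, g, s=1):
    """Return f + s*g."""
    h = dict(f)
    for m, x in g.items():
        h[m] = h.get(m, 0) + s * x
    return {m: v for m, v in h.items() if v != 0}

def pdiff(f, i, n=1):
    """n-th partial derivative; i = 0 differentiates in q, i = 1 in p."""
    for _ in range(n):
        h = {}
        for m, x in f.items():
            if m[i] > 0:
                key = m[:i] + (m[i] - 1,) + m[i + 1:]
                h[key] = h.get(key, 0) + x * m[i]
        f = h
    return f

def moyal(f, g, order=8):
    """Weyl-Moyal star product truncated at nu^order:
    f * g = sum_k (nu/2)^k / k! P^k(f, g), where
    P^k(f, g) = sum_j binom(k, j) (-1)^j (d_q^{k-j} d_p^j f)(d_p^{k-j} d_q^j g)."""
    total = {}
    for k in range(order + 1):
        ck = {}
        for j in range(k + 1):
            term = pmul(pdiff(pdiff(f, 0, k - j), 1, j),
                        pdiff(pdiff(g, 1, k - j), 0, j))
            ck = padd(ck, term, comb(k, j) * (-1) ** j)
        coeff = Fraction(1, 2 ** k * factorial(k))
        total = padd(total, {(a, b, c + k): x for (a, b, c), x in ck.items()}, coeff)
    return total
```

For polynomial inputs the truncation is exact once `order` exceeds the polynomial degrees; one can then verify $C_0(f,g) = fg$, $C_1(f,g) - C_1(g,f) = \{f,g\}$ (so $q \star p - p \star q = \nu$), and associativity directly.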
Such a star product is called $\lie{g}$-invariant if the ${\nu}$-linear extension of $\Lie_\xi$ to $\Cinfty(M){\llbracket {\nu}\rrbracket}$ is a derivation of $\star$ for all $\xi\in\lie{g}$. Recall further that a linear map $J_0\colon \lie{g} \longrightarrow \Cinfty(M)$ is called a (classical) Hamiltonian for the action if it satisfies $\Lie_\xi f = - \left\{ J_0(\xi), f \right\}$ for all $\xi \in \lie{g}$ and $f \in \Cinfty(M)$. It is called a (classical) momentum map, if in addition $J_0([\xi,\eta]) = \left\{ J_0(\xi), J_0(\eta) \right\}$ holds for all $\xi,\eta \in \lie{g}$. We can adapt a similar notion for star products on $M$ by replacing the Poisson bracket with the $\star$-commutator, compare also [@xu:1998a]. We will generally adopt the notation $\ad_\star\left( f \right) g = \left[ f, g \right]_\star$ with $\left[\argument, \argument\right]_\star$ being the commutator with respect to the product $\star$. Finally, we will, for any vector space $V$, denote by $C^k(\lie{g}, V)$ the space of $V$-valued, $k$-multilinear, alternating forms on $\lie{g}$. The following definition is by now the standard notion [@xu:1998a; @mueller-bahns.neumaier:2004a]: \[def:prelim\_qham\] Let $\star$ be a $\lie{g}$-invariant star product. A map ${\mathbf{J}} \in C^1\left( \lie{g}, \Cinfty(M) \right){\llbracket {\nu}\rrbracket}$ is called a quantum momentum map if $$\label{eq:ConditionQMM} \Lie_\xi = -{\frac{1}{{\nu}}\ad}_\star\left( {\mathbf{J}}(\xi) \right) \qquad \text{and} \qquad {\mathbf{J}}\left( [\xi, \eta] \right) = \frac{1}{{\nu}} \left[ {\mathbf{J}}(\xi), {\mathbf{J}}(\eta) \right]_\star$$ hold for all $\xi,\eta \in \lie{g}$. If only the first equality is satisfied, we will call ${\mathbf{J}}$ a quantum Hamiltonian. Evaluating the above equations in zeroth order in ${\nu}$ for any quantum Hamiltonian (quantum momentum map) ${\mathbf{J}}$, one can readily observe that $J_0 = {\mathbf{J}}\at{{\nu}= 0}$ is a classical Hamiltonian (momentum map). 
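As a toy illustration of the previous definition (our example, not taken from the paper), consider $(\mathbb{R}^2, {\mathrm{d}}q \wedge {\mathrm{d}}p)$ with the Weyl–Moyal product $f \star g = \sum_{k=0}^\infty \frac{1}{k!} \left(\frac{{\nu}}{2}\right)^k P^k(f,g)$, where $P(f,g) = \partial_q f \, \partial_p g - \partial_p f \, \partial_q g$. Since $P^k(g,f) = (-1)^k P^k(f,g)$, the even orders cancel in the commutator, $$[f,g]_\star = 2 \sum_{k \; \mathrm{odd}} \frac{1}{k!} \left(\frac{{\nu}}{2}\right)^k P^k(f,g).$$ If $J$ is at most quadratic, every term $P^k(J, \argument)$ with $k \geq 3$ contains a third derivative of $J$ and vanishes, so $[J,f]_\star = {\nu}P(J,f) = {\nu}\left\{J,f\right\}$ exactly. Hence, for the rotation action generated by $J_0 = \pm\frac{1}{2}(q^2 + p^2)$ (the sign depending on the conventions fixed above), ${\mathbf{J}} = J_0$ already satisfies both conditions of a quantum momentum map, with no corrections in higher orders of ${\nu}$.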
Conversely, we will say that ${\mathbf{J}}$ deforms the Hamiltonian (momentum map) $J_0$. Having the previous definitions at hand, one can define various flavours of equivalences between star products. Let $\star$ and $\star'$ be star products on $M$. \[item:equivalence\] They are called equivalent if there is a formal series $T = \id + \sum_{k=1}^\infty {\nu}^k T_k$ of differential operators $T_k: \Cinfty(M) \longrightarrow \Cinfty(M)$ such that $$\label{eq:Equivalence} f\star g = T^{-1}\left( T(f) \star' T(g) \right) \qquad \textrm{and} \qquad T(1) = 1$$ for all $f,g\in\Cinfty(M){\llbracket {\nu}\rrbracket}$. In this case $T$ is called an equivalence from $\star$ to $\star'$. \[item:InvariantEquivalence\] If $\star$ and $\star'$ are $\lie{g}$-invariant star products we will call an equivalence transformation $T$ from $\star$ to $\star'$ a $\lie{g}$-invariant equivalence if $\Lie_\xi T = T \Lie_\xi$ holds for all $\xi\in\lie{g}$. \[item:EquivariantEquivalence\] If in addition ${\mathbf{J}}$ and ${\mathbf{J'}}$ are quantum momentum maps of $\star$ and $\star'$ respectively, we will call the pairs $(\star, {\mathbf{J}})$ and $(\star',{\mathbf{J'}})$ equivariantly equivalent if there is a $\lie{g}$-invariant equivalence $T$ from $\star$ to $\star'$ such that $T{\mathbf{J}} = {\mathbf{J'}}$. The first two versions of equivalence [@nest.tsygan:1995a; @fedosov:1996a; @gutt.rawnsley:1999a; @bertelson.cahen.gutt:1997a; @deligne:1995a; @weinstein.xu:1998a] and invariant equivalence [@bertelson.bieliavsky.gutt:1998a] were discussed in the literature already extensively, leading to the well-known classification results. In this work we will deal with the third version. For the concluding classification result, we will need the equivariant cohomology (in the Cartan model) on $M$ with respect to $\lie{g}$, for more details, see e.g. the monograph [@guillemin.sternberg:1999a]. 
Since we are mainly interested in the Lie algebra case, the underlying complex is the complex of equivariant differential forms on $M$, that is $$\label{eq:def_cohomology} \Omega{_\lie{g}}^k(M) = \bigoplus\limits_{2i+j=k} \left(\Sym^i(\lie{g}^*) \tensor \Omega^j(M)\right){^{\mathrm{inv}}},$$ where $\vphantom{\Omega}{^{\mathrm{inv}}}$ denotes the space of $\lie{g}$-invariants with respect to the coadjoint representation on $\Sym^\bullet(\lie{g}^*)$ and $-\Lie_\xi$ on $\Omega^\bullet(M)$. Equivalently, one can view the elements $\alpha \in \Omega{_\lie{g}}^k(M)$ as $\Omega^\bullet(M)$-valued polynomial maps $\lie{g} \longrightarrow \Omega^\bullet(M)$ subject to the equivariance condition $$\label{eq:prelim_equivariance_condition} \alpha\left( [\xi,\eta] \right) = -\Lie_\xi \alpha(\eta) \qquad \textrm{for all } \xi,\eta\in\lie{g}.$$ The differential ${{\mathrm{d}}{_\lie{g}}}\colon \Omega{_\lie{g}}^k(M) \longrightarrow \Omega{_\lie{g}}^{k+1}(M)$ is defined to be $$\left({{\mathrm{d}}{_\lie{g}}}\alpha \right)(\xi) = {\mathrm{d}}\left( \alpha(\xi)\right) + \ins_\xi \alpha(\xi) \qquad \textrm{or} \qquad {{\mathrm{d}}{_\lie{g}}}\alpha = {\mathrm{d}}\alpha + \ins_\bullet \alpha,$$ where ${\mathrm{d}}$ denotes the de Rham differential on $M$ and $\ins_\xi$ the insertion of $X_\xi$ into the first argument. The equivariant cohomology on $M$ with respect to $\lie{g}$ is then, as usual, defined by ${\mathrm{H}{_\lie{g}}}(M) = \ker {{\mathrm{d}}{_\lie{g}}}/ \image {{\mathrm{d}}{_\lie{g}}}$ and we will denote the equivariant class of the representative $\alpha \in \Omega{_\lie{g}}^\bullet(M)$ by $\left[\alpha \right]{_\lie{g}}$. Fedosov Construction {#sec:Fedosov} ==================== Since our central classification result will make heavy use of the Fedosov construction [@fedosov:1994a], we shall collect and recall the relevant results here briefly. As in the previous section, we will follow the exposition in [@neumaier:2001a], see also [@waldmann:2007a Sect. 6.4] for further details.
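With the Cartan model at hand, one can already check (our sketch, with signs subject to the conventions fixed in Section \[sec:preliminaries\]) why $\omega - J_0$, the reference point of the class in Theorem \[theorem:MainTheorem\], is equivariantly closed. Viewing $\omega - J_0$ as an element of $\Omega{_\lie{g}}^2(M)$, one computes $$\left({{\mathrm{d}}{_\lie{g}}}(\omega - J_0)\right)(\xi) = {\mathrm{d}}\omega - {\mathrm{d}}\left(J_0(\xi)\right) + \ins_\xi \omega = \ins_\xi \omega - {\mathrm{d}}\left(J_0(\xi)\right),$$ since $\omega$ is closed and the insertion into a function vanishes. The right hand side vanishes precisely when each $J_0(\xi)$ is a Hamiltonian function for $X_\xi$, while the equivariance condition $J_0([\xi,\eta]) = -\Lie_\xi J_0(\eta)$ amounts to the classical momentum map property $J_0([\xi,\eta]) = \left\{J_0(\xi), J_0(\eta)\right\}$.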
Let us start by defining the formal Weyl algebra $$\label{eq:FormalWeylAlgebra} {\mathcal{W}}\tensor {\Lambda}(M) = \prod\limits_{k=0}^\infty \left( \mathbb{C} \tensor \Secinfty\left( {\Sym}^k T^*M \tensor {\Anti}T^*M \right) \right){\llbracket {\nu}\rrbracket},$$ where $(M,\omega)$ is a symplectic manifold. Then ${\mathcal{W}}\tensor {\Lambda}$ (we will frequently drop the reference to $M$) is obviously an associative graded commutative algebra with respect to the pointwise symmetrized tensor product in the first factor and the $\wedge$-product in the second factor. The resulting product we will denote by $\mu$. We can additionally observe that ${\mathcal{W}}\tensor {\Lambda}$ is graded in various ways and define the corresponding degree maps on elements of the form $a = (X \tensor \alpha) {\nu}^k$ with $X\in{\Sym}^\ell T^*M$ and $\alpha\in{\Anti}^m T^*M$ as $$\label{eq:TheDegrees} \degs a = \ell a, \qquad \dega a = ma, \quad \textrm{and} \quad \deg_{\nu}a = ka,$$ extend them as derivations to ${\mathcal{W}}\tensor {\Lambda}$, and refer to the first two as symmetric and antisymmetric degree, respectively. Finally, the so called total degree ${\mathrm{Deg}}= \degs + 2\deg_{\nu}$ will be needed later on. We can then proceed and define another associative product on ${\mathcal{W}}\tensor {\Lambda}$ first locally in a chart $(U, x)$ by $$\label{eq:fedosov_prod} a {\mathbin{\circ_{\tiny{\mathrm{F}}}}}b = \mu \circ \exp\left\{ \frac{{\nu}}{2} \omega^{ij} \inss(\partial_i) \tensor \inss(\partial_j) \right\} (a \tensor b),$$ with $a, b \in {\mathcal{W}}\tensor {\Lambda}$, where $\omega\at{U} = \frac{1}{2} \omega_{ij} \D x^i \wedge \D x^j$ and $\omega^{ik} \omega_{jk} = \delta^i_j$, and with $\inss(\partial_i)$ being the insertion of $\partial_i$ into the first argument on ${\mathcal{W}}$. 
The tensorial character of the insertions then shows that this is actually globally well-defined and yields an associative product since the insertions in the symmetric tensors are commuting derivations. It is easy to see that ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$ is neither $\degs$- nor $\deg_{\nu}$-graded but that it is $\dega$- and ${\mathrm{Deg}}$-graded. We can additionally use the latter to obtain a filtration of ${\mathcal{W}}\tensor{\Lambda}$. To that end, let ${\mathcal{W}}_k\tensor{\Lambda}$ denote those elements of ${\mathcal{W}}\tensor{\Lambda}$ whose total degree is greater than or equal to $k$. We then have $$\label{eq:TheFiltration} {\mathcal{W}}\tensor{\Lambda}= {\mathcal{W}}_0\tensor{\Lambda}\supseteq {\mathcal{W}}_1\tensor{\Lambda}\supseteq \cdots \supseteq \left\{0\right\} \quad \textrm{and} \quad \bigcap\limits_{k=0}^\infty {\mathcal{W}}_k\tensor{\Lambda}= \left\{0\right\}.$$ We will frequently use this filtration together with Banach’s fixed point theorem to find unique solutions to equations of the form $a = L(a)$ for $a\in{\mathcal{W}}\tensor{\Lambda}$ and $L: {\mathcal{W}}\tensor{\Lambda}\longrightarrow {\mathcal{W}}\tensor{\Lambda}$ such that $L$ is contracting with respect to the total degree. For details see e.g. [@waldmann:2007a Sect. 6.2.1]. Essential for the Fedosov construction are then the following operators on ${\mathcal{W}}\tensor {\Lambda}$ $$\label{eq:deltaDef} \begin{split} \delta = \left( 1 \tensor {\mathrm{d}}x^i \right) \inss(\partial_i) \qquad \delta^* = \left( {\mathrm{d}}x^i \tensor 1 \right) \insa(\partial_i) \qquad \nabla = \left(1 \tensor {\mathrm{d}}x^i \right) \nabla_{\partial_i}, \end{split}$$ where $\nabla$ is any torsion-free, symplectic connection on $M$. Again, it is clear that these definitions yield chart-independent operators. 
From the definition of $\delta$ and $\delta^*$ one can easily calculate that $\delta$ is a graded derivation of ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$ and $\delta^2 = \left(\delta^*\right)^2 = 0$. With the help of the projection $\sigma\colon {\mathcal{W}}\tensor {\Lambda}\longrightarrow \Cinfty(M){\llbracket {\nu}\rrbracket}$ onto symmetric and antisymmetric degree $0$ as well as a normalized version of $\delta^*$ which is defined on homogeneous elements $a \in {\mathcal{W}}^k \tensor {\Lambda}^\ell$ as $$\label{eq:deltaInv} \delta^{-1} a = \begin{cases} \frac{1}{k+\ell}\,\delta^* a & \textrm{for } k+\ell \neq 0 \\ 0 & \textrm{for } k+\ell = 0, \end{cases}$$ one finds that $$\label{eq:fedosov_delta_homotopy} \id_{{\mathcal{W}}\tensor {\Lambda}} - \sigma = \delta \delta^{-1} + \delta^{-1}\delta.$$ For $\nabla$ on the other hand, one can check that $\nabla$ is a graded derivation of ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$ and that $$\label{eq:CurvatureShowsUp} \nabla^2 = -{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(R)$$ with $R \in {\mathcal{W}}^2 \tensor {\Lambda}^2$ being the curvature tensor of $\nabla$. The ingenious new element of Fedosov is then a graded derivation of ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$, which is subject of the following theorem. \[thm:fedosov\_der\] Let $\Omega \in {\nu}{Z}^2(M){\llbracket {\nu}\rrbracket}$ be a series of closed two forms on $M$. Then there exists a unique $r\in{\mathcal{W}}_2 \tensor {\Lambda}^1$ such that $$\label{eq:Fedosovr} r = \delta^{-1} \left( \nabla r - \frac{1}{{\nu}} r {\mathbin{\circ_{\tiny{\mathrm{F}}}}}r + R + 1 \tensor \Omega \right).$$ The Fedosov derivation $$\label{eq:fedosov_der_def} {\mathfrak{D}}= -\delta + \nabla - {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( r \right)$$ is then a graded ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$-derivation of antisymmetric degree $1$ with ${\mathfrak{D}}^2 = 0$. 
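The defining equation for $r$ is precisely of the fixed-point form $a = L(a)$ discussed above. As a toy illustration (our example, not from the paper), the same ${\nu}$-adic iteration applied to the scalar analogue $r = {\nu}(1 + r^2)$ produces the solution degree by degree; formal series in ${\nu}$ are represented as exponent-to-coefficient dictionaries.

```python
def fixed_point(L, order):
    """Solve a = L(a) for a formal series in nu, given as a dict
    {nu-exponent: coefficient}. If L is contracting (it raises the
    order at which two series agree), iterating from a = 0 stabilizes
    all coefficients up to nu^order after at most order + 1 steps."""
    a = {}
    for _ in range(order + 1):
        a = {k: v for k, v in L(a).items() if k <= order and v != 0}
    return a

def L(r):
    """Toy analogue of the recursion for r: the map r -> nu*(1 + r^2)."""
    sq = {}
    for i, x in r.items():
        for j, y in r.items():
            sq[i + j] = sq.get(i + j, 0) + x * y
    out = {1: 1}  # the term nu * 1
    for k, v in sq.items():
        out[k + 1] = out.get(k + 1, 0) + v
    return out

# Coefficients of the solution are the Catalan numbers 1, 1, 2, 5, 14, ...
r = fixed_point(L, 9)
```

Each application of $L$ matches the true solution to strictly higher order in ${\nu}$; this is the contraction property, with respect to the filtration by total degree, that Banach’s fixed-point theorem exploits in Fedosov’s setting.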
One finds that on elements $a \in {\mathcal{W}}\tensor {\Lambda}^{\bullet \geq 1}$ with *positive* antisymmetric degree there is a homotopy operator corresponding to ${\mathfrak{D}}$, which is given by $$\label{eq:FedosovHomotopy} {\mathfrak{D}}^{-1} a = - \delta^{-1} \frac{1} {\id - \left[\delta^{-1}, \nabla - {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(r)\right]}\, a \quad \textrm{such that} \quad {\mathfrak{D}}{\mathfrak{D}}^{-1} a + {\mathfrak{D}}^{-1}{\mathfrak{D}}a = a$$ where ${\mathfrak{D}}$ is constructed from $r$ according to the previous theorem. Using ${\mathfrak{D}}^{-1}$ one can show that there is a unique isomorphism of $\mathbb{C}{\llbracket {\nu}\rrbracket}$-vector spaces $\tau: \Cinfty(M){\llbracket {\nu}\rrbracket}\longrightarrow \ker {\mathfrak{D}}\cap {\mathcal{W}}\tensor{\Lambda}$, called the Fedosov-Taylor series, with inverse being the projection $\sigma$ restricted to the codomain of $\tau$. The Fedosov-Taylor series $\tau$ can explicitly be written as $\tau(f) = f - {\mathfrak{D}}^{-1}\left( 1\tensor {\mathrm{d}}f\right)$. We can now proceed to define the Fedosov star product ${\star}_\Omega$ on $\Cinfty(M){\llbracket {\nu}\rrbracket}$ as the pullback of ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$ with $\tau$, $$\label{eq:FedosovStarProduct} f{\star}_\Omega g = \sigma\left( \tau(f) {\mathbin{\circ_{\tiny{\mathrm{F}}}}}\tau(g) \right)$$ for all $f,g\in\Cinfty(M){\llbracket {\nu}\rrbracket}$, where we explicitly referenced the formal series of two-forms $\Omega$ from which $r$ in the Fedosov derivation ${\mathfrak{D}}$ has been constructed. Of course, in this very brief discourse we omitted numerous details, which are however fully displayed in [@fedosov:1996a], see also [@neumaier:2002a] and [@waldmann:2007a Sect. 6.4]. As a final remark let us briefly note that the construction obviously depends on the choice of the torsion-free, symplectic connection $\nabla$.
However, as mentioned in the previous section, we fix one such connection and will mostly omit any explicit mention. Instead, let us focus on another aspect of the Fedosov construction, namely on its connection with symmetries in the form of representations of a Lie algebra $\lie{g}$ on $\Cinfty(M){\llbracket {\nu}\rrbracket}$. The first result we would like to cite from [@mueller-bahns.neumaier:2004a] clarifies under which circumstances the above construction yields a $\lie{g}$-invariant star product. \[proposition:fedosov\_inv\_star\] The Fedosov star product obtained from a torsion-free, symplectic connection $\nabla$ and $\Omega \in {\nu}{Z}^2(M){\llbracket {\nu}\rrbracket}$ is $\lie{g}$-invariant if and only if $$\label{eq:InvarianceOfNablaOmega} \left[\nabla, \Lie_\xi \right] = 0 \qquad \textrm{and} \qquad \Lie_\xi \Omega = 0$$ for all $\xi\in\lie{g}$. In other words, both ingredients, the symplectic connection and the series of closed two-forms, have to be $\lie{g}$-invariant. Therefore we shall assume from now on that we have an *invariant, torsion-free, symplectic connection* $\nabla$ fixed once and for all. Its existence can be guaranteed under various assumptions on the action of $\lie{g}$. One rather simple option is to assume that the Lie algebra action integrates to a *proper* action of $G$, for which one has invariant connections. However, having an invariant connection is far less restrictive than having a proper action. One crucial ingredient in the proof of the previous statement, which will also come in handy for our purposes later on, is an expression of the Lie derivative on ${\mathcal{W}}\tensor{\Lambda}$ in terms of the Fedosov derivation, the so-called deformed Cartan formula.
To formulate it one uses, for each $X\in{\Secinfty_{\mathrm{sympl}}}(TM)$, the one-form $$\label{eq:thetaXDef} \theta_X = \ins_X\omega$$ and the symmetrized covariant derivative acting on ${\mathcal{W}}\tensor {\Lambda}$, explicitly given by $$\label{eq:SymCovDerDef} D = [\delta^*, \nabla] = ({\mathrm{d}}x^i \tensor 1) \nabla_{\partial_i}.$$ The deformed Cartan formula is as follows [@neumaier:2002a; @neumaier:2001a] for the Fedosov derivation ${\mathfrak{D}}$ based on $r$ as in : \[thm:fedosov\_lie\] Let $X \in {\Secinfty_{\mathrm{sympl}}}(M,\omega)$ be a symplectic vector field. Then the Lie derivative on ${\mathcal{W}}\tensor{\Lambda}$ and the Fedosov derivation ${\mathfrak{D}}$ are related as follows: $$\label{eq:fedosov_lie} \Lie_X = {\mathfrak{D}}\insa(X) + \insa(X){\mathfrak{D}}- {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( \theta_X \tensor 1 + \frac{1}{2} D\theta_X \tensor 1 - \insa(X)r \right).$$ As it turns out, there is also a very convenient expression for the Fedosov-Taylor series of any quantum Hamiltonian ${\mathbf{J}}$ of a Fedosov star product ${\star}_\Omega$. Later on, the crucial part in the following lemma will be that firstly, $\tau({\mathbf{J}})$ only depends on ${\mathbf{J}}$ in symmetric and antisymmetric degree $0$ and secondly, that the only dependence on $\Omega$ lies in the summand $\insa r$. The remaining parts only depend on the symplectic 2-form $\omega$ and the symplectic, $\lie{g}$-invariant connection $\nabla$ on $(M,\omega)$. From [@mueller-bahns.neumaier:2004a] we recall the following formulation: \[thm:fedosov\_qham\_taylor\] Let ${\star}_\Omega$ be a Fedosov star product constructed from $\Omega\in{\nu}{Z}^2(M){\llbracket {\nu}\rrbracket}$ with quantum Hamiltonian ${\mathbf{J}}$.
Then the Fedosov-Taylor series of ${\mathbf{J}}$ is given by $$\label{eq:fedosov_qham_taylor} \tau\left({\mathbf{J}}(\xi) \right) = {\mathbf{J}}(\xi) + \theta_\xi \tensor 1 + \frac{1}{2} D\theta_\xi \tensor 1 + \insa(\xi) r$$ for all $\xi\in\lie{g}$ where $\theta_\xi = \ins_\xi\omega$. Finally, we will need a special class of equivalences between Fedosov star products in order to compare quantum Hamiltonians of different star products. The construction in the following lemma will allow us to assign to each pair $(\Omega, C) \in {\nu}{Z}^2(M){\llbracket {\nu}\rrbracket}\times {\nu}\Formen^1(M){\llbracket {\nu}\rrbracket}$ an equivalence from the Fedosov star product constructed with $\Omega$ to the one constructed from $\Omega - {\mathrm{d}}C$. Implicitly, this construction appears in Fedosov’s book [@fedosov:1996a Sect. 5.5], but we need the more particular and more explicit formula from [@neumaier:2001a Sect. 3.5.1.1]: \[thm:fedosov\_equi\] Let ${\star}_\Omega$ and ${\star}_{\Omega'}$ be two Fedosov star products constructed from $\Omega, \Omega' \in {\nu}{Z}^2(M) {\llbracket {\nu}\rrbracket}$ respectively and let additionally $\Omega - \Omega' = {\mathrm{d}}C$ for a fixed $C \in {\nu}\Formen^1(M) {\llbracket {\nu}\rrbracket}$. Then there is an equivalence $T_C$ from ${\star}_\Omega$ to ${\star}_{\Omega'}$ given by $$\label{eq:fedosov_equi_def} T_C = \sigma \circ \mathcal{A}_h \circ \tau,$$ where $\mathcal{A}_h = \exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\}$ and $h\in{\mathcal{W}}_3$ is obtained as the unique solution of $$\label{eq:fedosov_equi_constr} h = C \tensor 1 + \delta^{-1}\left( \nabla h - {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(r) h - \frac{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)} {\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id} (r'-r) \right)$$ with $\sigma(h) = 0$. 
Furthermore, the Fedosov derivation of $h$ is given by $$\label{eq:fedosov_equi_prop} {\mathfrak{D}}h = -1 \tensor C + \frac{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)}{\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id} (r'-r).$$ In [@neumaier:2001a Sect. 3.5.1.1], the case of Fedosov star products of Wick type was considered. The argument transfers immediately to the more general situation we need here. Nevertheless, for convenience we sketch the proof. First of all, let us consider the map $\mathcal{A}_h\colon {\mathcal{W}}\tensor{\Lambda}\longrightarrow {\mathcal{W}}\tensor{\Lambda}$ given by $\mathcal{A}_h = \exp\left\{ {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( h \right) \right\}$ for any $h\in{\mathcal{W}}_3\tensor{\Lambda}^0$. Counting degrees, one finds that ${{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( h \right)$ increases the total degree by at least one, which guarantees that $\mathcal{A}_h$ is well-defined. Furthermore, since ${{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( h \right)$ is a graded derivation of ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$, we have $\mathcal{A}_h\left( a {\mathbin{\circ_{\tiny{\mathrm{F}}}}}b \right) = \mathcal{A}_h(a) {\mathbin{\circ_{\tiny{\mathrm{F}}}}}\mathcal{A}_h(b)$ for all $a,b\in{\mathcal{W}}\tensor{\Lambda}$ and thus $\mathcal{A}_h$ is actually an algebra automorphism of $\left( {\mathcal{W}}\tensor{\Lambda}, {\mathbin{\circ_{\tiny{\mathrm{F}}}}}\right)$ with inverse given by $\mathcal{A}_h^{-1} = \mathcal{A}_{-h}$. Next, we propose that $S_h = \sigma \circ \mathcal{A}_h \circ \tau$ is an equivalence from ${\star}_\Omega$ to ${\star}_{\Omega'}$ if $$\label{eq:fedosov_equi_proof_1} {\mathfrak{D}}' = \mathcal{A}_h \circ {\mathfrak{D}}\circ \mathcal{A}_{-h}$$ holds, where we denoted by ${\mathfrak{D}}'$ the Fedosov derivation constructed from $\Omega'$. 
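The automorphism argument for $\mathcal{A}_h$ uses only that $\frac{1}{\nu}\ad(h)$ is a derivation whose exponential converges in the total-degree filtration. As a hedged illustration (a finite-dimensional matrix toy model, not the Weyl algebra $({\mathcal{W}}\tensor{\Lambda}, {\mathbin{\circ_{\tiny{\mathrm{F}}}}})$ itself), one can check numerically that the exponential of an inner derivation is an algebra automorphism; here nilpotency of $h$ plays the role of the degree-raising property that makes the exponential well defined:

```python
import numpy as np

def expm_nilpotent(h, order=10):
    # matrix exponential via a truncated series (exact when h is nilpotent)
    result = np.eye(h.shape[0])
    term = np.eye(h.shape[0])
    for k in range(1, order + 1):
        term = term @ h / k
        result = result + term
    return result

rng = np.random.default_rng(0)
# strictly upper-triangular h is nilpotent, mimicking the fact that
# ad(h) raises the total degree, so exp(ad h) is well defined
h = np.triu(rng.standard_normal((4, 4)), k=1)
a = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 4))

E = expm_nilpotent(h)
Einv = expm_nilpotent(-h)
Ad = lambda x: E @ x @ Einv   # exp(ad h) acting by conjugation

# automorphism property: exp(ad h)(a b) = exp(ad h)(a) exp(ad h)(b)
assert np.allclose(Ad(a @ b), Ad(a) @ Ad(b))
```

The assertion holds identically, since conjugation by the invertible element $e^h$ is manifestly multiplicative; in the Fedosov setting the same conclusion is reached by the derivation property of $\ad(h)$ alone.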
Here we quickly note that, since $\tau$ maps functions into $\ker {\mathfrak{D}}$, we have ${\mathfrak{D}}\tau(f) = 0$ for all $f\in\Cinfty(M){\llbracket {\nu}\rrbracket}$ and hence also ${\mathfrak{D}}' \mathcal{A}_h \tau(f) = \mathcal{A}_h {\mathfrak{D}}\tau(f) = 0$ because of . Additionally, with the help of the Fedosov-Taylor series $\tau'$ constructed from ${\mathfrak{D}}'$, one can easily observe that $\mathcal{A}_h \tau(f) = (\tau' \circ \sigma) \left( \mathcal{A}_h \tau(f) \right) = \tau'\left( S_h(f) \right)$, which finally enables us to show $$S_h\left( f{\star}_\Omega g \right) = \sigma\left( \mathcal{A}_h \tau(f) {\mathbin{\circ_{\tiny{\mathrm{F}}}}}\mathcal{A}_h \tau(g) \right) = \sigma\left( \tau'\left(S_h(f)\right) {\mathbin{\circ_{\tiny{\mathrm{F}}}}}\tau'\left(S_h(g)\right) \right) = S_h(f) {\star}_{\Omega'} S_h(g).$$ Again, the inverse of $S_h$ is obviously given by $S_h^{-1} = \sigma \circ \mathcal{A}_{-h} \circ \tau'$. Given these preliminary considerations, the goal of this proof will be to solve for $h$. To this end, let us rewrite said equation by using the definition of $\mathcal{A}_h$, which results in $${\mathfrak{D}}' = \mathcal{A}_h {\mathfrak{D}}\mathcal{A}_{-h} = {\mathfrak{D}}- {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( \frac{\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id}{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)} \right) \left({\mathfrak{D}}h\right),$$ where we again exploited the fact that ${\ad_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)$ and ${\mathfrak{D}}$ are graded ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$-derivations and thus $\left[ {\ad_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h), {\mathfrak{D}}\right] = -{\ad_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( {\mathfrak{D}}h \right)$. 
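The same kind of hedged matrix toy model also lets one verify this conjugation formula numerically, with $\mathfrak{D}$ replaced by an inner derivation $\ad(d)$ (an assumption made purely for illustration) and the operator $\frac{\exp\{\ad(h)\}-\id}{\ad(h)}$ implemented as the truncated series $\sum_{k\ge 0}\ad(h)^k/(k+1)!$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
n = 4
h = np.triu(rng.standard_normal((n, n)), k=1)  # nilpotent, so all series terminate
d = rng.standard_normal((n, n))
x = rng.standard_normal((n, n))

ad = lambda a, b: a @ b - b @ a

def expm(a, order=12):
    # matrix exponential by a truncated series (exact for nilpotent a)
    res, term = np.eye(n), np.eye(n)
    for k in range(1, order + 1):
        term = term @ a / k
        res = res + term
    return res

def phi_h(y, order=12):
    # (exp(ad h) - id)/ad h applied to y: sum_k ad(h)^k(y)/(k+1)!
    res, term = np.zeros_like(y), y
    for k in range(order):
        res = res + term / factorial(k + 1)
        term = ad(h, term)
    return res

E, Einv = expm(h), expm(-h)
# left-hand side: A_h ad(d) A_{-h} applied to x
lhs = E @ ad(d, Einv @ x @ E) @ Einv
# right-hand side mirrors D' = D - ad( (exp(ad h)-id)/ad h (D h) )
rhs = ad(d, x) - ad(phi_h(ad(d, h)), x)
assert np.allclose(lhs, rhs)
```

Both sides equal $\ad(e^{\ad h}d)$ applied to $x$, exactly the mechanism by which $\mathcal{A}_h {\mathfrak{D}} \mathcal{A}_{-h}$ differs from ${\mathfrak{D}}$ by an inner derivation.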
Comparing ${\mathfrak{D}}$ and ${\mathfrak{D}}'$ from we see that the above equation is satisfied if $r' - r - \frac{\exp\left\{ {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( h \right) \right\} - \id}{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( h \right)} \left( {\mathfrak{D}}h \right)$ is ${\mathbin{\circ_{\tiny{\mathrm{F}}}}}$-central. We claim here that we can even find an $h(C)\in{\mathcal{W}}_3$ such that $$\label{eq:fedosov_equi_proof_2} r' - r - \frac{\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id}{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)} \left({\mathfrak{D}}h\right) = 1 \tensor C,$$ where $C$ is the series of 1-forms with ${\mathrm{d}}C = \Omega - \Omega'$ from the prerequisites. We further claim that this $h$ can be obtained as the unique solution of with $\sigma(h) = 0$. First of all, from counting the involved degrees we know that has indeed a unique solution. We will now proceed to use this solution to define $$B = \frac{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)}{\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id} \left( r' - r \right) - {\mathfrak{D}}h - 1 \tensor C.$$ At this point we will merely cite a technical result from [@neumaier:2001a Sect. 3.5.1.1], which essentially is only a tedious calculation, concerning the Fedosov derivation of $B$. 
One obtains $$\begin{split} {\mathfrak{D}}B &= \frac{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)}{\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id} \sum\limits_{s=0}^\infty \frac{1}{s!} \left(\frac{1}{{\nu}}\right)^{s-1} \sum\limits_{t=0}^{s-2} {\ad_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)^t {\ad_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(B) {\ad_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)^{s-2-t} \times \\ &\quad\times \frac{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)}{\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id} \left(r' - r \right) = R_{h,r',r}(B), \end{split}$$ where we denote the right hand side as a linear operator $R_{h, r', r}(B)$ acting on $B$. Applying $\delta^{-1}$ to both sides and using $\delta^{-1} B = 0$, $\sigma(h) = 0$ and as well as , we arrive at $$B = \delta^{-1} \left( \nabla B - {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(r) B - R_{h,r',r}(B) \right).$$ Yet again, by counting degrees, we observe that the above equation has a unique solution and that $B=0$ is this solution. From here it is easy to see that $B=0$ is equivalent to $h$ satisfying which completes the construction of $T_C$ as $T_C = S_{h(C)}$. \[thm:fedosov\_inv\_equi\] Let $\Omega, \Omega' \in {\nu}{Z}^2(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ with $\Omega - \Omega' = {\mathrm{d}}C$ for $C \in {\nu}\Formen^1(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ and $h$ as in . For all $\xi \in \lie{g}$ we have $$\label{eq:fedosov_inv_equi} \Lie_\xi h = 0.$$ For all $\xi \in \lie{g}$ we have $$\label{eq:TCinvariant} \Lie_\xi \circ T_C = T_C \circ \Lie_\xi,$$ i.e. the equivalence $T_C$ is $\lie{g}$-invariant. We come now to the key lemma needed for the proof of our main theorem. If we are interested in the equivariant classification, we need to know the effect of an invariant equivalence transformation on quantum momentum maps. 
For the particular equivalences from we have the following result: \[thm:fedosov\_qham\_equi\] Let $\Omega \in {\nu}{Z}^2(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ and $C \in {\nu}\Formen^1(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$. Then for any quantum Hamiltonian ${\mathbf{J}}$ of the Fedosov star product ${\star}_\Omega$ and the $\lie{g}$-invariant equivalence $T_C$ obtained from $C$ via we have $$\label{eq:fedosov_qham_equi} {\mathbf{J}}(\xi) + \ins_\xi C - T_C{\mathbf{J}}(\xi) = 0.$$ This proof is essentially a straightforward calculation using , , and as well as the fact that $\dega h = 0$ and thus $\insa(\xi) h = 0$ for the unique solution $h$ of . We have $$\begin{aligned} {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h) \tau\left({\mathbf{J}}(\xi)\right) &= -{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( {\mathbf{J}}(\xi) + \theta_\xi \tensor 1 + \frac{1}{2} D\theta_\xi \tensor 1 - \insa(\xi) r \right) h \\ &= \left( \Lie_\xi - {\mathfrak{D}}\insa(\xi) + \insa(\xi){\mathfrak{D}}\right) h \\ &= \insa(\xi){\mathfrak{D}}h \\ &= \insa(\xi) \left( -1 \tensor C + \frac{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)}{\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id} (r'-r) \right) \\ &= - 1 \tensor \ins_\xi C + \frac{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)}{\exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}(h)\right\} - \id} \left( {\mathbf{J'}}(\xi) - \tau'\left({\mathbf{J'}}(\xi)\right) - {\mathbf{J}}(\xi) + \tau\left({\mathbf{J}}(\xi)\right) \right), \end{aligned}$$ where we denoted by $\tau'$ the Fedosov-Taylor series corresponding to, and by ${\mathbf{J'}}$ any quantum Hamiltonian of the Fedosov star product constructed from $\Omega - {\mathrm{d}}C$ (one might, for example, choose ${\mathbf{J'}} = T_C{\mathbf{J}}$). 
Next, applying $\left(\exp\left\{ {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( h \right) \right\} - \id \right) / {{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( h \right)$ to the above equation yields $$\label{eq:fedosov_qham_equi_2} \left( \exp\left\{{{\frac{1}{{\nu}}\ad}_{{\mathbin{\circ_{\tiny{\mathrm{F}}}}}}}\left( h \right) \right\} - \id \right) \tau\left( {\mathbf{J}}(\xi) \right) + 1 \tensor \ins_\xi C = \left( {\mathbf{J'}}(\xi) - \tau'\left({\mathbf{J'}}(\xi)\right) - {\mathbf{J}}(\xi) + \tau\left({\mathbf{J}}(\xi)\right) \right).$$ Finally, we can apply $\sigma$, observe that the right hand side cancels out entirely and that the left hand side results in the desired terms after using .

Classification {#sec:classification}
==============

With the previous sections as preparations we can proceed towards our central classification result. Namely, we will demonstrate that pairs of Fedosov star products and quantum momentum mappings of those star products, respectively, are equivariantly equivalent if and only if a certain class in the second equivariant cohomology vanishes. Said class will turn out to be $\left[ (\Omega - {\mathbf{J}}) - (\Omega' - {\mathbf{J'}})\right]{_\lie{g}}$ where $\Omega$ and $\Omega'$ are the series of closed two-forms from which the Fedosov star products have been constructed and ${\mathbf{J}}$ and ${\mathbf{J'}}$ respective quantum momentum maps. 
To this end we will firstly employ two results from [@mueller-bahns.neumaier:2004a] that will guarantee that our classes are in fact well defined: \[thm:class\_qham\_cocycle\] A $\lie{g}$-invariant Fedosov star product for $(M,\omega)$ obtained from $\Omega \in {\nu}{Z}^2(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ admits a quantum Hamiltonian if and only if there is an element ${\mathbf{J}} \in C^1\left( \lie{g}, \Cinfty(M) \right){\llbracket {\nu}\rrbracket}$ such that $$\label{eq:class_qham_cocycle} {\mathrm{d}}{\mathbf{J}}(\xi) = \ins_\xi \left( \omega + \Omega \right)$$ for all $\xi \in \lie{g}$. We then have $\Lie_\xi = -{\frac{1}{{\nu}}\ad}_\star({\mathbf{J}}(\xi))$. Note that since quantum Hamiltonians for the same star product differ only by an element in $C^1\left( \lie{g}, \mathbb{C} \right){\llbracket {\nu}\rrbracket}$, holds for every quantum Hamiltonian of ${\star}_\Omega$. The second result from [@mueller-bahns.neumaier:2004a] is the following consequence: \[corollary:class\_qmom\_cocycle\] Let ${\star}_\Omega$ be a $\lie{g}$-invariant Fedosov star product constructed from $\Omega \in {\nu}{Z}^2(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$. Then there exists a quantum momentum map if and only if there is an element ${\mathbf{J}} \in C^1(\lie{g}, \Cinfty(M){\llbracket {\nu}\rrbracket})$ such that $$\label{eq:class_qmom_cocycle} \ins_\xi (\omega + \Omega) = {\mathrm{d}}{\mathbf{J}}(\xi) \qquad \textrm{and} \qquad \left( \omega + \Omega \right)\left( X_\xi, X_\eta \right) = {\mathbf{J}}\left( \left[ \xi, \eta \right] \right).$$ The following little calculation, which is valid for any quantum momentum map ${\mathbf{J}}$, $${\mathbf{J}}([\xi,\eta]) = {\frac{1}{{\nu}}\ad}_{\star}\left( {\mathbf{J}}(\xi) \right) {\mathbf{J}}(\eta) = - \ins_\xi {\mathrm{d}}{\mathbf{J}}(\eta) = (\omega + \Omega)(X_\xi, X_\eta)$$ then shows that any quantum momentum map necessarily satisfies . 
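The second of these conditions is already a genuine constraint classically. As a hedged sketch (elementary symplectic geometry on $\mathbb{R}^2$, independent of the quantum setting), consider the translations acting on $(\mathbb{R}^2, \omega = {\mathrm{d}}x \wedge {\mathrm{d}}y)$: every fundamental vector field is Hamiltonian, yet the pairing condition $\omega(X_{\xi_1}, X_{\xi_2}) = J([\xi_1,\xi_2])$ fails, so no momentum map exists:

```python
import sympy as sp

x, y = sp.symbols('x y')

def omega(X, Y):
    # omega = dx ^ dy evaluated on (component tuples of) vector fields
    return X[0] * Y[1] - X[1] * Y[0]

def iota(X):
    # i_X omega as a one-form, components in the basis (dx, dy)
    return (-X[1], X[0])

# translations: the two fundamental vector fields and their Hamiltonians
X1, X2 = (sp.Integer(1), sp.Integer(0)), (sp.Integer(0), sp.Integer(1))
J1, J2 = y, -x

# each field is Hamiltonian: i_X omega = dJ ...
assert iota(X1) == (sp.diff(J1, x), sp.diff(J1, y))
assert iota(X2) == (sp.diff(J2, x), sp.diff(J2, y))

# ... but the Lie algebra is abelian, so J([xi_1, xi_2]) = 0, while
assert omega(X1, X2) == 1   # the pairing condition fails
```

This classical obstruction is precisely what the second condition of the corollary quantizes: individual Hamiltonians may exist without assembling into an (equivariant) momentum map.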
And vice versa, any quantum Hamiltonian satisfying is in fact a quantum momentum map. However, we will only use the above results in the following capacity, namely to show that first, any quantum momentum map ${\mathbf{J}}$ is an element of $\Omega{_\lie{g}}^2(M){\llbracket {\nu}\rrbracket}$. For this we have to demonstrate that the equivariance condition holds, which reads ${\mathbf{J}}([\xi,\eta]) = - \ins_\xi {\mathrm{d}}{\mathbf{J}}(\eta)$ and is obviously fulfilled as shown by the previous calculation. Second, let us show that the equivariant cochain $\omega + \Omega - {\mathbf{J}} \in \Omega{_\lie{g}}^2(M){\llbracket {\nu}\rrbracket}$, with $\Omega \in {\nu}{Z}^2(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ and ${\mathbf{J}}$ being a quantum momentum map for the Fedosov star product ${\star}_\Omega$, is ${{\mathrm{d}}{_\lie{g}}}$-closed. Indeed, $${{\mathrm{d}}{_\lie{g}}}\left( \omega + \Omega - {\mathbf{J}} \right)(\xi) = \ins_\xi (\omega + \Omega) - {\mathrm{d}}{\mathbf{J}}(\xi) = 0.$$ Using this observation we can restate the above condition on the existence of a quantum momentum map as follows: there exists a quantum momentum map iff there exists a map ${\mathbf{J}}$ such that $\omega + \Omega - {\mathbf{J}} \in \Omega{_\lie{g}}^2(M){\llbracket {\nu}\rrbracket}$ and ${{\mathrm{d}}{_\lie{g}}}\left( \omega + \Omega - {\mathbf{J}} \right) = 0$, i.e. $\omega + \Omega$ extends to an equivariant two-cocycle. This is the direct analog of the classical situation. With those preliminary considerations done, we can prove the following auxiliary lemma: \[thm:class\_self\] Let ${\mathbf{J}}$ and ${\mathbf{J'}}$ be quantum momentum maps of a $\lie{g}$-invariant star product $\star$ deforming the same momentum map $J_0$. Then there exists a $\lie{g}$-invariant self-equivalence $A$ of $\star$ with $A{\mathbf{J}} = {\mathbf{J'}}$ if and only if ${\mathbf{J'}}-{\mathbf{J}}$ is a ${{\mathrm{d}}{_\lie{g}}}$-coboundary. 
First of all, from the defining property $\Lie_\xi = -{\frac{1}{{\nu}}\ad}_\star\left({\mathbf{J}}(\xi)\right) = -{\frac{1}{{\nu}}\ad}_\star\left({\mathbf{J'}}(\xi)\right)$ it is clear that $j(\xi) = {\mathbf{J'}}(\xi) - {\mathbf{J}}(\xi)$ is central, hence a constant function on $M$, for all $\xi \in \lie{g}$, and consequently $j$ is a ${{\mathrm{d}}{_\lie{g}}}$-cocycle. Here we use that $M$ is connected. Now assume that there is a $\theta \in {\nu}\Formen^1(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ with ${{\mathrm{d}}{_\lie{g}}}\theta = j$, which is equivalent to $\ins_\xi \theta = j(\xi)$ and ${\mathrm{d}}\theta = 0$. Consequently the self-equivalence $A = \exp\left\{ {\frac{1}{{\nu}}\ad}_\star(\theta)\right\}$ is well defined: on sufficiently small open subsets $U \subseteq M$ we have $\theta\at{U} = {\mathrm{d}}t_U$. Hence we can calculate locally $${\frac{1}{{\nu}}\ad}_\star\left( t_U \right) {\mathbf{J}}(\xi)\at{U} = \Lie_\xi t_U = j(\xi)\at{U} \quad \textrm{and hence} \quad \left({\frac{1}{{\nu}}\ad}_\star \left(t_U\right) \right)^k {\mathbf{J}}(\xi)\at{U} = 0$$ for all $k \ge 2$. This allows us to compute $$A{\mathbf{J}}(\xi)\at{U} = \exp\left\{{\frac{1}{{\nu}}\ad}_\star\left(t_U\right)\right\} {\mathbf{J}}(\xi)\at{U} = {\mathbf{J}}(\xi)\at{U} + j(\xi)\at{U} = {\mathbf{J'}}(\xi)\at{U}.$$ For the second part, assume that there is a $\lie{g}$-invariant self-equivalence $A$ of $\star$ with $A{\mathbf{J}} = {\mathbf{J'}}$. Then there is a closed, $\lie{g}$-invariant one-form $\theta \in {\nu}\Formen^1(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ with $A = \exp\left\{ -{\frac{1}{{\nu}}\ad}_\star\left(\theta\right) \right\}$, see e.g. [@waldmann:2007a Thm. 6.3.18] for the case without invariance. The invariance of $\theta$ is clear from the invariance of $A$. 
We can again calculate locally $$j(\xi)\at{U} = A{\mathbf{J}}(\xi)\at{U} - {\mathbf{J}}(\xi)\at{U} = \left( \sum\limits_{k=1}^{\infty} \frac{1}{k!} \left({\frac{1}{{\nu}}\ad}_\star\left(t_U\right)\right)^{k-1} \right) {\frac{1}{{\nu}}\ad}_\star\left({\mathbf{J}}(\xi)\at{U} \right) t_U,$$ where $\theta\at{U} = {\mathrm{d}}t_U$. Since the term in the brackets is a power series starting with $\id$ it is invertible and its inverse is again a power series in ${\frac{1}{{\nu}}\ad}_\star\left(t_U\right)$ starting with $\id$. Applying the inverse to both sides, using that $j(\xi)\at{U}$ is constant and ${\mathrm{d}}\theta = 0$, we arrive at $$ j(\xi)\at{U} = \Lie_\xi t_U = {{\mathrm{d}}{_\lie{g}}}\theta(\xi)\at{U}.$$ Using this result we can now phrase the first classification result of invariant star products with quantum momentum maps on a connected symplectic manifold. To this end, we define the *equivariant relative class* $$\label{eq:RelClassDef} {c{_\lie{g}}}\left( \Omega', {\mathbf{J'}}; \Omega, {\mathbf{J}} \right) = \left[ \left( \Omega' - {\mathbf{J'}} \right) - \left( \Omega - {\mathbf{J}} \right) \right]{_\lie{g}}$$ of two Fedosov star products built out of the data of the closed two-forms and the quantum momentum maps. Note that we use for both star products the *same* $\lie{g}$-invariant symplectic connection. \[thm:class\_full\] Let $\Omega, \Omega' \in {\nu}\Formen^2(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ and ${\star}_\Omega$, ${\star}_{\Omega'}$ their corresponding Fedosov star products. Let furthermore ${\mathbf{J}}$ and ${\mathbf{J'}}$ be quantum momentum maps of ${\star}_\Omega$ and ${\star}_{\Omega'}$ respectively, deforming the same momentum map $J_0$. 
Then there exists a $\lie{g}$-invariant equivalence $S$ from ${\star}_\Omega$ to ${\star}_{\Omega'}$ such that $S{\mathbf{J}} = {\mathbf{J'}}$ if and only if $$\label{eq:class_full} {c{_\lie{g}}}\left( \Omega', {\mathbf{J'}}; \Omega, {\mathbf{J}} \right) = 0.$$ First, as a preliminary step, we need to show that ${c{_\lie{g}}}$ is well-defined at all, i.e. that $\left( \Omega' - {\mathbf{J'}} \right) - \left( \Omega - {\mathbf{J}} \right)$ is ${{\mathrm{d}}{_\lie{g}}}$-closed which, however, is nothing more than a simple application of . Next, assume we are given such an equivalence $S$. This necessarily implies that $\Omega - \Omega' = {\mathrm{d}}C$ for some $C \in {\nu}\Formen^1(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ and hence we can obtain another equivalence $T_C$ from . Additionally, from we can deduce that $\left[ T_C{\mathbf{J}} - S{\mathbf{J}} \right]{_\lie{g}}= 0$. Therefore we are able to calculate with the help of that $$\begin{aligned} {c{_\lie{g}}}\left( \Omega', {\mathbf{J'}}; \Omega, {\mathbf{J}}\right) &= \left[ \left(\Omega' - {\mathbf{J'}} \right) - \left(\Omega - {\mathbf{J}} \right) \right]{_\lie{g}}\\ &= \left[ \left(\Omega' - S{\mathbf{J}}\right) - \left(\Omega - {\mathbf{J}} \right) - \left({\mathbf{J}} + \ins_\xi C - T_C{\mathbf{J}} \right) \right]{_\lie{g}}\\ &= \left[ \left(T_C{\mathbf{J}} - S{\mathbf{J}} \right) - \left({\mathrm{d}}C + \ins_\xi C \right) \right]{_\lie{g}}\\ &= 0. \end{aligned}$$ On the other hand, assume that ${c{_\lie{g}}}\left( \Omega', {\mathbf{J'}}; \Omega, {\mathbf{J}} \right) = 0$. The exterior degree-two part of its representative is just $\Omega' - \Omega$ and thus we know that there exists a $C \in {\nu}\Formen^1(M){^{\mathrm{inv}}}{\llbracket {\nu}\rrbracket}$ such that $\Omega - \Omega' = {\mathrm{d}}C$. This again allows us to obtain a $\lie{g}$-invariant equivalence $T_C$ from ${\star}_\Omega$ to ${\star}_{\Omega'}$ with the help of . 
As before, we use to calculate $$0 = {c{_\lie{g}}}\left( \Omega', {\mathbf{J'}}; \Omega, {\mathbf{J}} \right) = \left[ \left( \Omega' - {\mathbf{J'}} \right) - \left( \Omega - {\mathbf{J}} \right) - \left({\mathbf{J}} + \ins_\xi C - T_C{\mathbf{J}} \right) \right]{_\lie{g}}= \left[T_C{\mathbf{J}} - {\mathbf{J'}}\right]{_\lie{g}}.$$ Thus we obtain a $\lie{g}$-invariant self-equivalence $A$ of ${\star}_{\Omega'}$ from with $AT_C{\mathbf{J}} = {\mathbf{J'}}$. Hence we arrive at the desired equivalence $S = A\circ T_C$.

Characteristic Class {#sec:char}
====================

From the classification of star products and $\lie{g}$-invariant star products due to [@nest.tsygan:1995a; @nest.tsygan:1995b; @bertelson.cahen.gutt:1997a; @deligne:1995a; @weinstein.xu:1998a] and [@bertelson.bieliavsky.gutt:1998a] we already know that two Fedosov star products (invariant Fedosov star products) ${\star}_\Omega$, ${\star}_{\Omega'}$ are equivalent if and only if the relative class $c(\star_\Omega, \star_{\Omega'}) = [\Omega - \Omega']$ ($c{^{\mathrm{inv}}}(\star_\Omega, \star_{\Omega'}) = [\Omega - \Omega']{^{\mathrm{inv}}}$) in the de Rham (invariant de Rham) cohomology vanishes. The classification result from the previous section is then the specialization of those results to equivariant star products. 
However, there are slightly stronger results for the two previously known cases, namely there are bijections $c\colon {\mathrm{Def}}(M,\omega) \longrightarrow \HdR^2(M){\llbracket {\nu}\rrbracket}$ and $c{^{\mathrm{inv}}}\colon {\mathrm{Def}}{^{\mathrm{inv}}}(M,\omega) \longrightarrow \HdR{^{\mathrm{inv},2}}(M){\llbracket {\nu}\rrbracket}$, respectively, between equivalence classes of star products (invariant star products) to the second de Rham (invariant de Rham) cohomology which is defined on Fedosov star products by $$c({\star}_\Omega) = \frac{1}{{\nu}} [\omega + \Omega] \in \frac{[\omega]}{{\nu}} + \HdR^2(M){\llbracket {\nu}\rrbracket}\qquad \textrm{and} \qquad c{^{\mathrm{inv}}}({\star}_\Omega) = \frac{1}{{\nu}} [\omega + \Omega]{^{\mathrm{inv}}}\in \frac{[\omega]{^{\mathrm{inv}}}}{{\nu}} + \HdR{^{\mathrm{inv},2}}(M){\llbracket {\nu}\rrbracket},$$ respectively, and extended to all star products by the fact that every star product (invariant star product) is equivalent (invariantly equivalent) to a Fedosov star product (invariant Fedosov star product). The aforementioned relative class is then precisely the difference of the images of those maps (up to a normalization factor), i.e. $$\frac{1}{{\nu}} c(\star_\Omega, \star_{\Omega'}) = c(\star_\Omega) - c(\star_{\Omega'}) \qquad \textrm{and} \qquad \frac{1}{{\nu}} c{^{\mathrm{inv}}}(\star_\Omega, \star_{\Omega'}) = c{^{\mathrm{inv}}}(\star_\Omega) - c{^{\mathrm{inv}}}(\star_{\Omega'}).$$ In the following, we will similarly define a bijection ${c{_\lie{g}}}\colon {\mathrm{Def}}{_\lie{g}}(M,\omega) \longrightarrow \frac{1}{\nu}[\omega - J_0]{_\lie{g}}+ {\mathrm{H}{_\lie{g}}}^2(M){\llbracket {\nu}\rrbracket}$ from the equivalence classes of equivariant star products to the equivariant cohomology. In view of the classification result it is tempting to define the class simply by taking the equivariant class of $\Omega$ and ${\mathbf{J}}$. 
However, it is not completely obvious that this depends only on $\star$ and ${\mathbf{J}}$ as we have to control the behaviour of ${\mathbf{J}}$ under invariant self-equivalences. Nevertheless, with the previous results this turns out to be correct. Hence we can state the following definition: \[def:char\_class\] Let ${\star}_\Omega$ be the Fedosov star product constructed from $\Omega \in {\nu}{Z}^2(M){\llbracket {\nu}\rrbracket}$ and ${\mathbf{J}}$ a quantum momentum map of ${\star}_\Omega$. Then the equivariant characteristic class of $(\star, {\mathbf{J}})$ is defined by $$\label{eq:TheClass} {c{_\lie{g}}}\left( {\star}_\Omega, {\mathbf{J}} \right) = \frac{1}{{\nu}} \left[ (\omega + \Omega) - {\mathbf{J}} \right]{_\lie{g}}\in \frac{[\omega - J_0]{_\lie{g}}}{{\nu}} + {\mathrm{H}{_\lie{g}}}^2(M){\llbracket {\nu}\rrbracket}.$$ Here we need to verify that ${c{_\lie{g}}}\left( {\star}_\Omega, {\mathbf{J}} \right)$ is well-defined by showing that $(\omega + \Omega) - {\mathbf{J}}$ is ${{\mathrm{d}}{_\lie{g}}}$-closed, which is equivalent to . Using this equivariant class we can reformulate slightly: \[thm:class\_full\_abs\] Let $\Omega, \Omega' \in {\nu}{Z}^2(M){\llbracket {\nu}\rrbracket}$ and ${\star}_\Omega$, ${\star}_{\Omega'}$ their corresponding Fedosov star products. Let furthermore ${\mathbf{J}}$ and ${\mathbf{J'}}$ be quantum momentum maps of ${\star}_\Omega$ and ${\star}_{\Omega'}$ respectively, deforming the same momentum map. Then there exists a $\lie{g}$-invariant equivalence $S$ from ${\star}_\Omega$ to ${\star}_{\Omega'}$ such that $S{\mathbf{J}} = {\mathbf{J'}}$ if and only if $$\label{eq:ClassesEqual} {c{_\lie{g}}}\left( {\star}_{\Omega'}, {\mathbf{J'}} \right) = {c{_\lie{g}}}\left( {\star}_\Omega, {\mathbf{J}} \right).$$ Finally, we wish to extend from only Fedosov star products to all star products on $M$ and their corresponding quantum momentum maps. 
To do so, we first cite a result from [@bertelson.bieliavsky.gutt:1998a] stating that for every $\lie{g}$-invariant star product $\star$ there is a $\lie{g}$-invariant equivalence $S$ to a $\lie{g}$-invariant Fedosov star product ${\star}_\Omega$. Given a quantum momentum map ${\mathbf{J}}$ of $\star$ we can use $S$ to assign the equivariant class ${c{_\lie{g}}}\left( \star, {\mathbf{J}} \right) \coloneqq {c{_\lie{g}}}\left( {\star}_\Omega, S{\mathbf{J}} \right)$ to the pair $(\star, {\mathbf{J}})$. This class obviously does not depend on the choice of either $S$ or $\Omega$, since, given another $\lie{g}$-invariant equivalence $T$ to another Fedosov star product ${\star}_{\Omega'}$, we immediately acquire a $\lie{g}$-invariant equivalence $T \circ S^{-1}$ between ${\star}_\Omega$ and ${\star}_{\Omega'}$ with $(T\circ S^{-1}) S{\mathbf{J}} = T{\mathbf{J}}$, showing (with the help of ) that ${c{_\lie{g}}}\left( {\star}_\Omega, S{\mathbf{J}} \right) = {c{_\lie{g}}}\left( {\star}_{\Omega'}, T {\mathbf{J}} \right)$. In conclusion, ${c{_\lie{g}}}$ defines a map $${c{_\lie{g}}}\colon {\mathrm{Def}}{_\lie{g}}(M,\omega) \longrightarrow \frac{[\omega - J_0]{_\lie{g}}}{{\nu}} + {\mathrm{H}{_\lie{g}}}^2(M){\llbracket {\nu}\rrbracket}$$ from the set ${\mathrm{Def}}{_\lie{g}}(M, \omega)$ of equivalence classes of star products on $M$ with quantum momentum maps to the second equivariant cohomology ${\mathrm{H}{_\lie{g}}}^2(M){\llbracket {\nu}\rrbracket}$. The map ${c{_\lie{g}}}$ is then easily recognized to be invertible with inverse given as $$\frac{1}{{\nu}} \left[ \omega + \Omega - {\mathbf{J}} \right]{_\lie{g}}\longmapsto \left[{\star}_\Omega, {\mathbf{J}}\right]{_\lie{g}},$$ once we remember that $\Omega{_\lie{g}}^2(M) = \Omega^2(M){^{\mathrm{inv}}}\oplus \Sym^1(\lie{g}^*){^{\mathrm{inv}}}$, which completes the proof of our main theorem. 
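To make the leading term $\frac{1}{\nu}[\omega - J_0]{_\lie{g}}$ of the characteristic class more tangible, here is a hedged classical sketch (plain symplectic geometry, outside the star-product formalism, with the sign convention $\ins_{X_\xi}\omega = {\mathrm{d}}J_0(\xi)$): for the rotation action on $(\mathbb{R}^2, {\mathrm{d}}x \wedge {\mathrm{d}}y)$ with momentum map $J_0(\xi) = \xi\,(x^2+y^2)/2$, the Cartan-model closedness ${{\mathrm{d}}{_\lie{g}}}(\omega - J_0)(\xi) = \ins_{X_\xi}\omega - {\mathrm{d}}J_0(\xi) = 0$ can be checked directly:

```python
import sympy as sp

x, y, xi = sp.symbols('x y xi')

J = xi * (x**2 + y**2) / 2            # momentum map for rotations, paired with xi
dJ = (sp.diff(J, x), sp.diff(J, y))   # components of dJ(xi) in the basis (dx, dy)

# fundamental vector field of the rotation generated by xi,
# normalized so that i_X omega = dJ with omega = dx ^ dy
X = (xi * y, -xi * x)
iota_X_omega = (-X[1], X[0])          # i_X omega in the basis (dx, dy)

# d_g(omega - J)(xi) = i_{X_xi} omega - dJ(xi) = 0
assert sp.simplify(iota_X_omega[0] - dJ[0]) == 0
assert sp.simplify(iota_X_omega[1] - dJ[1]) == 0
```

In this classical example $\omega - J_0$ is an equivariantly closed extension of $\omega$, which is exactly the structure the equivariant characteristic class deforms order by order in $\nu$.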
As a final remark, let us note that the three classification results for star products, invariant star products and equivariant star products are connected by the sequence of maps $${\mathrm{H}{_\lie{g}}}^2(M) \longrightarrow \HdR{^{\mathrm{inv},2}}(M) \longrightarrow \HdR^2(M),$$ where the first map is the projection of ${\mathrm{H}{_\lie{g}}}^2(M)$ onto the first summand and the second map is the natural inclusion of invariant differential forms into the differential forms. This shows in particular that equivariantly equivalent star products are invariantly equivalent and likewise invariantly equivalent star products are equivalent.
--- abstract: | OSIRIS-REx is the third spacecraft in the NASA New Frontiers Program and is planned for launch in 2016. OSIRIS-REx will orbit the near-Earth asteroid (101955) Bennu, characterize it, and return a sample of the asteroid’s regolith back to Earth. The Regolith X-ray Imaging Spectrometer (REXIS) is an instrument on OSIRIS-REx designed and built by students at MIT and Harvard. The purpose of REXIS is to collect and image Sun-induced fluorescent X-rays emitted by Bennu, thereby providing spectroscopic information related to the elemental makeup of the asteroid regolith and the distribution of features over its surface. Telescopic reflectance spectra suggest a CI or CM chondrite analog meteorite class for Bennu, and this primitive nature strongly motivates its study. A number of factors, however, will influence the generation, measurement, and interpretation of the X-ray spectra measured by REXIS. These include: the compositional nature and heterogeneity of Bennu, the time-variable Solar state, X-ray detector characteristics, and geometric parameters for the observations. In this paper, we will explore how these variables influence the precision to which REXIS can measure Bennu’s surface composition. By modeling the aforementioned factors, we place bounds on the expected performance of REXIS and its ability to ultimately place Bennu in an analog meteorite class. author: - | Niraj K. Inamdar$^{a}$[^1], Richard P. Binzel$^{a}$, Jae Sub Hong$^{b}$, Branden Allen$^{b}$,\ Jonathan Grindlay$^{b}$, Rebecca A.
Masterson$^{c}$\ $^{a}$\ \ $^{b}$\ $^{c}$ bibliography: - 'report.bib' title: 'Modeling the Expected Performance of the REgolith X-ray Imaging Spectrometer (REXIS)' --- Introduction {#sec:intro} ============ In 2016, NASA is scheduled to launch OSIRIS-REx (“**O**rigins **S**pectral **I**nterpretation **R**esource **I**dentification **S**ecurity **R**egolith **Ex**plorer”), a mission whose goal is to characterize and ultimately return a sample of the near-Earth asteroid (101955) Bennu (formerly 1999 RQ$_{36}$ and hereafter Bennu)[@ORExReview]. Bennu was chosen as the target asteroid for OSIRIS-REx for several reasons. Spectral similarities in different near-infrared bands to B-type asteroids 24 Themis and 2 Pallas raise the intriguing possibility that Bennu is a transitional object between the two. Furthermore, Bennu’s reflectance spectra suggest that it may be related to a CI or CM carbonaceous chondrite analog meteorite class [@clark2011asteroid]. Carbonaceous chondrites are believed to be amongst the most primitive material in the Solar System, undifferentiated and with refractory elemental abundances very similar to the Sun’s. The discovery of water on the surface of 24 Themis provides additional scientific motivation for studying Bennu. Bennu belongs to a class of asteroids known as near-Earth asteroids (NEA). Its semimajor axis is roughly $1~\textrm{AU}$, and its orbit crosses Earth’s [@campins2010origin]. While this makes Bennu a particularly accessible target for exploration, it also makes Bennu a non-negligible impact risk to Earth. Calculations of Bennu’s orbital elements suggest an impact probability of $\sim 10^{-4}-10^{-3}$ by the year 2182, which coupled with its relatively large size (mean radius $\sim 250~\textrm{m}$) makes Bennu one of the most hazardous asteroids known [@milani2009long]. Taken together, these unique features make Bennu an attractive target for future study. 
In order to better characterize Bennu’s composition and physical state, OSIRIS-REx is equipped with a suite of instruments, amongst which is REXIS. REXIS, a student experiment aboard OSIRIS-REx, is an X-ray imaging spectrometer (“**RE**golith **X**-ray **I**maging **S**pectrometer”) whose purpose is to reconstruct elemental abundance ratios of Bennu’s regolith by measuring X-rays fluoresced by Bennu in response to Solar X-rays (Fig. \[fig:REXIS\_PoO\])[@allen2013regolith; @JonesSmithREXIS]. More details regarding REXIS’s systems-level organization and operation can be found in Jones, *et al.* [@JonesSmithREXIS]. ![REXIS principle of operation demonstrated schematically. Except for the Solar X-ray Monitor, which is shown to scale relative to REXIS, the rest of the figure is not to scale. X-rays from the Sun impinge on the regolith of Bennu, giving rise to X-ray fluorescence. These X-rays enter REXIS, where they are collected by CCDs. The radiation cover (shown in yellow) serves as a shade to prevent Solar radiation from entering REXIS. At the same time, the Solar X-ray Monitor (SXM), which is mounted on a different surface of OSIRIS-REx, collects Solar X-rays directly in order to understand the time variance of the Solar X-ray spectrum. The OSIRIS-REx spacecraft is not shown.[]{data-label="fig:REXIS_PoO"}](REXIS_PoO.pdf){width="80.00000%"}

Description of REXIS
--------------------

REXIS comprises two distinct, complementary instruments. The first is the primary spectrometer. Measuring approximately 37 cm high and 20 cm wide, it is mounted on the main instrument deck of OSIRIS-REx and houses four charge-coupled devices (CCDs) that measure X-rays emitted by Bennu’s regolith (Fig. \[fig:REXIS\_geom\]). REXIS images X-rays by means of a coded aperture mask mounted atop the spectrometer tower.
The X-ray shadow pattern cast by the mask on the detector plane and knowledge of the mask pattern allow for a reprojection of the measured X-rays back onto the asteroid, so that localized enhancements in the X-ray signal on roughly 50 m scales can be identified on Bennu’s surface. During the mission cruise phase, a radiation cover protects the CCDs from bombardment by nonionizing radiation (such as Solar protons) that can create charge traps in the CCDs and subsequently degrade the detector resolution[@RadDamChandra]. This radiation cover is opened prior to calibration and asteroid observations (see below). REXIS will observe Bennu for an overall observation period of $\sim 400$ hours. During this time, OSIRIS-REx will be in a roughly circular orbit along the asteroid’s terminator with respect to the Sun and about 1 km from the asteroid barycenter. REXIS will also collect calibration data. Since OSIRIS-REx orbits Bennu at approximately 1 km from the asteroid barycenter, and thus has a field of view that extends beyond the asteroid limb, cosmic sources of X-rays are a potential source of noise. Therefore, prior to asteroid observation, REXIS will observe the cosmic X-ray background (CXB) for a total of 3 hours. Furthermore, a period of 112 hours will be devoted to internal calibration to determine sources of X-ray noise intrinsic to the instrument itself. Throughout the operational lifetime of REXIS, a set of internal $^{55}$Fe radiation sources (which decay via electron capture to $^{55}$Mn with a primary intensity centered at $5.89~\mathrm{keV}$) will be used to calibrate the CCD gain. The asteroid X-ray spectrum measured by REXIS depends on both the elemental abundances of the asteroid regolith and the Solar state at the time of measurement. In order to remove this degeneracy, a secondary instrument is required to measure Solar activity.
The Solar X-ray Monitor (SXM), which is mounted on the Sun-facing side of REXIS, measures Solar activity and hence performs this function. The SXM contains a silicon drift diode (SDD) detecting element manufactured by Amptek, and generates a histogram of the Solar X-ray spectrum over each 32 s observational cadence. The Solar X-rays collected by the SXM allow for a time-varying reconstruction of the Solar state, so that, in principle, the only unknowns during interpretation of the asteroid spectrum are the regolith elemental abundances. The elemental abundances that we infer from the collected spectra are then used to map Bennu back to an analog meteorite class. During the REXIS observation period, X-rays emitted by Bennu are collected on board by CCDs (CCID-41s manufactured by MIT Lincoln Laboratory). The spectra that are generated from these data are then used to interpret the elemental abundance makeup of the asteroid. The baseline CCD data flow in a single stream, and REXIS data are processed in three distinct “modes” (Fig. \[fig:REXIS\_DataPipe\]). These are: Spectral Mode. : Only the overall accumulation of spectral CCD data over the instrument’s observational period is considered. No attempt is made at producing local elemental abundance or abundance ratio maps. Instead, the data are used to determine the average composition of the asteroid from the spectral data collected in order to correlate Bennu to a meteorite class of similar composition. Collimator Mode. : Coarse spatially resolved measurements of elemental abundances on the surface of Bennu are carried out in collimator mode using time resolved spectral measurements combined with the instrument attitude history and field of view (FOV) response function. The FOV response function is uniquely determined by the instrument focal length as well as the diameter and open fraction of the coded aperture mask. Imaging Mode.
: Higher spatial resolution spectral features on the asteroid surface are identified by applying coded aperture imaging[@Caroli]. In each time step, the data are the same as in collimator mode, though the distribution of counts on the detector plane is reprojected (using the known mask pattern and an appropriate deconvolution technique) onto the asteroid surface. All three science processing modes are carried out on the ground. Here, we are concerned with the performance of REXIS in Spectral Mode; discussion of REXIS’s performance in imaging and collimator mode may be found in Allen, *et al.*[@allen2013regolith]

Placing Bennu Within an Analog Meteorite Class
----------------------------------------------

One of the goals of REXIS is to place Bennu within an analog meteorite class. Meteorites of similar class can often be grouped based on chemical or isotopic similarity. In particular, it has been recognized that major chondritic and achondritic meteorite groups can be distinguished on the basis of elemental abundance ratios, as can various subchondritic types [@NittlerData]. In Fig. \[fig:Nittler\], we show how various meteorite classes can be grouped on the basis of elemental abundance ratios of Fe/Si, Mg/Si, and S/Si. REXIS therefore collects X-rays between energies of 0.5 and 7.5 keV, within which prominent Fe, Mg, S, and Si emission features are found. The particular X-ray energies associated with these elements are summarized in Table \[tab:SummaryOfElLines\]. Consistent with the measurement of the X-ray signatures of these elements, REXIS has two high-level requirements associated with its performance in Spectral Mode. These are:

-   REX-3: REXIS shall be able to measure the global ratios of Mg/Si, Fe/Si, and S/Si of Bennu within 25% of those of a CI chondrite illuminated by a 4 MK, A3.3 Sun.

-   REX-6: REXIS shall meet performance requirements given no less than 420 hours of observation time of Bennu.
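The classification step pictured in Fig. \[fig:Nittler\] amounts to locating a measured point in (Mg/Si, Fe/Si, S/Si) ratio space among class clusters. The sketch below is a nearest-centroid toy version of that idea; the centroid values are hypothetical placeholders, not the Nittler, *et al.* compositions used in the actual analysis.

```python
import math

# Hypothetical class centroids in (Mg/Si, Fe/Si, S/Si) abundance-ratio space.
# These numbers are illustrative placeholders only.
CLASS_CENTROIDS = {
    "CI": (0.90, 0.87, 0.44),
    "CM": (0.89, 0.80, 0.26),
    "H":  (0.82, 0.81, 0.11),
}

def classify(ratios):
    """Return the analog class whose centroid is nearest in log-ratio space."""
    def dist(centroid):
        return math.sqrt(sum((math.log(r) - math.log(c)) ** 2
                             for r, c in zip(ratios, centroid)))
    return min(CLASS_CENTROIDS, key=lambda name: dist(CLASS_CENTROIDS[name]))

# A measurement within 25% of the CI centroid should still map to CI.
measured = (0.90 * 1.2, 0.87 * 0.9, 0.44 * 1.1)
print(classify(measured))  # CI
```

Working in log-ratio space keeps a 25% excess and a 25% deficit equidistant from a centroid, which matches how a fractional-error requirement like REX-3 is stated.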
The first reflects the fact that REXIS must measure the stated elemental abundance ratios to within 25% of those of a typical CI chondrite during the quiet Sun. A 25% error is sufficient to distinguish between achondritic and chondritic types, as well as amongst various chondrite types, as indicated in Fig. \[fig:Nittler\] by the dashed line ellipses. The second requirement reflects the fact that REXIS must attain its science objectives within its allotted observation period.

  Line Designation              Energy center \[eV\]    Notes
  ----------------------------- ----------------------- ---------------------------------------------------------------
  Fe-L$\alpha$                  705.0                   Due to proximity, is combined with Fe-L$\beta$
  Fe-L$\beta$                   718.5                   Due to proximity, is combined with Fe-L$\alpha$
  Mg-K$\alpha_1$/K$\alpha_2$    1,253.60                Due to proximity, is combined with Mg-K$\beta$
  Mg-K$\beta$                   1,302.2                 Due to proximity, is combined with Mg-K$\alpha_1$/K$\alpha_2$
  Si-K$\alpha_1$/K$\alpha_2$    1,739.98/1,739.38       —
  S-K$\alpha_1$/K$\alpha_2$     2,307.84/2,306.64       —

  : Summary of lines of interest and their energies[@xdb2001]. In some cases, due to the close proximity of spectral features to one another, they are combined with one another in the analysis below. \[tab:SummaryOfElLines\]

[0.68]{} ![Trends in meteorite classification as a function of elemental abundance ratios. In the upper panel, we see how achondritic and chondritic meteorite specimens can be distinguished on the basis of their Mg/Si and Fe/Si elemental abundance ratios[@NittlerData]. To differentiate between various chondrite subtypes, we rely on the Mg/Si and S/Si ratios (lower panel). In both panels, we show the REXIS requirement of 25% error centered around a CI chondrite-like baseline. The expected REXIS performance under nominal conditions is indicated in the magenta ellipses. These error ellipses represent systematic error; further consideration of statistical error places the confidence in these calculations of systematic error at $3.5\sigma$ (see Sec. \[sec:Results\]). Composition data for meteorites are from Nittler, *et al.* [@NittlerData]. []{data-label="fig:Nittler"}](FeSi_MgSi_Nittler_trim_label.pdf "fig:"){width="\textwidth"} \[FeSi\_MgSi\]

[0.68]{} ![](SSi_MgSi_Nittler_trim_label.pdf "fig:"){width="\textwidth"} \[SSi\_MgSi\]

Our purpose in this work is to determine, for a given asteroid regolith composition, Solar state, and instrument characteristics, the spectrum that we expect to collect from the asteroid and the impact of data collection and processing on the eventual reconstruction of the hypothetical elemental abundances of Bennu. We model the expected performance of REXIS in its Spectral Mode and place bounds on its ability to place Bennu within an analog meteorite class. We will accomplish this in several steps. First, we model the ideal X-ray spectra that we expect to be generated by Bennu and the Sun.
We then model the instrument response for both the spectrometer and the SXM, accounting for factors such as total throughput, detector active area, quantum efficiency, and spectral broadening. We then model the data processing. Here, we combine the instrument response-convolved spectra from both Bennu and the Sun to determine how well we can reconstruct Bennu’s elemental abundances and place the asteroid within an analog meteorite class. We show that REXIS can accomplish its required objectives with sufficient margin. Methodology {#sec:Method} =========== Our overall methodology in simulating the expected performance of REXIS is summarized in Fig. \[fig:Spect\_Pipeline\]. Our basic procedure is to first simulate physical observables—in our case, asteroid and Solar spectra—under expected conditions. We then simulate the process of data collection for both the spectrometer and the SXM. Finally, we simulate the interpretation of the data and assess our ability to reconstruct the original observables using our processed data. In order to assess our expected performance, throughout the entire modeling process, we keep track of all simulated quantities, including those that would be unknowns during the mission lifetime, such as the actual Solar and asteroid spectrum. Simulating Observables ---------------------- The baseline observables for the spectrometer and the SXM are the asteroid and Solar X-ray spectra, respectively. For the discussion that follows, we denote the asteroid spectrum $I_{\mathrm{B}}(E)$ and the Solar spectrum $I_{\astrosun}(E)$. The cosmic X-ray background spectrum, which we must also consider, is denoted $I_{\mathrm{CXB}}(E)$. In each case, the spectrum is a function of energy $E$ and has units of $\mathrm{photons/cm^2/s/Sr/keV}$. Based on ground observations, the expected asteroid spectrum $I_{\mathrm{B}}$ is that from a CI-like asteroid regolith. 
Since the OSIRIS-REx mission occurs during the Solar minimum, the expected Solar spectrum is that from a quiet Sun. ### Asteroid Spectrum Asteroid spectra are calculated using the standard fluorescence equation for the intensity of the fluorescent lines[@JenkinsQXS]. We also include contributions from coherent scattering.[^2] The contribution from incoherent scattering is at least an order of magnitude less than that from coherent scattering and is ignored here [@LimNitt1]. We assume the asteroid, which is modeled as a sphere of radius 280 m, is viewed in a circular terminator orbit 1 km from the asteroid center. From the point of view of REXIS, half of the asteroid is illuminated while the other half is dark. Furthermore, the asteroid is not uniformly bright on its Sun-facing side, and the energy-integrated flux peaks at a point offset from the asteroid nadir. The effect of these angles is taken into account when generating the asteroid spectrum (for more details, see Appendix \[sec:AsteroidSec\]). The asteroid spectrum itself is a function of the Solar spectrum. It is also, to a much lesser extent, a function of the CXB, which is significantly lower in intensity than the incident Solar radiation, and which is only effective at inducing fluorescence at energies much higher than we are concerned with. In generating $I_{\mathrm{B}}(E)$, we use $I_{\astrosun}(E)$, as discussed below in Sec. \[sec:SolarSpec\]. ### Solar Spectrum {#sec:SolarSpec} We calculate Solar X-ray spectra $I_{\astrosun}(E)$ using the CHIANTI atomic database[@CHIANTI_1; @CHIANTI_2] and SolarSoftWare package [@SSW_pack]. The Solar spectrum is that generated by the Solar corona, the primary source of X-rays from the Sun. Since REXIS will be observing Bennu during the Solar minimum, we model the expected Solar spectrum by using the quiet Sun differential emission measure (DEM) derived from the quiet Sun data of Dupree, *et al.*[@Dupree] and elemental abundances of Meyer [@Meyer; @Anders]. 
The DEM is a quantity that encodes the plasma temperature dependence of the contribution function and hence intensity of the radiation [@LimNitt1; @SolarCorona; @LandiDEM]. The DEM can be derived from observations, and for the quiet Sun it tends to peak at a single temperature (in the range of about $3-6~\mathrm{MK}$), so that to first order, the Solar corona can be approximated as comprising an isothermal plasma. In general, however, the actual Solar X-ray spectrum will require an integration of the DEM over all temperatures present in the plasma along the observer’s line of sight (see Appendix \[sec:SolarModelSec\]). For higher coronal temperatures, access to higher energy states leads to a so-called hardening of the Solar spectrum [@LimNitt1], an effect which is most pronounced during a Solar flare. In this case, the DEM peaks at more than one temperature. Since we expect the majority of our observations to take place while the Sun is relatively inactive, during data processing, we take advantage of the fact that the corona can be approximated as isothermal (for more details, see Appendix \[sec:SolarModelSec\]). Finally, we note that the Solar X-ray spectrum depends on the elemental abundances of the Solar corona, for which several models are available [@LimNitt1]. However, our results are relatively insensitive to the coronal elemental abundance model employed. ### CXB Spectrum {#sec:CXBSpec} The CXB spectrum that we use in our models is calculated following Lumb, *et al*[@LumbCXB]. In this model, $I_{\mathrm{CXB}}(E)$ is calculated by assuming that the CXB comprises two optically thin components[@MEKAL] and a power law component [@Zombeck]. In general, the CXB flux becomes comparable to the asteroid flux at $\sim 2~\mathrm{keV}$, near the S-K complex (see Fig. \[fig:Spect\_Comp\]). Measurement of sulfur is critical, since it enables us to differentiate amongst different chondritic varieties (Fig. \[fig:Nittler\]). 
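To make the structure of such a background model concrete, the toy sketch below combines two bremsstrahlung-like thermal terms (each standing in for an optically thin plasma component) with an extragalactic power law. The temperatures, normalizations, and photon index are illustrative placeholders, not the fitted values of Lumb, *et al.*

```python
import math

def cxb_model(E_keV,
              kT1=0.20, norm1=5.0,      # soft thermal component (placeholder)
              kT2=0.07, norm2=10.0,     # softer thermal component (placeholder)
              gamma=1.42, norm_pl=8.0): # power-law component (placeholder)
    """Toy CXB surface brightness [arbitrary units] vs. photon energy in keV."""
    thermal1 = norm1 * math.exp(-E_keV / kT1)   # exponential-cutoff stand-in
    thermal2 = norm2 * math.exp(-E_keV / kT2)   # for optically thin plasmas
    power_law = norm_pl * E_keV ** (-gamma)
    return thermal1 + thermal2 + power_law

# Above ~1 keV the power law dominates this toy model; the thermal terms
# matter only at the softest energies.
for E in (0.5, 1.0, 2.0, 5.0):
    print(E, cxb_model(E))
```

Even in this crude form, the model reproduces the qualitative point made above: the background falls with energy slowly enough that it remains comparable to the faint asteroid signal near the S-K complex.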
Hence, we ultimately find that measurement of the S/Si ratio is most sensitive to this particular source of noise and requires the longest amount of measurement time to achieve statistical significance (see Sec. \[sec:ObsTime\]). ### Internal Background {#sec:IntBack} Fluorescence from the REXIS instrument itself can be present in the signal we measure. Incident X-rays primarily from Bennu (but also from the CXB) can strike the inner portions of the instrument and induce fluorescence. Ideally, a ray-tracing simulation would be carried out to determine the extent of this internal noise. For our work, however, we use data from the Chandra ACIS instrument that have been suitably scaled down to match the detector area of the REXIS CCDs[@ACIS]. A comparison of Bennu’s spectrum with that of the CXB and the internal background is shown in Fig. \[fig:Spect\_Comp\].

[0.625]{} ![Comparison of spectra of interest at an early stage of our model development. In the left panel, we show our preliminary model of Bennu’s spectrum as emitted directly from the asteroid (solid magenta line). We also show Bennu’s spectrum after detector quantum efficiency, molecular contamination, and the optical blocking filter (OBF; Fig. \[fig:InstResInputs\]) are taken into account. In the right panel, we show Bennu’s spectrum compared to sources of noise. Fluorescent lines from Bennu are shown as orange markers, while the complete Bennu spectrum, including scattering, is shown in solid red. The cosmic X-ray background (CXB) is shown in black, while internal noise (due to fluorescence from the instrument itself) is shown with the dotted blue line. The S-K complex from Bennu, which is the prominent set of lines between 2 and 3 keV, is most strongly subject to the effects of noise. Internal background has been scaled from Chandra data [@ACIS]. This model does not include oxygen, as it falls just below our model cut-off. []{data-label="fig:Spect_Comp"}](Bennu_only_label_trim.pdf "fig:"){width="\textwidth"} \[fig:FeSi\_MgSi\]

[0.625]{} ![](All_comparison_label_trim.pdf "fig:"){width="\textwidth"} \[fig:SSi\_MgSi\]

Instrument Response
-------------------

The next step after simulating the observables is to estimate how these observables will convolve with the instrument response. Thus we simulate the data collection process by applying the instrument response for both the spectrometer and the SXM to our model spectrum.
Inputs into the instrument response models include (along with the symbols that we use to denote each):

-   Observation time, $T_{\mathrm{obs}}$

-   Coded aperture mask throughput, $F$ (spectrometer only)

-   Grasp, $G = A_E \Omega$

-   Effective detector area, $A_E$

-   Solid angle subtended by source with respect to detector, $\Omega$

-   Detector quantum efficiency, $Q(E)$ (a function of energy $E$)

-   Detector histogram bin width, $\Delta E$

-   Gain drift

-   Detector spectral resolution, $\mathrm{FWHM}$

In all cases where we are evaluating our results, we assume our measurements are well described by Poisson statistics. The origin of the values used for each of these inputs varies; in the sections below, we detail how each is derived for our simulations. After the asteroid and Solar spectra have been convolved with the detector response functions, the basic output for each will be a histogram of photon counts as a function of energy. In Table \[tab:SpectObsInputs\], we summarize some of the major observational inputs into our simulations, while others are given in the text that follows.

  Parameter                                                   Value
  ----------------------------------------------------------- ---------------------
  Open fraction                                               40.5%
  Histogram binning $\Delta E$ \[$\mathrm{eV/bin}$\]          $\sim 15$
  Gain drift \[$\mathrm{eV}$\]                                $\pm 15$
  Total observation time $T_{\mathrm{obs}}$                   423 hours
  CXB calibration period $T_{\mathrm{CXB}}$                   3 hours
  Internal background calibration period $T_{\mathrm{int}}$   112 hours
  Solar state                                                 Quiet Sun
  Regolith composition                                        $\sim$CI chondritic

  : Observational inputs for spectrometer instrument response. \[tab:SpectObsInputs\]

### Observation Time

The observation time, $T_{\mathrm{obs}}$, for the spectrometer is taken to be 423 hours. For the SXM, Solar spectra are recorded as histograms in 32 s intervals, roughly the time scale over which the Solar state can vary substantially.
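The role of $T_{\mathrm{obs}}$ under this Poisson assumption can be illustrated with a back-of-the-envelope signal-to-noise estimate; the count rates below are hypothetical placeholders, not REXIS predictions.

```python
import math

def line_snr(source_rate, background_rate, t_obs_hours):
    """SNR of a line against background, assuming Poisson statistics.

    Rates are in counts/s; both source and background counts fluctuate,
    so the noise is sqrt(S + B).
    """
    t = t_obs_hours * 3600.0
    S = source_rate * t
    B = background_rate * t
    return S / math.sqrt(S + B)

# Hypothetical rates for a weak line measured against CXB-dominated noise.
print(line_snr(0.01, 0.05, 423.0))   # full observation period
print(line_snr(0.01, 0.05, 42.3))    # 10x shorter -> sqrt(10) lower SNR
```

Since both $S$ and $B$ grow linearly with time, the SNR scales as $\sqrt{T_{\mathrm{obs}}}$, which is why the weakest features (such as the S-K complex) drive the required observation period.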
Times $T_{\mathrm{CXB}}$ and $T_{\mathrm{int}}$ are also allocated for CXB and internal calibration, respectively (see Sec. \[sec:BackSub\]).

### Coded Aperture Mask Throughput

The overall throughput of the instrument depends on the open fraction of the coded aperture mask (the fraction of open mask pixels to total mask pixels). For REXIS, nominally half of the coded aperture pixels are open. However, the presence of a structural grid network to support the closed pixels reduces the throughput further. Since the grid width is 10% of the nominal pixel spacing, the count rate of photons incident on the REXIS detectors will be reduced by a factor $F = 0.5\times \left(1 - 0.1\right)^2 = 0.405$ due to the presence of the coded aperture mask.

### Grasp {#sec:grasp}

The grasp $G$, which has units of $\mathrm{cm^2Sr}$, is the quantity that encodes the solid angle $\Omega$ subtended by the target with respect to the detector and the averaged detector geometric area $A_E$ that sees the target; detector efficiency is not accounted for in this term. The detector area does not comprise a single point, and since the field of view is not a simple cone, $G$ must in general be calculated numerically. We calculate $G$ for the CCDs using custom ray tracing routines in MATLAB and IDL. Since portions of the detector area can see the CXB that extends beyond the limb of the asteroid during observation, we keep track of this as well during our calculations[^3]. In Table \[tab:SpectGraspInputs\], we summarize $G$ for the CCDs, including individual contributions from Bennu and the CXB. For the SXM, we assume that the solid angle subtended by the Sun is given by that for a distant source, $\pi\left(R_{\astrosun}/1~\mathrm{AU}\right)^2$, where $R_{\astrosun}$ is the Sun’s radius.
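As a quick numerical check of this distant-source approximation (constants rounded; values in cgs):

```python
import math

R_SUN_CM = 6.957e10   # solar radius
AU_CM = 1.496e13      # 1 AU
A_SDD_CM2 = 0.25      # SXM silicon drift diode area

# Solid angle of a distant disk: Omega = pi * (R / d)^2
omega = math.pi * (R_SUN_CM / AU_CM) ** 2

# Treating the SXM as a point collector, G = A_E * Omega
grasp = A_SDD_CM2 * omega

print(f"Omega = {omega:.3e} Sr")       # ~6.79e-05 Sr
print(f"G     = {grasp:.3e} cm^2 Sr")
```

Evaluating the expression this way reproduces the quoted solid angle of $6.79\times 10^{-5}~\mathrm{Sr}$ and gives $G \approx 1.7\times 10^{-5}~\mathrm{cm^2\,Sr}$ for the $0.25~\mathrm{cm^2}$ detector.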
Since the Sun is located at such a distance that its incident rays can be treated as parallel, and since there are no structural elements driving the SXM viewing geometry substantially, we calculate $G$ for the SXM by simply multiplying the solid angle subtended by the Sun by the detector’s $0.25~\mathrm{cm^2}$ area (Table \[tab:SXMInputs\]).

                                                                 Bennu   CXB     Total REXIS
  ------------------------------------------------------------- ------- ------- -------------
  Averaged geometric detector area $A_E$ \[$\mathrm{cm^2}$\]                     
  Solid angle $\Omega$ \[$\mathrm{Sr}$\]                         0.254   0.185   —
  Grasp $G$ \[$\mathrm{cm^2 Sr}$\]                               3.85    0.388   4.24

  : Geometric inputs for spectrometer instrument response during primary observation period. These inputs assume a 280 m spherical asteroid radius, average 1 km asteroid centroid-to-spacecraft orbit, 9.84 cm mask coded area diameter, and a 20 cm focal length. During the calibration period, when REXIS observes only the sky, the entire $4.24~\mathrm{cm^2 Sr}$ grasp is devoted to the CXB. \[tab:SpectGraspInputs\]

  Parameter                                            Value
  ---------------------------------------------------- -----------------------------------------------------------------------
  Histogram binning $\Delta E$ \[$\mathrm{eV/bin}$\]   $\sim 30$
  Single integration time                              32 s
  SDD area $A_E$ \[$\mathrm{cm^2}$\]                   0.25
  Solid angle $\Omega$ \[$\mathrm{Sr}$\]               $\pi\left(R_{\astrosun}/1~\mathrm{AU}\right)^2 = 6.79 \times 10^{-5}$
  Grasp $G$ \[$\mathrm{cm^2 Sr}$\]                     $1.70\times 10^{-5}$

  : Inputs for SXM response. \[tab:SXMInputs\]

### Detector Quantum Efficiency

The detector quantum efficiency, $Q(E)$, gives the overall reduction in counts registered by the detector due to absorption of incoming X-rays both by material overlying the CCDs and by the CCD material itself.
In the case of the CCDs, we use the known material stackup[@BautzCCD] and widely-available photoabsorption cross section data[@hubbell1996tables] to determine the energy-dependent attenuation and hence quantum efficiency of the detector. We also include other possible sources of detection inefficiency, including built-up molecular contamination[@ACIS] and the optical blocking filter (OBF), which is a thin aluminum film deposited on the CCDs in order to prevent saturation from optical light. The combined contribution of all these to $Q(E)$ is shown in Fig. \[fig:InstResInputs\].[^4] For the SXM, SDD efficiency curves are taken from manufacturer’s data[@AMPTEK]. ### Detector Histogram Binning The histograms that are generated by the spectrometer data are binned in intervals of width $\Delta E$. Photons detected by the REXIS CCDs are assigned a 9 bit energy value, so that over an energy range of $0.5-7.5~\mathrm{keV}$, $\Delta E = 7~\mathrm{keV}/2^9 \sim 15~\mathrm{eV}$ (Table \[tab:SpectObsInputs\]). For the SXM, there are 256 energy bins, so that $\Delta E \sim 30~\mathrm{eV}$ (Table \[tab:SXMInputs\]). ### Gain Drift {#sec:GainDrift} Our ability to accurately define line features depends on our ability to accurately calibrate the gain of the detectors. In the case of the spectrometer, we employ on-board $^{55}$Fe calibration sources in order to determine the line centers. The strength of the $^{55}$Fe sources has been chosen to ensure that within a given time period, the sources’ line centers can be determined with $3\sigma$ accuracy to within one bin width. In our work, we shift the gain at $5.9~\mathrm{keV}$ randomly by $\pm~15~\mathrm{eV}$ for each simulation we perform. In the case of the SXM, we will use known Solar spectral features to accurately calibrate the gain over each integration period. Since the count rate for the SXM is so high, we can accurately determine line centers without counting statistics having too great an effect. 
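A minimal sketch of this line-center calibration, recovering the $^{55}$Fe line position from a binned histogram; the event count, line width, and random seed below are hypothetical placeholders.

```python
import random

random.seed(0)

# Simulate a ^55Fe calibration histogram: a Gaussian line at 5890 eV with an
# illustrative sigma of 64 eV, binned at 15 eV/bin.
TRUE_CENTER, SIGMA, BIN_W = 5890.0, 64.0, 15.0
events = [random.gauss(TRUE_CENTER, SIGMA) for _ in range(20000)]

bins = {}
for e in events:
    b = int(e // BIN_W)
    bins[b] = bins.get(b, 0) + 1

# Centroid of the binned counts: with plentiful counts, the statistical error
# on the line center (~sigma/sqrt(N)) is far smaller than one bin width,
# so the gain can be pinned down well within the +/-15 eV drift budget.
total = sum(bins.values())
centroid = sum((b + 0.5) * BIN_W * n for b, n in bins.items()) / total
print(round(centroid, 1))
```

A full treatment would fit a Gaussian rather than take a raw centroid, but the counting-statistics scaling is the same.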
### Detector Spectral Resolution

The detector energy resolution, which we denote by $\mathrm{FWHM}$ (full width at half maximum), describes the width of the Gaussian distribution that a delta function-like spectral line would assume due to broadening. Natural broadening, which is typically on the order of a few eV, is negligible in comparison to broadening from the detector itself. For the CCDs, the $\mathrm{FWHM}$ is a function of both photon energy and detector temperature[@Suzaku]. The detector temperature drives dark current, which in turn increases the width of the Gaussian. REXIS’s required detector operating temperature $T$ is $-60~\mathrm{^\circ C}$ or below, while the predicted temperature at the time of writing is $\sim 20~\mathrm{^\circ C}$ colder than the requirement. Since the CCD temperature is the strongest driver of spectral resolution for a given line, and since the CCDs are passively cooled, in our results below we calculate the performance over the range of detector temperatures between the requirement and the prediction. CCD $\mathrm{FWHM}$ is determined using a combination of experimental data and analytical expressions, in a procedure outlined in Appendix \[sec:FWHMCalc\]. Initial test results show that CCD performance is at or near Fano-limited. $\mathrm{FWHM}$ as a function of detector temperature for energies at the line centers of interest is shown in Fig. \[fig:InstResInputs\]. For the SXM, the situation is somewhat simpler, since the SXM is cooled actively via a thermoelectric cooler. In this case, based on the manufacturer’s test data, we have assumed that $\mathrm{FWHM}\left(5.9~\mathrm{keV}\right) = 125~\mathrm{eV}$. [0.625]{} ![Drivers of spectral resolution and instrument response. In the left panel, we show the energy resolution of the CCDs as a function of detector temperature for various energies. The energies indicated are those associated with the line centers of spectral lines of interest.
Initial tests have indicated that detector performance is at or near Fano-limited at the required detector operating temperature of $-60~\mathrm{^\circ C}$. Curves have been calculated using the method given in Appendix \[sec:FWHMCalc\]. In the right panel, we show the quantum efficiency of the CCDs as a function of energy. Several sources of efficiency degradation are indicated. Molecular contamination is indicated by the solid blue line. The effect of the optical blocking filter, whose purpose is to attenuate optical light from Bennu that could cause saturation of the detectors, is indicated by the dash-dotted red line. The dashed black line indicates the radiation attenuation due to the composition stackup of the back-illuminated CCID-41. The total effect of all these is indicated by the thick orange line. Note the quantum efficiency estimate for our back-illuminated CCD includes a conservative margin. []{data-label="fig:InstResInputs"}](FWHM_label_trim.pdf "fig:"){width="\textwidth"} \[fig:FWHM\] \   [0.625]{} ![Drivers of spectral resolution and instrument response. In the left panel, we show the energy resolution of the CCDs as a function of detector temperature for various energies. The energies indicated are those associated with the line centers of spectral lines of interest. Initial tests have indicated that detector performance is at or near Fano-limited at the required detector operating temperature of $-60~\mathrm{^\circ C}$. Curves have been calculated using the method given in Appendix \[sec:FWHMCalc\]. In the right panel, we show the quantum efficiency of the CCDs as a function of energy. Several sources of efficiency degradation are indicated. Molecular contamination is indicated by the solid blue line. The effect of the optical blocking filter, whose purpose is to attenuate optical light from Bennu that could cause saturation of the detectors, is indicated by the dash-dotted red line.
The dashed black line indicates the radiation attenuation due to the composition stackup of the back-illuminated CCID-41. The total effect of all these is indicated by the thick orange line. Note the quantum efficiency estimate for our back-illuminated CCD includes a conservative margin. []{data-label="fig:InstResInputs"}](QE_label_trim.pdf "fig:"){width="\textwidth"} \[fig:QE\]

### Calculating the Instrument Response Function

In this section, we summarize how all the above inputs combine to generate the instrument response and a spectrum histogram. We denote the baseline intensity as $I_0(E)$ \[= $I_{\mathrm{B}}(E)$, $I_{\astrosun}(E)$, or $I_{\mathrm{CXB}}(E)$\] and multiply $I_0(E)$ by the relevant geometrical and time integration factors. The number of counts $C_0(E)$ accumulated by the detector over a given integration period $T_{\mathrm{obs}}$ is thus $$\begin{aligned} C_0(E) = I_0(E) \cdot G \cdot T_{\mathrm{obs}} \cdot \Delta E.\end{aligned}$$ During primary observation, $G$ for the asteroid and the CXB are those given in Table \[tab:SpectGraspInputs\]. During the calibration period for the CXB, however, $G$ is that for the whole spectrometer (i.e. $4.24~\mathrm{cm^2 Sr}$) and instead of $T_{\mathrm{obs}}$, we have $T_{\mathrm{CXB}}$. $C_0(E)$ will be reduced due to the quantum efficiency of the detector, and the resulting count distribution $C_1(E)$ is given by $$\begin{aligned} C_1(E) &= C_0(E)\cdot Q(E) \nonumber \\ &= I_0(E) \cdot Q(E) \cdot G \cdot T_{\mathrm{obs}} \cdot \Delta E.\end{aligned}$$ In Fig. \[fig:Spect\_Comp\], we show how Bennu’s modeled spectrum compares to the CXB and internal background, plotting $C_1(E)/T_{\mathrm{obs}}/\Delta E$ for each. Consider a function $\mathrm{Poisson}[C(E)]$ which takes as an input a number of counts for a given energy $C(E)$ and outputs a Poisson-distributed random number from a distribution whose mean is $C(E)$.
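Per energy bin, the accumulation and efficiency steps above reduce to a product of factors. A minimal sketch, in which the power-law intensity and the one-hour integration time are placeholders for illustration rather than values from the text:

```python
# Minimal per-bin sketch of C_0(E) and C_1(E). The toy power-law
# intensity and one-hour integration time are placeholders; in the
# simulation I_0 is one of the modeled spectra (Bennu, Sun, or CXB).
G_BENNU = 3.85        # grasp for Bennu [cm^2 sr] (Table [tab:SpectGraspInputs])
T_OBS_S = 3600.0      # placeholder integration time [s]
DELTA_E_KEV = 0.015   # spectrometer bin width [keV] (~15 eV)

def counts(i0, q, energy_kev):
    """C_1(E) = I_0(E) * Q(E) * G * T_obs * dE for one energy bin."""
    c0 = i0(energy_kev) * G_BENNU * T_OBS_S * DELTA_E_KEV  # C_0(E)
    return c0 * q(energy_kev)                              # C_1(E)

# toy inputs: falling power-law intensity, flat 90% quantum efficiency
c1 = counts(lambda e: 10.0 * e ** -2, lambda e: 0.9, 2.0)
```

In the full simulation, the returned $C_1(E)$ is what the Poisson step below acts on.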
Applying Poisson statistics to $C_1(E)$ then gives $$\begin{aligned} C_2(E) &= \mathrm{Poisson}\left[C_1(E)\right] \nonumber \\ &= \mathrm{Poisson}\left[I_0(E) \cdot Q(E) \cdot G \cdot T_{\mathrm{obs}} \cdot \Delta E\right]. \end{aligned}$$ The effect of the detector state upon the spectrum is accounted for by imposing an effective broadening upon each count value in the spectrum, the broadening having the shape of a Gaussian with a given $\mathrm{FWHM}$. For the CCDs, $\mathrm{FWHM} = \mathrm{FWHM}(E,T)$, where $T$ is the detector temperature and $E$ is photon energy. This broadening will not have the shape of a precise Gaussian, however, and to simulate the stochastic nature of the broadening, we generate a random distribution sampled from a Gaussian of given energy and $\mathrm{FWHM}$, with the total number of counts given by $C_2(E)$. Let $\mathrm{Gaussian}\left[E,C(E),\mathrm{FWHM}(E,T)\right]$ denote the generic Gaussian function that takes as an input the energy $E$, the counts at that energy $C(E)$, and the $\mathrm{FWHM}(E,T)$. Then the distribution of counts $C_3(E)$ is given by the convolution of $\mathrm{Gaussian}$ and $C_2(E)$: $$\begin{aligned} C_3(E) = C_2(E) \ast \mathrm{Gaussian}\left[E,C_2(E),\mathrm{FWHM}(E,T)\right].\end{aligned}$$ A histogram is then generated by binning $C_3(E)$ into the required number of bins. Consider a generic binning function that takes as inputs a counts profile $C(E)$ over what may be regarded as a continuous energy range $E \in [0.5~\mathrm{keV},7.5~\mathrm{keV}]$ and bins it into a new profile $C'(E)$ over an energy range $E'$, where $$\begin{aligned} E' = \left\{0.5~\mathrm{keV}, 0.5~\mathrm{keV} + \Delta E, 0.5~\mathrm{keV} + 2\Delta E,...,7.5~\mathrm{keV}\right\}. \end{aligned}$$ Denote this function $C'(E') = \mathrm{Binning}\left[C(E),E;E'\right]$. Then the final histogram profile $C_3'(E')$ over the binned energies is given by $$\begin{aligned} C_3'(E') = \mathrm{Binning}\left[C_3(E),E;E'\right]. 
\end{aligned}$$ In Fig. \[fig:Det60\_hist\] we show a simulated histogram for a detector temperature $T = -60~\mathrm{^\circ C}$. For reference, the spectral features associated with our lines of interest are shown as thick colored lines. Noise subtraction (see Sec. \[sec:BackSub\] below) has been applied. In Fig. \[fig:Quiet\_Fit\], we show the simulated histogram for the quiet Sun (solid magenta line), along with the idealized spectrum from which it is derived (dotted red line). ![Example histogram of detector at temperature of $-60~\mathrm{^{\circ}C}$, where these results follow from our preliminary model in Fig. \[fig:Spect\_Comp\].[]{data-label="fig:Det60_hist"}](Det60_rev_trim.pdf){width=".65\textwidth"}

Data Processing
---------------

In this section, we detail our methods for processing the simulated spectrometer and SXM data (right-hand and bottom sides of Fig. \[fig:Spect\_Pipeline\]). We begin by detailing the process of noise background subtraction. Next, we discuss how we perform histogram counts for all our lines of interest. We then discuss the method for reconstructing the Solar spectrum and, finally, how we generate calibration curves to map from histogram count ratios back to elemental abundance weight ratios.

### Spectrometer Background Subtraction {#sec:BackSub}

As noted, asteroid spectral features (in particular, sulfur) are sensitive to noise from the CXB and from the instrument itself. As a result, REXIS devotes periods of time to both CXB and internal noise calibration. The data gathered during the calibration period are used to subtract out sources of noise from the data product. To simulate the noise subtraction procedure, we consider the total observed histogram counts $C_3'(E')$, which include both the internal and CXB signal. We then simulate the calibration period data.
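The subtraction itself amounts to rescaling the calibration-period counts to the primary integration time and subtracting bin by bin. A minimal sketch, in which the function name and list-of-bins representation are ours:

```python
# Sketch of the background-subtraction step: calibration-period counts
# are rescaled to the primary integration time and subtracted per bin.
# The function name and list-of-bins representation are illustrative.
def subtract_background(c_obs, c_int, c_cxb, t_obs, t_int, t_cxb):
    """Per bin: C3'(E') - C_int'(E')*T_obs/T_int - C_CXB'(E')*T_obs/T_CXB."""
    return [co - ci * t_obs / t_int - cx * t_obs / t_cxb
            for co, ci, cx in zip(c_obs, c_int, c_cxb)]
```

Because the calibration counts are themselves Poisson realizations, the subtracted spectrum retains some high-frequency noise.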
Let internal counts as a function of energy be given by $C'_{\mathrm{int}}(E')$ and CXB counts by $C'_{\mathrm{CXB}}(E')$, where we have assumed that Poisson statistics and binning have been applied to each, and that the counts have been accumulated over the calibration periods $T_{\mathrm{int}}$ and $T_{\mathrm{CXB}}$, respectively. Then the procedure for background subtraction is to scale each calibration count value up so that the integration time matches that of the primary observation. Thus, the spectrum that we consider after accounting for noise subtraction is given by $$\begin{aligned} C_3'(E') - C'_{\mathrm{int}}(E')\frac{T_{\mathrm{obs}}}{T_{\mathrm{int}}} - C'_{\mathrm{CXB}}(E') \frac{T_{\mathrm{obs}}}{T_{\mathrm{CXB}}}.\end{aligned}$$ In Fig. \[fig:Det60\_hist\], we see the effect of noise subtraction. The thin, dashed line shows the input spectrum, with CXB and internal noise especially prominent at higher energies. After subtracting out the CXB and internal background, there remains some high-frequency noise (right side of Fig. \[fig:Det60\_hist\]) since Poisson statistics are included for both the simulated observation and calibration data. We note again that, since REXIS performs its CXB calibration with a sky observation in the absence of Bennu, $C'_{\mathrm{CXB}}(E')$ is calculated with the grasp of the entire spectrometer.

### Spectrometer Line Counting

The quantities of the various elements present in Bennu’s regolith are determined by the strengths of the corresponding spectral features, and hence the counts in the spectrometer histogram. When counting, for a given line center, we consider all the counts within the $\mathrm{FWHM}$ of a Gaussian centered about that line center. As discussed above, the $\mathrm{FWHM}$ is a function of both line center energy $E$ and detector temperature $T$.
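The counting rule above (attribute to a line all histogram counts within one $\mathrm{FWHM}$ of its center) can be sketched as follows; the bin grid in the test values is illustrative:

```python
# Sketch of the line-counting rule: sum all histogram counts whose bin
# centers lie within FWHM/2 of the line center. Bin grid is illustrative.
def line_counts(bin_centers_ev, counts, line_center_ev, fwhm_ev):
    """Sum counts in bins within +/- FWHM/2 of the line center."""
    half = fwhm_ev / 2.0
    return sum(c for e, c in zip(bin_centers_ev, counts)
               if abs(e - line_center_ev) <= half)
```

Widening the window (larger $\mathrm{FWHM}$) admits more continuum and neighboring-line contamination, which is why the ideal per-line counts are tracked alongside the windowed counts.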
Onboard calibration data, which give us the $\mathrm{FWHM}$ at $5.9~\mathrm{keV}$, and pre-flight test data allow us in principle to estimate the detector $\mathrm{FWHM}$ for each CCD frame that is processed. For the purposes of our simulations here, we assume complete knowledge of the $\mathrm{FWHM}$. Furthermore, as we demonstrate below (Sec. \[sec:Accs\]), our results are relatively insensitive to $\mathrm{FWHM}$. We do not assume that we know the actual line centers with complete certainty. By employing gain drift (see Sec. \[sec:GainDrift\] above), we allow for the misidentification of the line centers. In Fig. \[fig:Det60\_hist\], we indicate the counting zones for the lines of interest by vertical lines. *All* counts within a given $\mathrm{FWHM}$ counting zone are considered to come from that line of interest. Therefore, there will naturally be contamination within each zone from the continuum background, noise sources, and other lines. During our simulations, we therefore keep track of the ideal, expected count number from each line in addition to those we count directly from the histogram. The error between the two affects how well we are able to reconstruct weight ratios from these counts. More details on the counting scheme and the definition of “accuracy” are given in Appendix \[sec:DefofAcc\].

### Solar Spectrum Reconstruction

The method by which we use the SXM histogram to reconstruct the Solar spectrum is shown schematically in Fig. \[fig:SXM\_flow\]. First, we take the quiet Sun spectrum (see Sec. \[sec:SolarSpec\]) and convolve it with the detector response to generate a synthetic “observed” histogram (two blue boxes on the upper left of Fig. \[fig:SXM\_flow\]). Then we utilize a database of isothermal spectra, which we generate beforehand, and convolve those with the known detector response to generate isothermal spectrum-derived histograms (boxes on the lower left).
We use the observed histogram and those generated from the database to determine a best fit. The unconvolved isothermal spectrum whose convolved form provides the best fit is used in our later analysis. In Fig. \[fig:Quiet\_Fit\], we summarize the results of the fitting procedure; we see that there are good fits over the energy ranges corresponding to our elements of interest. We have found that using a linear combination of isothermal spectra when fitting against the observed Solar spectrum can provide better fits. This result is to be expected, since the realistic quiet spectrum is indeed, via integration over the DEM, a linear combination of isothermal spectra. For simplicity here, however, we focus only on single-temperature fits. The quality of the fit depends also on the characteristics of the isothermal spectrum database. These spectra are dependent on factors such as the coronal elemental abundance model employed. While we do not claim to have explored the full model space of elemental abundance models available, we have ensured that the abundance models used for the DEM-convolved realistic spectrum and for the isothermal spectrum database are distinct and randomly chosen. ![Fitting of quiet Solar spectra in the isothermal approximation. Example of fitting procedure for a simulated quiet Sun. First, a DEM-folded, quiet Sun Solar spectrum model is convolved with the SXM detector response function, which simulates ground data for the Solar state. Then, we search for a best-fit isothermal spectrum and convolve that with the detector response function. The isothermal spectrum whose detector response has the best fit with that of the realistic spectrum is then used to generate the calibration curves (see Fig. \[fig:CalCurves\]). The histogram shown is over 32 s, while in this particular case, the isothermal best fit is a $3.1~\mathrm{MK}$ spectrum.
[]{data-label="fig:Quiet_Fit"}](Quiet_fit_label_rev){width="80.00000%"}

### Calibration Curve Generation: Mapping Count Ratios to Weight Ratios

In order to make the transition from histogram count ratios to elemental abundance ratios, we make use of so-called calibration curves [@LimNitt1]. Calibration curves map, for a given Solar state, elemental abundance ratios to ideal count ratios. To generate calibration curves, we take the unconvolved Solar spectrum and simulate asteroid spectra corresponding to a wide range of meteorite compositions. The range of elemental abundances these compositions afford allows us to consider realistic weight ratios that may be expected from Bennu. The simulated spectra allow us to determine the expected count ratios for a given weight ratio, which then allows us to map histogram counts back to elemental abundances. In Table \[tab:CalInputs\], we detail the various meteoritic compositions used to generate the calibration curves for our elements of interest. In Fig. \[fig:CalCurves\], we show calibration curves for our elemental ratios of interest. Various meteorite groups are indicated by the symbols. Solid lines indicate second-order fits to the realistic quiet Solar spectrum, while the dotted line indicates the fit based on the reconstructed Solar spectrum, which we discuss further below. For simplicity, we show the symbols only for the realistic quiet Sun fit and omit those for the reconstructed fit. The baseline weight ratios and expected count ratios for a CI chondrite-like regolith are indicated by the blue circles.
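To illustrate the shape of such a curve and its inversion, the sketch below pins a second-order curve through three hypothetical (weight ratio, count ratio) points and inverts it by bisection; the actual curves are second-order least-squares fits over the full meteorite set, not three-point interpolants:

```python
# Hypothetical calibration-curve sketch: a quadratic through three
# made-up (weight ratio, count ratio) points, inverted by bisection to
# map a measured count ratio back to a weight ratio. The real curves
# are least-squares second-order fits over the meteorite compositions.
def parabola_through(p0, p1, p2):
    """Quadratic through three (x, y) points, via the Lagrange form."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

def invert(f, y, lo, hi, tol=1e-9):
    """Solve f(x) = y on [lo, hi] by bisection, assuming f is increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

The inversion step is what maps a measured histogram count ratio back to an elemental weight ratio; bisection presumes the curve is monotonic over the weight-ratio range of interest.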
------------- -------- -------- -------- -------- -------- --------
               O        Mg       Al       Si       S        Fe
CI             46.4     9.70     0.865    10.64    5.41     18.2
CM             43.2     11.5     0.130    12.70    2.70     21.3
CM$^{*}$       38.92    8.99     1.334    11.916   2.122    25.466
CV             37.0     14.3     1.680    15.70    2.20     23.5
CO             37.0     14.5     1.400    15.80    2.20     25.0
CK             —        14.7     1.470    15.80    1.70     23.0
CR             —        13.7     1.150    15.00    1.90     23.8
CH             —        11.3     1.050    13.50    0.35     38.0
H              35.70    14.1     1.06     17.1     2.0      27.2
L              37.70    14.9     1.16     18.6     2.2      21.75
LL             40.00    15.3     1.18     18.9     2.1      19.8
EH             28.00    10.73    0.82     16.6     5.6      30.5
EL             31.00    13.75    1.00     18.8     3.1      24.8
R              —        12.9     1.06     18.0     4.07     24.4
K              —        15.4     1.30     16.9     5.5      24.7
Acap.$^{*}$    —        15.6     1.20     17.7     2.7      23.5
Lod.$^{*}$     25.858   16.299   0.0952   11.248   0.4257   43.92
Dio.$^{*}$     20.42    16.528   1.00     24.28    0.204    12.729
IAB$^{*}$      26.62    11.92    1.31     14.48    7.04     33.9
------------- -------- -------- -------- -------- -------- --------

: Summary of inputs for calibration curves. Values are taken from Lodders and Fegley[@PSC] unless marked with “$^{*}$”, in which case the data come from Nittler, *et al.*[@NittlerData]. If “—” appears, weight percent values were not available in the reference for that element, and the remainder of the percent balance was allocated to that element for the sake of the ideal asteroid spectrum simulation. Since O is not one of our elements of interest, the development of the calibration curves does not depend on accurate knowledge of O. \[tab:CalInputs\]

[0.483]{} ![Weight ratio to count ratio calibration curves for the elemental abundance ratios of interest. Calibration curves are a function of weight ratio and Solar spectrum. Curves indicate second-order fits to simulated spectra of asteroids with the same composition as the major meteorite types indicated (see Table \[tab:CalInputs\]). Several artificial, “diagnostic” compositions have also been included to improve the fidelity of the fit. The solid curve indicates spectra generated using the DEM-folded quiet Solar spectrum.
The dotted curve indicates spectra generated using the isothermal best fit (see Fig. \[fig:Quiet\_Fit\]). The baseline weight ratios and expected count ratios for a CI chondrite-like regolith are indicated by the blue circles. []{data-label="fig:CalCurves"}](FeSi_label_1temp_trim.pdf "fig:"){width="\textwidth"} [0.471]{} ![Weight ratio to count ratio calibration curves for the elemental abundance ratios of interest. Calibration curves are a function of weight ratio and Solar spectrum. Curves indicate second-order fits to simulated spectra of asteroids with the same composition as the major meteorite types indicated (see Table \[tab:CalInputs\]). Several artificial, “diagnostic” compositions have also been included to improve the fidelity of the fit. The solid curve indicates spectra generated using the DEM-folded quiet Solar spectrum. The dotted curve indicates spectra generated using the isothermal best fit (see Fig. \[fig:Quiet\_Fit\]). The baseline weight ratios and expected count ratios for a CI chondrite-like regolith are indicated by the blue circles. []{data-label="fig:CalCurves"}](MgSi_label_1temp_trim.pdf "fig:"){width="\textwidth"} [0.483]{} ![Weight ratio to count ratio calibration curves for the elemental abundance ratios of interest. Calibration curves are a function of weight ratio and Solar spectrum. Curves indicate second-order fits to simulated spectra of asteroids with the same composition as the major meteorite types indicated (see Table \[tab:CalInputs\]). Several artificial, “diagnostic” compositions have also been included to improve the fidelity of the fit. The solid curve indicates spectra generated using the DEM-folded quiet Solar spectrum. The dotted curve indicates spectra generated using the isothermal best fit (see Fig. \[fig:Quiet\_Fit\]). The baseline weight ratios and expected count ratios for a CI chondrite-like regolith are indicated by the blue circles.
[]{data-label="fig:CalCurves"}](SSi_label_1temp_trim.pdf "fig:"){width="\textwidth"}

Results {#sec:Results}
=======

In this section, we detail our results based on the simulations and analysis presented in the previous sections. We first show how, assuming perfect knowledge of the Solar state, the count ratios we measure map back to elemental abundance weight ratio errors with respect to the CI chondrite-like baseline composition. Next, based on these count ratio errors, we present the required observation times to achieve statistical significance on our measurements. Finally, we present a qualitative discussion of how the error generated during Solar spectrum reconstruction defines a permissible error space in which we interpret our results.

Weight Ratio Accuracy {#sec:Accs}
---------------------

Using the calibration curves given in Fig. \[fig:CalCurves\], we can map the errors incurred by our histogram counting procedure into subsequent errors in weight ratio. In general, since the relationship between counts and regolith weight is roughly linear, the correspondence between count ratio error and weight ratio error is also roughly linear. In Table \[tab:Rattempsum\], we list the weight ratio errors for our required detector temperature ($T = -60~\mathrm{^{\circ} C}$) and our current best prediction for the detector temperature ($T \sim -80~\mathrm{^{\circ} C}$). In all cases, the predicted error is less than the requirement. The weight ratio errors over the range of temperatures between $T = -60~\mathrm{^{\circ} C}$ and $-80~\mathrm{^{\circ} C}$ are shown on the left panel of Fig. \[fig:DetSumm\]. The error bars in the figure represent the error spread over 20 simulations at each detector temperature, with each simulation incorporating the effect of factors such as Poisson statistics, gain drift, and noise subtraction.
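The roughly linear propagation just noted can be made concrete: a count-ratio error maps to a weight-ratio error through the local slope of the calibration curve. The linear curve in the example below is a stand-in, not one of our fitted curves:

```python
# Sketch of the error mapping: a fractional count-ratio error is divided
# by the local slope of the calibration curve to obtain a fractional
# weight-ratio error. The linear curve used in the example is a stand-in.
def weight_ratio_error(cal_curve, w0, count_ratio_err_frac, dw=1e-6):
    """Fractional weight-ratio error implied by a fractional count-ratio
    error, evaluated at weight ratio w0."""
    r0 = cal_curve(w0)
    slope = (cal_curve(w0 + dw) - cal_curve(w0 - dw)) / (2.0 * dw)  # dR/dw
    dr = count_ratio_err_frac * r0       # absolute count-ratio error
    return abs(dr / slope) / w0          # fractional weight-ratio error

# with a linear curve R = 2w, a 10% count-ratio error stays a 10% weight error
err = weight_ratio_error(lambda w: 2.0 * w, 0.9, 0.10)
```

For the real second-order curves the slope varies with weight ratio, so the propagated error is only approximately proportional to the count-ratio error.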
We note that, over most of the temperatures, there is not necessarily a degradation of performance with increasing detector temperature, as we might naively expect due to the decrease in spectral resolution. The relative insensitivity of our spectral performance to detector temperature (or, equivalently, $\mathrm{FWHM}$) is primarily due to the fact that taking count ratios effectively cancels some of the effect of this systematic error present in each of the individual lines. In Fig. \[fig:Nittler\], we indicate with magenta ellipses the accuracy error due to these systematic effects at $T = -60~\mathrm{^{\circ} C}$. [0.68]{} ![Spectral simulation results assuming perfect knowledge of the Solar state. In the left panel, we show the weight ratio error for our line ratios of interest as a function of detector temperature. The 25% requirement is indicated by the magenta line. For all temperatures, we are able to meet our requirement with margin. In the right panel, we show the observation time required to achieve statistical significance on our spectral measurements. The requirement that we achieve significance within the allotted mission time is shown by the magenta line. The observation time is derived from the count rates from the lines of interest, CXB, and internal noise, and from the difference between predicted accuracy error and required accuracy error. S/Si, which is most susceptible to noise from the internal background and CXB, requires the greatest observation time. Error bars are $1\sigma$ over 20 simulations. []{data-label="fig:DetSumm"}](Weight_ratio_label_trim.pdf "fig:"){width="\textwidth"} \   [0.68]{} ![Spectral simulation results assuming perfect knowledge of the Solar state. In the left panel, we show the weight ratio error for our line ratios of interest as a function of detector temperature. The 25% requirement is indicated by the magenta line. For all temperatures, we are able to meet our requirement with margin.
In the right panel, we show the observation time required to achieve statistical significance on our spectral measurements. The requirement that we achieve significance within the allotted mission time is shown by the magenta line. The observation time is derived from the count rates from the lines of interest, CXB, and internal noise, and from the difference between predicted accuracy error and required accuracy error. S/Si, which is most susceptible to noise from the internal background and CXB, requires the greatest observation time. Error bars are $1\sigma$ over 20 simulations. []{data-label="fig:DetSumm"}](Rat_times_label_trim.pdf "fig:"){width="\textwidth"}

--------------------------- ------- -------------------- ------------- --------------------------
                                     Predicted accuracy   Requirement   Margin
                                     error \[$\%$\]       \[$\%$\]      (Requirement/Prediction)
$-60~\mathrm{^{\circ}C}$     Fe/Si   13.7                 $\leq 25\%$   1.8
                             Mg/Si   9.1                  $\leq 25\%$   2.7
                             S/Si    10.3                 $\leq 25\%$   2.4
$-80~\mathrm{^{\circ}C}$     Fe/Si   7.9                  $\leq 25\%$   3.2
                             Mg/Si   9.7                  $\leq 25\%$   2.6
                             S/Si    3.0                  $\leq 25\%$   8.3
--------------------------- ------- -------------------- ------------- --------------------------

: Summary of REX-3 systematic accuracy error performance. We show the accuracy error for each elemental abundance ratio of interest at two different detector temperatures. The detector temperature of $-60~\mathrm{^{\circ}C}$ represents the required detector operating temperature, while $-80~\mathrm{^{\circ}C}$ represents the current best prediction for the detector temperature at the time of writing. \[tab:Rattempsum\]

Observation Time {#sec:ObsTime}
----------------

The results in the above section represent systematic error. That is, they represent errors intrinsic to the behavior of the instrumentation itself. We must also consider statistical error to account for the stochastic nature of photon emission and place a statistical significance on our expected results. In order to account for statistical error, we consider the quadratic difference between our expected *count* ratio error and the allowed count ratio error.
Then, assuming Poisson statistics, based on the count rates within each energy range of interest from both the fluorescent lines and noise sources, we determine the required observation time to achieve an $N\sigma$ statistical significance level. Here, we choose $N = 3.5$, corresponding to $> 99\%$ confidence. A detailed calculation of the above, as well as expected count rates, is given in Appendix \[sec:StatTime\] and Table \[tab:Countsum\], respectively. Our required observation times for the two detector temperatures discussed above are given in Table \[tab:ObsTimeSum\], while those for the range of temperatures in between are given in the right panel of Fig. \[fig:DetSumm\]. Again, in all cases, we are able to achieve our required performance with margin. As noted above, the S/Si ratio, which is most subject to the effect of CXB and internal noise, requires the greatest amount of observation time to achieve statistical significance. The magenta error ellipses shown in Fig. \[fig:Nittler\] thus have a $3.5\sigma$ statistical confidence associated with them.

-------------------------- ------- ---------------------------------- ------------- --------------------------
                                    Observation time for $3.5\sigma$   Requirement   Margin
                                    confidence \[hours\]               \[hours\]     (Requirement/Prediction)
$-60~\mathrm{^{\circ}C}$    Fe/Si   0.9                                $\leq 420$    467
                            Mg/Si   0.7                                $\leq 420$    600
                            S/Si    108                                $\leq 420$    3.9
$-80~\mathrm{^{\circ}C}$    Fe/Si   0.6                                $\leq 420$    700
                            Mg/Si   0.6                                $\leq 420$    700
                            S/Si    33                                 $\leq 420$    12.7
-------------------------- ------- ---------------------------------- ------------- --------------------------

: Summary of expected performance against the REX-6 observation time requirement for detector temperatures of $-60~\mathrm{^{\circ}C}$ and $-80~\mathrm{^{\circ}C}$. The observation time is based on obtaining sufficient photon statistics to achieve $3.5\sigma$ confidence in the accuracy results.
\[tab:ObsTimeSum\]

Calibration Curves and Mapping Errors {#sec:ErrorSpace}
-------------------------------------

In the results above, we have assumed perfect knowledge of the Solar state in mapping count ratio errors to weight ratio errors. Hence, we have used the solid red calibration curves shown in Fig. \[fig:CalCurves\]. In reality, we will have to use the reconstructed Solar spectrum-derived calibration curves (dotted red lines in Fig. \[fig:CalCurves\]) in order to perform the mapping. From the calibration curves shown in Fig. \[fig:CalCurves\], it is clear that the difference between the curves based on the actual and reconstructed Solar spectra is within the error of the fit itself over the weight ratio ranges of interest, so that we cannot claim a truly meaningful difference between the quality of each fit. While we have not accounted for this effect in the results presented above, we demonstrate graphically in Fig. \[fig:Cal\_Curve\_Error\_Space\] the error space that develops from reconstructing the Solar spectrum. Fig. \[fig:Cal\_Curve\_Error\_Space\] shows how, in the most extreme case (Fe/Si), the calibration curves diverge for the different input Solar spectra. The lines marked “baseline” map to a CI chondrite-type composition under the actual Solar spectrum. The 25% identification requirement then places a range of permissible errors about this baseline (the shaded grey area). So long as the combined misidentification of the Solar spectrum and the count ratio error does not exceed the bounds set by the shaded region, REXIS can still achieve its objectives. For the $T=-60~\mathrm{^\circ C}$ case, the Fe/Si error is the most marginal ($13.7\%$ error), although it still falls within the required performance region. Since Fe/Si is not subject to the same statistical fluctuation as, e.g., S/Si, statistical significance on Fe/Si can still easily be achieved within the allotted observation time. ![Calibration curve error space.
The solid red line is the calibration curve generated by the quiet Sun (red line in Fig. \[fig:Quiet\_Fit\]). This is the calibration curve that would be generated if we had perfect knowledge of the Solar state. The dashed red line is the calibration curve generated by the best single-temperature fit based on simulated SXM data (green line in Fig. \[fig:Quiet\_Fit\]).[]{data-label="fig:Cal_Curve_Error_Space"}](Cal_Curve_Error_Space.pdf){width=".8\textwidth"}

Discussion and Conclusions
==========================

In the previous sections, we have presented the methodology and results of spectral performance modeling of REXIS. We have shown, by simulating Solar and asteroid X-ray spectra, the subsequent data product, and the data processing, how well REXIS can be expected to identify Bennu as a CI chondrite analog. We have shown that our two primary requirements—that REXIS is capable of identifying a baseline CI chondrite meteorite analog for Bennu to within 25% and that it can accomplish this within the allotted mission observation time—are attainable with margin.

Future Work
-----------

This work represents the first step in understanding REXIS’s science performance in Spectral Mode, and there are numerous opportunities to extend and refine this work. We summarize some of these below.

### Other Baseline Regolith Compositions for Bennu

Throughout this work, we have assumed a baseline CI chondrite-like regolith composition for Bennu. In reality, ground measurements have suggested a possible CM chondrite-like composition. While we should not expect any substantial difference in expected performance if we assumed a CM-type composition, it is worthwhile to consider the possibility that the baseline composition of the regolith is something radically different (for instance, achondritic).

### Higher Order Observational Effects

We have assumed here that the orbit is perfectly circular and that Bennu is a perfect sphere; for example, we do not take into account surface roughness.
However, it is possible to use the Bennu shape model[@ShapeModel] and OSIRIS-REx orbit for REXIS science operations in order to model the effect of both shape and orbit to a higher level of fidelity. ### Active Sun Modeling and Reconstruction We have assumed throughout this work that the Sun is in a quiet state, which it will be for the majority of REXIS’s science operations. However, the Sun will occasionally flare, creating a higher flux and harder (i.e. greater intensity at higher energies) spectrum. This in turn will substantially affect Bennu’s spectrum. A similar analysis to the one above, but assuming a flare Sun, should be carried out. The flare Sun, however, cannot be approximated as isothermal, although a two-temperature model may suffice \[see Appendix \[sec:SolarModelSec\] and also Lim and Nittler (2009)[@LimNitt1]\]. ### Improved SXM Modeling In this work, we have assumed a relatively simple geometry and instrument response function for the SXM. Much as we have done for the spectrometer, it is possible to compute higher fidelity values for the SXM grasp and, with continued testing, better characterize the SXM response function in general. ### Radiation Damage Preliminary work has suggested that under OSIRIS-REx’s expected radiation environment, degradation in CCD spectral resolution due to non-ionizing radiation damage still permits REXIS to meet its science objectives. This is in part due to the presence of the radiation cover [@RadDam; @Harrison; @RadDamChandra] and the fact that REXIS is primarily concerned with the measurement of count ratios, which reduces the effect of spectral degradation on weight ratio reconstruction error (Sec. \[sec:Accs\]). However, continued characterization of the REXIS CCDs should allow for a more definite characterization of REXIS’s radiation environment and the effect of radiation damage on spectral performance (especially as a function of time), which we have not accounted for here.
### Internal Background {#internal-background} The internal background spectrum we have used here is scaled from Chandra data. In the future, a more accurate model making use of the actual REXIS geometry should be employed to determine the fluorescent signature of the REXIS structure in response to X-rays from Bennu and the CXB. In this case, a simulation framework such as GEANT4[@GEANT] can be used to determine the intensity of X-ray emission from REXIS itself incident upon the CCDs. ### Further Exploration of Error Space While our discussion of the error space in Sec. \[sec:ErrorSpace\] was somewhat qualitative, it is worthwhile to be more quantitative about our approach. Furthermore, we may continue to characterize how each of the model inputs discussed in Sec. \[sec:Method\] (such as molecular contamination, the OBF, and the spectral resolution) independently affect spectral performance. Acknowledgments {#acknowledgments .unnumbered} =============== The authors would like to thank Dr. Steve Kissel of the MIT Kavli Institute for CCD test data, Beverly LaMarr of the MIT Kavli Institute for discussions regarding radiation damage to CCDs, and Dr. Lucy Lim of NASA GSFC and Dr. Ben Clark of the Space Science Institute for many helpful discussions and suggestions concerning asteroid spectroscopy. This work was conducted under the support of the OSIRIS-REx program through research funds from Goddard Space Flight Center. Modeling Bennu {#sec:AsteroidSec} ============== Fluorescence ------------ We model Bennu as a sphere of $280~\mathrm{m}$ radius, with OSIRIS-REx viewing Bennu in a terminator orbit at $1~\mathrm{km}$ from the asteroid barycenter. Denote a point on Bennu by $P$, the center of REXIS’s detector plane by $R$, and the center of mass of the Sun by $S$. Let the angle between surface normal at $P$ and $\overline{SP}$ be given by $\psi_{\mathrm{in}}$, and that between the surface normal at $P$ and $\overline{PR}$ by $\psi_{\mathrm{out}}$. 
Then the intensity for the $k^{\mathrm{th}}$ fluorescence line as measured by REXIS is given by [@JenkinsQXS] $$\begin{aligned} I_k(E_k) = \frac{\Omega_{\astrosun}}{\Omega_B }\int_{\mathrm{Bennu}} \frac{d\Omega_B Q_{k}(E_k)}{4\pi \Delta E \sin \psi_{\mathrm{in}}} \int_{E_k}^{\infty}\frac{I_{\odot}(E)dE}{\sum_{j}W_j\left[\mu_j(E)\csc\psi_{\mathrm{in}} + \mu_j(E_k)\csc\psi_{\mathrm{out}}\right]}\end{aligned}$$ where $Q_{k}(E_k)$ is a factor that encompasses the probability and quantum yield associated with emission of the $k^{\mathrm{th}}$ line. $\Omega_{\astrosun}$ is the solid angle subtended by the Sun with respect to Bennu, $\Delta E$ is an arbitrarily-chosen energy bin, and $\Omega_B$ is the solid angle subtended by Bennu with respect to REXIS. If we assume all incident Solar X-rays are parallel to one another (valid since the Sun can effectively be considered a point source with respect to Bennu), then $\psi_{\mathrm{in}}$, $\psi_{\mathrm{out}}$, and $d\Omega_B$ can be related to one another by straightforward geometry, and an integration over the entire surface area of Bennu amounts to an integration over all the $\psi_{\mathrm{out}}$ within the REXIS field of view. If $W_k$ is the weight fraction of the element associated with the $k^{\mathrm{th}}$ line, $r_k$ is the jump ratio, $\omega_k$ is the fluorescence yield, and $f_k$ is the fraction of the series to which $k$ belongs that is devoted to $k$, we have $$\begin{aligned} Q_{k}(E_k) = W_k \frac{r_k - 1}{r_k}\omega_k f_k.\end{aligned}$$ Since fluorescent intensity is monochromatic, the total contribution to the spectrum from fluorescence is the sum of all the $I_k$. An example of the calculation of the line/probability factor for the Fe-K series is given in Table \[tab:FeKSeries\]. 
Edge/Series $r_k$ $\omega_k$ Line $E_k$ \[eV\] $f_k$ $Q_{k}/W_k$ ------------- ------- ------------ ------------- -------------- ------------------------- ----------------------- K$\alpha_3$ $6,267.40$ $2.76096\times 10^{-4}$ $7.649\times 10^{-4}$ K$\alpha_2$ $6,392.10$ $2.94023\times 10^{-1}$ $8.145\times 10^{-1}$ K$\alpha_1$ $6,405.20$ $5.80277\times 10^{-1}$ $1.608$ K$\beta_3$ $7,059.30$ $4.25566\times 10^{-2}$ $1.179\times 10^{-1}$ K$\beta_1$ $7,059.30$ $8.21556\times 10^{-2}$ $2.276\times 10^{-1}$ K$\beta_5$ $7,110.00$ $7.12115\times 10^{-4}$ $1.973\times 10^{-3}$ : Series information for Fe-K and calculation of associated line probability/yield factor, $Q_{k}(E_k)$. \[tab:FeKSeries\] Coherent scattering ------------------- The continuous spectrum due to coherent scattering is given by $$\begin{aligned} I_{\mathrm{scattering}}(E) = \frac{1}{\Omega_B }\int_{\Omega} \frac{d\sigma}{d\Omega}I_{\odot}(E)N_A d\Omega_B,\end{aligned}$$ where $N_A$ is Avogadro’s number and the integration is effected over the solid angle $\Omega$. The differential scattering cross section $d\sigma/d\Omega$ is given by $$\begin{aligned} \frac{d\sigma}{d\Omega} = \frac{r_e^2}{4\pi}\left(1 - \cos^2\theta \right)\left|F(E,\theta)\right|^2,\end{aligned}$$ where $r_e$ is the classical electron radius, given by $2.82\times 10^{-15}~\mathrm{m}$, and where $|F(E,\theta)|^2$ is the modulus squared of the (complex) atomic form factor $F(E,\theta)$ for the element in question, dependent upon both the energy of the incident radiation and the scattering angle $\theta \equiv \left|\psi_{\mathrm{in}} + \psi_{\mathrm{out}}\right|$. Incoherent scattering, being at least an order of magnitude smaller than coherent scattering for all energies and elements of interest, is not considered.
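As a concrete illustration, the two per-line factors defined above can be evaluated numerically. The sketch below implements $Q_k = W_k\,\frac{r_k-1}{r_k}\,\omega_k f_k$ and the coherent differential cross section exactly as written above; the numerical inputs in the comments and example are illustrative placeholders, not the tabulated Fe-K values.

```python
import math

R_E = 2.82e-15  # classical electron radius [m], as quoted above


def line_yield_factor(W_k, r_k, omega_k, f_k):
    """Q_k(E_k) = W_k * (r_k - 1)/r_k * omega_k * f_k:
    weight fraction times jump-ratio, fluorescence-yield,
    and line-fraction factors for the k-th line."""
    return W_k * (r_k - 1.0) / r_k * omega_k * f_k


def coherent_dsigma_domega(theta, form_factor_sq):
    """d(sigma)/d(Omega) = r_e^2/(4*pi) * (1 - cos^2(theta)) * |F(E, theta)|^2,
    with the form-factor modulus squared supplied by tabulated data."""
    return R_E ** 2 / (4.0 * math.pi) * (1.0 - math.cos(theta) ** 2) * form_factor_sq
```

Summing `line_yield_factor` over a series gives the monochromatic fluorescence terms, while folding `coherent_dsigma_domega` into the solid-angle integral reproduces the scattering term above.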
The total intensity of radiation emitted by the asteroid (in units of $\mathrm{photons/Sr/s/eV/cm^2}$) is given by the sum of fluorescent and scattered radiation: $$\begin{aligned} I(E) = \sum_{k}I_k(E_k) + I_{\mathrm{scattering}}(E).\end{aligned}$$ Modeling the Solar Spectrum {#sec:SolarModelSec} =========================== Here we briefly discuss how the Solar corona is modeled in order to determine its X-ray spectrum. Denote the power per unit volume emitted by a plasma undergoing an atomic transition from quantum states $j \rightarrow i$ by $P_{ij}$, and the wavelength (or equivalently, the energy) associated with this transition by $\lambda_{ij}$. Then the intensity $I_{\odot}(\lambda_{ij})$ of the radiation at the surface of the body of interest (say Bennu) for this transition is given by $$\begin{aligned} I_{\odot}(\lambda_{ij}) = \frac{1}{4\pi R^2}\int_{V}P_{ij}dV,\end{aligned}$$ where $R$ is the distance from the Sun to Bennu and $V$ is the plasma volume. $P_{ij}$ can be written [@SolarCorona] $$\begin{aligned} P_{ij} = 0.8 A_{X} G(T,\lambda_{ij})\frac{hc}{\lambda_{ij}}N_e^2,\end{aligned}$$ where $N_e$ is the local electron density, $A_X$ is the abundance of the $X$th element with respect to hydrogen, and $G$ is the so-called contribution function (not to be confused with the grasp), which is a function of both the plasma temperature $T$ (not to be confused with the detector temperature) and the wavelength $\lambda_{ij}$ associated with the transition. $P_{ij}$ is dependent upon both temperature and electron density, and it is possible to decompose it into density- and temperature-dependent parts. We define the “differential emission measure”, or $\mathrm{DEM}$, as follows: $$\begin{aligned} \label{eq:DEMdef} \int_V N_e^2 dV = \int_T \mathrm{DEM}(T) dT,\end{aligned}$$ so that $$\begin{aligned} P_{ij} = 0.8 A_{X} G(T,\lambda_{ij})\frac{hc}{\lambda_{ij}} \mathrm{DEM}(T)\frac{dT}{dV}.\end{aligned}$$ On the left hand side of Eq. \[eq:DEMdef\]
, the integration is effected over the plasma volume; on the right hand side, over the possible coronal temperatures. Defining $$\begin{aligned} \phi(T,\lambda) \equiv \sum_{X}\sum_{ij} 0.8 A_{X} G(T,\lambda_{ij})\frac{hc}{\lambda_{ij}}\end{aligned}$$ (the first sum extending over all species and the second sum extending over all transition pairs), we have, using Eq. \[eq:DEMdef\], $$\begin{aligned} \label{eq:DEMtoI} I_{\odot}(\lambda) = \int_T \phi(T,\lambda) \mathrm{DEM}(T) dT.\end{aligned}$$ Instead of integrating over all possible coronal temperatures, in certain cases, it is possible to consider only those coronal temperatures at which the emission measure is greatest. For quiet Solar regions, it has been found that the emission measure peaks at one value, while the $\mathrm{DEM}$ for active regions tends to peak at two values. Thus we may write $$\begin{aligned} \label{eq:TempModels} I_{\odot}(\lambda) &= \phi(T_1,\lambda)\mathrm{EM}(T_1) \hspace{3.325cm}\textrm{(quiet Sun)} \\ I_{\odot}(\lambda) &= \phi(T_1,\lambda)\mathrm{EM}(T_1) + \phi(T_2,\lambda)\mathrm{EM}(T_2) \hspace{0.5cm}\textrm{(active Sun)}\end{aligned}$$ where $\mathrm{EM}(T)$ is a single temperature emission measure, which for all practical purposes here amounts to a simple numerical prefactor. Definition of Accuracy {#sec:DefofAcc} ====================== ![Example of line contamination in the simplified case of only two lines, $I_1(E)$ (blue) and $I_2(E)$ (red); their sum is in black. The counts $C_{11} + C_{12}$ account for all the counts under the sum in the $\mathrm{FWHM}$ zone of line 1 and the counts $C_{22} + C_{21}$ are all the counts in the $\mathrm{FWHM}$ zone of line 2. $C_{11}$ are the counts that are due to line 1 in the line 1 $\mathrm{FWHM}$ zone, and $C_{22}$ are the counts due to line 2 in the line 2 $\mathrm{FWHM}$ zone, while $C_{12}$ is the contamination from line 2 into the FWHM zone of line 1, and $C_{21}$ the contamination of line 1 into the $\mathrm{FWHM}$ zone of line 2.
[]{data-label="fig:ContamExam"}](Contamination_Example_trim.pdf){width="70.00000%"} REXIS’s spectral performance requirement states that the reconstructed weight ratios of the asteroid regolith be within a certain percent of those of the baseline composition. REXIS itself can only measure count ratios, and we use the calibration curves to make the correspondence between count ratios and weight ratios. For convenience, we denote this function $\mathrm{CC}: \textrm{count ratio} \rightarrow \textrm{weight ratio}$, with the inverse mapping $\mathrm{CC}^{-1}: \textrm{weight ratio}\rightarrow \textrm{count ratio}$. Fig. \[fig:ContamExam\] demonstrates REXIS’s counting procedure in forming count ratios. For simplicity, the figure shows only two spectral features $I_{1}$ and $I_{2}$, shown in blue and red, respectively. First, we define counting zones centered about each line center. The width of these zones is given by the full-width half-maximum $\mathrm{FWHM}$ of the Gaussian centered at each energy. All the counts within each zone are considered to be from the respective line, although in reality, there will be some contamination from other spectral features. Thus in our simplified case, the total number of counts in $\mathrm{FWHM}_1$ will include contributions from $I_{1}$, denoted $C_{11}$, and those from $I_2$, denoted $C_{12}$. Likewise, the total number of counts in $\mathrm{FWHM}_2$ will include contributions from $I_2$ (given by $C_{22}$) and $I_1$ (given by $C_{21}$).
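The $C_{ij}$ bookkeeping just described can be made concrete. The sketch below models each spectral line as a Gaussian response of total counts $N_j$ centered at $E_j$ and integrates it over an arbitrary $\mathrm{FWHM}$ counting zone via the error function; the line parameters used in the example are illustrative, not REXIS values.

```python
from math import erf, log, sqrt

# For a Gaussian, sigma = FWHM / (2 * sqrt(2 * ln 2)) ~ FWHM / 2.355
SIGMA_PER_FWHM = 1.0 / (2.0 * sqrt(2.0 * log(2.0)))


def counts_in_zone(N_tot, E_line, fwhm_line, E_zone, fwhm_zone):
    """Counts C_ij that a Gaussian line (total counts N_tot, center E_line,
    width fwhm_line) deposits inside the FWHM zone centered at E_zone."""
    sigma = fwhm_line * SIGMA_PER_FWHM
    cdf = lambda x: 0.5 * (1.0 + erf((x - E_line) / (sigma * sqrt(2.0))))
    lo = E_zone - fwhm_zone / 2.0
    hi = E_zone + fwhm_zone / 2.0
    return N_tot * (cdf(hi) - cdf(lo))
```

A line integrated over its own zone returns the familiar ~76% of its total counts (the Gaussian fraction within $\pm\mathrm{FWHM}/2$), while a distant line contributes only the small contamination term $C_{ij}$.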
The total count ratio of the first feature to the second $\rho_{1/2}$ is then given by $$\begin{aligned} \rho_{1/2} = \frac{C_{11} + C_{12}}{C_{22} + C_{21}}.\end{aligned}$$ In the more general case applicable to REXIS, we have the total number of binned detector counts given by $C_3'(E')$, so that for the $\rho_{k/\mathrm{Si}}$ line ratio, $$\begin{aligned} \rho_{k/\mathrm{Si}} = \frac{\sum_{E_k - \mathrm{\tiny{FWHM}}_k/2}^{E_k + \mathrm{\tiny{FWHM}}_k/2}C'_3(E')}{\sum_{E_{\mathrm{Si}} - \mathrm{\tiny{FWHM}}_{\mathrm{Si}}/2}^{E_{\mathrm{Si}} + \mathrm{\tiny{FWHM}}_{\mathrm{Si}} /2}C'_3(E')}.\end{aligned}$$ We map $\rho_{k/\mathrm{Si}}$ to the equivalent weight ratio $\varpi_{k/\mathrm{Si}}$ by using the calibration curve: $\varpi_{k/\mathrm{Si}} = \mathrm{CC}\left(\rho_{k/\mathrm{Si}}\right)$. The accuracy error is then calculated by comparing the measured weight ratio $\varpi_{k/\mathrm{Si}}$ with the regolith input weight ratio $\varpi_{k/\mathrm{Si},0}$ corresponding to a CI chondrite-like composition. If we denote the weight ratio error requirement by $\eta$, then we may write the requirement as $$\begin{aligned} \left|1 - \frac{\varpi_{k/\mathrm{Si}}}{\varpi_{k/\mathrm{Si},0}} \right| \leq \eta,\end{aligned}$$ where $\eta = 0.25$. In some cases (e.g., Appendix \[sec:StatTime\]), we may wish to consider only the errors in count ratios. In this case, we consider the expected count ratio from the lines of interest, ignoring contamination and other effects. If we denote this ratio by $\rho_{k/\mathrm{Si},0}$, we have $$\begin{aligned} \rho_{k/\mathrm{Si},0} = \frac{C_1\left(E_{k}\right)}{C_1\left(E_{\mathrm{Si}}\right)},\end{aligned}$$ where care has been taken to ensure that the effect of quantum efficiency has been accounted for.
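The two steps just described — summing binned counts over each $\mathrm{FWHM}$ zone to form $\rho_{k/\mathrm{Si}}$, then mapping it through a second-order calibration curve — can be sketched as follows. The binned spectrum and fit coefficients in the example are placeholders, not REXIS data.

```python
import numpy as np


def count_ratio(energies, counts, E_k, fwhm_k, E_si, fwhm_si):
    """rho_{k/Si}: ratio of binned counts inside the two FWHM counting zones."""
    def zone_sum(E0, fwhm):
        mask = (energies >= E0 - fwhm / 2.0) & (energies <= E0 + fwhm / 2.0)
        return counts[mask].sum()
    return zone_sum(E_k, fwhm_k) / zone_sum(E_si, fwhm_si)


def cc(rho, coeffs):
    """Calibration curve CC: count ratio -> weight ratio, as a quadratic fit."""
    a, b, c = coeffs
    return a * rho ** 2 + b * rho + c
```

In the same spirit, something like `np.polyfit(rho_grid, weight_grid, 2)` would supply `coeffs` from the $(\varpi_{k/\mathrm{Si},0},\rho_{k/\mathrm{Si},0})$ pairs, and inverting the quadratic gives $\mathrm{CC}^{-1}$.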
Indeed, when the calibration curves are generated, $\rho_{k/\mathrm{Si},0}$ is calculated for a whole range of baseline compositions, and a second-order fit is performed on the various $(\varpi_{k/\mathrm{Si},0},\rho_{k/\mathrm{Si},0})$ pairs (Fig. \[fig:CalCurves\]). The REXIS performance requirement in terms of count ratio is denoted by $\eta_C$, and is then given by $$\begin{aligned} \left|1 - \frac{\rho_{k/\mathrm{Si}}}{\rho_{k/\mathrm{Si},0}} \right| \leq \eta_C.\end{aligned}$$ $\eta_C$ can be related to $\eta$ in a relatively straightforward way that is most clearly demonstrated graphically by means of the shaded region shown in Fig. \[fig:Cal\_Curve\_Error\_Space\]; it is given mathematically by $$\begin{aligned} \left|1 - \frac{\mathrm{CC}^{-1}\left[\left(1 - \eta\right) \times \varpi_{k/\mathrm{Si},0}\right]}{\mathrm{CC}^{-1}\left( \varpi_{k/\mathrm{Si},0}\right)}\right| = \eta_C.\end{aligned}$$ Statistical Error {#sec:StatTime} ================= Suppose for a given detector temperature that the total error in counts due to systematic error is $\Delta$, that the requirement is given by $\eta_{C}$, and that statistical error is given by $\sigma$.
We suppose that the errors can be summed quadratically: $$\begin{aligned} \eta_{C} = \sqrt{\Delta^2 + \sigma^2},\end{aligned}$$ so that $$\begin{aligned} \sigma = \sqrt{\eta_{C}^2 - \Delta^2}.\end{aligned}$$ For the $k^{\mathrm{th}}$ line and an $N$ confidence level, $$\begin{aligned} \label{eq:CountExp} \frac{\sigma^2}{N^2} &= \frac{ \dot{N}_{k}/T_{\mathrm{tot}} + \dot{N}_{\mathrm{CXB},k}/T_{\mathrm{CXB}} + \dot{N}_{\mathrm{int},k}/T_{\mathrm{int}}}{\left(\dot{N}_{k} - \dot{N}_{\mathrm{CXB},k} - \dot{N}_{\mathrm{int},k}\right)^2} + \frac{ \dot{N}_{\mathrm{Si}}/T_{\mathrm{tot}} + \dot{N}_{\mathrm{CXB},\mathrm{Si}}/T_{\mathrm{CXB}} + \dot{N}_{\mathrm{int},\mathrm{Si}}/T_{\mathrm{int}}}{\left(\dot{N}_{\mathrm{Si}} - \dot{N}_{\mathrm{CXB},\mathrm{Si}} - \dot{N}_{\mathrm{int},\mathrm{Si}}\right)^2}.\end{aligned}$$ $T_{\mathrm{CXB}}$ and $T_{\mathrm{int}}$ are the CXB and internal calibration times (see Sec. \[sec:BackSub\]). $\dot{N}_{k}$, $\dot{N}_{\mathrm{CXB},k}$, and $\dot{N}_{\mathrm{int},k}$ refer respectively to the total count rates, CXB count rates, and internal background count rates within the $k^{\mathrm{th}}$ $\mathrm{FWHM}$ counting zone. More precisely, $\dot{N}_{k}$, $\dot{N}_{\mathrm{CXB},k}$, and $\dot{N}_{\mathrm{int},k}$ are given by $C_3'/T_{\mathrm{obs}}$, $C_{\mathrm{CXB}}'/T_{\mathrm{obs}}$, and $C_{\mathrm{int}}'/T_{\mathrm{obs}}$ summed over each $\mathrm{FWHM}$ zone. Rearranging Eq. 
\[eq:CountExp\], we get $$\begin{aligned} \frac{\sigma^2}{N^2} &= \underbrace{\left[\frac{ \dot{N}_{k}}{\left(\dot{N}_{k} - \dot{N}_{\mathrm{CXB},k} - \dot{N}_{\mathrm{int},k}\right)^2} + \frac{ \dot{N}_{\mathrm{Si}}}{\left(\dot{N}_{\mathrm{Si}} - \dot{N}_{\mathrm{CXB},\mathrm{Si}} - \dot{N}_{\mathrm{int},\mathrm{Si}}\right)^2}\right]}_{\equiv L}\frac{1}{T_{\mathrm{tot}}} + \nonumber \\ & \underbrace{\frac{ \dot{N}_{\mathrm{CXB},k}/T_{\mathrm{CXB}} + \dot{N}_{\mathrm{int},k}/T_{\mathrm{int}} }{\left(\dot{N}_{k} - \dot{N}_{\mathrm{CXB},k} - \dot{N}_{\mathrm{int},k}\right)^2} + \frac{\dot{N}_{\mathrm{CXB},\mathrm{Si}}/T_{\mathrm{CXB}} + \dot{N}_{\mathrm{int},\mathrm{Si}}/T_{\mathrm{int}}}{\left(\dot{N}_{\mathrm{Si}} - \dot{N}_{\mathrm{CXB},\mathrm{Si}} - \dot{N}_{\mathrm{int},\mathrm{Si}}\right)^2}}_{\equiv R}.\end{aligned}$$ With $L$ and $R$ defined as above, the total observation time $T_{\mathrm{obs}}$ required for $N\sigma$ confidence is given by $$\begin{aligned} T_{\mathrm{obs}} = \frac{L}{\sigma^2/N^2 - R}.\end{aligned}$$ A summary of the expected count rates within each $\mathrm{FWHM}$ zone is given in Table \[tab:Countsum\]. Line $E_-$ \[keV\] $E_+$\[keV\] $\dot{N}_{k}$ $\dot{N}_{\mathrm{int},k}$ $\dot{N}_{\mathrm{CXB},k}$ ------ --------------- -------------- --------------- ---------------------------- ---------------------------- -- -- Fe-L 0.658 0.751 4.73 0.038 3.73 Mg-K 1.1830 1.3230 2.64 0.0291 1.69 Si-K 1.6620 1.8139 1.46 0.0745 0.948 S-K 2.2250 2.3910 0.967 0.0296 0.888 : Summary of expected count rate with detector temperature $T = -60~\mathrm{^\circ C}$. $E_-$ and $E_+$ are the lower and upper limits to the $\mathrm{FWHM}$ zone for each line, respectively. $\dot{N}_{k}$ is the total count rate in each $\mathrm{FWHM}$ zone (i.e., from Bennu and background), while $\dot{N}_{\mathrm{CXB}, k}$ and $\dot{N}_{\mathrm{int}, k}$ are CXB and internal count rates, respectively, in each zone.
\[tab:Countsum\] Calculating the Energy Resolution of the Detector {#sec:FWHMCalc} ================================================= To determine the energy and detector temperature dependence of the detector resolution $\mathrm{FWHM}$, we require two pieces of experimental data: $\mathrm{FWHM}$ as a function of energy $E$ at a fixed temperature $T_0$, and $\mathrm{FWHM}$ as a function of temperature $T$ at a fixed energy $E_0$. The two pieces of information can then be combined to determine the general dependence of $\mathrm{FWHM}$ on $E$ and $T$[^5]: $$\begin{aligned} \label{eq:FWHM_rearr} \mathrm{FWHM}(E,T) = \sqrt{\mathrm{FWHM}^2(E,T_0) + \mathrm{FWHM}^2(E_0,T) - \mathrm{FWHM}^2(E_0,T_0)}\end{aligned}$$ In the case of REXIS, energy resolution for the CCID-41 has been experimentally determined as a function of energy at $T_0 = -90~\mathrm{^{\circ}C}$[^6], and as a function of temperature at $E_0 = 5.89~\mathrm{keV}$[^7]. These two pieces of information together allow us to use Eq. \[eq:FWHM\_rearr\] to generate Fig. \[fig:InstResInputs\]. [^1]: Corresponding author. Email: [email protected] [^2]: All X-ray data, including fluorescent line energies, fluorescence yields, jump ratios, relative intensities, photoabsorption cross sections, and scattering cross sections are derived from the compilations of Elam, *et al.*[@ELAM] and Kissel [@Kissel]. The Kissel scattering cross section data may be found at the following URLs: - <http://ftp.esrf.eu/pub/scisoft/xop2.3/DabaxFiles/f0_rf_Kissel.dat> - <http://ftp.esrf.eu/pub/scisoft/xop2.3/DabaxFiles/f0_mf_Kissel.dat> - <http://ftp.esrf.eu/pub/scisoft/xop2.3/DabaxFiles/f1f2_asf_Kissel.dat> - <http://ftp.esrf.eu/pub/scisoft/xop2.3/DabaxFiles/f0_EPDL97.dat> [^3]: For a given differential area element on the detector surface, we calculate the solid angle subtended by Bennu and the CXB and multiply each by the differential area; we then average these values over the area of the detector.
[^4]: At the time of submission of this paper, some of the quantum efficiency data shown in Fig. \[fig:InstResInputs\] is no longer up to date. However, the impact on our results is negligible, and future work will incorporate more accurate data. For more on the characterization of the REXIS CCDs, see Ryu, *et al.*[@ryu2014development] [^5]: Personal communication, M. Bautz. [^6]: Personal communication, M. Bautz. [^7]: Personal communication, S. Kissel.
--- abstract: 'We study the time evolution of the entanglement of two spins in an anisotropically coupled quantum dot interacting with an unpolarized nuclear spin environment. We assume that the exchange coupling strength in the z-direction $J_z$ is different from the lateral one $J_l$. We observe that the entanglement decays as a result of the coupling to the nuclear environment and reaches a saturation value, which depends on the value of the exchange interaction difference $J=|J_l-J_z|$ between the two spins and the strength of the applied external magnetic field. We find that the entanglement exhibits a critical behavior controlled by the competition between the exchange interaction $J$ and the external magnetic field. The entanglement shows a quasi-symmetric behavior above and below a critical value of the exchange interaction, and it becomes more symmetric as the external magnetic field increases. The entanglement reaches a large saturation value, close to unity, when the exchange interaction is far above or below its critical value, and a small one as it closely approaches the critical value. Furthermore, we find that the decay profile of the entanglement is linear when the exchange interaction is much higher or lower than the critical value but converts to a power law and finally to a Gaussian as the critical value is approached from both directions. The dynamics of entanglement is found to be independent of the exchange interaction for an isotropically coupled quantum dot.'
address: - '$^1$Department of Physics, King Saud University, Riyadh 11451-2455, Saudi Arabia' - '$^2$Department of Physics, Purdue University, West Lafayette, IN 47907, USA' - '$^3$Department of Physics, Ain Shams University, Cairo 11566, Egypt' - '$^4$Department of Chemistry and Birck Nanotechnology Center, Purdue University, West Lafayette IN 47907, USA' author: - 'Gehad Sadiek,$^{1,2,3}$ Zhen Huang,$^{4}$ Omar Aldossary,$^{1}$ and Sabre Kais$^{4}$' title: 'Nuclear-induced time evolution of entanglement of two-electron spins in anisotropically coupled quantum dot' --- Introduction ============ Coherence and entanglement have lain at the heart of quantum mechanics since its birth and are of fundamental interest in modern physics. Particular fields where they play a crucial role are quantum information processing and quantum computing [@Nielsen; @Bouwmeester; @gruska; @macchiavelleo]. Decoherence is considered as one of the main obstacles toward achieving a practical quantum computing system as it acts to randomize the relative phases of the two-state quantum computing units (qubits) due to coupling to the environment[@Decoherence]. Quantum entanglement is a nonlocal correlation between two (or more) quantum systems such that the description of their states has to be done with reference to each other even if they are spatially well separated. Entanglement arises naturally when manipulating linear superpositions of quantum states to implement the different proposed quantum computing algorithms [@Shor; @Grover]. Different physical systems have been proposed as reliable candidates for the underlying technology of quantum computing and quantum information processing [@Barenco; @ibm-stanford; @NMR1; @NMR3; @TrappedIones; @CvityQED; @JosephsonJunction]. There has been special interest in solid state systems as they can be utilized to build up integrated networks that can perform quantum computing algorithms at large scale.
Particularly, the semiconductor quantum dot is considered as one of the most promising candidates for playing the role of a qubit [@spin-qgate; @spin-qubit; @spin-orbit]. The main idea is to use the spin [**S**]{} of the valence electron on a single quantum dot as a two-state quantum system, which gives rise to a well-defined qubit. On the other hand, the strong coupling of the electron spin on the quantum dot to its environment stands as a real challenge toward achieving the high coherent control over the spin required for information and computational processing. The main mechanisms responsible for spin decoherence are spin-orbit coupling and spin-nuclear spins (of the surrounding lattice) coupling [@spin-orbit; @spin-nucspin]. Recent experiments show that the spin relaxation time due to spin-orbit coupling (tens of ms) is about six orders of magnitude longer than that due to coupling to nuclear spins (few ns) [@Kroutfar; @Elzerman; @Florescu]. This made the electron spin coupling to the nuclear spins the target of a large number of theoretical [@theory-Golovach; @theory-Coish; @theory-Shenvi; @theory-Klauser; @theory-analytical] and experimental [@experim-Huttel; @experim-Tyryshkin; @experim-Abe; @experim-Johnson; @experim-Koppens; @experim-Petta] research works. The dipolar interaction between the nuclei in the dot does not conserve the total nuclear spin and as a result the change in the nuclear spin configuration happens within $T_{nuc} \sim 10^{-4}$ s, which is the precession time of a single nuclear spin in the local magnetic field of the other spins. This precession time is much longer than the decoherence time of the electron spin due to hyperfine coupling in the dot ($\sim 10^{-6}$ s); therefore, we can safely ignore this interaction as well as the fluctuation of the nuclear magnetic field when treating the entanglement and decoherence problem of the electron spins.
In addition to the proposals for using the single quantum dot as a qubit, there have been others for using coupled quantum dots as quantum gates[@spin-qgate; @spin-orbit]. The aim is to find a controllable mechanism for forming an entanglement between the two quantum dots (two qubits) in such a way as to produce a fundamental quantum computing gate such as XOR. In addition, we need to be able to coherently manipulate such an entangled state to provide an efficient computational process. Such coherent manipulation of entangled states has been observed in other systems such as isolated trapped ions [@trapped-ions] and superconducting junctions [@supercond-junc]. Recently, an increasing effort to investigate entangled states in coupled quantum dots has emerged. The coherent control of a two-electron spin state in a coupled quantum dot was achieved experimentally, where the coupling mechanism is the Heisenberg exchange interaction between the electron spins [@experim-Johnson; @experim-Koppens; @experim-Petta]. The mixing of the two-electron singlet and triplet states due to coupling to the nuclear spins was observed. The induced decoherence (leading to the singlet-triplet mixing) in the system has been studied theoretically, where each of the two electrons was assumed to be localized on its own dot and coupled to a different nuclear spin environment [@theory-coher-control-1; @theory-coher-control-2; @huang1]. There have been proposals for an experimental scheme to create, detect, and control entangled spin states in coupled quantum dots [@proposed-scheme]. Recently, there has been an increasing interest in studying the anisotropy in the exchange interaction in coupled quantum dot systems, which is mainly due to the spin-orbit coupling, the unavoidable asymmetry of the dot structure and the effect of the external magnetic field [@Anisotropic_J1; @Anisotropic_J2; @Anisotropic_J3; @Anisotropic_J4].
Furthermore, there have been proposals to utilize this anisotropy, rather than remove it, in introducing a set of quantum logic gates to be implemented in quantum computing [@Anisotropic_J_QComput1; @Anisotropic_J_QComput2; @Anisotropic_J_QComput3]. In this paper we study the dynamics of entanglement of two electron spins confined in a system of two laterally coupled quantum dots with one net electron per dot. The two electron spins are coupled through exchange interaction. Modeling the coupling between the two electrons on a coupled-dot system by exchange interaction, including the low barrier case, has been studied in detail in Ref. [@spin-orbit]. We assume that the exchange coupling is anisotropic such that the two spins couple in the transverse direction (z) with a coupling strength that is different from the one in the lateral directions (x and y). An external magnetic field is applied perpendicularly to the plane of confinement. We assume that the potential barrier between the two dots is low enough that the two electrons are delocalized over the two dots and as a result are coupled to a common nuclear environment, which splits the two-spin space into two unmixable subspaces (one contains the singlet state and the other contains the three triplets). Therefore, the singlet-triplet mixing cannot take place for this system setup, in contrast to the case studied in Refs. [@theory-coher-control-1] and [@theory-coher-control-2]. Nevertheless, mixing within the triplet subspace is possible, and in order to study it we assume that the system was initially prepared in a maximally entangled triplet spin state with a totally antisymmetric wave function, which is experimentally an achievable task [@experim-Koppens; @experim-Petta]. The two spins are coupled through hyperfine interaction to the surrounding unpolarized nuclear spins in the two dots.
We study the time evolution of the entanglement of the two spins induced by the nuclear spins at different strengths of the external magnetic field and exchange interaction, which can be tuned and measured experimentally by controlling the different gate voltages on the two dots [@experim-Petta]. We found that the entanglement, which is initially maximal, decays as a result of coupling to the nuclear spins within a decoherence time of the order of $10^{-4}$. The entanglement decay rate and its saturation value vary over a wide range and are determined by the value of the exchange interaction difference (between the transverse and lateral components) and the strength of the external magnetic field. The entanglement exhibits a critical behavior decided by the competition between the exchange interaction difference (regardless of its sign) and the external magnetic field: it shows a quasi-symmetric behavior above and below a critical value of the exchange interaction difference, which is set by the value of the external magnetic field. The behavior becomes more symmetric as the external magnetic field increases. Comparable decay rates and saturation values were observed for the entanglement on both sides of the critical exchange interaction value. Far from the critical exchange value, the entanglement shows high saturation values and slow power-law decay rates. Close to the critical value, the saturation values become much lower and the decay shows Gaussian profiles. Our results suggest that, to maintain the entanglement between the two spins, the difference between the exchange interaction difference and the external magnetic field should be tuned to be large. This paper is organized as follows. In Sec. II we introduce our model and the calculations of the spin correlation functions. In Sec. III we show the evaluation of entanglement. In Sec.
IV we present our results, discussing and explaining the different important findings. We close with our conclusions in Sec. V. Two electron spins coupled to a bath of nuclear spins ===================================================== We consider two coupled quantum dots with a net single valence electron per dot. The two electron spins $\bf{S}_1$ and $\bf{S}_2$ couple to each other through exchange interaction, and couple to the same external magnetic field $\bf{B}$ and to the common unpolarized nuclear spins of the two dots $\{\bf{I}_i\}$ through hyperfine interaction. The exchange interaction is taken to be anisotropic, with the coupling strength in the z-direction denoted $J_z$ and in the x and y directions $J_l$. The spin-orbit coupling and the dipolar interactions between the nuclei are ignored. The system is described by the Hamiltonian $$\hat{H} = J_z \hat{S}_{1z}\hat{S}_{2z} + J_l\left(\hat{S}_{1x}\hat{S}_{2x} + \hat{S}_{1y}\hat{S}_{2y}\right) + g\mu_B\,{\bf S}\cdot{\bf B} + {\bf S}\cdot{\bf h}_{2N},$$ where $g$ is the electron $g$ factor and $\mu_B$ the Bohr magneton. ${\bf S} = {\bf S}_{1} + {\bf S}_{2}$ and ${\bf h}_{2N} = \sum_{i} A_i {\bf I}^i = g\mu_B {\bf H}_{2N}$, where ${\bf H}_{2N}$ is the nuclear magnetic field of the two dots and the sum runs over the entire space. $A_i$ is the coupling constant of ${\bf S}$ with the $i$th nucleus and is equal to $A v_0 |\psi_A({\bf r}_i)|^2$, where $A$ is the hyperfine constant, $v_0\,(= a^2 a_z/N)$ is the volume of the crystal cell, $a$ and $a_z$ are the single-dot sizes in the lateral and transverse directions, and $\psi_A({\bf r}_i)$ is the two-electron antisymmetric envelope wave function at the nuclear site ${\bf r}_i$. 
For a typical single quantum dot size, the number of nuclei is $N = 10^6$, and their typical nuclear magnetic field affecting the electron spin through the hyperfine coupling is of the order of $\sim A/(\sqrt{N}g\mu_B)$ [@optical-orientation], which is approximately of magnitude $\simeq 5$ mT [@experim-Petta], with an associated electron precession frequency $\omega_N \simeq A/\sqrt{N}$, where $\omega_N \gg 1/T_{nuc}$, and $a \sim 15$ nm. The presence of an external magnetic field causes a Zeeman splitting of the electron spin and, in addition, has a polarizing effect on the nuclear spins. The magnitude of these effects and the associated electron precession frequency depend on the strength of the applied field, which we treat as a parameter varied over a wide range in this work. Therefore, the precession frequency about the external magnetic field can be higher or lower than $\omega_N$, depending on the strength of the external field compared to the nuclear field. We consider a particular unpolarized nuclear configuration, represented in terms of the $\hat{I}_{iz}$ nuclear spin eigenbasis as $|I_z^i\rangle$, with $I_z^i = \pm 1/2$, where we have taken nuclear spin $1/2$ for simplicity. These nuclear configurations have the same time evolution as the more general initial tensor product states corresponding to arbitrary directions of the individual spins [@spin-nucspin]. The Hamiltonian $H$ (Eq. 1) is symmetric under exchange of the two electron spins $\bf{S}_1$ and $\bf{S}_2$, which reflects the fact that in this model the two electrons are considered indistinguishable (delocalized over the two dots). This Hamiltonian splits the two-spin space into two subspaces, one of which contains the singlet state while the other contains the three triplet states. In this system setup (described by the Hamiltonian $H$), the two subspaces never get mixed. 
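The order-of-magnitude numbers quoted above can be checked directly. A minimal sketch, assuming GaAs-like values $A \approx 90\ \mu$eV and $|g| = 0.44$ for illustration (the text itself fixes only $N = 10^6$):

```python
# Order-of-magnitude check of the nuclear (Overhauser) field and the
# associated precession frequency: B_N ~ A / (sqrt(N) g mu_B), omega_N ~ A/sqrt(N).
# A and g are assumed GaAs-like values, not taken from the text.
import math

A = 90e-6            # hyperfine constant, eV (assumed)
N = 1e6              # number of nuclei in the dot
g = 0.44             # |g| factor (assumed)
mu_B = 5.788e-5      # Bohr magneton, eV/T
hbar = 6.582e-16     # eV s

B_N = A / (math.sqrt(N) * g * mu_B)      # typical nuclear field, tesla
omega_N = A / (math.sqrt(N) * hbar)      # precession frequency, 1/s

print(f"B_N ~ {B_N * 1e3:.1f} mT")       # a few mT, consistent with the ~5 mT quoted
print(f"omega_N ~ {omega_N:.2e} 1/s")
```

With these inputs the nuclear field comes out at roughly 3-4 mT, the same order as the $\simeq 5$ mT quoted from Ref. [@experim-Petta].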
The only mechanism by which these two subspaces could get mixed is each electron spin coupling to a different nuclear magnetic field (or even a different external field), which is not the case here. We assume that the two coupled electron spins are initially $(t=0)$ in the maximally entangled triplet state described in the coupled representation by $|T_0\rangle = \frac{1}{\sqrt{2}}\left(|\Uparrow\Downarrow\rangle + |\Downarrow\Uparrow\rangle\right)$. We are interested in investigating the dynamics of entanglement of this state (the dynamics of entanglement and its non-monotonicity have been studied in many works; for example see Ref. \[40\] and references therein). For that purpose we evaluate the correlators $$C_{ij}(t) = \langle n|\,\hat{S}_{i}(t)\,\hat{S}_{j}(t)\,|n\rangle, \qquad i,j = x,y,z,$$ where $|n\rangle$ is the total-system (two electrons and nuclei) state, given by $|T_0\rangle|\{I_{iz}\}\rangle$, and $\hat{S}_{i}(t) = e^{it\hat{H}}\hat{S}_{i}e^{-it\hat{H}}$. The initial state $|n\rangle$ is an eigenstate of the Hamiltonian $$\hat{H}_0 = J_z\hat{S}_{1z}\hat{S}_{2z} + \frac{J_l}{2}\left(\hat{S}_{1+}\hat{S}_{2-} + \hat{S}_{1-}\hat{S}_{2+}\right) + \epsilon_z\sum_{i=1}^2\hat{S}_{iz} + \hat{h}_{2Nz}\sum_{i=1}^2\hat{S}_{iz},$$ with eigenenergy $\epsilon_n = -J_z/4 + J_l/2$, where $\epsilon_z = g\mu_B B_z$. Consequently we can expand in the perturbation (where we use the fact that for a typical nuclear configuration $A_{k}^2/h_{2n} \ll 1$, which acts as our expansion parameter, as can be noticed in the calculations below [@spin-nucspin; @optical-orientation]) $$\hat{V} = \frac{1}{2}\sum_{i=1}^2\left(\hat{S}_{i+}\hat{h}_{2N-} + \hat{S}_{i-}\hat{h}_{2N+}\right),$$ where the total Hamiltonian is $\hat{H} = \hat{H}_0 + \hat{V}$. 
Using the time evolution operator $\hat{U}(t) = \hat{T}\exp[-i\int_{0}^{t}dt'\,\hat{V}(t')]$, where $\hat{T}$ is the usual time-ordering operator, we obtain $$C_{ij}(t) = C^{(0)}_{ij} + \delta C_{ij}(t),$$ where $C^{(0)}_{ij}$ is the correlator corresponding to $\hat{V}=0$, while $\delta C_{ij}(t)$ contains the corrections due to the perturbation $\hat{V}$; they are given by $$C^{(0)}_{ij} = \langle n|\,\hat{S}_{i}\,\hat{S}_{j}\,|n\rangle$$ and $$\begin{aligned} \delta C_{ij}(t) &=& \langle n|\hat{U}^\dagger(t)\hat{S}_{i}(t)\hat{U}(t)\,\hat{U}^\dagger(t)\hat{S}_{j}(t)\hat{U}(t)|n\rangle \\ &=& \sum_{k_1}\langle n|\hat{U}^\dagger(t)\hat{S}_{i}(t)\hat{U}(t)|k_1\rangle\langle k_1|\hat{U}^\dagger(t)\hat{S}_{j}(t)\hat{U}(t)|n\rangle \\ &+& \sum_{k_2}\langle n|\hat{U}^\dagger(t)\hat{S}_{i}(t)\hat{U}(t)|k_2\rangle\langle k_2|\hat{U}^\dagger(t)\hat{S}_{j}(t)\hat{U}(t)|n\rangle,\end{aligned}$$ where the intermediate states are $|k_1\rangle = |T_{+1}\rangle|\{I_z^i\}\rangle = |\Uparrow\Uparrow\rangle|\{\cdots, I_z^{k_1} = -1/2, \cdots\}\rangle$ and $|k_2\rangle = |T_{-1}\rangle|\{I_z^i\}\rangle = |\Downarrow\Downarrow\rangle|\{\cdots, I_z^{k_2} = +1/2, \cdots\}\rangle$, while for the singlet state $|S_0\rangle$ we have $\langle T_0|\hat{V}|S_0\rangle = 0$. 
The non-vanishing correlators to leading order in $\hat{V}$ read $$S^z_{12} = \langle n|\hat{S}_{1z}\hat{S}_{2z}|n\rangle = -\frac{1}{4} + \frac{1}{2}\left(\Gamma_1(t) + \Gamma_2(t)\right),$$ $$S^x_{12} = \langle n|\hat{S}_{1x}\hat{S}_{2x}|n\rangle = \frac{1}{4}, \qquad S^y_{12} = \langle n|\hat{S}_{1y}\hat{S}_{2y}|n\rangle = \frac{1}{4},$$ \[M1z\] $$M^z_1 = \langle n|\hat{S}_{1z}|n\rangle = \frac{1}{2}\left(\Gamma_1(t) - \Gamma_2(t)\right),$$ \[M2z\] $$M^z_2 = \langle n|\hat{S}_{2z}|n\rangle = \frac{1}{2}\left(\Gamma_1(t) - \Gamma_2(t)\right),$$ where \[first-order\] $$\Gamma_i(t) = 4\sum_{k_i}\frac{|V_{n,k_i}|^2}{\omega_{n,k_i}^2}\,\sin^{2}(\omega_{n,k_i}t/2), \qquad i = 1,2,$$ where $V_{n,k_1} = \langle n|\hat{V}|k_1\rangle = A_{k_1}/\sqrt{2}$ is the matrix element between the initial state $|n\rangle = |T_0\rangle|\{\cdots, I_z^{k_1} = 1/2, \cdots\}\rangle$ and the intermediate state $|k_1\rangle$, and $V_{n,k_2} = A_{k_2}/\sqrt{2}$ is the one between $|n\rangle = |T_0\rangle|\{\cdots, I_z^{k_2} = -1/2, \cdots\}\rangle$ and $|k_2\rangle$, and \[omega\_k1\] $$\omega_{n,k_1} = \epsilon_n - \epsilon_{k_1} = \left[\frac{J}{2} - (\epsilon_z + (h_z)_{2n})\right] - \frac{A_{k_1}}{2} \equiv \omega_{-} - \frac{A_{k_1}}{2},$$ \[omega\_k2\] $$\omega_{n,k_2} = \epsilon_n - \epsilon_{k_2} = \left[\frac{J}{2} + (\epsilon_z + (h_z)_{2n})\right] + \frac{A_{k_2}}{2} \equiv \omega_{+} + \frac{A_{k_2}}{2},$$ where $J = J_l - J_z$ is the exchange interaction difference due to the anisotropy of the coupling and $(h_z)_{2n} = \langle n|\hat{h}_{2Nz}|n\rangle \simeq \langle n|\sum_{i\ne k_i} I_z^i|n\rangle$. We will write $h_{2n}$ instead of $(h_z)_{2n}$ from now on for simplicity. As a result of the large value of $N$, we can replace the sums over $k_i$ by integrals ($\sum_{k_i} F_{k_i} = \frac{1}{v_0}\int F({\bf r})\,d^3r + O(1/N)$) and obtain $$\Gamma_1(t) = \frac{1}{\omega_{-}^2}\left[I_0 - I_1\cos(\omega_{-}t) - I_2\sin(\omega_{-}t)\right], \qquad \Gamma_2(t) = \frac{1}{\omega_{+}^2}\left[I_0 - I_1\cos(\omega_{+}t) + I_2\sin(\omega_{+}t)\right],$$ where $$I_0 = \frac{1}{v_0}\int dx\,dy\,dz\,[A(x,y,z)]^2, \quad I_1 = \frac{1}{v_0}\int dx\,dy\,dz\,[A(x,y,z)]^2\cos\!\left(\frac{A(x,y,z)\,t}{2}\right), \quad I_2 = \frac{1}{v_0}\int dx\,dy\,dz\,[A(x,y,z)]^2\sin\!\left(\frac{A(x,y,z)\,t}{2}\right),$$ and where $$A(x,y,z) = A\,v_0\,|\psi(x,y,z)|^2.$$ 
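The discrete first-order sum of Eq. \[first-order\] can be evaluated directly as a sanity check. A minimal sketch, assuming a toy random coupling profile for the $A_k$ and illustrative parameter values (in units of $A = \sum_k A_k = 1$), with $\omega_{n,k_1} = \omega_- - A_{k_1}/2$ and $\omega_{n,k_2} = \omega_+ + A_{k_2}/2$ as above:

```python
# Toy evaluation of Gamma_i(t) = 4 sum_k |V_k|^2 sin^2(omega_k t/2) / omega_k^2,
# with |V_k|^2 = A_k^2 / 2.  The coupling profile and parameter values are
# illustrative assumptions, not the ones used in the figures.
import numpy as np

rng = np.random.default_rng(1)
N = 10**6                                  # number of nuclei, as in the text
A_k = rng.random(N)**2                     # toy (unnormalized) couplings
A_k /= A_k.sum()                           # normalize so that sum_k A_k = A = 1

eps_z, h_2n = 1/50, 1/500                  # in units of A (values quoted for fig. 5)
J = 1/100                                  # J = J_l - J_z, illustrative
w_minus = J/2 - (eps_z + h_2n)
w_plus = J/2 + (eps_z + h_2n)

def gamma(t, w0, sign):
    # discrete first-order weight for the k-sum with omega_k = w0 + sign*A_k/2
    w_k = w0 + sign * A_k / 2
    return float(np.sum(2.0 * A_k**2 * np.sin(w_k * t / 2)**2 / w_k**2))

for t in (0.0, 1e2, 1e4):
    print(t, gamma(t, w_minus, -1), gamma(t, w_plus, +1))
```

Both weights start at zero and remain small for these parameters, consistent with a perturbative regime away from the level crossing $\omega_- = 0$.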
$|\psi(x,y,z)|^2$ is the two-electron antisymmetric envelope wave function, which we construct starting from the two single-dot wave functions (the typical wave functions used to describe the electron state in the confining potential of a quantum dot system [@spin-orbit; @spin-nucspin]) $$\psi_1(x_1,y_1,z_1) = e^{-(x_1-b)^2/a^2}\,e^{-y_1^2/a^2}\,e^{-z_1^2/a_z^2}, \qquad \psi_2(x_2,y_2,z_2) = e^{-(x_2+b)^2/a^2}\,e^{-y_2^2/a^2}\,e^{-z_2^2/a_z^2},$$ which are mixed to obtain an antisymmetric wave function $\psi(x_1,y_1,z_1;x_2,y_2,z_2)$. Finally, integrating out the coordinates of one of the two electrons, squaring and normalizing, we obtain $$|\psi(x,y,z)|^2 = \mathcal{N}\,e^{-2y^2/a^2}\,e^{-2z^2/a_z^2}\,e^{-2x^2/a^2}\left[\cosh(4bx/a^2) - e^{-2b^2/a^2}\right],$$ where $\mathcal{N}$ is a normalization constant and $2b$ is the interdot distance. In our calculations we set $b = a/2$. The integrals $I_0$, $I_1$ and $I_2$ were evaluated numerically. Evaluating the entanglement of formation ======================================== In this paper we investigate the time evolution of the two-spin entangled state by calculating the entanglement of formation $E$ between them. The entanglement of formation quantifies the amount of entanglement needed to create the state $\rho$, where $\rho$ is the density matrix in the two-spin uncoupled representation basis. 
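Returning briefly to the envelope density of the previous section: the quoted closed form of $|\psi(x,y,z)|^2$, with its hyperbolic-cosine factor, can be checked against a direct numerical construction of the antisymmetric state. A sketch (the x-direction suffices, since the y and z Gaussians factor out; the values of $a$ and $b$ are illustrative):

```python
# Check that integrating one electron out of the antisymmetric two-electron
# state reproduces exp(-2x^2/a^2) [cosh(4bx/a^2) - exp(-2b^2/a^2)] up to
# normalization.  Grid and parameters are illustrative.
import numpy as np

a, b = 1.0, 0.5                          # dot size and half interdot distance (b = a/2)
x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]
integ = lambda f: f.sum() * dx           # tails are negligible on this grid

phi1 = np.exp(-(x - b)**2 / a**2)        # dot-1 orbital (x part only)
phi2 = np.exp(-(x + b)**2 / a**2)        # dot-2 orbital

# rho(x1) = int dx2 |phi1(x1)phi2(x2) - phi2(x1)phi1(x2)|^2
n2 = integ(phi1**2)                      # = int phi2^2 by symmetry
s = integ(phi1 * phi2)                   # overlap integral
rho = (phi1**2 + phi2**2) * n2 - 2.0 * s * phi1 * phi2

closed = np.exp(-2 * x**2 / a**2) * (np.cosh(4 * b * x / a**2)
                                     - np.exp(-2 * b**2 / a**2))
rho /= integ(rho)                        # normalize both densities
closed /= integ(closed)
print(np.max(np.abs(rho - closed)))      # agreement up to numerical error
```

The two curves agree to machine-level accuracy, confirming the cosh structure of the reduced density.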
It was shown by Wootters [@Wootters98] that $$E(\rho) = \mathcal{E}(C(\rho)),$$ where the function $\mathcal{E}$ is given by $$\mathcal{E} = h\!\left(\frac{1+\sqrt{1-C^2}}{2}\right),$$ with $h(x) = -x\log_2 x - (1-x)\log_2(1-x)$, and the concurrence $C$ is defined as $$C(\rho) = \max\{0,\ \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4\}.$$ For a general state of two qubits, the $\lambda_i$'s are the eigenvalues, in decreasing order, of the Hermitian matrix $$R \equiv \sqrt{\sqrt{\rho}\,\tilde{\rho}\,\sqrt{\rho}},$$ where $\tilde{\rho}$ is the spin-flipped state of the density matrix $\rho$, defined as $$\tilde{\rho} = (\sigma_y\otimes\sigma_y)\,\rho^*\,(\sigma_y\otimes\sigma_y),$$ where $\rho^*$ is the complex conjugate of $\rho$. Alternatively, the $\lambda_i$'s are the square roots of the eigenvalues of the non-Hermitian matrix $\rho\tilde{\rho}$. Since the density matrix $\rho$ inherits the symmetry properties of the Hamiltonian, $\rho$ must be real and symmetric [@Osterloh02]; combined with the symmetry between the x and y directions in our system, we obtain $$\rho={\left(\begin {array}{cccc} \rho_{1,1} & 0 & 0 & 0 \\ 0 & \rho_{2,2} & \rho_{2,3} & 0 \\ 0 & \rho_{2,3} & \rho_{3,3} & 0\\ 0 & 0 & 0 & \rho_{4,4}\end {array} \right)}.$$ The $\lambda_i$'s can be written as $$\label{eigenvalue} \lambda_a = \lambda_b = \sqrt{\rho_{1,1}\rho_{4,4}},\quad \lambda_c = \left|\sqrt{\rho_{2,2}\rho_{3,3}} + |\rho_{2,3}|\right|,\quad \lambda_d = \left|\sqrt{\rho_{2,2}\rho_{3,3}} - |\rho_{2,3}|\right|.$$ Using the definition $\langle A\rangle = \mathrm{Tr}(\rho A)$, we can express all the elements of the density matrix in terms of the different spin-spin correlation functions [@huang2; @huang3]: $$\rho_{1,1}=\frac{1}{2} M_1^z + \frac{1}{2} M_2^z + S_{12}^z + \frac{1}{4},$$ $$\rho_{2,2}=\frac{1}{2} M_1^z -\frac{1}{2} M_2^z - S_{12}^z + \frac{1}{4},$$ $$\rho_{3,3}=-\frac{1}{2} M_1^z +\frac{1}{2} M_2^z - S_{12}^z + \frac{1}{4},$$ $$\rho_{4,4}= -\frac{1}{2} M_1^z -\frac{1}{2} M_2^z + S_{12}^z + \frac{1}{4},$$ $$\rho_{2,3}= S_{12}^x + S_{12}^y.$$ Note 
that if the sign of $J$ is inverted (i.e. $J \rightarrow -J$), then $\omega_{+} \rightarrow -\omega_{-}$ and vice versa, and as a result $\Gamma_1$ and $\Gamma_2$ exchange their expressions ($\Gamma_1 \leftrightarrow \Gamma_2$). The only physical quantities affected by this sign flip are $M_{1}^{z}$ and $M_{2}^{z}$ (Eqs. (\[M1z\]), (\[M2z\])), which flip sign as well. Despite these sign flips, the eigenvalues of the matrix $R$ given by Eq. \[eigenvalue\] are not affected, because under this sign change the density matrix elements transform as $\rho_{1,1} \leftrightarrow \rho_{4,4}$ and $\rho_{2,2} \leftrightarrow \rho_{3,3}$, whereas the other elements are invariant. results and discussions ======================= In this paper we focus on the dynamical behavior of the entanglement of two coupled electron spins at different external magnetic field strengths and different values of the Heisenberg exchange interaction difference $J$. Interestingly, the entanglement is independent of the sign of the exchange interaction difference $J$, as explained in Sec. III, which means that regardless of which coupling component is dominant ($J_l$ or $J_z$) the behavior of the entanglement as a function of $J$ stays the same. Firstly, we study the nuclear-induced time evolution of the entanglement in the absence of an external magnetic field and with a very small anisotropy in the exchange interaction. In fig. \[fig1\] we plot the entanglement of the two spins versus time for $J=A/5000$ and $\epsilon_z=0$. The initial state of the coupled quantum dots is the triplet $\frac{1}{\sqrt{2}}(|\Uparrow \Downarrow \rangle + |\Downarrow \Uparrow \rangle)$, in which the two spins are maximally entangled. As shown in fig. \[fig1\], the entanglement is initially maximal ($E=1$). 
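The evaluation chain of the previous section (correlators to density matrix to concurrence to entanglement of formation), including the $J \rightarrow -J$ invariance just noted, can be sketched as follows. The correlator values below are illustrative inputs only, not outputs of Eqs. (\[M1z\])-(\[M2z\]):

```python
# Wootters concurrence and entanglement of formation for the block-diagonal
# (real, symmetric) density matrix built from the spin correlators.
import numpy as np

def concurrence(rho):
    # lambda_i = square roots of the eigenvalues of rho * rho_tilde
    sy2 = np.kron([[0, -1j], [1j, 0]], [[0, -1j], [1j, 0]])
    rho_t = sy2 @ rho.conj() @ sy2
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_t).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def eof(c):
    # E = h((1 + sqrt(1 - C^2)) / 2), h the binary entropy
    if c <= 0.0:
        return 0.0
    p = (1.0 + np.sqrt(1.0 - c**2)) / 2.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p)) if p < 1.0 else 0.0

# density matrix from the correlators (illustrative values)
M1z = M2z = 0.02
Sz, Sx, Sy = -0.20, 0.22, 0.22
rho = np.diag([0.25 + Sz + (M1z + M2z) / 2,
               0.25 - Sz + (M1z - M2z) / 2,
               0.25 - Sz + (M2z - M1z) / 2,
               0.25 + Sz - (M1z + M2z) / 2]).astype(complex)
rho[1, 2] = rho[2, 1] = Sx + Sy

C = concurrence(rho)
print(C, eof(C))                          # C = 0.788 for these inputs

# the J -> -J flip swaps rho11<->rho44 and rho22<->rho33: C is unchanged
assert np.isclose(concurrence(rho[::-1, ::-1].copy()), C)
```

The final assertion checks the invariance argument directly: reversing both axes of $\rho$ implements the swap of diagonal elements while leaving $\rho_{2,3}$ fixed.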
Once the interaction between the nuclear magnetic field and the electron spins is turned on, the entanglement starts decaying with a rapid oscillation and reaches a saturation value of about 0.921 within a decay time of the order of $10^{-4}$ s. As can be noticed, the envelope of this oscillating entanglement decreases with a power-law profile under this condition before reaching saturation. From the construction of the density matrix used to calculate the entanglement, the entanglement, magnetization and spin correlations are directly related. We plot the time evolution of the spin correlation function in the z-direction, $S_{12}^{z}$, in the inner panel. The correlation function exhibits a behavior very similar to that of the entanglement: it starts at its initial value, $-0.25$, and then relaxes to a saturation value of about $-0.236$ within the same decay time. To first order in perturbation theory, the spin correlations in the x and y directions are unaffected by the nuclear spins. This leads to the high saturation value of the entanglement in our calculation, which may change at higher orders. Furthermore, because of the initially unpolarized state of the nuclear spins and the absence of an external magnetic field, the magnetization of the two quantum dots remains zero. The effect of the external magnetic field with a very small anisotropy in the exchange coupling between the two spins is shown in fig. \[fig2\], where $\epsilon_z$ is set to $A/50 \gg h_{2n}$ and $J=A/5000$. The entanglement evolves with a decaying behavior similar to what we observed in fig. \[fig1\], except that the decaying amplitude of the oscillation is much smaller and the saturation value, for both entanglement and correlation, is much higher (0.99935 and $-0.24989$, respectively), which emphasizes the important role the external magnetic field plays: as expected, it significantly enhances the entanglement between the two spins. In fig. \[fig3\] we focus on the role played by the exchange interaction difference $J$. 
The dynamics of entanglement is considered at different values of $J$ for two specific external magnetic fields, zero and $A/50$. In the two panels, $J$ is set to $A/150$ and $A/130$ in the upper and lower panels, respectively. Interestingly, comparing with fig. 1, the saturation value of the entanglement increases, from 0.92 to 0.997, as we increase $J$ from $A/5000$ to $A/150$, but then decreases back, to 0.93, as we increase $J$ further to $A/130$. This indicates that a larger $J$ does not always help maintain the entanglement between the two spins, i.e. the entanglement is not a monotonic function of $J$. Evidently, there is an inversion point of $E$ between these values of $J$. The effect of the magnetic field is clear in all three cases: it suppresses the amplitude of the decaying oscillation and enhances the entanglement as a result of its polarizing effect, giving rise to a higher saturation value in all three cases. In fact, the entanglement shows very similar behavior at different magnitudes of the external magnetic field and exchange interaction difference; however, the asymptotic saturation values and decay rates vary significantly depending on the values of $J$ and $\epsilon_z$. In principle, each electron spin precesses about two fields: the uniform external magnetic field and the non-uniform nuclear field. This precession is the reason behind the oscillatory behavior of the entanglement, whereas the coupling to the non-uniform nuclear field is responsible for the decay, through which the spin energy is dissipated to the nuclear environment (the same behavior was discussed in detail in Ref. \[18\] for a single spin on a single dot). Therefore, it is of great interest to study the mutual effect of the exchange interaction difference and the external magnetic field on the entanglement. The phase diagram of the system is shown in fig. 
\[fig4\], where the saturation value of the entanglement is plotted versus $J$ at different values of the external magnetic field. As the entanglement mostly reaches its saturation value at $t \sim 10^{-4}$ s, we use the value of $E$ at $t = 10^{-4}$ s to approximately represent the saturation value at each $J$. There are three pairs of curves, with each pair corresponding to a specific value of the external magnetic field given by, from left to right, $A/50$, $A/500$, and $0$. The pair of curves corresponding to $\epsilon_z=A/50$ is replotted in the inner panel for clarity. It is very interesting to note that for each $\epsilon_z$ the behavior of the entanglement is almost symmetric about a critical value of $J$, which we call $J_c$. For each pair, the entanglement saturation value is initially unity or very close to unity, depending on the external magnetic field strength, and then starts to decrease as $J$ increases. It decays exponentially as $J$ approaches $J_c$, but once $J$ exceeds $J_c$ it rises exponentially again, reaching a value of unity or close to it. The decay of the entanglement saturation value is more rapid for $J > J_c$ than for $J < J_c$, but the two sides become more symmetric as $\epsilon_z$ increases. Surprisingly, this phase diagram shows a very peculiar role played by the exchange interaction difference, namely that both small values $(J \ll J_c)$ and large values $(J \gg J_c)$ enhance the entanglement between the two spins, while as $J \rightarrow J_c$ it is reduced significantly. Unfortunately, we cannot investigate the entanglement behavior when $J$ is very close or equal to $J_c$, because the perturbative expansion diverges under this condition. The reason for this divergence is the degeneracy (electronic level crossing) that takes place between the initial state and the intermediate states, which causes a breakdown of first-order perturbation theory. This degeneracy is manifested in fig. 
\[fig5\], where the energy difference $\omega_{n,k_1}$ between the initial state $|n\rangle$ and the first intermediate state $|k_1\rangle$, and the difference $\omega_{n,k_2}$ between $|n\rangle$ and $|k_2\rangle$ (given by Eqs. \[omega\_k1\] and \[omega\_k2\]), are plotted versus the exchange interaction difference $J$ for $h_{2n}=A/500$ and $\epsilon_z=A/50$. As can be seen in fig. \[fig5\], $\omega_{n,k_1}$ vanishes at a particular value of $J$, which turns out to be the same as the $J_c$ observed in the inner panel of fig. \[fig4\], where the same parameter values are used. Since $\omega_{n,k_1}$ appears squared in the denominator of the first-order perturbation term (Eq. \[first-order\]), it is responsible for the divergence when it vanishes. Also, the symmetric behavior of the entanglement about $J_c$ is due to the linear dependence of the magnitude of $\omega_{n,k_1}$ on $J$ as it increases or decreases away from $J_c$. Similarly, $\omega_{n,k_2}$ would cause the same effect if negative values of $J$ were considered. The value of the critical exchange interaction $J_c$ corresponding to each $\epsilon_z$ can easily be deduced from the phase diagram to be equal to $2(h_{2n}+\epsilon_z)$, coinciding with the observations of fig. \[fig5\], which emphasizes that the external magnetic field not only enhances the entanglement but also dictates the value of $J_c$. After the important observations from the phase diagram (fig. \[fig4\]), in particular concerning the critical behavior of the entanglement upon varying $J$, it is interesting to further investigate the effect of $J$ on the entanglement above, below and close to $J_c$ at a fixed value of the external magnetic field. We therefore examine the dynamics of entanglement at different exchange interaction values below $J_c$ in fig. \[fig6\] and above it in fig. \[fig7\], where $\epsilon_z$ is set to $A/100$ and consequently $J_c$ turns out to be $A/41.7$. In fig. 
\[fig6\] we consider the exchange interaction values $A/45$, $A/50$ and $A/55$ $(< J_c)$, whereas in fig. \[fig7\] we consider $A/40$, $A/37$ and $A/35$ $(> J_c)$. As we can see in the two graphs, when the value of $J$ is far from $J_c$, either higher or lower, the entanglement oscillation is small and the decay rate is slow, reaching high saturation values that approach unity as $J$ becomes much smaller or much larger than $J_c$. As $J$ gets very close to $J_c$, the entanglement exhibits very rapid oscillation with a very large initial amplitude and decays very rapidly as well, reaching a low saturation value, 0.74, for $J < J_c$ and a very low one, 0.22, for $J > J_c$. The contrast between these values gets smaller and smaller as the magnetic field increases, as can be noticed from the phase diagram (fig. \[fig4\]). To compare the decay rates of the entanglement at different $J$ values, the corresponding entanglement decay profiles are plotted in the inner panels of figs. \[fig6\] and \[fig7\]. As shown, the decay profiles are linear when $J$ is much higher (or lower) than $J_c$, converting to a power law and then to a Gaussian as $J$ gets closer to $J_c$ from above or below. The critical behavior of the entanglement as a function of the strength of $J$ points to a competition between the effects of the external magnetic field, which enhances the entanglement by polarizing the two spins along its direction, and the exchange interaction difference, which enhances the entanglement by favoring antiferromagnetic correlation. Therefore, when one of the two effects is dominant, i.e. $J \ll \epsilon_z$ (or $J \gg \epsilon_z$), there will be a large net correlation one way or the other, leading to strong entanglement. Tuning the two effects to be comparable reduces the net correlation significantly and forces the entanglement to attenuate to a low value. 
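The critical value discussed above follows from the vanishing of $\omega_-$: setting $\omega_- = J/2 - (\epsilon_z + h_{2n}) = 0$ gives $J_c = 2(h_{2n} + \epsilon_z)$. A quick check against the value quoted for figs. \[fig6\] and \[fig7\], taking $h_{2n} = A/500$ (the value used for fig. \[fig5\]) as an assumption:

```python
# J_c from the level-crossing condition omega_- = 0, in units of A
eps_z = 1 / 100          # external field used in figs. 6 and 7
h_2n = 1 / 500           # nuclear field (assumed, as in fig. 5)
J_c = 2 * (h_2n + eps_z)
print(f"J_c = A/{1 / J_c:.1f}")   # J_c = A/41.7, matching the quoted value
```

For the inner panel of fig. \[fig4\] ($\epsilon_z = A/50$, $h_{2n} = A/500$) the same condition gives $J_c = 2(0.02 + 0.002)A = A/22.7$.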
conclusions =========== In summary, we have studied the time evolution of the entanglement of two electron spins in an anisotropically coupled quantum dot system, coupled to nuclear spins in an applied external magnetic field, in the triplet $({\bf S} = 1)$ subspace. Our main observation is that the dynamics of entanglement exhibits a critical behavior determined by the competition between the exchange interaction difference between the two spins and the strength of the applied external magnetic field. When one of them dominates, the entanglement saturates at a very large value close to the maximum. On the other hand, when they are comparable, the entanglement attenuates and may reach very low saturation values. We found that for each fixed value of the external magnetic field there is a corresponding critical value of the exchange interaction difference about which the entanglement, as a function of the exchange interaction, behaves symmetrically. Studying the time evolution of the entanglement at values of the exchange interaction difference above and below the critical value, we observed that the entanglement decay profiles are linear, with saturation at large values, when the exchange interaction value is much higher or lower than the critical value, but convert to a power law and finally to a Gaussian, with low saturation values, as the critical value is approached from above or below. The perturbative description diverges when the exchange interaction value becomes very close or equal to the critical value, which might be a sign of a critical behavior taking place in the environment consisting of the nuclear spins. For an isotropic coupling, the entanglement becomes entirely independent of the Heisenberg exchange interaction. 
Our observations suggest that tuning the applied external magnetic field and/or the exchange interaction difference between the two spins in such a way that they differ significantly in value may sustain the correlation between the two spins in the coupled quantum dot and lead to strong entanglement between them. Acknowledgments {#acknowledgments .unnumbered} =============== This work has been supported in part by King Saud University, College of Science Research Center, grant no. Phys2006/19. One of the authors (G.S.) would like to thank E. I. Lashin for useful discussions. References {#references .unnumbered} ========== [10]{} Nielsen M A and Chuang I L 2000 [*Quantum Computation and Quantum Information*]{} (Cambridge: Cambridge University Press). Boumeester D, Ekert A, and Zeilinger A (eds) 2000 [*The physics of Quantum information: Quantum Cryptography, Quantum Teleportation, Quantum Computing*]{} (Berlin: Springer). Gruska J 1999 [*Quantum Computing*]{} (New York: McGraw-Hill). Machhiavello C, Palma G M and Zeilinger Z 2000 [*Quantum Computation and Quantum Information Theory*]{} (New Jersey: World Scientific). For a review, see Zurek W H 1991 Phys. Today [**44**]{} 36. Shor P W 1994 [*Proc. of the 35th Ann. Symp. on Foundations of Computer Science*]{} ed Goldwasser S (Los Alamitos: IEEE Computer Society Press). Grover L K 1997 [*Phys. Rev. Lett.*]{} [**79**]{} 325. Barenco Adriano, Deutsch David, Ekert Artur and Jozsa Richard 1995 [*Phys. Rev. Lett.*]{} [**74**]{} 4083. Vandersypen L M K, Steffen Matthias, Breyta Gregory, Yannoni Costantino S, Sherwood Mark H, and Chuang Isaac L 2001 [*Nature*]{} [**414**]{} 883. Chuang Isaac L, Gershenfeld Neil A, and Kubinec Mark 1998 [*Phys. Rev. Lett.*]{} [**80**]{} 3408. Jones J A, Mosca M, and Hansen R H 1998 [*Nature*]{} [**393**]{} 344. Cirac J I and Zoller P 1995 [*Phys. Rev. Lett.*]{} [**74**]{} 4091; Monroe C, Meekhof D M, King B E, Itano W M and Wineland D J 1995 [**75**]{} 4714. 
Turchette Q A, Hood C J, Lange W, Mabuchi H and Kimble H J 1995 [*Phys. Rev. Lett.*]{} [**75**]{} 4710. Averin D V 1998 [*Solid State Commun.*]{} [**105**]{} 659; Shnirman A, Schon G, and Hermon Z 1997 [*Phys. Rev. Lett.*]{} [**79**]{} 2371. Loss D and DiVincenzo D P 1998 [*Phys. Rev. A*]{} [**57**]{} 120. Destefani C F, Ulloa Sergio E, and Marques G E 2004 [*Phys. Rev. B*]{} [**70**]{} 205315. Burkard G, Loss Daniel and DiVincenzo David P 1999 [*Phys Rev. B*]{} [**59**]{} 2070. Khaetskii A, Loss Daniel and Glazman Leonid 2002 [*Phys Rev. Lett.*]{} [**88**]{} 186802; Khaetskii A, Loss Daniel and Glazman Leonid 2003 [*Phys. Rev. B*]{} [**67**]{}, 195329. Kroutfar M, Ducommun Yann, Heiss Dominik, Bichler Max, Schuh Dieter, Abstreiter Gerhard and Finley Jonathan J 2004 [*Nature*]{} (London) [**432**]{} 81. Elzerman J M, Hanson R, Beveren L H Willems van, Witkamp B, Vandersypen L M K and Kouwenhoven L P 2004 [*Nature*]{} (London) [**430**]{} 431. Florescu M and Hawrylak P 2006 [*Phys. Rev. B*]{} [**73**]{} 045304. Golovach Vitaly N, Khaetskii Alexander, and Loss Daniel 2004 [*Phys Rev. Lett*]{} [**93**]{} 016601. Coish W A and Loss Daniel 2004 [*Phys. Rev. B*]{} [**70**]{} 195340. Shenvi Neil, de Sousa Rogerio, and Whaley K B 2005 [*Phys. Rev. B*]{} [**71**]{} 144419. Klauser D, Coish W A, and Loss Daniel 2006 [*Phys. Rev. B*]{} [**73**]{} 205302. Deng Changxue and Hu Xuedong 2006 [*Phys. Rev. B*]{} [**73**]{} 241303(R). Huttel A K, Weber J, Holleitner A W, Weinmann D, Eberl K and Blick R H 2004 [*Phys. Rev. B*]{} [**69**]{} 073302. Tyryshkin A M, Lyon S A, Astashkin A V, and Raitsimring A M 2003 [*Phys. Rev. B*]{} [**68**]{} 193207. Abe Eisuke, Itoh Kohei M, Isoya Junichi, and Yamasaki Satoshi 2004 [*Phys. Rev. B*]{} [**70**]{} 033204. Johnson A C, Petta J R, Taylor J M, Yacoby A, Lukin M D, Marcus C M, Hanson M P and Gossard A C 2005 [*Nature*]{} [**435**]{} 925. 
Koppens F H L, Folk J A, Elzerman J M, Hanson R, Beveren L H Willems van, Vink I T, Tranitz H P, Wegscheider W, Kouwenhoven L P and Vandersypen L M K 2005 [*Science*]{} [**309**]{} 1346. Petta J R, Johnson A C, Taylor J M, Laird E A, Yacoby A, Lukin M D, Marcus C M, Hanson M P and Gossard A C 2005 [*Science*]{} [**309**]{} 2180. Chiaverini J, Britton J, Leibfried D, Knill E, Barrett M D, Blakestad R B, Itano W M, Jost J D, Langer C, Ozeri R, Schaetz T, and Wineland D J 2005 [*Science*]{} [**308**]{} 997. Vion D, Aassime A, Cottet A, Joyez P, Pothier H, Urbina C, Esteve D and Devoret M H 2002 [*Science*]{} [**296**]{} 886. Coish W A and Loss Daniel 2005 [*Phys. Rev. B*]{} [**72**]{} 125337. Klauser D, Coish W A and Loss Daniel 2006 [*Phys. Rev. B*]{} [**73**]{} 205302. Huang Z, Sadiek G and Kais S 2006 [*J. Chem. Phys.*]{} [**124**]{} 144513. Blaauboer M and DiVincenzo D P 2005 [*Phys. Rev. Lett.*]{} [**95**]{} 160402. K. V. Kavokin 2001 [*Phys. Rev. B*]{} [**64**]{} 075305. S. C. Badescu, Y. Lyanda-Geller and T. L. Reinecke 2005 [*AIP Conference Proceedings*]{} [**772**]{} 763. O. Olendski and T. V. Shahbazyan 2007 [*Phys. Rev. B*]{} [**75**]{} 041306(R). Mircea Trif, Vitaly N. Golovach and Daniel Loss 2007 [*Phys. Rev. B*]{} [**75**]{} 085307. L. A. Wu and D. A. Lidar 2002 [*Phys. Rev. A*]{} [**65**]{} 042318. Lian-Ao Wu and Daniel A. Lidar 2002 [*Phys. Rev. B*]{} [**66**]{} 062314. D. Stepanenko and N. E. Bonesteel 2004 [*Phys. Rev. Lett.*]{} [**93**]{} 140501. Dyakonov M I and Perel V I 1984 [*Optical Orientation*]{} ed Meier F and Zakharchenya B P (Amsterdam: North-Holland) p 11. Huang Z and Kais S 2006 [*Phys. Rev. A*]{} [**73**]{} 022339. Wootters W K 1998 [*Phys. Rev. Lett.*]{} [**80**]{} 2245. Osterloh A, Amico L, Falci G, and Fazio Rosario 2002 [*Nature*]{} [**416**]{} 608. Huang Z and Kais S 2005 [*Int. J. Quant. Information*]{} [**3**]{} 483. 
![Time evolution of entanglement and spin correlation in the z-direction in the absence of external magnetic field and for very small anisotropy $J=A/5000$. The nuclear magnetic field $h_{2n}=A/500$, where $A$ is the hyperfine constant.[]{data-label="fig1"}](fig1){width="100.00000%" height="0.6\textheight"} ![Time evolution of entanglement and spin correlation in the z-direction for very small anisotropy $J=A/5000$ with a large external magnetic field $\epsilon_z=A/50 \gg h_{2n}$.[]{data-label="fig2"}](fig2){width="100.00000%" height="0.6\textheight"} ![Dynamics of entanglement at different exchange interaction differences $A/150$ and $A/130$ with the external magnetic field $\epsilon_z$ set at 0 and $A/50$.[]{data-label="fig3"}](fig3){width="100.00000%" height="0.6\textheight"} ![The phase diagram of entanglement. The saturation value of the entanglement is plotted vs. the exchange interaction difference $J$ at different values of the external magnetic field $\epsilon_z=0$, $A/500$, and $A/50$. The case corresponding to $\epsilon_z=A/50$ is replotted in the inner panel to demonstrate the symmetric behavior of $E$ at high external magnetic field.[]{data-label="fig4"}](fig4){width="100.00000%" height="0.6\textheight"} ![The energy differences $\omega_{n,k_1}$ (solid line) and $\omega_{n,k_2}$ (dashed line) plotted versus the exchange interaction difference $J$ for $h_{2n}=A/500$ and $\epsilon_z=A/50$.[]{data-label="fig5"}](fig5){width="100.00000%" height="0.6\textheight"} ![Dynamics of entanglement with the external magnetic field fixed at $\epsilon_z=A/100$ while the exchange interaction difference $J$ is set at values below the critical value ($J_c=A/41.7$), $J=A/45$, $A/50$ and $A/55$. 
The inner panel shows the corresponding entanglement decay profiles.[]{data-label="fig6"}](fig6){width="100.00000%" height="0.6\textheight"} ![Dynamics of entanglement with the external magnetic field fixed at $\epsilon_z=A/100$ while the exchange interaction difference $J$ is set at values above the critical value ($J_c=A/41.7$), $J=A/35$, $A/37$ and $A/40$. The inner panel shows the corresponding entanglement decay profiles.[]{data-label="fig7"}](fig7){width="100.00000%" height="0.6\textheight"}
--- abstract: 'The associated production of a Higgs boson with a $b$ quark is a discovery channel for the lightest MSSM neutral Higgs boson. We consider the SUSY QCD contributions from squarks and gluinos and discuss the decoupling properties of these effects. A detailed comparison of our exact ${\cal O}(\alpha_s)$ results with those of a widely used effective Lagrangian approach, the $\Delta_b$ approximation, is presented. The $\Delta_b$ approximation is shown to accurately reproduce the exact one-loop SQCD result to within a few percent over a wide range of parameter space.' author: - 'S. Dawson$^{a}$, C. B. Jackson$^{b}$, P. Jaiswal$^{a,c}$' bibliography: - 'mssm\_decoup.bib' title: ' SUSY QCD corrections to Higgs-b Production: Is the $\Delta_b$ Approximation Accurate?' --- Introduction ============ Once a light Higgs-like particle is discovered it will be critical to determine if it is the Higgs boson predicted by the Standard Model. The minimal supersymmetric Standard Model (MSSM) presents a comparison framework in which to examine the properties of a putative Higgs candidate. The MSSM Higgs sector contains $5$ Higgs bosons: $2$ neutral bosons, $h$ and $H$, a pseudoscalar boson, $A$, and $2$ charged bosons, $H^\pm$. At the tree level the theory is described by just $2$ parameters, which are conveniently chosen to be $M_A$, the mass of the pseudoscalar boson, and $\tan\beta$, the ratio of vacuum expectation values of the $2$ neutral Higgs bosons. Even when radiative corrections are included, the theory is highly predictive[@Djouadi:2005gj; @Gunion:1989we; @Carena:2002es]. In the MSSM, the production mechanisms for the Higgs bosons can be significantly different from those in the Standard Model. For large values of $\tan\beta$, the heavier Higgs bosons, $A$ and $H$, are predominantly produced in association with $b$ quarks.
Even for $\tan\beta\sim 5$, the production rate in association with $b$ quarks is similar to that from gluon fusion for $A$ and $H$ production[@Dittmaier:2011ti]. For the lighter Higgs boson, $h$, for $\tan\beta \gsim 7$ the dominant production mechanism at both the Tevatron and the LHC is production with $b$ quarks for light $M_A$ ($\lsim 200~GeV$), where the $b {\overline b} h$ coupling is enhanced. Both the Tevatron[@Benjamin:2010xb] and the LHC experiments[@Chatrchyan:2011nx] have presented limits on Higgs production in association with $b$ quarks, searching for the decays $h\rightarrow \tau^+\tau^-$ and $b \overline{b}$[^1]. These limits, obtained in the context of the MSSM, are sensitive to the $b$-squark and gluino loop corrections which we consider here. The rates for $bh$ associated production at the LHC and the Tevatron have been extensively studied[@Dawson:2005vi; @Campbell:2004pu; @Maltoni:2003pn; @Dawson:2004sh; @Dittmaier:2003ej; @Dicus:1998hs; @Dawson:2003kb; @Maltoni:2005wd; @Campbell:2002zm; @Carena:2007aq; @Carena:1998gk] and the NLO QCD corrections are well understood, both in the $4$- and $5$-flavor number parton schemes[@Dawson:2005vi; @Campbell:2004pu; @Maltoni:2003pn]. In the $4$-flavor number scheme, the lowest order processes for producing a Higgs boson and a $b$ quark are $gg\rightarrow b {\overline b}h$ and $ q {\overline q}\rightarrow b {\overline b} h$[@Dittmaier:2003ej; @Dawson:2004sh; @Dicus:1998hs]. In the $5$-flavor number scheme, the lowest order process is $b g \rightarrow b h$ (${\overline {b}} g \rightarrow {\overline {b}}h$). The two schemes represent different orderings of perturbation theory and calculations in the two schemes produce rates which are in qualitative agreement[@Campbell:2004pu; @Dittmaier:2011ti]. In this paper, we use the $5$-flavor number scheme for simplicity.
The resummation of threshold logarithms[@Field:2007ye], electroweak corrections[@Dawson:2010yz; @Beccaria:2010fg] and SUSY QCD corrections[@Dawson:2007ur] have also been computed for $bh$ production in the $5$-flavor number scheme. Here, we focus on the role of squark and gluino loops. The properties of the SUSY QCD corrections to the $b {\overline b} h$ vertex, both for the decay $h\rightarrow b {\overline b} $[@Dabelstein:1995js; @Hall:1993gn; @Carena:1999py; @Guasch:2003cv] and the production, $b {\overline b}\rightarrow h$[@Haber:2000kq; @Harlander:2003ai; @Guasch:2003cv; @Dittmaier:2003ej], were computed long ago. The contributions from $b$ squarks and gluinos to the lightest MSSM Higgs boson mass are known at $2$-loops[@Heinemeyer:2004xw; @Brignole:2002bz], while the $2$-loop SQCD contributions to the $b{\overline b}h$ vertex are known in the limit in which the Higgs mass is much smaller than the squark and gluino masses[@Noth:2010jy; @Noth:2008tw]. The contributions of squarks and gluinos to the on-shell $b {\overline b} h$ vertex are non-decoupling for heavy squark and gluino masses and decoupling is only achieved when the pseudoscalar mass, $M_A$, also becomes large. An effective Lagrangian approach, the $\Delta_b$ approximation[@Carena:1999py; @Hall:1993gn], can be used to approximate the SQCD contributions to the on-shell $b {\overline b}h$ vertex and to resum the $(\alpha_s \tan\beta/M_{SUSY})^n$ enhanced terms. The numerical accuracy of the $\Delta_b$ effective Lagrangian approach has been examined for a number of cases. The $2$-loop contributions to the lightest MSSM Higgs boson mass of ${\cal O}(\alpha_b\alpha_s)$ were computed in Refs. [@Heinemeyer:2004xw] and [@Brignole:2002bz], and it was found that the majority of these corrections could be absorbed into a $1$-loop contribution by defining an effective $b$ quark mass using the $\Delta_b$ approach.
The sub-leading contributions to the Higgs boson mass (those not absorbed into $\Delta_b$) are then of ${\cal O}(1~GeV)$. The $\Delta_b$ approach also yields an excellent approximation to the SQCD corrections for the decay process $h\rightarrow b {\overline b}$[@Guasch:2003cv]. It is particularly interesting to study the accuracy of the $\Delta_b$ approximation for production processes where one of the $b$ quarks is off-shell. The SQCD contributions from squarks and gluinos to the inclusive Higgs production rate in association with $b$ quarks have been studied extensively in the 4FNS in Ref. [@Dittmaier:2006cz], where the lowest order contribution is $gg\rightarrow b {\overline b}h$. In the 4FNS, the inclusive cross section including the exact 1-loop SQCD corrections is reproduced to within a few percent using the $\Delta_b$ approximation. However, the accuracy of the $\Delta_b$ approximation for MSSM neutral Higgs boson production in the 5FNS has been studied for only a small set of MSSM parameters in Ref. [@Dawson:2007ur]. The major new result of this paper is a detailed study of the accuracy of the $\Delta_b$ approach in the 5FNS for the $bg\rightarrow bh$ production process. In this case, one of the $b$ quarks is off-shell and there are contributions which are not contained in the effective Lagrangian approach. The plan of the paper is as follows: Section $2$ contains a brief review of the MSSM Higgs and $b$ squark sectors and also a review of the effective Lagrangian approximation. The calculation of Ref. [@Dawson:2007ur] is summarized in Section 2. We include SQCD contributions to $bh$ production which are enhanced by $m_b \tan\beta$ which were omitted in Ref. [@Dawson:2007ur]. Analytic results for the SQCD corrections to $bg\rightarrow bh$ in the extreme mixing scenarios in the $b$ squark sector are presented in Section 3. Section 4 contains numerical results for the $\sqrt{s}=7$ TeV LHC. Finally, our conclusions are summarized in Section 5.
Detailed analytic results are relegated to a series of appendices. Basics ====== MSSM Framework -------------- In the simplest version of the MSSM there are two Higgs doublets, $H_u$ and $H_d$, which break the electroweak symmetry and give masses to the $W$ and $Z$ gauge bosons. The neutral Higgs boson masses are given at tree level by, $$M_{h,H}^2={1\over 2} \biggl[M_A^2+M_Z^2\mp\sqrt{(M_A^2+M_Z^2)^2-4M_A^2M_Z^2\cos^2 2\beta}\biggr] \, , \label{mhtree}$$ and the angle, $\alpha$, which diagonalizes the neutral Higgs mass matrix is $$\tan 2\alpha=\tan 2\beta \biggl( {M_A^2+M_Z^2\over M_A^2-M_Z^2}\biggr) \, . \label{alphatree}$$ In practice, the relations of Eqs. \[mhtree\] and \[alphatree\] receive large radiative corrections which must be taken into account in numerical studies. We use the program FeynHiggs[@Heinemeyer:1998yj; @Degrassi:2002fi; @Heinemeyer:1998np] to generate the Higgs masses and an effective mixing angle, $\alpha_{eff}$, which incorporates higher order effects. The scalar partners of the left- and right-handed $b$ quarks, ${\tilde b}_L$ and ${\tilde b}_R$, are not mass eigenstates, but mix according to, $$L_M=-({\tilde b}^*_L, {\tilde b}^*_R)M_{\tilde b}^2 \left( \begin{array}{c} {\tilde b}_L \\ {\tilde b}_R \end{array} \right)\, .$$ The ${\tilde b}$ squark mass matrix is, $$M_{{\tilde b}}^2=\left( \begin{array}{cc} {\tilde m}_L^2 & m_b X_b\\ m_b X_b & {\tilde m}_R^2\\ \end{array} \right)\, , \label{squark}$$ and we define, $$\begin{aligned} X_b&=& A_b-\mu\tan\beta\nonumber \\ {\tilde m}^2_L&=& { M}_Q^2 +m_b^2+M_Z^2\cos 2\beta (I_3^b-Q_b\sin^2\theta_W)\nonumber \\ {\tilde m}^2_R&=& {M}_D^2 +m_b^2+M_Z^2\cos 2\beta Q_b\sin^2\theta_W\, .\end{aligned}$$ ${ M}_{Q,D}$ are the soft SUSY breaking masses, $I_3^b=-1/2$, and $Q_b=-1/3$. The parameter $A_b$ is the trilinear scalar coupling of the soft supersymmetry breaking Lagrangian and $\mu$ is the Higgsino mass parameter.
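As a quick cross-check of the sbottom sector, numerically diagonalizing the mass matrix of Eq. \[squark\] must reproduce the closed-form mass eigenvalues and the tree-level mixing angle quoted below. A minimal sketch (the numerical inputs are illustrative assumptions, not a benchmark point):

```python
import numpy as np

# Illustrative inputs in GeV (assumptions, not fit values).
m_b, MZ, sw2 = 4.7, 91.19, 0.231
MQ, MD, A_b, mu, tanb = 1000.0, 1100.0, 500.0, 800.0, 30.0
cos2b = (1 - tanb**2) / (1 + tanb**2)
I3b, Qb = -0.5, -1.0 / 3.0

X_b = A_b - mu * tanb
mL2 = MQ**2 + m_b**2 + MZ**2 * cos2b * (I3b - Qb * sw2)
mR2 = MD**2 + m_b**2 + MZ**2 * cos2b * Qb * sw2

# Numerical diagonalization of the 2x2 sbottom mass matrix.
M2 = np.array([[mL2, m_b * X_b], [m_b * X_b, mR2]])
eig = np.sort(np.linalg.eigvalsh(M2))

# Closed-form eigenvalues and tree-level sin(2 theta_b).
disc = np.sqrt((mL2 - mR2)**2 + 4 * m_b**2 * X_b**2)
Mb1_2 = 0.5 * (mL2 + mR2 - disc)   # lighter eigenstate
Mb2_2 = 0.5 * (mL2 + mR2 + disc)   # heavier eigenstate
s2tb = 2 * m_b * X_b / (Mb1_2 - Mb2_2)

assert np.allclose(eig, [Mb1_2, Mb2_2])
print(np.sqrt(Mb1_2), np.sqrt(Mb2_2), s2tb)
```

The eigenvalue ordering matches the $\mp$ convention of the text, with ${\tilde b}_1$ the lighter state.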
The $b$ squark mass eigenstates, ${\tilde b}_1$ and ${\tilde b}_2$, are related to ${\tilde b}_L$ and ${\tilde b}_R$ through the $b$-squark mixing angle ${\tilde\theta_b}$, $$\begin{aligned} {\tilde b}_1&=& \cos {\tilde\theta_b} {\tilde b}_L +\sin{\tilde\theta_b} {\tilde b}_R \nonumber \\ {\tilde b}_2&=& -\sin{\tilde\theta_b} {\tilde b}_L +\cos{\tilde\theta_b} {\tilde b}_R\, . \nonumber \\\end{aligned}$$ At tree level, $$\sin 2 {\tilde\theta_b} ={2m_b(A_b-\mu \tan\beta)\over M_{{\tilde b}_1}^2 -M_{{\tilde b}_2}^2} \label{s2bdef}$$ and the sbottom mass eigenvalues are, $$M^2_{{\tilde b}_1,{\tilde b}_2} ={1\over 2}\biggl[{\tilde m}_L^2+{\tilde m}_R^2\mp \sqrt{({\tilde m}_L^2-{\tilde m}_R^2)^2+4m_b^2 X_b^2}\biggr] \, .$$ $\Delta_{b}$ Approximation: The Effective Lagrangian Approach {#sec:db} ------------------------------------------------------------- Loop corrections which are enhanced by powers of $\alpha_s\tan\beta$ can be included in an effective Lagrangian approach. At tree level, there is no $ {\overline \psi}_L b_R H_u$ coupling in the MSSM, but such a coupling arises at one loop and gives an effective interaction[@Carena:1999py; @Hall:1993gn; @Guasch:2003cv][^2], $$L_{eff}=-\lambda_b {\overline \psi}_L\biggl(H_d+{\Delta_b\over \tan\beta} H_u\biggr)b_R+h.c. \, . \label{effdef}$$ Eq. \[effdef\] shifts the $b$ quark mass from its tree level value, [^3] $$m_b\rightarrow{\lambda_b v_1\over \sqrt{2}} (1+\Delta_b)\, ,$$ and also implies that the Yukawa couplings of the Higgs bosons to the $b$ quark are shifted from the tree level predictions. This shift of the Yukawa couplings can be included with an effective Lagrangian approach[@Carena:1999py; @Guasch:2003cv], $$\begin{aligned} L_{eff}&=&-{m_b\over v_{SM}}\biggl({1\over 1+\Delta_b}\biggr) \biggl(-{\sin \alpha \over \cos\beta}\biggr)\biggl(1-{\Delta_b\over \tan\beta \tan \alpha}\biggr) {\overline b} b h\, . \label{mbdef}\end{aligned}$$ The Lagrangian of Eq.
\[mbdef\] has been shown to sum all terms of ${\cal O}(\alpha_s\tan\beta)^n$ for large $\tan\beta$[@Carena:1999py; @Hall:1993gn].[^4] This effective Lagrangian has been used to compute the SQCD corrections to both the inclusive production process, $b {\overline b} \rightarrow h$, and the decay process, $h\rightarrow b {\overline b}$, and yields results which are within a few percent of the exact one-loop SQCD calculations[@Guasch:2003cv; @Dittmaier:2006cz]. The expression for $\Delta_b$ is found in the limit $m_b << M_h, M_Z <<M_{{\tilde b}_1}, M_{{\tilde b}_2}, M_{\tilde g}$ . The $1$-loop contribution to $\Delta_b$ from sbottom/gluino loops is[@Carena:1994bv; @Hall:1993gn; @Carena:1999py] $$\Delta_b={2\alpha_s(\mu_S)\over 3 \pi} M_{\tilde g} \mu \tan\beta I(M_{\tilde {b_1}}, M_{\tilde{ b_2}}, M_{\tilde g})\, , \label{db}$$ where the function $I(a,b,c)$ is, $$I(a,b,c)={1\over (a^2-b^2)(b^2-c^2)(a^2-c^2)}\biggl\{a^2b^2\log\biggl({a^2\over b^2}\biggr) +b^2c^2\log\biggl({b^2\over c^2}\biggr) +c^2a^2\log\biggl({c^2\over a^2}\biggr)\biggr\}\, ,$$ and $\alpha_s(\mu_S)$ should be evaluated at a typical squark or gluino mass. The $2-$loop QCD corrections to $\Delta_b$ have been computed and demonstrate that the appropriate scale at which to evaluate $\Delta_b$ is indeed of the order of the heavy squark and gluino masses[@Noth:2010jy; @Noth:2008tw]. The renormalization scale dependence of $\Delta_b$ is minimal around $\mu_0/3$, where $\mu_0\equiv (M_{\tilde g}+m_{\tilde {b}_1}+m_{\tilde{b}_2})/3$. In our language this is a high scale, of order the heavy SUSY particle masses. The squarks and gluinos are integrated out of the theory at this high scale and their effects contained in $\Delta_b$. The effective Lagrangian is then used to calculate light Higgs production at a low scale, which is typically the electroweak scale, $\sim 100~GeV$. Using the effective Lagrangian of Eq. 
\[effdef\], which we term the Improved Born Approximation (or $\Delta_b$ approximation), the cross section is written in terms of the effective coupling, $$g_{bbh}^{\Delta_b}\equiv g_{bbh}\biggl({1\over 1+\Delta_b}\biggr) \biggl(1-{\Delta_b\over \tan\beta \tan \alpha}\biggr) \, , \label{effcoup}$$ where $$g_{bbh}=-\biggl({\sin\alpha\over \cos\beta}\biggr) {{\overline m_b}(\mu_R)\over v_{SM}}\, .$$ We evaluate ${\overline{m_b}}(\mu_R)$ using the $2-$loop ${\overline{MS}}$ value at a scale $\mu_R$ of ${\cal O}(M_h)$, and use the value of $\alpha_{eff}$ determined from FeynHiggs. The Improved Born Approximation consists of rescaling the tree level cross section, $\sigma_0$, by the coupling of Eq. \[effcoup\][^5], $$\sigma_{IBA}=\biggl({g_{bbh}^{\Delta_b}\over g_{bbh}}\biggr)^2\sigma_0 \, . \label{sigibadef}$$ The Improved Born Approximation has been shown to accurately reproduce the full SQCD calculation of $pp \rightarrow {\overline t} b H^+$[@Berger:2003sm; @Dittmaier:2009np]. The one-loop result including the SQCD corrections for $b g\rightarrow b h$ can be written as, $$\begin{aligned} \sigma_{SQCD}&\equiv& \sigma_{IBA}\biggl(1+\Delta_{SQCD}\biggr) \, ,\end{aligned}$$ where $\Delta_{SQCD}$ is found from the exact SQCD calculation summarized in Appendix B. The Improved Born Approximation involves making the replacement in the tree level Lagrangian, $$m_b\rightarrow {m_b\over 1+\Delta_b}\, . \label{mdef}$$ Consistency requires that this substitution also be made in the squark mass matrix of Eq. \[squark\][@Hofer:2009xb; @Accomando:2011jy] $$M_{{\tilde b}}^2\rightarrow \left( \begin{array}{cc} {\tilde m}_L^2 & \biggl({m_b\over 1+\Delta_b}\biggr) X_b\\ \biggl({m_b\over 1+\Delta_b}\biggr) X_b & {\tilde m}_R^2\\ \end{array} \right)\, . \label{squark2}$$ The effects of the substitution of Eq. \[mdef\] in the $b$-squark mass matrix are numerically important, although they generate contributions which are formally higher order in $\alpha_s$. Eqs. 
\[db\] and \[squark2\] can be solved iteratively for $M_{{\tilde b}_1}$, $M_{{\tilde b}_2}$ and $\Delta_b$ using the procedure of Ref. [@Hofer:2009xb][^6]. SQCD Contributions to $g b\rightarrow b h$ ------------------------------------------ The contributions from squark and gluino loops to the $g b\rightarrow b h$ process have been computed in Ref. [@Dawson:2007ur] in the $m_b=0$ limit. We extend that calculation by including terms which are enhanced by $m_b \tan\beta$ and provide analytic results in several useful limits. The tree level diagrams for $g(q_1) + b(q_2) \to b(p_b) + h(p_h)$ are shown in Fig. \[fg:bghb\_feyn\]. ![Feynman diagrams for $ g(q_1)+b (q_2)\rightarrow b(p_b)+ h(p_h)$.[]{data-label="fg:bghb_feyn"}](bghb_feyn_lo.eps) We define the following dimensionless spinor products $$\begin{aligned} M_{s}^{\mu} & = & \frac{\overline{u}\left(p_{b}\right)\left(\slashed{q}_{1}+\slashed{q}_{2}\right)\gamma^{\mu}u\left(q_{2}\right)}{s}\nonumber \\ M_{t}^{\mu} & = & \frac{\overline{u}\left(p_{b}\right)\gamma^{\mu}\left(\slashed{p}_{b}-\slashed{q}_{1}\right)u\left(q_{2}\right)}{t}\nonumber \\ M_{1}^{\mu} & = & q_{2}^{\mu}\frac{\overline{u}\left(p_{b}\right)u\left(q_{2}\right)}{u}\nonumber \\ M_{2}^{\mu} & = & \frac{\overline{u}\left(p_{b}\right)\gamma^{\mu}u\left(q_{2}\right)}{m_{b}}\nonumber \\ M_{3}^{\mu} & = & p_{b}^{\mu}\frac{\overline{u}\left(p_{b}\right)\slashed{q}_{1}u\left(q_{2}\right)}{m_{b}t}\nonumber \\ M_{4}^{\mu} & = & q_{2}^{\mu}\frac{\overline{u}\left(p_{b}\right)\slashed{q}_{1}u\left(q_{2}\right)}{m_{b}s}\, , \label{eq: SME}\end{aligned}$$ where $s=(q_1+q_2)^2, t=(p_b-q_1)^2$ and $u=(p_b-q_2)^2$. In the $m_b=0$ limit, the tree level amplitude depends only on $M_s^\mu$ and $M_t^\mu$, and $M_1^\mu$ is generated at one-loop. When the effects of the $b$ mass are included, $M_2^\mu$, $M_3^\mu$, and $M_4^\mu$ are also generated. The tree level amplitude is $$\begin{aligned} \mathcal{A}_{\alpha\beta}^{a}\mid_0 & = & -g_{s} g_{bbh}\left(T^{a}\right)_{\alpha\beta}\epsilon_{\mu}(q_1)\left\{ M_{s}^{\mu}+M_{t}^{\mu}\right\} \, ,\end{aligned}$$ and the one loop contribution can be written as $$\mathcal{A}_{\alpha\beta}^{a} =-\frac{\alpha_{s}(\mu_R)}{4\pi}g_{s}g_{bbh}\left(T^{a}\right)_{\alpha\beta}\sum_{j}X_{j}M_{j}^{\mu}\epsilon_{\mu}(q_1)\, . \label{onedef}$$ In the calculations to follow, only the non-zero $X_j$ coefficients are listed and we neglect terms of ${\cal O}(m_b^2/s)$ if they are not enhanced by $\tan\beta$. The renormalization of the squark and gluino contributions is performed in the on-shell scheme and has been described in Refs. [@Dawson:2007ur; @Berge:2007dz; @Noth:2010jy].
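Before turning to the self-energies, we note that the iterative determination of $\Delta_b$ and the sbottom masses discussed in Section 2 (Eqs. \[db\] and \[squark2\]) amounts to a simple fixed-point loop. A sketch under illustrative assumptions (all numerical inputs, and the frozen SUSY-scale value of $\alpha_s$, are ours, not benchmark values):

```python
import math

def I(a, b, c):
    """Loop function I(a,b,c) of Eq. (db); symmetric in its arguments."""
    a2, b2, c2 = a * a, b * b, c * c
    num = (a2 * b2 * math.log(a2 / b2) + b2 * c2 * math.log(b2 / c2)
           + c2 * a2 * math.log(c2 / a2))
    return num / ((a2 - b2) * (b2 - c2) * (a2 - c2))

def sbottom_masses(m_b_eff, mL2, mR2, X_b):
    """Eigenvalues of the sbottom matrix with off-diagonal m_b_eff*X_b."""
    disc = math.sqrt((mL2 - mR2)**2 + 4 * (m_b_eff * X_b)**2)
    return (math.sqrt(0.5 * (mL2 + mR2 - disc)),
            math.sqrt(0.5 * (mL2 + mR2 + disc)))

# Illustrative inputs (GeV); alpha_s frozen at a typical SUSY-scale value.
alpha_s, m_b, Mg, mu, tanb = 0.09, 4.7, 1200.0, 800.0, 30.0
mL2, mR2, X_b = 1000.0**2, 1100.0**2, -23500.0

# Fixed point: Delta_b feeds back into the off-diagonal entry of the
# sbottom mass matrix through the replacement m_b -> m_b/(1+Delta_b).
Delta_b = 0.0
for _ in range(50):
    Mb1, Mb2 = sbottom_masses(m_b / (1.0 + Delta_b), mL2, mR2, X_b)
    new = (2.0 * alpha_s / (3.0 * math.pi)) * Mg * mu * tanb * I(Mb1, Mb2, Mg)
    if abs(new - Delta_b) < 1e-12:
        break
    Delta_b = new

print(Delta_b, Mb1, Mb2)
```

In the equal-mass limit $I(m,m,m)=1/(2m^2)$, which provides a convenient numerical check of the implementation.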
The bottom quark self-energy is $$\begin{aligned} \Sigma_{b}\left(p\right) & = & \slashed{p} \biggl(\Sigma_{b}^{V}(p^2)-\Sigma_{b}^{A}(p^2) \gamma_{5}\biggr)+m_{b}\Sigma_{b}^{S}(p^2)\, .\end{aligned}$$ The $b$ quark fields are renormalized as $b\rightarrow \sqrt{Z_b^V} b$ with $Z_b^V\equiv 1+\delta Z_b^V$. The contribution from the counter-terms to the self-energy is, $$\begin{aligned} \Sigma_{b}^{\mathrm{ren}}\left(p\right) & = & \Sigma_{b}\left(p\right)+\delta\Sigma_b(p)\nonumber \\ \delta\Sigma_{b}\left(p\right)&=&\slashed{p}\left(\delta Z_{b}^{V}-\delta Z_{b}^{A}\gamma_{5}\right)-m_{b}\delta Z_{b}^{V}-\delta m_{b}\, .\end{aligned}$$ Neglecting the $\gamma_5$ contribution, the renormalized self-energy is then given by $$\begin{aligned} \Sigma_{b}^{\mathrm{ren}}\left(p\right) & = & \left(\slashed{p}-m_{b}\right) \left(\Sigma_{b}^{V}(p^2)+\delta Z_{b}^{V}\right)+m_{b}\left( \Sigma_{b}^{S}(p^2)+\Sigma_{b}^{V}(p^2)-\frac{\delta m_{b}}{m_{b}}\right)\, .\end{aligned}$$ The on-shell renormalization condition implies $$\begin{aligned} \left.\Sigma_{b}^{\mathrm{ren}}\left(p\right)\right|_{\slashed{p}=m_{b}} & = & 0\\ \lim_{\slashed{p}\rightarrow m_b} \biggl( \frac{\Sigma_{b}^{\mathrm{ren}}\left(p\right)}{\slashed{p}-m_{b}} \biggr) & = & 0 \, .\end{aligned}$$ The mass and wavefunction counter-terms are[^7] $$\begin{aligned} \frac{\delta m_{b}}{m_{b}} & = & \left[\Sigma_{b}^{S}\left(p^{2}\right)+\Sigma_{b}^{V}\left(p^{2}\right)\right]_{p^{2}=m_{b}^{2}}\nonumber \\ & = & \frac{\alpha_{s}(\mu_R)}{3\pi}\sum_{i=1}^{2}\left[\left(-1\right)^{i}\frac{M_{\tilde{g}}}{m_{b}}s_{2\tilde{b}}B_{0}-B_{1}\right]\left(0;M_{\tilde{g}}^{2},M_{\tilde{b}_{i}}^{2}\right)\label{eq: del mb}\\ \delta Z_{b}^{V} & = & -\left.\Sigma_{b}^{V}\left(p^{2}\right)\right|_{p^{2}=m_{b}^{2}} -2m_b^2 {\partial\over \partial p^2} \biggl(\Sigma_b^V(p^2) +\Sigma_b^S(p^2)\biggr)\mid_{p^2=m_b^2}\nonumber \\ & = & \frac{\alpha_{s}(\mu_R)}{3\pi}\sum_{i=1}^{2}\biggl[B_{1} +2m_b^2 B_1^\prime-(-1)^i 2m_b M_{\tilde g} s_{2{\tilde b}}B_0^\prime \biggr] \left(0;M_{\tilde{g}}^{2},M_{\tilde{b}_{i}}^{2}\right) \, ,\label{eq:delZ}\end{aligned}$$ where we consistently neglect the $b$ quark mass if it is not enhanced by $\tan\beta$. The Passarino-Veltman functions $B_0\left(0;M_{\tilde{g}}^{2},M_{\tilde{b}_{i}}^{2}\right)$ and $B_1\left(0;M_{\tilde{g}}^{2},M_{\tilde{b}_{i}}^{2}\right)$ are defined in Appendix A. Using the tree level relationship of Eq. \[s2bdef\], the mass counterterm can be written as, $$\begin{aligned} {\delta m_b\over m_b} &=& {2\alpha_s(\mu_R)\over 3\pi} M_{\tilde g} A_b I(M_{{\tilde b}_1},M_{{\tilde b}_2}, M_{\tilde g}) -\Delta_b - \frac{\alpha_{s}(\mu_R)}{3\pi}\sum_{i=1}^{2} B_1\left(0; M_{\tilde{g}}^{2}, M_{\tilde{b}_{i}}^{2}\right) \, . \label{mbnice}\end{aligned}$$ The external gluon is renormalized as $g_\mu^A\rightarrow \sqrt{Z_3}g_\mu^A= \sqrt{1+\delta Z_3}g_\mu^A$ and the strong coupling renormalization is $g_s\rightarrow Z_g g_s$ with $\delta Z_g=-\delta Z_3/2$.
We renormalize $g_s$ using the ${\overline {MS}}$ scheme with the heavy squark and gluino contributions subtracted at zero momentum[@Nason:1987xz], $$\delta Z_3=- {\alpha_s(\mu_R)\over 4 \pi} \biggl[ {1\over 6} \Sigma_{{\tilde q}_i} \biggl( {4\pi\mu_R^2\over M_{{\tilde q}_i}^2} \biggr)^\epsilon +2\biggl( {4\pi\mu_R^2\over M_{\tilde g}^2} \biggr)^\epsilon \biggr] {1\over \epsilon}\Gamma(1+\epsilon)\, .$$ In order to avoid overcounting the effects which are contained in $g_{bbh}^{\Delta_b}$ to ${\cal O} (\alpha_s)$, we need the additional counterterm, $$\delta_{CT}=\Delta_b\biggl( 1+{1\over \tan\beta\tan\alpha}\biggr) \, . \label{ctdef}$$ The total contribution of the counterterms is, $$\sigma_{CT}=\sigma_{IBA}\biggl( 2 \delta Z_b^V+\delta Z_3+2 \delta Z_g+2{\delta m_b\over m_b}+2 \delta_{CT}\biggr) =2\sigma_{IBA}\biggl( \delta Z_b^V+{\delta m_b\over m_b}+ \delta_{CT}\biggr) \, . \label{cttot}$$ The $\tan\beta$ enhanced contributions from $\Delta_b$ cancel between Eqs. \[mbnice\] and \[ctdef\]. The expressions for the contributions to the $X_i$, as defined in Eq. \[onedef\], are given in Appendix B for arbitrary squark and gluino masses, and separately for each $1-$ loop diagram. Results for Maximal and Minimal Mixing in the $b$-Squark Sector =============================================================== Maximal Mixing -------------- The squark and gluino contributions to $bg\rightarrow bh$ can be examined analytically in several scenarios. In the first scenario, $$\mid {\tilde m}_L^2 -{\tilde m}_R^2\mid << {m_b\over 1+\Delta_b} \mid X_b\mid\, .$$ We expand in powers of ${\mid {\tilde m}_L^2 -{\tilde m}_R^2\mid \over m_b X_b}$. 
In this case the sbottom masses are nearly degenerate, $$\begin{aligned} M_S^2&\equiv &{1\over 2} \biggl[ M_{{\tilde b}_1}^2+M_{{\tilde b}_2}^2 \biggr] \nonumber \\ \mid M_{{\tilde b}_1}^2-M_{{\tilde b}_2}^2\mid &=& \biggl({2m_b \mid X_b\mid \over 1+\Delta_b}\biggr) \biggl(1+{( {\tilde m}_L^2 -{\tilde m}_R^2)^2 (1+\Delta_b)^2\over 8 m_b^2 X_b^2}\biggr)<< M_S^2 \, .\end{aligned}$$ This scenario is termed maximal mixing since $$\sin 2{{\tilde{\theta}}}_b\sim 1- {({\tilde m}_L^2 -{\tilde m}_R^2)^2(1+\Delta_b)^2\over 8 m_b^2 X_b^2} \, .$$ We expand the contributions of the exact one-loop SQCD calculation given in Appendix B in powers of $1/M_S$, keeping terms to ${\cal O}\biggl({M_{EW}^2\over M_S^2}\biggr)$ and assuming $M_S\sim M_{\tilde g}\sim \mu\sim A_b\sim {\tilde m}_L\sim {\tilde m}_R >> M_W, M_Z, M_h\sim M_{EW}$. In the expansions, we assume the large $\tan\beta$ limit and take $m_b\tan\beta\sim {\cal {O}}(M_{EW})$. This expansion has been studied in detail for the decay $h\rightarrow b {\overline{b}}$, with particular emphasis on the decoupling properties of the results as $M_S$ and $M_{\tilde g} \rightarrow\infty$[@Haber:2000kq]. The SQCD contributions to the decay, $h\rightarrow b {\overline b}$, extracted from our results are in agreement with those of Refs. 
[@Haber:2000kq; @Accomando:2011jy]. The final result for maximal mixing, summing all contributions, is, $$\begin{aligned} A_s &\equiv &-g_s T^A g_{bbh}M_s^\mu \biggl\{ 1+{\alpha_s(\mu_R)\over 4 \pi } X_i^s\biggr\} \nonumber \\ &=&-g_s T^A g_{bbh}M_s^\mu\biggl\{ 1+\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)_{max} +{\alpha_s(\mu_R)\over 4\pi} {s\over M_S^2}\delta \kappa_{max}\biggr\} \nonumber \\ A_t &\equiv &-g_s T^A g_{bbh}M_t^\mu \biggl\{ 1+{\alpha_s(\mu_R)\over 4 \pi } X_i^t\biggr\} \nonumber \\ &=&-g_s T^A g_{bbh} M_t^\mu\biggl\{ 1+\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)_{max}\biggr\} \nonumber \\ A_1 &\equiv &-g_s T^A g_{bbh}M_1^\mu {\alpha_s(\mu_R)\over 4 \pi } X_i^1 \nonumber \\ &=&-g_s T^A g_{bbh}M_1^\mu \biggl( -{\alpha_s(\mu_R) u\over 2 \pi M_S^2} \biggr) \delta \kappa_{max}\, . \label{ansmax}\end{aligned}$$ The contribution which is a rescaling of the $b {\overline b} h$ vertex is, $$\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)_{max}= \biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(1)}_{max} +\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(2)}_{max}\, ,$$ where the leading order term in $M_{EW}/M_S$ is ${\cal O}(1)$, $$\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(1)}_{max} ={\alpha_s(\mu_R)\over 3 \pi} {M_{\tilde g} (X_b-Y_b)\over M_S^2} f_1(R)\, , \label{lomax}$$ with $Y_b\equiv A_b+\mu\cot\alpha$ and $R\equiv M_{\tilde g}/M_S$. Eq. \[lomax\] only decouples for large $M_S$ if the additional limit $M_A\rightarrow \infty$ is also taken[@Haber:2000kq; @Dawson:2007ur].
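The origin of this behavior is transparent from the definitions of $X_b$ and $Y_b$: the trilinear coupling $A_b$ cancels in the difference, $$X_b-Y_b=-\mu\bigl(\tan\beta+\cot\alpha\bigr)\, .$$ A sketch of the large-$M_A$ expansion, using the tree level mixing angle of Eq. \[alphatree\] (the numerical results use $\alpha_{eff}$ instead): $$\tan 2\alpha=\tan 2\beta\biggl(1+{2M_Z^2\over M_A^2}+\dots\biggr) \quad\Rightarrow\quad \alpha\simeq\beta-{\pi\over 2}+{M_Z^2\over M_A^2}\sin 2\beta\cos 2\beta\, ,$$ so that $$\tan\beta+\cot\alpha\simeq -{2M_Z^2\over M_A^2}\tan\beta\cos 2\beta\, ,$$ which makes the $1/M_A^2$ suppression explicit.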
In this limit, $$X_b-Y_b\rightarrow {2\mu M_Z^2\over M_A^2} \tan\beta\cos 2\beta + {\cal O}\biggl({M_{EW}^4\over M_A^4} \biggr)\, .$$ The subleading terms of ${\cal O}(M_{EW}^2/M_S^2)$ are,[^8] $$\begin{aligned} \biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(2)}_{max} &=&{\alpha_s(\mu_R)\over 3 \pi} \biggl\{ -{M_{\tilde g} Y_b\over M_S^2} \biggl[{M_h^2\over 12 M_S^2} f_3^{-1}(R) +{X_b^2 m_b^2\over 2 (1+\Delta_b)^2 M_S^4}f_3(R)\biggr] \nonumber \\ && -{m_b^2 X_bY_b\over 2 (1+\Delta_b)^2 M_S^4}f_3^{-1}(R)\nonumber \\ &&+ {M_Z^2\over 3 M_S^2} {c_\beta s_{\alpha+\beta}\over s_\alpha} I_3^b\biggl[ 3f_1(R)+\biggl( {2 M_{\tilde g} X_b\over M_S^2}-1\biggr)f_2(R)\biggr]\biggr\}\, \label{dg2max}\end{aligned}$$ The functions $f_i(R)$ are defined in Appendix C. The ${s\over M_S^2},{u\over M_S^2}$ terms in Eq. \[ansmax\] are not a rescaling of the lowest order vertex and cannot be obtained from the effective Lagrangian. We find, $$\delta \kappa_{max}= {1\over 4} \biggl[ f_3(R)+{1\over 9}f_3^{-1}(R) \biggr] -R{Y_b\over 2 M_S}\biggl[ f_2^\prime(R)+{1\over 9}{\hat f}_2(R)\biggr]\, . \label{dkapmax}$$ The $\delta\kappa_{max}$ term is ${\cal O}(1)$ in $M_{EW}/M_S$ and has its largest values for small $R$ and large ratios of $Y_b/M_S$, as can be seen in Fig. \[fg:dk\_max\]. Large effects can be obtained for $Y_b/M_S \sim 10$ and $M_{\tilde g} << M_S$. However, the parameters must be carefully tuned so that $A_b/M_S\lsim 1$ in order not to break color[@Gunion:1987qv]. ![Contribution of $\delta \kappa_{max}$ defined in Eq. 
\[dkapmax\] as a function of $R=M_{\tilde g}/M_S$.[]{data-label="fg:dk_max"}](g1g2.eps) The amplitude squared, summing over final state spins and colors and averaging over initial state spins and colors, including one-loop SQCD corrections is $$\begin{aligned} \left| {\overline{\mathcal{A}}}\right|_{max}^{2} & = & -\frac{2\pi\alpha_{s}(\mu_R)}{3}g_{bbh}^{2}\left[\left( \frac{u^{2}+M_{h}^{4}}{st}\right)\left[1+2 \biggl({\delta g_{bbh}\over g_{bbh}}\biggr)_{max} \right]+{\alpha_s(\mu_R)\over 2\pi} \frac{M_{h}^{2}}{M_{S}^{2}}\delta\kappa_{max}\right]\, . \label{eq:amp_sq_del}\end{aligned}$$ Note that in the cross section, the $\delta \kappa_{max}$ term is not enhanced by a power of $s$ and gives a contribution of ${\cal O}\biggl({M_{EW}^2\over M_S^2}\biggr)$. Expanding $\Delta_b$ in the maximal mixing limit, $$\Delta_b\rightarrow -{\alpha_s (\mu_S)\over 3 \pi} {M_{\tilde g}\mu\over M_S^2}\tan\beta f_1(R)+{\cal O}\biggl( {M_{EW}^4\over M_S^4}\biggr)\, . \label{mbmaxlim}$$ By comparison with Eq. \[effcoup\], $$\begin{aligned} \left| {\overline{\mathcal{A}}}\right|_{max}^{2} & = & -\frac{2\pi\alpha_{s}(\mu_R)}{3}(g_{bbh}^{\Delta_b})^{2} \left\{\left( \frac{u^{2}+M_{h}^{4}}{st}\right)\left[1+2 \biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(2)}_{max} \right]\right. \nonumber \\ && \left.+{\alpha_s(\mu_R)\over 2\pi} \frac{M_{h}^{2}}{M_{S}^{2}}\delta\kappa_{max}\right\} +{\cal O}\biggl(\biggl[{M_{EW}\over M_S}\biggr]^4,\alpha_s^3\biggr)\, . \label{maxans}\end{aligned}$$ Note that the mis-match in the arguments of $\alpha_s$ in Eqs. \[mbmaxlim\] and \[maxans\] is higher order in $\alpha_s$ than the terms considered here. The $(\delta g_{bbh}/g_{bbh})^{(2)}_{max}$ and $\delta\kappa_{max}$ terms both correspond to contributions which are not present in the effective Lagrangian approach. These terms are, however, suppressed by powers of $M_{EW}^2/M_S^2$ and the non-decoupling effects discussed in Refs. 
[@Haber:2000kq] and [@Guasch:2003cv] are completely contained in the $g_{bbh}^{\Delta_b}$ term. Minimal Mixing in the $b$ Squark Sector --------------------------------------- The minimal mixing scenario is characterized by a mass splitting between the $b$ squarks which is of order the $b$ squark mass, $\mid M_{\tilde{b}_1}^2-M_{\tilde{b}_2}^2\mid \sim M_S^2$. In this case, $$\mid {\tilde m}_L^2-{\tilde m}_R^2\mid >> {m_b \mid X_b\mid\over (1+\Delta_b)}\, ,$$ and the mixing angle in the $b$ squark sector is close to zero, $$\cos 2 {\tilde \theta}_b\sim 1-{2m_b^2X_b^2\over (M_{{\tilde b}_1}^2-M_{{\tilde b}_2}^2)^2} \biggl({1\over 1+\Delta_b}\biggr)^2 \, .$$ The non-zero subamplitudes are $$\begin{aligned} A_s &=&- g_s T^A g_{bbh}M_s^\mu\biggl\{ 1+\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)_{min} +{\alpha_s(\mu_R)\over 4\pi} {s\over {\tilde M}_g^2}\delta \kappa_{min} \biggr\}\nonumber \\ A_t&=&-g_s T^A g_{bbh} M_t^\mu\biggl\{ 1+\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)_{min}\biggr\} \nonumber \\ A_1&=&-g_s T^A g_{bbh}M_1^\mu \biggl(- {\alpha_s(\mu_R) u\over 2 \pi {\tilde M}_g^2} \biggr) \delta \kappa_{min} \, .\end{aligned}$$ Expanding the exact one-loop results of Appendix B in the minimal mixing scenario, $$\delta \kappa_{min}={ 1\over 8}\Sigma_{i=1}^2 \biggl(R_i^2\biggl[{1\over 9}f_3^{-1}(R_i) +f_3(R_i)\biggr]\biggr) +{ Y_b\over M_{\tilde g}} {R_1^2 R_2^2\over R_2^2-R_1^2} \biggl( 3h_1(R_1,R_2,1)+{8\over 3} h_1(R_1,R_2,2)\biggr) \, , \label{minkapdef}$$ where $R_i=M_{\tilde g}/M_{\tilde{b}_i}$ and the functions $f_i(R_i)$ and $h_i(R_1,R_2,n)$ are defined in Appendix C. The $\delta \kappa_{min}$ function is shown in Fig. \[fg:dk\_min\]. For large values of $Y_b/ M_{\tilde g}$ it can be significantly larger than $1$. ![Contribution of $\delta \kappa_{min}$ defined in Eq. 
\[minkapdef\] as a function of $R_i=M_{\tilde g}/M_{{\tilde b}_i}$ .[]{data-label="fg:dk_min"}](g1g2min.eps) As in the previous section, the spin and color averaged amplitude-squared is, $$\begin{aligned} \mid {\overline A}\mid_{min}^2 &=&-{2\alpha_s(\mu_R)\pi\over 3}(g_{bbh}^2)\biggl\{ {(M_h^4+u^2)\over s t}\biggl[1+2\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)_{min}\biggr]+{\alpha_s(\mu_R) \over 2\pi}\delta \kappa_{min} {M_h^2\over M_{\tilde g}^2} \biggr\} \, ,\end{aligned}$$ with, $$\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)_{min}= \biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(1)}_{min} +\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(2)}_{min} \, .$$ The leading order term in $M_{EW}/M_S$ is ${\cal O}(1)$, $$\biggl( {\delta g_{bbh}\over g_{bbh}} \biggr)^{(1)}_{min} ={2\alpha_s(\mu_R)\over 3 \pi} { (X_b-Y_b)\over M_{\tilde g}}{R_1^2 R_2^2\over R_1^2-R_2^2} h_1(R_1,R_2,0)\, . \label{mindg1}$$ The subleading terms are ${\cal O}\biggl({M_{EW}^2\over M_S^2}\biggr)$, $$\begin{aligned} \biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(2)}_{min} &=&{\alpha_s\over 4\pi} \biggl\{ -\frac{8M_{\tilde{g}}Y_{b}}{3\Delta M_{\tilde{b}_{12}}^{2}} \left[ \frac{h_{2}\left(R_{1},R_{2}\right)M_{h}^{2}} {\Delta M_{\tilde{b}_{12}}^{2}} \right. \nonumber \\ && \left. \left. +\frac{m_{b}^{2}X_b^{2}} {\left(\Delta M_{\tilde{b}_{12}}^{2}\right)^{2} (1+\Delta_b)^2} \left\{ 2\mathcal{S}\left(\frac{f_{1}\left(R\right)} {M_{\tilde{b}}^{2}}\right)\right.\right.\left. 
+\frac{h_{1}\left(R_{1},R_{2},0\right)} {\Delta M_{\tilde{b}_{12}}^{2}} \right\} \right] \nonumber \\ && +\frac{4}{3}\frac{c_{\beta}s_{\alpha+\beta}} {s_{\alpha}}I_{3}^{b}M_{Z}^{2} \left[\mathcal{S}\left(\frac{3f_{1}\left(R\right) -f_{2}\left(R\right)}{3M_{\tilde{b}}^{2}}\right) -\frac{2M_{\tilde{g}}X_b} {\Delta M_{\tilde{b}_{12}}^{2}} \mathcal{A}\left(\frac{f_{1}\left(R\right)} {M_{\tilde{b}}^{2}}\right)\right] \nonumber \\ && +\frac{4}{3} \frac{c_{\beta}s_{\alpha+\beta}} {s_{\alpha}}\left(I_{3}^{b} -2Q^{b}s_{W}^{2}\right) M_{Z}^{2}\left[\mathcal{A} \left(\frac{3f_{1}\left(R\right)-f_{2}\left(R\right)} {3M_{\tilde{b}}^{2}}\right)\right. \nonumber \\ && \left. -\frac{2M_{\tilde{g}}X_b} {\Delta M_{\tilde{b}_{12}}^{2}} \left\{ \mathcal{S}\left(\frac{f_{1}\left(R\right)} {M_{\tilde{b}}^{2}}\right) +\frac{h_{1}\left(R_{1},R_{2},0\right)} {\Delta M_{\tilde{b}_{12}}^{2}}\right\} \right]\nonumber \\ & & + \frac{8}{3}\frac{m_{b}^{2}X_{b}Y_{b}} {\Delta M_{\tilde{b}_{12}}^{2}(1+\Delta_b)^2}\mathcal{A}\left(\frac{3f_{1}\left(R\right)-f_{2}\left(R\right)}{3M_{\tilde{b}}^{2}}\right)\biggr\} \, . \label{dg2min}\end{aligned}$$ The symmetric and anti-symmetric functions are defined as $$\begin{aligned} {\mathcal{S}}\big(f(R,M_{\tilde b})\big) &\equiv & {1\over 2} \biggl[ f(R_1, M_{{\tilde b}_1})+ f(R_2, M_{{\tilde b}_2})\biggr] \nonumber \\ {\mathcal{A}}\big(f(R,M_{\tilde b})\big) &\equiv & {1\over 2} \biggl[ f(R_1, M_{{\tilde b}_1})- f(R_2, M_{{\tilde b}_2})\biggr]\end{aligned}$$ and $\Delta M^2_{{\tilde b}_{12}}\equiv M_{\tilde{b}_1}^2- M_{\tilde{b}_2}^2$. The remaining functions are defined in Appendix C.
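The $\mathcal{S}$/$\mathcal{A}$ decomposition above is simply the average and half-difference of a function evaluated on the two sbottom states. A minimal sketch, where the helper names and the sample combination $f_1(R)/M_{\tilde b}^2$ (which appears in $(\delta g_{bbh}/g_{bbh})^{(2)}_{min}$) are illustrative only:

```python
import math

# S(f) = [f(R1, Mb1) + f(R2, Mb2)]/2 and A(f) = [f(R1, Mb1) - f(R2, Mb2)]/2,
# following the definitions of the symmetric/anti-symmetric combinations.
def sym(f, R1, Mb1, R2, Mb2):
    return 0.5 * (f(R1, Mb1) + f(R2, Mb2))

def asym(f, R1, Mb1, R2, Mb2):
    return 0.5 * (f(R1, Mb1) - f(R2, Mb2))

# f1 as defined in Appendix C (argument R = mass ratio).
def f1(R):
    r = R * R
    return 2.0 / (1.0 - r) ** 2 * (1.0 - r + r * math.log(r))

# Sample integrand f1(R)/Mb^2, one of the combinations acted on by S and A.
g = lambda R, Mb: f1(R) / Mb**2
```

By construction $\mathcal{S}+\mathcal{A}$ reproduces the value on the first state and $\mathcal{S}-\mathcal{A}$ the second, which is the quick consistency check below.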
By expanding $\Delta_b$ in the minimal mixing limit, we find the analogous result to that of the maximal mixing case, $$\begin{aligned} \mid {\overline A}\mid_{min}^2 &=&-{2\alpha_s\pi\over 3}(g_{bbh}^{\Delta_b})^2\biggl\{ {(M_h^4+u^2)\over s t}\biggl[1+2\biggl({\delta g_{bbh}\over g_{bbh}}\biggr)^{(2)}_{min}\biggr] \nonumber \\ && +{\alpha_s\over 2\pi}\delta \kappa_{min} {M_h^2\over M_{\tilde g}^2} \biggr\}+{\cal O}\biggl(\biggl[{M_{EW}\over M_S}\biggr]^4,\alpha_s^3\biggr) \, . \label{minansdef}\end{aligned}$$ The contributions which are not contained in $\sigma_{IBA}$ are again found to be suppressed by ${\cal O}\biggl(\biggl[{M_{EW}\over M_S}\biggr]^2\biggr)$. Numerical Results ================= We present results for $pp\rightarrow b ({\overline b})h$ at $\sqrt{s} = 7~TeV$ with $p_{Tb}>20~GeV$ and $\mid \eta_b\mid < 2.0$. We use FeynHiggs to generate $M_h$ and $\sin\alpha_{eff}$ and then iteratively solve for the $b$ squark masses and $\Delta_b$ from Eqs. \[db\] and \[squark2\]. We evaluate the 2-loop ${\overline {MS}}$ $b$ mass at $\mu_R=M_h/2$, which we also take to be the renormalization and factorization scales[^9]. Finally, Figs. \[fg:maxmix\], \[fg:maxmix250\], \[fg:minmix\], and \[fg:compsig1\] use the CTEQ6m NLO parton distribution functions[@Nadolsky:2008zw]. Figs. \[fg:maxmix\], \[fg:maxmix250\] and \[fg:minmix\] show the percentage deviation of the complete one-loop SQCD calculation from the Improved Born Approximation of Eq. \[sigibadef\] for $\tan\beta=40$ and $\tan\beta=20$ and representative values of the MSSM parameters[^10]. In both extremes of $b$ squark mixing, the Improved Born Approximation is within a few percent of the complete one-loop SQCD calculation and so is a reliable prediction for the rate. This is true for both large and small $M_A$. In addition, the large $M_S$ expansion accurately reproduces the full SQCD one-loop result to within a few percent. These results are expected from the expansions of Eqs.
\[maxans\] and \[minansdef\], since the terms which differ between the Improved Born Approximation and the one-loop calculation are suppressed in the large $M_S$ limit. Fig. \[fg:compsig1\] compares the total SQCD rate for maximal and minimal mixing, which bracket the allowed mixing possibilities. For large $M_S$, the effect of the mixing is quite small, while for $M_S\sim 800~GeV$, the mixing effects are at most a few $fb$. The accuracy of the Improved Born Approximation as a function of $m_R$ is shown in Fig. \[fg:compsig2\] for fixed $M_A,\mu$, and $m_L$. As $m_R$ is increased, the effects become very tiny. Even for light gluino masses, the Improved Born Approximation reproduces the exact SQCD result to within a few percent. ![Percentage difference between the Improved Born Approximation and the exact one-loop SQCD calculation of $pp\rightarrow bh$ for maximal mixing in the $b$-squark sector at $\sqrt{s}=7~TeV$, $\tan\beta=40$, and $M_A=1~TeV$.[]{data-label="fg:maxmix"}](maxmix.eps) ![Percentage difference between the Improved Born Approximation and the exact one-loop SQCD calculation of $pp\rightarrow bh$ for maximal mixing in the $b$-squark sector at $\sqrt{s}=7~TeV$, $\tan\beta=20$, and $M_A=250~GeV$.[]{data-label="fg:maxmix250"}](maxma250.eps) ![Percentage difference between the Improved Born Approximation and the exact one-loop SQCD calculation for $pp \rightarrow bh$ for minimal mixing in the $b$ squark sector at $\sqrt{s}=7~ TeV$.[]{data-label="fg:minmix"}](minrat.eps) ![Comparison between the exact one-loop SQCD calculation for $pp \rightarrow bh$ for minimal and maximal mixing in the $b$ squark sector at $\sqrt{s}=7~ TeV$ and $\tan\beta=40$. The minimal mixing curve has $m_R=\sqrt{2}M_S$ and ${\tilde{\theta}}_b\sim 0$, while the maximal mixing curve has $m_R=M_S$ and ${\tilde{\theta}}_b\sim {\pi\over 4}$. 
[]{data-label="fg:compsig1"}](compsig1.eps) ![ Percentage difference between the Improved Born Approximation and the exact one-loop SQCD calculation for $pp \rightarrow bh$ as a function of $m_R$ at $\sqrt{s}=7~ TeV$ and $\tan\beta=40$. []{data-label="fg:compsig2"}](mrvar.eps) ![Total cross section for $pp\rightarrow b (\overline{b})h$ production including NLO QCD and SQCD corrections (dotted lines) as a function of renormalization/factorization scale using CTEQ6m (black) and MSTW2008 NLO (red) PDFs. We take $M_{\tilde g}=1~TeV$ and the remaining MSSM parameters as in Fig. \[fg:maxmix\].[]{data-label="fg:susybh"}](susybh.eps) In Fig. \[fg:susybh\], we show the scale dependence for the total rate, including NLO QCD and SQCD corrections (dotted lines) for a representative set of MSSM parameters at $\sqrt{s}=7~TeV$. The NLO scale dependence is quite small when $\mu_R=\mu_F\sim M_h$. However, there is a roughly $5\%$ difference between the predictions found using the CTEQ6m PDFs and the MSTW2008 NLO PDFs[@Martin:2009iq]. In Fig. \[fg:susybh\_muf\], we show the scale dependence for small $\mu_F$ (as preferred by [@Maltoni:2005wd]), and see that it is significantly larger than in Fig. \[fg:susybh\]. This is consistent with the results of [@Harlander:2003ai; @Dittmaier:2011ti]. ![Total cross section for $pp\rightarrow b (\overline{b})h$ production including NLO QCD and SQCD corrections as a function of the factorization scale using MSTW2008 NLO PDFs. We take $M_{\tilde g}=1~TeV$ and the remaining MSSM parameters as in Fig. \[fg:maxmix\].[]{data-label="fg:susybh_muf"}](susybh_muf.eps) Conclusion ========== Our major results are the analytic expressions for the SQCD corrections to $b$ Higgs associated production in the maximal (Eqs. \[dg2max\], \[dkapmax\] and \[maxans\]) and minimal (Eqs. \[minkapdef\], \[dg2min\] and \[minansdef\]) $b$ squark mixing scenarios for large $\tan\beta$ and squark masses, $M_S$.
These results clearly demonstrate that deviations from the $\Delta_b$ approximation are suppressed by powers of $(M_{EW}/M_S)$ in the large $\tan \beta$ region. The $\Delta_b$ approximation hence yields an accurate prediction in the $5$ flavor number scheme for the cross section for squark and gluino masses at the $TeV$ scale. As a by-product of our calculation, we update the predictions for $b$ Higgs production at $\sqrt{s}=7~TeV$. Acknowledgements {#acknowledgements .unnumbered} ================ S. Dawson  and P.Jaiswal are supported by the United States Department of Energy under Grant DE-AC02-98CH10886. Appendix A: Passarino-Veltman Functions {#app:P-V .unnumbered} ======================================= The scalar integrals are defined as: $$\begin{aligned} \label{eq:A0} {i\over 16\pi^2}A_0(M_0^2) &=& \int \frac{d^nk}{(2\pi)^n} \frac{1}{N_0}\,, \nonumber\\ \label{eq:B0} {i\over 16\pi^2}B_0(p_1^2;M_0^2,M_1^2) &=& \int \frac{d^nk}{(2\pi)^n} \frac{1}{N_0 N_1}\,, \nonumber\\ \label{eq:C0} {i\over 16\pi^2}C_0(p_1^2,p_2^2,(p_1+p_2)^2;M_0^2, M_1^2,M_2^2) &=& \int \frac{d^nk}{(2\pi)^n} \frac{1}{N_0 N_1 N_2}\,, \nonumber\\ \label{eq:D0} {i\over 16\pi^2}D_0(p_1^2,p_2^2,p_3^2,p_4^2,(p_1+p_2)^2, (p_2+p_3)^2;M_0^2,M_1^2,M_2^2,M_3^2) &&\nonumber \\ \qquad\qquad\qquad =\int \frac{d^nk}{(2\pi)^n} \frac{1}{N_0 N_1 N_2 N_3}\,, &&\end{aligned}$$ where, $$\begin{aligned} N_0 &=& k^2 - M_0^2 \nonumber\\ N_1 &=& (k + p_1)^2 - M_1^2 \nonumber\\ N_2 &=& (k + p_1 + p_2)^2 - M_2^2 \nonumber\\ N_3 &=& (k + p_1 + p_2 + p_3)^2 - M_3^2 \,.\end{aligned}$$ The tensor integrals encountered are expanded in terms of the external momenta $p_i$ and the metric tensor $g^{\mu\nu}$. 
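For orientation, below threshold the finite part of the scalar two-point function admits the standard one-dimensional Feynman-parameter representation $B_0^{\rm fin}(p_1^2;M_0^2,M_1^2)=-\int_0^1 dx\,\ln\left[(x M_1^2+(1-x)M_0^2-x(1-x)p_1^2)/\mu^2\right]$. This representation is textbook material rather than something quoted in the text, so the following numerical sketch is an aside, not part of the calculation:

```python
import math

def b0_finite(p2, M0sq, M1sq, mu2=1.0, n=2000):
    """Finite part of B0(p1^2; M0^2, M1^2) below threshold, p2 < (M0+M1)^2,
    via its Feynman-parameter integral; the 1/eps pole and universal
    constants of dimensional regularization are dropped."""
    def integrand(x):
        return math.log((x * M1sq + (1.0 - x) * M0sq - x * (1.0 - x) * p2) / mu2)
    # Composite Simpson rule on [0, 1]; n must be even.
    h = 1.0 / n
    s = integrand(0.0) + integrand(1.0)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return -s * h / 3.0
```

Two quick checks: the integrand is symmetric under $x\to 1-x$ together with $M_0\leftrightarrow M_1$, so $B_0$ is symmetric in its mass arguments, and $B_0^{\rm fin}(0;M^2,M^2)$ vanishes for $\mu^2=M^2$.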
For the two-point function we write: $$\begin{aligned} {i\over 16\pi^2}B^\mu(p_1^2;M_0^2,M_1^2) &=& \int \frac{d^nk}{(2\pi)^n} \frac{k^\mu}{N_0 N_1} \nonumber\\ &\equiv& {i\over 16\pi^2}p_1^\mu B_1(p_1^2,M_0^2,M_1^2)\,,\end{aligned}$$ while for the three-point functions we have both rank-one and rank-two tensor integrals which we expand as: $$\begin{aligned} C^\mu(p_1^2,p_2^2,(p_1+p_2)^2;M_0^2,M_1^2,M_2^2) &=& p_1^\mu C_{11} + p_2^\mu C_{12} \,, \nonumber\\ C^{\mu\nu}(p_1^2,p_2^2,(p_1+p_2)^2; M_0^2,M_1^2,M_2^2) &=& p_1^\mu p_1^\nu C_{21} + p_2^\mu p_2^\nu C_{22} \nonumber\\ &+& (p_1^\mu p_2^\nu + p_1^\nu p_2^\mu) C_{23} + g^{\mu\nu} C_{24} \,,\end{aligned}$$ where: $${i\over 16\pi^2} C^\mu (C^{\mu\nu})(p_1^2,p_2^2,(p_1+p_2)^2; M_0^2,M_1^2,M_2^2) \equiv \int \frac{d^nk}{(2\pi)^n} \frac{k^\mu (k^\mu k^\nu)}{N_0 N_1 N_2}\,.$$ Finally, for the box diagrams, we encounter rank-one and rank-two tensor integrals which are written in terms of the Passarino-Veltman coefficients as: $$\begin{aligned} {i\over 16\pi^2}D^\mu(p_1^2,p_2^2,p_3^2,p_4^2,(p_1+p_2)^2,(p_2+p_3)^2;M_0^2,M_1^2,M_2^2,M_3^2) &\equiv& \int \frac{d^nk}{(2\pi)^n} \frac{k^\mu}{N_0 N_1 N_2 N_3} \nonumber\\ \qquad\qquad\qquad ={i\over 16\pi^2} \biggl\{ p_1^\mu D_{11} + p_2^\mu D_{12} + p_3^\mu D_{13}\biggr\} \,. &&\end{aligned}$$ $$\begin{aligned} {i\over 16\pi^2}D^{\mu\nu}(p_1^2,p_2^2,p_3^2, p_4^2,(p_1+p_2)^2,(p_2+p_3)^2;M_0^2,M_1^2,M_2^2,M_3^2) &\equiv& \int \frac{d^nk}{(2\pi)^n} \frac{k^\mu k^\nu}{N_0 N_1 N_2 N_3} \nonumber\\ \quad\qquad\qquad = {i\over 16\pi^2} \biggl\{ g^{\mu\nu} D_{00} +{\hbox{tensor structures not needed here}}\biggr\} \,. &&\end{aligned}$$ Appendix B: One-Loop Results {#appendix-b-one-loop-results .unnumbered} ============================ In this appendix we give the non-zero contributions of the individual diagrams in terms of the basis functions of Eq. \[eq: SME\] and the decompositions of Eq. \[onedef\].
The contributions proportional to $m_b \tan \beta$ are new and were not included in the results of Ref. [@Dawson:2007ur]. Although we specialize to the case of the lightest Higgs boson, $h$, our results are easily generalized to the heavier neutral Higgs boson, $H$, and so the Feynman diagrams in this appendix are shown for $\phi_i=h,H$. The self-energy diagrams of Fig. \[fg:self\]: ![Self-energy diagrams, $S_1$ and $S_2$.[]{data-label="fg:self"}](self1.eps "fig:") ![Self-energy diagrams, $S_1$ and $S_2$.[]{data-label="fg:self"}](self2.eps "fig:") $$\begin{aligned} X_{S_{1}}^{\left(t\right)} & = & \frac{4}{3}\sum_{i=1}^{2}\left\{ B_{1}-\left(-1\right)^{i}\frac{2m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{t}B_{0}\right\} \left(M_{\tilde{b}_{i}}^{2}\right)\nonumber \\ X_{S_{1}}^{\left(2\right)} & = & -\frac{4}{3}\sum_{i=1}^{2}\left(-1\right)^{i}\frac{m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{t}B_{0}\left(M_{\tilde{b}_{i}}^{2}\right)\label{eq: S1}\end{aligned}$$ where we have used the shorthand notation for the arguments of Passarino-Veltman functions, $ B_{0,1}\left(M_{\tilde{b}_{i}}^{2}\right)\equiv B_{0,1}\left(t;M_{\tilde{g}}^{2},M_{\tilde{b}_{i}}^{2}\right)$. $$\begin{aligned} X_{S_{2}}^{\left(s\right)} & = & \frac{4}{3}\sum_{i=1}^{2}\left\{ B_{1}-\left(-1\right)^{i}\frac{2m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{s}B_{0}\right\} \left(M_{\tilde{b}_{i}}^{2}\right)\nonumber \\ X_{S_{2}}^{\left(2\right)} & = & -\frac{4}{3}\sum_{i=1}^{2}\left(-1\right)^{i}\frac{m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{s}B_{0}\left(M_{\tilde{b}_{i}}^{2}\right)\label{eq:S2}\end{aligned}$$ and $B_{0,1}\left(M_{\tilde{b}_{i}}^{2}\right)\equiv B_{0,1}\left(s;M_{\tilde{g}}^{2},M_{\tilde{b}_{i}}^{2}\right)$. The vertex functions of Fig.
\[fg:v12\]: ![Virtual diagrams, $V_1$ and $V_2$.[]{data-label="fg:v12"}](virt1.eps "fig:") ![Virtual diagrams, $V_1$ and $V_2$.[]{data-label="fg:v12"}](virt2.eps "fig:") Diagram $V_1$: $$\begin{aligned} X_{V_{1}}^{\left(s\right)} & = & \frac{s}{6}\sum_{i=1}^{2}\left\{ C_{12}+C_{23}-\left(-1\right)^{i}\frac{2m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{t}\left(C_{0}+C_{11}\right)\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{1}}^{\left(t\right)} & = & -\frac{1}{6}\sum_{i=1}^{2}\left\{ t\left(C_{12}+C_{23}\right)+2C_{24}-\left(-1\right)^{i}2m_{b}M_{\tilde{g}}s_{2\tilde{b}}\left(C_{0}+C_{11}\right)\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{1}}^{\left(1\right)} & = & -\frac{u}{3}\sum_{i=1}^{2}\left\{ C_{12}+C_{23}-\left(-1\right)^{i}\frac{2m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{t}\left(C_{0}+C_{11}\right)\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{1}}^{\left(3\right)} & = & -\frac{1}{3}\sum_{i}\left(-1\right)^{i}m_{b}M_{\tilde{g}}s_{2\tilde{b}}\left(C_{0}+C_{11}\right)\left(M_{\tilde{b_{i}}}^{2}\right)\label{eq: V1}\end{aligned}$$ where $ C_{0,11,12,23,24}\left(M_{\tilde{b}_{i}}^{2}\right)\equiv C_{0,11,12,23,24}\left(0,0,t;M_{\tilde{g}}^{2},M_{\tilde{b_{i}}}^{2},M_{\tilde{b_{i}}}^{2}\right)$. Diagram $V_{2}$: $$\begin{aligned} X_{V_{2}}^{\left(s\right)} & = & -\frac{1}{3} \sum_{i=1}^{2}C_{24}\left(M_{\tilde{b}_i}^{2}\right)\nonumber \\ X_{V_{2}}^{\left(1\right)} & = & -\frac{u}{3}\sum_{i=1}^{2}\left\{ C_{12}+C_{23}-\left(-1\right)^{i}\frac{2m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{s}\left(C_{0}+C_{11}\right)\right\} \left(M_{\tilde{b}_i}^{2}\right)\nonumber \\ X_{V_{2}}^{\left(4\right)} & = & \frac{1}{3}\sum_{i}\left(-1\right)^{i}m_{b}M_{\tilde{g}}s_{2\tilde{b}}\left(C_{0}+C_{11}\right) \left(M_{\tilde{b}_i}^{2}\right)\label{eq: V2}\end{aligned}$$ where $ C_{0,11,12,23,24}\left(M_{\tilde{b}_{i}}^{2}\right)\equiv C_{0,11,12,23,24}\left(0,0,s;M_{\tilde{g}}^{2},M_{\tilde{b_{i}}}^{2},M_{\tilde{b_{i}}}^{2}\right)$. The vertex functions of Fig. 
\[fg:v34\]: ![Virtual diagrams, $V_3$ and $V_4$.[]{data-label="fg:v34"}](virt3.eps "fig:") ![Virtual diagrams, $V_3$ and $V_4$.[]{data-label="fg:v34"}](virt4.eps "fig:") Diagram $V_{3}$: $$\begin{aligned} X_{V_{3}}^{\left(s\right)} & = & \frac{3s}{2}\sum_{i=1}^{2}\left\{ C_{12} +C_{23}-\left(-1\right)^{i} \frac{2m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{t} \left(C_{0}+C_{12}\right)\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{3}}^{\left(t\right)} & = & -\frac{3}{2}\sum_{i=1}^{2}\left\{ M_{\tilde{g}}^{2}C_{0}-2\left(1-\epsilon\right)C_{24}-\left(-1\right)^{i}2m_{b}M_{\tilde{g}}s_{2\tilde{b}}C_{12}\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{3}}^{\left(1\right)} & = & -3u\sum_{i=1}^{2}\left\{ C_{12}+C_{23}-\left(-1\right)^{i}\frac{2m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{t}\left(C_{0}+C_{12}\right)\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{3}}^{\left(2\right)} & = & -\frac{3}{2}\sum_{i=1}^{2}\left(-1\right)^{i}m_{b}M_{\tilde{g}}s_{2\tilde{b}}C_{0}\left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{3}}^{\left(3\right)} & = & -3\sum_{i=1}^{2}\left(-1\right)^{i}m_{b}M_{\tilde{g}}s_{2\tilde{b}}\left\{ C_{0}+C_{12}\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\label{eq: V3}\end{aligned}$$ where $ C_{0,11,12,23,24}\left(M_{\tilde{b}_{i}}^{2}\right)\equiv C_{0,11,12,23,24}\left(0,0,t;M_{\tilde{g}}^{2},M_{\tilde{g}}^{2},M_{\tilde{b_{i}}}^{2}\right)$. 
Diagram $V_{4}$: $$\begin{aligned} X_{V_{4}}^{\left(s\right)} & = & -\frac{3}{2}\sum_{i=1}^{2}\left\{ M_{\tilde{g}}^{2}C_{0}-2\left(1-\epsilon\right)C_{24}-s\left(C_{12}+C_{23}\right)+\left(-1\right)^{i}2m_{b}M_{\tilde{g}}s_{2\tilde{b}}C_{0}\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{4}}^{\left(1\right)} & = & -3u\sum_{i=1}^{2}\left\{ C_{12}+C_{23}-\left(-1\right)^{i}\frac{2m_{b}M_{\tilde{g}}s_{2\tilde{b}}}{s}\left(C_{0}+C_{12}\right)\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{4}}^{\left(2\right)} & = & -\frac{3}{2}\sum_{i=1}^{2}\left(-1\right)^{i}m_{b}M_{\tilde{g}}s_{2\tilde{b}}C_{0}\left(M_{\tilde{b_{i}}}^{2}\right)\nonumber \\ X_{V_{4}}^{\left(4\right)} & = & 3\sum_{i=1}^{2}\left(-1\right)^{i}m_{b}M_{\tilde{g}}s_{2\tilde{b}}\left\{ C_{0}+C_{12}\right\} \left(M_{\tilde{b_{i}}}^{2}\right)\label{eq: V4}\end{aligned}$$ where $ C_{0,11,12,23,24}\left(M_{\tilde{b}_{i}}^{2}\right)\equiv C_{0,11,12,23,24}\left(0,0,s;M_{\tilde{g}}^{2},M_{\tilde{g}}^{2},M_{\tilde{b_{i}}}^{2}\right)$. The vertex functions of Fig. 
\[fg:v56\]: ![Virtual diagrams, $V_5$ and $V_6$.[]{data-label="fg:v56"}](virt5.eps "fig:") ![Virtual diagrams, $V_5$ and $V_6$.[]{data-label="fg:v56"}](virt6.eps "fig:") Diagram $V_{5}$: $$\begin{aligned} X_{V_{5}}^{\left(t\right)} & = & \frac{4}{3}\sum_{i,j=1}^{2}C_{h,ij}\left\{ \delta_{ij}m_{b}C_{11}+a_{ij}M_{\tilde{g}}C_{0}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{V_{5}}^{\left(2\right)} & = & \frac{4}{3}m_{b}\sum_{i,j=1,2}C_{h,ij}\delta_{ij}C_{12}\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\label{eq: V5}\end{aligned}$$ where $ C_{0,11,12,23,24}\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\equiv C_{0,11,12,23,24}\left(0, M_{h}^{2},t;M_{\tilde{g}}^{2},M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\, , $ the squark mixing matrix is defined, $$\left(\begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22}\end{array} \right)= \left( \begin{array}{cc} s_{2\tilde{b}}& c_{2\tilde{b}}\\ c_{2\tilde{b}} & -s_{2\tilde{b}}\end{array}\right)\label{eq: a}$$ and the light Higgs-squark-squark couplings $C_{h,ij}$ are normalized with respect to the Higgs-quark-quark coupling[@Gunion:1989we], $$\begin{aligned} C_{h,11}+C_{h,22} & = & 4m_{b}+\frac{2M_{Z}^{2}}{m_{b}}I_{3}^{b} \frac{s_{\alpha+\beta}c_{\beta}}{s_{\alpha}}\\ C_{h,11}-C_{h,22} & = & 2Y_{b}s_{2\tilde{b}} +\frac{2M_{Z}^{2}}{m_{b}}c_{2\tilde{b}}\left(I_{3}^{b}-2Q_{b}s_{W}^{2} \right)\frac{s_{\alpha+\beta}c_{\beta}}{s_{\alpha}}\\ C_{h,12}=C_{h,21} & = & Y_{b}c_{2\tilde{b}}-\frac{M_{Z}^{2}} {m_{b}}s_{2\tilde{b}}\left(I_{3}^{b}-2Q^{b}s_{W}^{2}\right) \frac{s_{\alpha+\beta}c_{\beta}}{s_{\alpha}} \, ,\end{aligned}$$ $s_W^2=\sin^2 \theta_W=1-M_W^2/M_Z^2$ and $Y_b$ is defined below Eq. \[dg2max\].
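Since only the sum and difference of the diagonal couplings are listed, the individual $C_{h,ij}$ follow by solving the two linear relations. A sketch, where the helper name and all numerical inputs are placeholders rather than MSSM fits:

```python
def higgs_sbottom_couplings(mb, MZ, Yb, s2b, c2b, I3b, Qb, sW2, s_apb, c_b, s_a):
    """Build the 2x2 matrix C_{h,ij} from the sum/difference relations above.
    Argument names mirror the text: s2b = sin 2theta_b, c2b = cos 2theta_b,
    s_apb = sin(alpha+beta), c_b = cos beta, s_a = sin alpha, sW2 = sin^2 theta_W."""
    pref = s_apb * c_b / s_a
    Csum = 4.0 * mb + 2.0 * MZ**2 / mb * I3b * pref
    Cdiff = 2.0 * Yb * s2b + 2.0 * MZ**2 / mb * c2b * (I3b - 2.0 * Qb * sW2) * pref
    C11 = 0.5 * (Csum + Cdiff)          # recover the diagonal entries
    C22 = 0.5 * (Csum - Cdiff)
    C12 = Yb * c2b - MZ**2 / mb * s2b * (I3b - 2.0 * Qb * sW2) * pref
    return [[C11, C12], [C12, C22]]     # C_{h,12} = C_{h,21} by construction
```

The matrix is symmetric, and its trace reproduces the quoted $C_{h,11}+C_{h,22}$ combination by construction.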
Diagram $V_{6}$: $$\begin{aligned} X_{V_{6}}^{\left(s\right)} & = & \frac{4}{3}\sum_{i,j=1,2}C_{h,ij}\left\{ \delta_{ij}m_{b}C_{11}+a_{ij}M_{\tilde{g}}C_{0}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{V_{6}}^{\left(2\right)} & = & \frac{4}{3}m_{b}\sum_{i,j=1,2}C_{h,ij}\delta_{ij}C_{12}\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{V_{6}}^{\left(t\right)} & = & X_{V_{6}}^{\left(3\right)}=X_{V_{6}}^{\left(4\right)}=0\label{eq: V6}\end{aligned}$$ where $ C_{0,11,12,23,24}\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\equiv C_{0,11,12,23,24}\left(0,M_{h}^{2},s;M_{\tilde{g}}^{2},M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right) .$ The box diagram of Fig. \[fg:box1\]: ![Box diagram, $B_1$.[]{data-label="fg:box1"}](box1.eps) $$\begin{aligned} X_{B_{1}}^{\left(s\right)} & = & \frac{3M_{\tilde{g}}s}{2} \sum_{i,j=1,2}a_{ij}C_{h,ij}\left\{ D_{0}+D_{13}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{1}}^{\left(t\right)} & = & -\frac{3M_{\tilde{g}}t}{2}\sum_{i,j=1,2}a_{ij}C_{h,ij} D_{13}\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{1}}^{\left(1\right)} & = & 3M_{\tilde{g}}u\sum_{i,j=1,2}a_{ij}C_{h,ij} \left\{ D_{11}-D_{13}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{1}}^{\left(2\right)} & = & -\frac{3m_{b}}{2}\sum_{i,j=1,2}\delta_{ij}C_{h,ij}\left\{ M_{\tilde{g}}^{2}D_{0}-2D_{00}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\label{eq:B1}\end{aligned}$$ where, $ D_0\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right) \equiv D_0\left(0,0,0,M_h^2,s,t;M_{\tilde{b}_{i}}^{2},M_{\tilde{g}}^{2},M_{\tilde{g}}^{2},M_{\tilde{b}_{j}}^{2}\right)$. The box diagram of Fig. 
\[fg:box2\]: ![Box diagram, $B_2$.[]{data-label="fg:box2"}](box2.eps) Diagram $B_{2}$: $$\begin{aligned} X_{B_{2}}^{\left(s\right)} & = & -\frac{M_{\tilde{g}}s}{6}\sum_{i,j=1,2}a_{ij}C_{h,ij} \left\{ D_{0}+D_{11}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{2}}^{\left(t\right)} & = & \frac{M_{\tilde{g}}t}{6}\sum_{i,j=1,2}a_{ij}C_{h,ij} \left\{ D_{0}+D_{11}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{2}}^{\left(1\right)} & = & \frac{M_{\tilde{g}}u}{3}\sum_{i,j=1,2}a_{ij}C_{h,ij} \left\{ D_{11}-D_{12}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{2}}^{\left(2\right)} & = & -\frac{m_{b}}{3}\sum_{i,j=1,2}\delta_{ij}C_{h,ij}D_{00}\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\label{eq:B2}\end{aligned}$$ where $ D_0\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\equiv D_0\left(0,0,0,M_h^2,u,s;M_{\tilde{b}_{i}}^{2},M_{\tilde{g}}^{2},M_{\tilde{b}_{j}}^{2},M_{\tilde{b}_{j}}^{2}\right)$. The box diagram of Fig. 
\[fg:box3\]: ![Box diagram, $B_3$.[]{data-label="fg:box3"}](box3.eps) Diagram $B_{3}$: $$\begin{aligned} X_{B_{3}}^{\left(s\right)} & = & \frac{M_{\tilde{g}}s}{6}\sum_{i,j=1,2}a_{ij}C_{h,ij} \left\{ D_{0}+D_{12}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{3}}^{\left(t\right)} & = & -\frac{M_{\tilde{g}}t}{6}\sum_{i,j=1,2}a_{ij}C_{h,ij} \left\{ D_{0}+D_{12}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{3}}^{\left(1\right)} & = & \frac{M_{\tilde{g}}u}{3}\sum_{i,j=1,2}a_{ij}C_{h,ij} \left\{ D_{11}-D_{12}\right\} \left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\nonumber \\ X_{B_{3}}^{\left(2\right)} & = & -\frac{m_{b}}{3}\sum_{i,j=1,2}\delta_{ij}C_{h,ij}D_{00}\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right) \label{eq:B3}\end{aligned}$$ where $ D_0\left(M_{\tilde{b}_{i}}^{2},M_{\tilde{b}_{j}}^{2}\right)\equiv D_0\left(0,0,0,M_h^2,u,t;M_{\tilde{b}_{i}}^{2},M_{\tilde{g}}^{2},M_{\tilde{b}_{j}}^{2},M_{\tilde{b}_{j}}^{2}\right)$. The vertex and external wavefunction counter terms, Eq. \[eq:delZ\], along with the subtraction of Eq. \[ctdef\], give the counterterm of Eq. \[cttot\]: $$\begin{aligned} X_{CT}^{\left(s\right)} & = &X_{CT}^{\left(t\right)} =\biggl({4\pi\over \alpha_s(\mu_R)}\biggr) \biggl[\delta Z_b^V+{\delta m_b\over m_b}+\delta_{CT}\biggr] \nonumber \\&=& \frac{4}{3}\biggl[ 2M_{\tilde g} Y_b I(M_{{\tilde b}_1}, M_{{\tilde b}_2}, M_{\tilde g}) + \sum_{i=1}^{2} \biggl( -\left(-1\right)^{i}2 m_{b} s_{2\tilde{b}}B_{0}^\prime +2m_b^2 B_1^\prime\biggr)(0;M_{\tilde g}^2, M_{{\tilde b}_i}^2)\biggr] \, . \label{eq:ct 1}\end{aligned}$$ Note that the counterterm contains no large $\tan\beta$ enhanced contribution. 
Appendix C: Definitions {#appendix-c-definitions .unnumbered} ======================= In this appendix we define the functions used in the expansions of the Passarino-Veltman integrals in the maximal and minimal mixing scenarios, where $R\equiv {M_{\tilde g}\over M_S}$ in the maximal mixing scenario, and $R_i\equiv{ M_{\tilde g}\over M_{{\tilde b}_i}}$ in the minimal mixing scenario: $$\begin{aligned} f_{1}\left(R\right) & = & \frac{2}{\left(1-R^{2}\right)^{2}}\left[1-R^{2}+R^{2}\log R^{2}\right] \nonumber \\ f_{2}\left(R\right) & = & \frac{3}{\left(1-R^{2}\right)^{3}}\left[1-R^{4}+2R^{2}\log R^{2}\right] \nonumber \\ f_{3}\left(R\right) & = & \frac{4}{\left(1-R^{2}\right)^{4}}\left[1+\frac{3}{2}R^{2}-3R^{4}+\frac{1}{2}R^{6}+3R^{2}\log R^{2}\right] \nonumber \\ f_{4}\left(R\right) & = & \frac{5}{\left(1-R^{2}\right)^{5}}\left[\frac{1}{2}-4R^{2}+4R^{6}-\frac{1}{2}R^{8}-6R^{4}\log R^{2}\right] \nonumber \\ h_{1}\left(R_{1},R_{2},n\right) & = & \left(\frac{R_{1}^{2}}{1-R_{1}^{2}}\right)^{n}\frac{\log R_{1}^{2}}{1-R_{1}^{2}}-\left(\frac{R_{2}^{2}}{1-R_{2}^{2}}\right)^{n}\frac{\log R_{2}^{2}}{1-R_{2}^{2}} \nonumber \\ & & -\sum_{j=0}^{n}(-1)^j\frac{j+2}{2}\left\{ \left(1-R_{1}^{2}\right)^{j-n}-\left(1-R_{2}^{2}\right)^{j-n}\right\} \nonumber \\ h_{2}\left(R_{1},R_{2}\right) & = & \frac{R_{1}^{2}+R_{2}^{2}-2}{\left(1-R_{1}^{2}\right)\left(1-R_{2}^{2}\right)}+\frac{1}{R_{1}^{2}-R_{2}^{2}}\Biggl[\frac{R_{1}^{2}+R_{2}^{2}-2R_{1}^{4}}{\left(1-R_{1}^{2}\right)^{2}}\log R_{1}^{2}\nonumber \\ & & -\frac{R_{1}^{2}+R_{2}^{2}-2R_{2}^{4}}{\left(1-R_{2}^{2}\right)^{2}}\log R_{2}^{2}\Biggr] \, .\end{aligned}$$ Further, $$\begin{aligned} f_{i}'\left(R\right) & \equiv & \frac{\mathrm{d}f_{i}\left(x\right)}{\mathrm{d}x^{2}}\Biggr|_{x=R}\nonumber \\ f_{i}^{-1}\left(R\right) & \equiv & \frac{f_{i}\left(1/R\right)}{R^{2}}\nonumber \\ \hat{f}_{i}\left(R\right) & \equiv & \frac{1}{R^{4}}\frac{\mathrm{d}f_{i}\left(x\right)}{\mathrm{d}x^{2}}\Biggr|_{x=1/R}\label{eq:func 2} \, .\end{aligned}$$ [^1]:
The expected sensitivities of ATLAS and CMS to $b$ Higgs associated production are described in Refs. [@Aad:2009wy; @Ball:2007zza]. [^2]: The neutral components of the Higgs bosons receive vacuum expectation values: $\langle H_d^0\rangle={v_1\over \sqrt{2}},\langle H_u^0\rangle={v_2\over \sqrt{2}}$. [^3]: $v_{SM}=(\sqrt{2}G_F)^{-1/2}$, $v_1=v_{SM}\cos\beta$ [^4]: It is also possible to sum the contributions which are proportional to $A_b$, but these terms are less important numerically[@Guasch:2003cv]. [^5]: This is the approximation used in Ref. [@Dittmaier:2011ti] to include the SQCD corrections. [^6]: We use FeynHiggs only for calculating $M_h$ and $\sin\alpha_{eff}$. [^7]: $s_{2\tilde{b}}\equiv \sin 2{\tilde \theta}_b$. [^8]: We use the shorthand, $c_\beta= \cos\beta$, $s_{\alpha+\beta}=\sin(\alpha+\beta)$, etc. [^9]: $\Delta_b$ is evaluated using $\alpha_s(M_S)$. [^10]: Figs. \[fg:maxmix\], \[fg:maxmix250\] and \[fg:minmix\] do not include the pure QCD NLO corrections[@Dicus:1998hs].
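As a numerical cross-check of Appendix C, the loop functions $f_i$ are normalized so that $f_i(R)\to 1$ in the degenerate limit $R\to 1$; this normalization is our observation rather than a statement from the text. A short transcription of the closed forms:

```python
import math

# f_1..f_4 from Appendix C, written in terms of r = R^2.
# All four approach 1 as R -> 1 (degenerate masses), which the
# assertions check slightly away from the degenerate point.
def f1(R):
    r = R * R
    return 2.0 / (1 - r) ** 2 * (1 - r + r * math.log(r))

def f2(R):
    r = R * R
    return 3.0 / (1 - r) ** 3 * (1 - r**2 + 2 * r * math.log(r))

def f3(R):
    r = R * R
    return 4.0 / (1 - r) ** 4 * (1 + 1.5 * r - 3 * r**2 + 0.5 * r**3 + 3 * r * math.log(r))

def f4(R):
    r = R * R
    return 5.0 / (1 - r) ** 5 * (0.5 - 4 * r + 4 * r**3 - 0.5 * r**4 - 6 * r**2 * math.log(r))
```

Note that evaluating very close to $R=1$ requires care, since the bracketed combinations cancel to high order against the $(1-R^2)^{-n}$ prefactors.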
--- abstract: 'A method to calculate the average size of Davis-Putnam-Logemann-Loveland (DPLL) search trees for random computational problems is introduced, and applied to the satisfiability of random CNF formulas (SAT) and the coloring of random graph (COL) problems. We establish recursion relations for the generating functions of the average numbers of (variable or color) assignments at a given height in the search tree, which allow us to derive the asymptotics of the expected DPLL tree size, $2^{\, N \omega + o(N)}$, where $N$ is the instance size. $\omega$ is calculated as a function of the input distribution parameters (ratio of clauses per variable for SAT, average vertex degree for COL), and the branching heuristics.' author: - Rémi Monasson title: 'A generating function method for the average-case analysis of DPLL' --- Introduction and main results. ============================== Many efforts have been devoted to the study of the performance of the Davis-Putnam-Logemann-Loveland (DPLL) procedure [@Karp], and more generally, of resolution proof complexity for combinatorial problems with randomly generated instances. Two examples are random $k$-Satisfiability ($k$-SAT), where an instance ${\cal F}$ is a uniformly and randomly chosen set of $M=\alpha\, N$ disjunctions of $k$ literals built from $N$ Boolean variables and their negations (with no repetition and no complementary literals), and random graph $k$-Coloring ($k$-COL), where an instance ${\cal F}$ is an Erdős-Rényi random graph from $G(N,p=c/N)$ [*i.e.*]{} with average vertex degree $c$. Originally, efforts were concentrated on the random width distribution for $k$-SAT, where each literal appears with a fixed probability. Franco, Purdom and collaborators showed that simplified versions of DPLL had polynomial average-case complexity in this case, see [@purd2; @Fra2] for reviews. It was then recognized that the fixed clause length ensemble might provide harder instances for DPLL [@fra0].
Chvátal and Szemerédi indeed showed that DPLL proof size is w.h.p. exponentially large (in $N$ at fixed ratio $\alpha$) for an unsatisfiable instance [@Chv]. Later on, Beame [*et al.*]{} [@Bea] showed that the proof size was w.h.p. bounded from above by $2^{c \, N/\alpha}$ (for some constant $c$), a decreasing function of $\alpha$. As for the satisfiable case, Frieze and Suen showed that backtracking is irrelevant at small enough ratios $\alpha$ ($\le 3.003$ with the Generalized Unit Clause heuristic, to be defined below) [@Fri], allowing DPLL to find a satisfying assignment in polynomial (linear) time. Achlioptas, Beame and Molloy proved that, conversely, at ratios smaller than the generally accepted satisfiability threshold, DPLL takes w.h.p. exponential time to find a satisfying assignment [@Achl3]. Altogether these results provide explanations for the ‘easy-hard-easy’ (or, more precisely, ‘easy-hard-less hard’) pattern of complexity experimentally observed when running DPLL on random 3-SAT instances [@Mit]. A precise calculation of the average size of the search space explored by DPLL (and \#DPLL, a version of the procedure solving the enumeration problems \#SAT and \#COL) as a function of the parameters $N$ and $\alpha$ or $c$ is difficult due to the statistical correlations between branches in the search tree resulting from backtracking. Heuristic derivations were nevertheless proposed by Cocco and Monasson based on a ‘dynamic annealing’ assumption [@Coc; @Coc2; @Eindor]. Hereafter, using the linearity of expectation, we show that ‘dynamic annealing’ turns out not to be an assumption at all as far as the expected tree size is concerned.
We first illustrate the approach, based on the use of recurrence relations for the generating functions of the number of nodes at a given height in the tree, on the random $k$-SAT problem and the simple Unit Clause (UC) branching heuristic where unset variables are chosen uniformly at random and assigned to True or False uniformly at random [@fra2; @Fra]. Consider the following counting algorithm: Procedure \#DPLL-UC\[${\cal F}$,A,S\] Call ${\cal F}_A$ what is left from instance ${\cal F}$ given partial variable assignment $A$; 1\. If ${\cal F}_A$ is empty, $S\to S+ 2^{N-|A|}$, Return; [*(Solution Leaf)*]{} 2\. If there is an empty clause in ${\cal F}_A$, Return; [*(Contradiction Leaf)*]{} 3\. If there is no empty clause in ${\cal F}_A$, let $\Gamma _1=\{$1-clauses$\ \in {\cal F}_A\}$, if $\Gamma _1 \ne \emptyset$, pick any 1-clause, say, $\ell$, and call DPLL\[${\cal F}$,A$\cup \ell$\]; [*(unit-propagation)*]{} if $\Gamma_1=\emptyset$, pick up an unset literal uniformly at random, say, $\ell$, and call DPLL\[${\cal F}$,A$\cup \ell$\], then DPLL\[${\cal F}$,A$\cup \bar \ell$\]; [*(variable splitting)*]{} End; \#DPLL-UC, called with $A=\emptyset$ and $S=0$, returns the number $S$ of solutions of the instance ${\cal F}$; the history of the search can be summarized as a search tree with leaves marked with solution or contradiction labels. As the instance to be treated and the sequence of operations done by \#DPLL-UC are stochastic, so are the numbers $L_S$ and $L_C$ of solution and contradiction leaves respectively. [Let $k\ge 3$ and $\displaystyle{ \Omega (t,\alpha,k) = t + \alpha \; \log_2 \big(1 - \frac {k}{2^k} t^{k-1} +\frac{k-1}{2^k} {t^k} \big)}$.
The expectations of the numbers of solution and contradiction leaves in the \#DPLL-UC search tree of random $k$-SAT instances with $N$ variables and $\alpha N$ clauses are, respectively, $\displaystyle{ L_S(N,\alpha,k)=2^{N \omega _S(\alpha,k) +o(N)}}$ with $\omega _S(\alpha,k)= \Omega (1,\alpha,k)$ and $\displaystyle{ L_C(N,\alpha,k)=2^{N \omega _C (\alpha ,k) +o(N)} \ \ with \ \ \omega _C(\alpha,k)=\max _{t\in [0;1]}\Omega (t,\alpha,k) }$.]{} An immediate consequence of Theorem 1 is that the expectation value of the total number of leaves, $L_S+L_C$, is $2^{N \omega _C (\alpha ,k) +o(N)}$. This result was first found by Méjean, Morel and Reynaud in the particular case $k= 3$ and for ratios $\alpha > 1$ [@mejean]. Our approach not only provides a much shorter proof, but can also be easily extended to other problems and more sophisticated heuristics, see Theorems 2 and 3 below. In addition, Theorem 1 provides us with some information about the expected search tree size of the decision procedure DPLL-UC, corresponding to \#DPLL-UC with Line 1 replaced with: [If ${\cal F}_A$ is empty, output Satisfiable; Halt]{}. [**Corollary 1**]{}. [Let $\alpha > \alpha _u(k)$, the root of $\omega _C (\alpha,k) = 2+ \alpha \log _2(1-2^{-k})$, [*e.g.*]{} $\alpha _u (3)= 10.1286...$. The average size of DPLL-UC search trees for random $k$-SAT instances with $N$ variables and $\alpha N$ clauses equals $2^{N \omega _C(\alpha,k) +o(N)}$.]{} Functions $\omega_S,\omega_C$ are shown in Figure 1 in the $k=3$ case. They coincide and are equal to $1 - \alpha \log_2 (8/7)$ for $\alpha < \alpha ^* = 4.56429...$, while $\omega_C > \omega_S$ for $\alpha > \alpha ^*$. In other words, for $\alpha >\alpha^*$, most leaves in \#DPLL-UC trees are contradiction leaves, while for $\alpha < \alpha ^*$, both contradiction and solution leaf numbers are (to exponential order in $N$) of the same order. As for DPLL-UC trees, notice that $\displaystyle{ \omega _C(\alpha,3) \asymp \frac{2 \ln 2}{3 \,\alpha} = \frac{0.46209...}\alpha }$.
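These exponents are easy to probe numerically. The short script below is our own sketch (the function names are ours, and a grid search is a crude stand-in for the exact maximization over $t\in[0,1]$); it reproduces the transition at $\alpha^* \approx 4.564$, the defining equation of $\alpha_u(3) = 10.1286...$ from Corollary 1, and the $2\ln 2/(3\alpha)$ large-$\alpha$ behaviour quoted above.

```python
import math

def Omega(t, alpha, k=3):
    # Omega(t, alpha, k) = t + alpha * log2(1 - (k/2^k) t^(k-1) + ((k-1)/2^k) t^k)
    poly = 1.0 - (k / 2**k) * t**(k - 1) + ((k - 1) / 2**k) * t**k
    return t + alpha * math.log2(poly)

def omega_S(alpha, k=3):
    # exponent of the expected number of solution leaves
    return Omega(1.0, alpha, k)

def omega_C(alpha, k=3, grid=200000):
    # exponent of the expected number of contradiction leaves: max over t in [0,1]
    return max(Omega(i / grid, alpha, k) for i in range(grid + 1))

# Below alpha* = 4.56429... the maximum sits at t = 1, so omega_C = omega_S
assert abs(omega_C(4.5) - omega_S(4.5)) < 1e-12
# Above alpha*, contradiction leaves dominate: omega_C > omega_S
assert omega_C(4.7) > omega_S(4.7) + 0.005
# alpha_u(3) = 10.1286... is the root of omega_C(alpha) = 2 + alpha * log2(7/8)
assert abs(omega_C(10.1286) - (2 + 10.1286 * math.log2(7 / 8))) < 1e-3
# large-alpha asymptotics: alpha * omega_C(alpha) -> 2 ln2 / 3 = 0.46209...
assert abs(200 * omega_C(200) - 2 * math.log(2) / 3) < 0.01
```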
This behaviour agrees with Beame et al.’s result ($\Theta (1/\alpha)$) for the average resolution proof complexity of unsatisfiable instances [@Bea]. Corollary 1 shows that the expected DPLL tree size can be estimated for a whole range of $\alpha$; we conjecture that the above expression holds for ratios smaller than $\alpha _u$ [*i.e.*]{} down to $\alpha ^*$ roughly. For generic $k\ge 3$, we have $\displaystyle{ \omega _C(\alpha,k) \asymp \frac{k-2}{k-1} \bigg( \frac{2^k \ln 2}{k (k-1) \, \alpha} \bigg) ^{1/(k-2)}}$; the decrease of $\omega_C$ with $\alpha$ is therefore slower and slower as $k$ increases. So far, no expression for $\omega$ has been obtained for more sophisticated heuristics than UC. We consider the Generalized Unit Clause (GUC) heuristic [@Fra; @Achl] where the shortest clauses are preferentially satisfied. The associated decision procedure, DPLL-GUC, corresponds to DPLL-UC with Line 3 replaced with: [Pick a clause uniformly at random among the shortest clauses, and a literal, say, $\ell$, in the clause; call DPLL\[${\cal F}$,A$\cup \ell$\], then DPLL\[${\cal F}$,A$\cup \bar \ell$\].]{} Define $m(x_2) = \frac 12 \big(1 + \sqrt{1+4x_2}\big) -2 x_2$, $y_3(y_2)$ the solution of the ordinary differential equation ${dy_3}/{dy_2} = 3(1+y_2-2 \;y_3)/(2m(y_2))$ such that $y_3(1)=1$, and $$\nonumber \label{refiuy} \omega ^g (\alpha) = \max _{\frac 34 < y_2 \le 1} \bigg[ \int _{y_2}^1 \frac {dz}{m(z)} \log _2 \big(2z +m(z)\big)\ \exp \left( - \int _{z}^1 \frac {dw}{m(w)}\right) + \alpha \log _2 y_3 (y_2) \bigg] \quad .$$ [**Theorem 2**]{}. Let $\alpha > \alpha ^g _u = 10.2183...$, the root of $\omega ^g (\alpha) + \alpha \log _2(8/7) =2$. The expected size of the DPLL-GUC search tree for random 3-SAT instances with $N$ variables and $\alpha N$ clauses is $2^{N\,\omega ^g (\alpha) + o(N)}$.
Notice that, at large $\alpha$, $\displaystyle{ \omega ^g (\alpha ) \asymp \frac{3+\sqrt{5}}{6\,\ln 2}\big[ \ln \big( \frac{1+\sqrt{5}}{2} \big) \big]^2 \frac 1\alpha = \frac {0.29154...}\alpha }$ in agreement with the $1/\alpha$ scaling established in [@Bea]. Furthermore, the multiplicative factor is smaller than the one for UC, showing that DPLL-GUC is more efficient than DPLL-UC in proving unsatisfiability. A third application is the analysis of the counterpart of GUC for the random 3-COL problem. The version of DPLL we have analyzed operates as follows [@mol]. Initially, each vertex is assigned a list of 3 available colors. In the course of the procedure, a vertex, say, $v$, with the smallest number of available colors, say, $j$, is chosen uniformly at random. DPLL-GUC then removes $v$, and successively branches to the $j$ color assignments corresponding to removal of one of the $j$ colors of $v$ from the lists of the neighbors of $v$. The procedure backtracks when a vertex with no color left is created ([contradiction]{}), or no vertex is left (a [proper coloring]{} is found). Define $\displaystyle{ \omega ^h (c) = \max _{0 < t < 1} \big[ \frac c6 t^2 - \frac c3 t - (1-t) \ln 2 + \ln \big( 3 -e^{-2 c\,t / 3} \big) \big] }$. [**Theorem 3**]{}. Let $c > c ^h _u = 13.1538...$, the root of $\omega ^h (c) + \frac c6 = 2 \ln 3$. The expected size of the DPLL-GUC search tree for deciding 3-COL on random graphs from $G(N,c/N)$ with $N$ vertices is $e^{\,N\,\omega ^h (c) + o(N)}$. Asymptotically, $\displaystyle{ \omega ^h (c ) \asymp \frac{3\, \ln 2}{2\, c^2}= \frac {1.0397...}{c^2} }$ in agreement with Beame et al.’s scaling ($\Theta (1/c^2)$) [@Cris]. An extension of Theorem 3 to higher values of the number $k$ of colors gives $\displaystyle{ \omega^h(c,k) \asymp \frac{k(k-2)}{k-1} \; \big[ \frac{2\, \ln 2}{k-1} \big] ^{1/(k-2)} c^{-(k-1)/(k-2)}}$.
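The closed-form prefactors just quoted, and the defining equation of $c^h_u$ in Theorem 3, can be checked numerically from the stated formulas alone. The snippet below is our own sketch (a grid search replaces the exact maximization over $t$); it evaluates the three constants and verifies that at $c = 13.1538$ the maximum defining $\omega^h$ satisfies $\omega^h(c) + c/6 = 2\ln 3$ to good accuracy.

```python
import math

# Prefactors quoted in the text for the large-alpha / large-c asymptotics
guc_sat = (3 + math.sqrt(5)) / (6 * math.log(2)) * math.log((1 + math.sqrt(5)) / 2) ** 2
uc_sat = 2 * math.log(2) / 3     # UC on 3-SAT:  omega_C  ~ 0.46209.../alpha
col_guc = 3 * math.log(2) / 2    # GUC on 3-COL: omega^h ~ 1.0397.../c^2
assert abs(guc_sat - 0.29154) < 1e-4   # GUC on 3-SAT: omega^g ~ 0.29154.../alpha
assert abs(uc_sat - 0.46209) < 1e-4
assert abs(col_guc - 1.0397) < 1e-3

def omega_h(c, grid=100000):
    # omega^h(c) = max_{0<t<1} [ c t^2/6 - c t/3 - (1-t) ln 2 + ln(3 - exp(-2ct/3)) ]
    return max(
        c * t * t / 6 - c * t / 3 - (1 - t) * math.log(2)
        + math.log(3 - math.exp(-2 * c * t / 3))
        for t in (i / grid for i in range(1, grid))
    )

# c_u^h = 13.1538... is defined as the root of omega^h(c) + c/6 = 2 ln 3
c = 13.1538
assert abs(omega_h(c) + c / 6 - 2 * math.log(3)) < 1e-3
```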
This result is compatible with the bounds derived in [@Cris], and suggests that the $\Theta( c ^{-(k-1)/(k-2)})$ dependence could hold w.h.p. (and not only in expectation).

Recurrence equation for \#DPLL-UC search tree
=============================================

Let ${\cal F}$ be an instance of the 3-SAT problem defined over a set of $N$ Boolean variables $X$. A partial assignment $A$ of length $T (\le N)$ is the specification of the truth values of $T$ variables in $X$. We denote by ${\cal F}_A$ the residual instance given $A$. A clause $c\in {\cal F}_A$ is said to be an $\ell$-clause with $\ell \in\{0,1,2,3\}$ if the number of false literals in $c$ is equal to $3-\ell$. We denote by $C _\ell({\cal F}_A)$ the number of $\ell$-clauses in ${\cal F}_A$. The instance ${\cal F}$ is said to be satisfied under $A$ if $C_\ell ({\cal F}_A)=0$ for $\ell=0,1,2,3$, unsatisfied (or violated) under $A$ if $C_0({\cal F}_A) \ge 1$, and undetermined under $A$ otherwise. The clause vector of an undetermined or satisfied residual instance ${\cal F}_A$ is the three-dimensional vector $\vec C$ with components $C_1({\cal F}_A),C_2({\cal F}_A),C_3({\cal F}_A)$. The search tree associated with an instance ${\cal F}$ and a run of \#DPLL is the tree whose nodes carry the residual assignments $A$ considered in the course of the search. The height $T$ of a node is the length of the attached assignment.

It was shown by Chao and Franco [@fra2; @Fra] that, during the first descent in the search tree [*i.e.*]{} prior to any backtracking, the distribution of residual instances remains uniformly random conditioned on the numbers of $\ell$-clauses. This statement remains correct for heuristics more sophisticated than UC, [*e.g.*]{} GUC, SC$_1$ [@Fra; @Achl], and was recently extended to splitting heuristics based on variable occurrences by Kaporis, Kirousis and Lalas [@Kap].
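The definitions above are straightforward to operationalize. The helper below is an illustrative sketch in our own notation (literals as signed integers, the partial assignment as a dict from variable to Boolean); it computes the clause vector $(C_1, C_2, C_3)$ of a residual instance, or reports a violated clause.

```python
def residual_counts(clauses, assignment):
    """Clause vector (C1, C2, C3) of the residual instance F_A, or None if
    some clause is violated (a 0-clause appeared).  A literal is a nonzero
    int: v means variable v True, -v means variable v False; `assignment`
    maps variable -> bool for the variables set so far."""
    counts = {1: 0, 2: 0, 3: 0}
    for clause in clauses:
        satisfied, unset = False, 0
        for lit in clause:
            v = abs(lit)
            if v not in assignment:
                unset += 1
            elif assignment[v] == (lit > 0):
                satisfied = True
                break
        if satisfied:
            continue            # satisfied clauses are removed from F_A
        if unset == 0:
            return None         # empty clause: F is unsatisfied under A
        counts[unset] += 1      # a clause with j unset literals is a j-clause
    return counts[1], counts[2], counts[3]

# x1 = False kills literal 1 and satisfies -1; x4 = True satisfies clause 3:
# the residual instance is the single 2-clause (x2 or x3).
assert residual_counts([[1, 2, 3], [-1, 2, -4], [-2, -3, 4]],
                       {1: False, 4: True}) == (0, 1, 0)
```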
Clearly, in this context, uniformity is lost after backtracking enters into play (with the exception of Suen and Frieze’s analysis of a limited version of backtracking [@Fri]). Though this limitation appears to forbid (and has forbidden so far) the extension of average-case studies of backtrack-free DPLL to full DPLL with backtracking, we point out here that it is not as severe as it looks. Indeed, let us forget about how the \#DPLL or DPLL search tree is built and consider its final state. We refer to a branch (of the search tree) as the shortest path from the root node (empty assignment) to a leaf. The two key remarks underlying the present work can be informally stated as follows. First, the expected size of a \#DPLL search tree can be calculated from the knowledge of the statistical distribution of (residual instances on) a single branch; no characterization of the correlations between distinct branches in the tree is necessary. Secondly, the statistical distribution of (residual instances on) a single branch is simple since, along a branch, uniformity is preserved (as in the absence of backtracking). More precisely, [**Lemma 1**]{} \[from Chao & Franco [@fra2]\]: [ Let ${\cal F}_A$ be a residual instance attached to a node $A$ at height $T$ in a \#DPLL-UC search tree produced from an instance ${\cal F}$ drawn from the random 3-SAT distribution. Then the set of $\ell$-clauses in ${\cal F}_A$ is uniformly random conditioned on its size $C_\ell({\cal F}_A)$ and the number $N-T$ of unassigned variables for each $\ell \in\{0,1,2,3\}$.]{} The above Lemma is an immediate application of Lemma 3 in Achlioptas’ Card Game framework, which establishes uniformity for algorithms ([*a*]{}) ‘pointing to a particular card (clause)’, or ([*b*]{}) ‘naming a variable that has not yet been assigned a value’ (Section 2.1 in Ref. [@Achl]). The operation of \#DPLL-UC along a branch precisely amounts to these two operations: unit-propagation relies on action ([*a*]{}), and variable splitting on ([*b*]{}).
Lemma 1 does not address the question of uniformity among different branches. Residual instances attached to two (or more) nodes on distinct branches in the search tree are correlated. However, these correlations can be safely ignored in calculating the average number of residual instances, in much the same way as the average value of the sum of correlated random variables is simply the sum of their average values. [**Proposition 1**]{}. [Let $L ( \vec C , T)$ be the expectation of the number of undetermined residual instances with clause vector $\vec C$ at height $T$ in the \#DPLL-UC search tree, and $\displaystyle{ G (x_1,x_2,x_3;T\,) =\sum_{\vec C}\; x_1 ^{\, C_1}\; x_2 ^{\, C_2}\; x_3 ^{\, C_3}\ L(\,{\vec C}\,,T\,) }$ its generating function. Then, for $0\le T <N$, $$\begin{aligned} \label{eqev} G(x_1,x_2,x_3;T+1\,)&=& \frac 1{f_1}\; G \big( f_1,f_2,f_3; T\,\big) + \bigg( 2 - \frac 1{f_1} \bigg) \; G\big( 0, f_2,f_3; T\, \big) \nonumber \\ &-& 2 \; G(0,0,0;T) \end{aligned}$$ where $f_1,f_2,f_3$ stand for the functions $f_1 ^{(T)} (x_1)=x_1+ \frac 12 \mu (1 -2 x_1)$, $f_2 ^{(T)} (x_1,x_2)=x_2+ \mu ( x_1+1 -2x_2)$, $f_3 ^{(T)}(x_2,x_3)=x_3+\frac32 \mu ( x_2+1 -2x_3)$, and $\mu =1/(N-T)$. The generating function $G$ is entirely defined from recurrence relation (\[eqev\]) and the initial condition $G(x_1,x_2,x_3 ; 0) =\big( x_3 \big)^{\alpha N}$.]{} Let $\delta _n$ denote the Kronecker function ($\delta _n=1$ if $n=0$, $\delta _n=0$ otherwise), and $B _n ^{m,q}={m \choose n} q^n(1-q)^{m-n}$ the binomial distribution. Let $A$ be a node at height $T$, and ${\cal F}_A$ the attached residual instance. Call $\vec C$ the clause vector of ${\cal F}_A$. Assume first that $C_1\ge 1$. Pick up one 1-clause, say, $\ell$. Call $z_j$ the number of $j$-clauses that contain $\bar \ell$ or $\ell$ (for $j=1,2,3$). From Lemma 1, the $z_{j}$’s are binomial variables with parameter $j/(N-T)$ among $C_j-\delta _{j-1}$ (the 1-clause that is satisfied through unit-propagation is removed).
Among the $z_j$ clauses, $w_{j-1}$ contained $\bar \ell$ and are reduced to $(j-1)$-clauses, while the remaining $z_j-w_{j-1}$ contained $\ell$ and are satisfied and removed. From Lemma 1 again, $w_{j-1}$ is a binomial variable with parameter $1/2$ among $z_j$. The probability that the instance produced has no empty clause ($w_0=0$) is $B_0^{z_1,\frac 12}=2^{-z_1}$. Thus, setting $\mu = \frac 1{N-T}$, $$\begin{aligned} \label{bbra2} M _{P} [\vec C',\vec C; T] &=& \sum _{z_3=0} ^{C_3} B_{z_3}^{ C_3,3\mu} \sum _{w_2=0}^{z_3} B_{w_2}^{z_3,\frac 12} \sum_{z_2=0}^{C_2} B_{z_2}^{ C_2,2\mu} \sum _{w_1=0}^{z_2} B_{w_1}^{z_2,\frac 12} \nonumber \\ && \times\ \sum _{z_1=0}^{C_1-1} B_{z_1}^{C_1-1,\mu} \frac 1{2^{z_1}} \, \delta _{C'_3 - (C_3 - z_3)} \delta _{C'_2- (C_2 -z_2+w_2)} \delta _{C' _1- (C_1-1 -z_1+w_1)} \nonumber\end{aligned}$$ expresses the probability that a residual instance at height $T$ with clause vector $\vec C$ gives rise to a (non-violated) residual instance with clause vector $\vec C'$ at height $T+1$ through unit-propagation. Assume now $C_1=0$. Then, a yet unset variable is chosen and set to True or False uniformly at random. The calculation of the new vector $\vec C'$ is identical to the unit-propagation case above, except that: $z_1=w_0=0$ (absence of 1-clauses), and two nodes are produced (instead of one). Hence, $$\begin{aligned} M _{UC} [\vec C',\vec C; T] &=& 2\sum _{z_3=0} ^{C_3} B_{z_3}^{ C_3,3\mu} \sum _{w_2=0}^{z_3} B_{w_2} ^{z_3,\frac 12} \sum_{z_2=0}^{C_2} B_{z_2}^{ C_2,2\mu} \sum _{w_1=0}^{z_2} B_{w_1}^{z_2,\frac 12} \nonumber \\ &&\times\ \delta _{C'_3 - (C_3 - z_3)} \delta _{C'_2- (C_2 -z_2+w_2)} \delta _{C' _1- w_1 } \nonumber \end{aligned}$$ expresses the expected number of residual instances at height $T+1$ and with clause vector $\vec C'$ produced from a residual instance at height $T$ and with clause vector $\vec C$ through UC branching. Now, consider all the nodes $A_i$ at height $T$, with $i=1, \ldots, {\cal L}$.
Let $o_i$ be the operation done by \#DPLL-UC on $A_i$. $o_i$ represents either unit-propagation (literal $\ell_i$ set to True) or variable splitting (literal $\ell _i$ set to T and F on the two descendent nodes respectively). Denoting by $\mathbf{E}_{Y}(X)$ the expectation value of a quantity $X$ over the variable $Y$, $\displaystyle{ L(\vec C';T+1) = \mathbf{E}_{{\cal L}, \{A_i, o_i\}} \left( \sum _{i=1}^{\cal L} {\cal M} [\vec C' ; A_i,o_i ] \right) }$ where ${\cal M}$ is the number (0, 1 or 2) of residual instances with clause vector $\vec C'$ produced from $A_i$ after \#DPLL-UC has carried out operation $o_i$. Using the linearity of expectation, $\displaystyle{ L(\vec C';T+1) = \mathbf{E}_{{\cal L}} \left(\sum _{i=1}^{\cal L} \mathbf{E}_{\{A_i, o_i\}} \big( {\cal M} [\vec C' ; A_i,o_i ] \big)\right) = \mathbf{E}_{{\cal L}} \left( \sum _{i=1}^{\cal L} M[\vec C',\vec C_i; T] \right)} $ where $\vec C_i$ is the clause vector of the residual instance attached to $A_i$, and $M[\vec C',\vec C; T] = \big( 1-\delta_{C_1} \big) \, M _{P} [\vec C',\vec C; T] + \delta _{C_1}\, M_{UC} [\vec C',\vec C; T]$. Gathering assignments with identical clause vectors gives the recurrence relation $L ( \vec C' , T+1) =$ $\displaystyle{\sum_{\vec C} M [\vec C',\vec C; T] \ L ( \vec C , T)}$. Recurrence relation (\[eqev\]) for the generating function is an immediate consequence. The initial condition over $G$ stems from the fact that the instance is originally drawn from the random 3-SAT distribution, $L(\vec C;0)=\delta _{C_1}\, \delta _{C_2}\, \delta_{C_3-\alpha N}$.

Asymptotic analysis and application to DPLL-UC
==============================================

The asymptotic analysis of $G$ relies on the following technical lemma: [**Lemma 2**]{}. [Let $\gamma (x_2,x_3,t) = (1-t)^3 x_3 + \frac {3t}2 (1-t)^2 x_2+ \frac t8 (12-3t-2t^2)$, with $t\in]0;1[$ and $x_2,x_3>0$. Define $\displaystyle{S_0(T) \equiv \sum _{H=0}^{T} 2^{T-H} \, G(0,0,0;H)}$.
Then, in the large $N$ limit, $\displaystyle{ S_0([tN]) \le 2^{N (t + \alpha \log _2 \gamma(0,0,t)) + o(N)}}$ and $G \big(\frac 12,x _2,x_3;[tN]\big) = \displaystyle{2^{N(t + \alpha \log _2 \gamma(x_2,x_3,t)) + o(N)}}$.]{} Due to space limitations, we give here only some elements of the proof. The first step in the proof is inspired by Knuth’s kernel method [@knu]: when $x_1=\frac 12$, $f_1=\frac 12$ and recurrence relation (\[eqev\]) simplifies and is easier to handle. Iterating this equation then allows us to relate the value of $G$ at height $T$ and coordinates $(\frac 12, x_2,x_3)$ to the (known) value of $G$ at height 0 and coordinates $(\frac 12,y_2,y_3)$ which are functions of $x_2,x_3, T,N$, and $\alpha$. The function $\gamma$ is the value of $y_3$ when $T,N$ are sent to infinity at fixed ratio $t$. The asymptotic statement about $S_0(T)$ comes from the previous result and the fact that the dominant terms in the sum defining $S_0$ are the ones with $H$ close to $T$. [**Proposition 2**]{}. [Let $L_C(N,T,\alpha)$ be the expected number of contradiction leaves of height $T$ in the \#DPLL-UC resolution tree of random 3-SAT instances with $N$ variables and $\alpha N$ clauses, and $\epsilon >0$. Then, for $t\in [\epsilon; 1-\epsilon]$ and $\alpha >0$, $\displaystyle{ \Omega (t,\alpha,3) \le \frac 1N \log _2 L_C(N,[tN],\alpha) +o(1) \le \max _{h\in[\epsilon ;t]} \Omega (h,\alpha,3) }$ where $\Omega$ is defined in Theorem 1. ]{} Observe that a contradiction may appear with a positive (and non–exponentially small in $N$) probability as soon as two 1-clauses are present. These 1-clauses will be present as a result of 2-clause reduction when the residual instances include a large number ($\Theta (N)$) of 2-clauses. As this is the case for a finite fraction of residual instances, $G(1,1,1;T)$ is not exponentially larger than $L_C(T)$.
Use of the monotonicity of $G$ with respect to $x_1$ and Lemma 2 gives the announced lower bound (recognize that $\Omega (t,\alpha,3)=t + \alpha \log _2 \gamma (1,1,t)$). To derive the upper bound, remark that contradiction leaves cannot be more numerous than the number of branches created through splittings; hence $L_C(T)$ is bounded from above by the number of splittings at smaller heights $H$, that is, $\displaystyle{\sum _{H < T}} G(0,1,1;H)$. Once more, we use the monotonicity of $G$ with respect to $x_1$ and Lemma 2 to obtain the upper bound. The complete proof will be given in the full version. [*(Proof of Theorem 1)*]{} By definition, a solution leaf is a node in the search tree where no clauses are left; the average number $L_S$ of solution leaves is thus given by $\displaystyle{ L_S = \sum _{H=0}^N L(0,0,0;H) = \sum _{H=0}^N G(\vec 0;H) }$. A straightforward albeit useful upper bound on $L_S$ is obtained from $L_S \le S_0(N)$. By definition of the algorithm \#DPLL, $S_0 (N)$ is the average number of solutions of an instance with $\alpha N$ clauses over $N$ variables drawn from the random 3-SAT distribution, $S_0 (N) = 2^N \,(7/8) ^{\alpha N}$ [@fra0]. This upper bound is indeed tight (to within terms that are subexponential in $N$), as most solution leaves have heights equal or close to $N$. To show this, consider $\epsilon >0$, and write $$L_S \ge \sum _{H=N(1-\epsilon)}^N G(\vec 0;H) \ge 2^{-N \epsilon} \; \sum _{H=N(1-\epsilon)}^N 2^{N-H} G(\vec 0;H) = 2^{-N \epsilon} \; S_0(N) \; \big[ 1 - A\big]$$ with $A = 2^{N\epsilon} S_0(N(1-\epsilon))/S_0(N)$. From Lemma 2, $A\le (\kappa +o(1))^{\alpha N}$ with $\displaystyle{ \kappa = \frac{\gamma(0,0,1-\epsilon)}{7/8} = 1 - \frac 97 \, \epsilon^2 + \frac 27 \, \epsilon ^3 <1 }$ for small enough $\epsilon$ (but $\Theta(1)$ with respect to $N$). We conclude that $A$ is exponentially small in $N$, and $ -\epsilon + 1 - \alpha \log_2 \frac 87 + o(1) \le \frac 1N \log_2 L_S \le 1 - \alpha \log_2 \frac 87 $.
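The identity $S_0(N) = 2^N (7/8)^{\alpha N}$, and the small algebraic facts used above, can be checked mechanically. The script below is our own sketch: it implements the clause-vector recurrence of Proposition 1 through the transition weights $M_P$ (unit-propagation) and $M_{UC}$ (splitting) of the previous section, for small instance sizes, and recovers $S_0(N)$ exactly; it also confirms the polynomial identities $\gamma(1,1,t) = 1 - \frac{3}{8}t^2 + \frac{1}{4}t^3$ and $\kappa = 1 - \frac{9}{7}\epsilon^2 + \frac{2}{7}\epsilon^3$.

```python
import math
from collections import defaultdict

def binom(z, n, p):
    # binomial probability B_z^{n,p}; Python's 0.0**0 == 1.0 covers p in {0, 1}
    return math.comb(n, z) * p**z * (1.0 - p)**(n - z)

def S0(N, m):
    """sum_H 2^(N-H) L((0,0,0); H) for #DPLL-UC on random 3-SAT with
    N variables and m clauses, from the L(C, T) recurrence."""
    L = {(0, 0, m): 1.0}            # initial condition L(C; 0)
    total = 0.0
    for T in range(N + 1):
        total += 2 ** (N - T) * L.get((0, 0, 0), 0.0)   # solution leaves at height T
        if T == N:
            break
        mu = 1.0 / (N - T)
        nxt = defaultdict(float)
        for (C1, C2, C3), weight in L.items():
            if (C1, C2, C3) == (0, 0, 0):
                continue            # solution leaf: no descendants
            for z3 in range(C3 + 1):
                p3 = weight * binom(z3, C3, 3 * mu)
                for w2 in range(z3 + 1):
                    p32 = p3 * binom(w2, z3, 0.5)
                    for z2 in range(C2 + 1):
                        p2 = p32 * binom(z2, C2, 2 * mu)
                        for w1 in range(z2 + 1):
                            p21 = p2 * binom(w1, z2, 0.5)
                            if C1 >= 1:     # unit-propagation, weight M_P
                                for z1 in range(C1):
                                    p = p21 * binom(z1, C1 - 1, mu) * 0.5**z1
                                    nxt[(C1 - 1 - z1 + w1, C2 - z2 + w2, C3 - z3)] += p
                            else:           # UC splitting: two descendants, M_UC
                                nxt[(w1, C2 - z2 + w2, C3 - z3)] += 2.0 * p21
        L = nxt
    return total

# S0(N) equals the expected number of solutions, 2^N (7/8)^m, exactly
assert abs(S0(3, 1) - 2**3 * (7 / 8)) < 1e-9
assert abs(S0(4, 2) - 2**4 * (7 / 8) ** 2) < 1e-9

def gamma(x2, x3, t):
    return (1 - t)**3 * x3 + 1.5 * t * (1 - t)**2 * x2 + t * (12 - 3 * t - 2 * t * t) / 8

for t in (0.1, 0.35, 0.8):
    # gamma(1,1,t) reproduces the polynomial inside Omega(t, alpha, 3)
    assert abs(gamma(1, 1, t) - (1 - 0.375 * t**2 + 0.25 * t**3)) < 1e-12
for eps in (0.05, 0.1, 0.2):
    kappa = gamma(0, 0, 1 - eps) / (7 / 8)
    assert abs(kappa - (1 - 9 / 7 * eps**2 + 2 / 7 * eps**3)) < 1e-12
```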
Choosing arbitrarily small $\epsilon$ allows us to establish the statement about the asymptotic behaviour of $L_S$ in Theorem 1. Proposition 2, with arbitrarily small $\epsilon$, immediately leads to Theorem 1 for $k=3$, since the average number of contradiction leaves, $L_C$, equals the sum over all heights $T=tN$ (with $0\le t\le 1$) of $L_C(N,T,\alpha)$, and the sum is bounded from below by its largest term and, from above, by $N$ times this largest term. The statement on the number of leaves following Theorem 1 comes from the observation that the expected total number of leaves is $L_S+L_C$, and $\displaystyle{\omega _S (\alpha,3) = \Omega (1,\alpha,3) \le \max _{t \in[0;1]} \Omega (t,\alpha,3) = \omega _C(\alpha,3 )}$. [*(Proof of Corollary 1)*]{} Let $P_{sat}$ be the probability that a random 3-SAT instance with $N$ variables and $\alpha N$ clauses is satisfiable. Define $\#L_{sat}$ and $\#L_{unsat}$ (respectively, $L_{sat}$ and $L_{unsat}$) as the expected numbers of leaves in \#DPLL-UC (resp. DPLL-UC) search trees for satisfiable and unsatisfiable instances respectively. All these quantities depend on $\alpha$ and $N$. As the operations of \#DPLL and DPLL coincide for unsatisfiable instances, we have $\#L_{unsat} =L_{unsat}$. Conversely, $\#L_{sat} \ge L_{sat}$ since DPLL halts after having encountered the first solution leaf. Therefore, the difference between the average sizes $\#L$ and $L$ of \#DPLL-UC and DPLL-UC search trees satisfies $0 \le \#L - L = P_{sat} \; (\#L_{sat} - L_{sat}) \le P_{sat} \; \#L_{sat}$. Hence, $1 - P_{sat} \; \#L_{sat}/\#L \le L/\#L \le 1$. Using $\#L_{sat}\le 2^N$, $P_{sat} \le 2^N (7/8)^{\alpha N}$ from the first moment theorem and the asymptotic scaling for $\#L$ given in Theorem 1, we see that the left hand side of the previous inequality tends to 1 when $N\to \infty$ and $\alpha >\alpha _u$. Proofs for higher values of $k$ are identical, and will be given in the full version.
The GUC heuristic for random SAT and COL
========================================

The above analysis of the DPLL-UC search tree can be extended to the GUC heuristic [@Fra], where literals are preferentially chosen to satisfy 2-clauses (if any). The outlines of the proofs of Theorems 2 and 3 are given below; details will be found in the full version. [**3-SAT**]{}. The main difference with respect to the UC case is that the two branches issued from the split are not statistically identical. In fact, the literal $\ell$ chosen by GUC satisfies at least one clause, while this clause is reduced to a shorter clause when $\ell$ is set to False. The cases $C_2\ge 1$ and $C_2=0$ also have to be considered separately. With $f_1,f_2,f_3$ defined in the same way as in the UC case, we obtain $$\begin{aligned} \label{eqevg} G(x_1,x_2,x_3&;&T+1\,) = \frac 1{f_1}\; G \big( f_1,f_2,f_3; T\,\big) + \bigg( \frac{1+f_1}{f_2} - \frac 1{f_1} \bigg) \; G\big( 0, f_2,f_3; T\, \big) \nonumber \\ &+& \bigg( \frac{1+f_2}{f_3} - \frac {1+f_1}{f_2} \bigg) \; G\big( 0, 0,f_3; T\, \big) - \frac{1+f_2}{f_3} \; G(0,0,0;T) \ . \end{aligned}$$ The asymptotic analysis of $G$ follows the lines of Section 3. Choosing $f_2=f_1+f_1^2$ [*i.e.*]{} $x_1=(-1+\sqrt{1+4 x_2})/2+O(1/N)$ allows us to cancel the second term on the r.h.s. of (\[eqevg\]). Iterating relation (\[eqevg\]), we establish the counterpart of Lemma 2 for GUC: the value of $G$ at height $[tN]$ and argument $x_2,x_3$ is equal to its (known) value at height 0 and argument $y_2,y_3$ times the product of factors $\frac 1{f_1}$, up to an additive term, $A$, including iterates of the third and fourth terms on the right hand side of (\[eqevg\]). $y_2,y_3$ are the values at ‘time’ $\tau=0$ of the solutions of the ordinary differential equations (ODE) $dY_2/d\tau=- 2 m(Y_2)/(1-\tau)$, $dY_3 /d\tau = - 3 ((1+Y_2)/2-Y_3)/(1-\tau)$ with ‘initial’ condition $Y_2(t)=x_2$, $Y_3(t)=x_3$ (recall that function $m$ is defined in Theorem 2).
Eliminating ‘time’ between $Y_2,Y_3$ leads to the ODE in Theorem 2. The first term on the r.h.s. in the expression of $\omega ^g$ (\[refiuy\]) corresponds to the logarithm of the product of factors $\frac 1{f_1}$ between heights $0$ and $T$. The maximum over $y_2$ in expression (\[refiuy\]) for $\omega ^g$ is equivalent to the maximum over the reduced height $t$ appearing in $\omega _C$ in Theorem 1 (see also Proposition 2). Finally, choosing $\alpha > \alpha _u^g$ ensures that, on the one hand, the additive term $A$ mentioned above is asymptotically negligible and, on the other hand, the ratio of the expected sizes of \#DPLL-GUC and DPLL-GUC is asymptotically equal to unity (see proof of Corollary 1). [**3-COL**]{}. The uniformity expressed by Lemma 1 holds: the subgraph resulting from the coloring of $T$ vertices is still Erdős-Rényi-like with edge probability $\frac cN$, conditioned on the numbers $C_j$ of vertices with $j$ available colors [@mol]. The generating function $G$ of the average number of residual assignments equals $(x_3)^N$ at height $T=0$ and obeys the recurrence relation, for $T<N$, $$\begin{aligned} \label{eqevgc} G(x_1,x_2,x_3;T+1\,) &=& \frac 1{f_1}\; G \big( f_1,f_2,f_3; T\,\big) + \bigg( \frac{2}{f_2} - \frac 1{f_1} \bigg) \; G\big( 0, f_2,f_3; T\, \big) \nonumber \\ &+& \bigg( \frac{3}{f_3} - \frac {2}{f_2} \bigg) \; G\big( 0, 0,f_3; T\, \big) \end{aligned}$$ with $f_1=(1-\mu)x_1$, $f_2=(1-2\mu)x_2+2\mu x_1$, $f_3=(1-3\mu)x_3+3\mu x_2$, and $\mu =c/(3N)$. Choosing $f_1=\frac 12 f_2$ [*i.e.*]{} $x_1=\frac 12 x_2+O(1/N)$ allows us to cancel the second term on the r.h.s. of (\[eqevgc\]). Iterating relation (\[eqevgc\]), we establish the counterpart of Lemma 2 for 3-COL: the value of $G$ at height $[tN]$ and argument $x_2,x_3$ is equal to its (known) value at height 0 and argument $y_2,y_3$ respectively, times the product of factors $\frac 1{f_1}$, up to an additive term, $A$, including iterates of the last term in (\[eqevgc\]).
An explicit calculation leads to $G(\frac 12 x_2,x_2,x_3;[tN])=e^{N \gamma^h(x_2,x_3,t)+o(N)} +A$ for $x_2,x_3>0$, where $\gamma^h(x_2,x_3,t) = \frac c6 t^2 - \frac c3 t +(1-t)\ln (x_2/2)+ \ln[3 + e^{-2ct/3}(2 x_2/x_3-3)]$. As in Proposition 2, we bound from below (respectively, above) the number of contradiction leaves in the \#DPLL-GUC tree by the exponential of ($N$ times) the value of the function $\gamma ^h$ at $x_2=x_3=1$ at reduced height $t$ (respectively, lower than $t$). The maximum over $t$ in Theorem 3 is equivalent to the maximum over the reduced height $t$ appearing in $\omega _C$ in Theorem 1 (see also Proposition 2). Finally, we choose $c _u^h$ to make the additive term $A$ negligible. Following the notations of Corollary 1, we use $L_{sat}\le 3^N$, and $P_{sat}\le 3^N e^{-Nc/6 + o(N)}$, the expected number of 3-colorings for random graphs from $G(N,c/N)$.

Conclusion and perspectives
===========================

We emphasize that the average \#DPLL tree size can be calculated for even more complex heuristics, [*e.g.*]{} making decisions based on literal degrees [@Kap]. This task requires, in practice, that one is able: first, to find the correct conditioning ensuring uniformity along a branch (as in the study of DPLL in the absence of backtracking); secondly, to determine the asymptotic behaviour of the associated generating function $G$ from the recurrence relation for $G$. To some extent, the present work is an analytical implementation of an idea put forward by Knuth thirty years ago [@Knu; @Coc2]. Knuth indeed proposed to estimate the average computational effort required by a backtracking procedure through successive runs of the non–backtracking counterpart, each weighted in an appropriate way [@Knu]. This weight is, in the language of Section II.B, simply the probability of a branch (given the heuristic under consideration) in the \#DPLL search tree times $2^S$ where $S$ is the number of splits [@Coc2].
Since the amount of backtracking seems to have a heavy tail [@Gent; @Jia], the expectation is often not a good predictor in practice. Knowledge of the second moment of the search tree size would be very precious; its calculation, currently under way, requires us to treat the correlations between nodes attached to distinct branches. Calculating the second moment is a step towards the distant goal of finding the expectation of the logarithm, which probably requires a deep understanding of correlations as in the replica theory of statistical mechanics. Last of all, \#DPLL is a complete procedure for enumeration. Understanding its average-case operation will, hopefully, provide us with valuable information not only on the algorithm itself but also on random decision problems, [*e.g.*]{} new bounds on the sat/unsat or col/uncol thresholds, or insights on the statistical properties of solutions.

[**Acknowledgments:**]{} The present analysis is the outcome of a work started four years ago with S. Cocco, to whom I am deeply indebted [@Coc; @Coc2]. I am grateful to C. Moore for numerous and illuminating discussions, as well as for a critical reading of the manuscript. I thank J. Franco for his interest and support, and the referee for pointing out Ref. [@ben], the results of which agree with the $\alpha ^{-1/(k-2)}$ asymptotic scaling of $\omega$ found here.

Achlioptas, D. and Molloy, M. Analysis of a List-Coloring Algorithm on a Random Graph, in [*Proc. Foundations of Computer Science (FOCS)*]{}, vol 97, 204 (1997).

Achlioptas, D. Lower bounds for random 3-SAT via differential equations, [*Theor. Comp. Sci.*]{} [**265**]{}, 159–185 (2001).

Achlioptas, D., Beame, P. and Molloy, M. A Sharp Threshold in Proof Complexity. In [*Proceedings of STOC 01*]{}, p. 337–346 (2001).

Beame, P., Karp, R., Pitassi, T. and Saks, M. On the complexity of unsatisfiability of random $k$-CNF formulas. In [*Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing (STOC98)*]{}, p. 561–571, Dallas, TX (1998).

Beame, P., Culberson, J., Mitchell, D. and Moore, C. The resolution complexity of random graph k-colorability. To appear in [*Discrete Applied Mathematics*]{} (2003).

Ben-Sasson, E. and Galesi, N. Space Complexity of Random Formulae in Resolution, [*Random Struct. Algorithms*]{} [**23**]{}, 92 (2003).

Chao, M.T. and Franco, J. Probabilistic analysis of two heuristics for the 3-satisfiability problem, [*SIAM Journal on Computing*]{} [**15**]{}, 1106–1118 (1986).

Chao, M.T. and Franco, J. Probabilistic analysis of a generalization of the unit clause literal selection heuristics for the k-satisfiability problem, [*Information Science*]{} [**51**]{}, 289–314 (1990).

Chvátal, V. and Szemerédi, E. Many hard examples for resolution, [*J. Assoc. Comput. Mach.*]{} [**35**]{}, 759–768 (1988).

Cocco, S. and Monasson, R. Analysis of the computational complexity of solving random satisfiability problems using branch and bound search algorithms, [*Eur. Phys. J. B*]{} [**22**]{}, 505 (2001).

Cocco, S. and Monasson, R. Heuristic average-case analysis of the backtrack resolution of random 3-Satisfiability instances. [*Theor. Comp. Sci.*]{} [**320**]{}, 345 (2004).

Franco, J. and Paull, M. Probabilistic analysis of the Davis-Putnam procedure for solving the satisfiability problem. [*Discrete Appl. Math.*]{} [**5**]{}, 77–87 (1983).

Ein-Dor, L. and Monasson, R. The dynamics of proving uncolourability of large random graphs: I. Symmetric colouring heuristic. [*J. Phys. A*]{} [**36**]{}, 11055–11067 (2003).

Franco, J. Results related to thresholds phenomena research in satisfiability: lower bounds. [*Theor. Comp. Sci.*]{} [**265**]{}, 147–157 (2001).

Frieze, A. and Suen, S. Analysis of two simple heuristics on a random instance of k-SAT. [*J. Algorithms*]{} [**20**]{}, 312–335 (1996).

Gent, I. and Walsh, T. Easy problems are sometimes hard. [*Artificial Intelligence*]{} [**70**]{}, 335 (1993).

Jia, H. and Moore, C. How much backtracking does it take to color random graphs? Rigorous results on heavy tails. In [*Proc. 10th International Conference on Principles and Practice of Constraint Programming*]{} (CP ’04) (2004).

Kaporis, A.C., Kirousis, L.M. and Lalas, E.G. The Probabilistic Analysis of a Greedy Satisfiability Algorithm. [*ESA*]{}, p. 574–585 (2002).

Karp, R.M. The probabilistic analysis of some combinatorial search algorithms, in J.F. Traub, ed. Algorithms and Complexity, Academic Press, New York (1976).

Knuth, D.E. The art of computer programming, vol. 1: Fundamental algorithms, Section 2.2.1, Addison-Wesley, New York (1968).

Knuth, D.E. Estimating the efficiency of backtrack programs. [*Math. Comp.*]{} [**29**]{}, 136 (1975).

Méjean, H-M., Morel, H. and Reynaud, G. A variational method for analysing unit clause search. [*SIAM J. Comput.*]{} [**24**]{}, 621 (1995).

Mitchell, D., Selman, B. and Levesque, H. Hard and Easy Distributions of SAT Problems, [*Proc. of the Tenth Natl. Conf. on Artificial Intelligence (AAAI-92)*]{}, 440–446, The AAAI Press / MIT Press, Cambridge, MA (1992).

Purdom, P.W. A survey of average time analyses of satisfiability algorithms. [*J. Inform. Process.*]{} [**13**]{}, 449 (1990).
{ "pile_set_name": "ArXiv" }
--- abstract: 'Entangled resources enable quantum sensing that achieves Heisenberg scaling, a quadratic improvement on the standard quantum limit, but preparing large scale entangled states is challenging in the presence of decoherence. We present a quantum control strategy using highly nonlinear geometric phase gates for preparing entangled states on spin ensembles which can be used for practical precision metrology. The method uses a dispersive coupling of $N$ spins to a common bosonic mode and does not require addressability, special detunings, or interactions between the spins. Using a control sequence that executes Grover’s algorithm on a subspace of permutationally symmetric states, a target entangled resource state can be prepared using $O(N^{5/4})$ geometric phase gates. The geometrically closed path of the control operations ensures the gates are insensitive to the initial state of the mode and the sequence has built-in dynamical decoupling providing resilience to dephasing errors.' author: - 'Mattias T. Johnsson, Nabomita Roy Mukty, Daniel Burgarth, Thomas Volz' - 'Gavin K. Brennen' title: A geometric pathway to scalable quantum sensing --- Introduction ============ Quantum enhanced sensing offers the possibility of using entanglement in an essential way to measure fields with a precision superior to that which can be obtained with unentangled resources [@wasilewski2010; @taylor2008; @wang2019; @giovannetti2011]. Entangled resources allow the measurement sensitivity to scale as $1/N$ with respect to the resources applied (so-called Heisenberg scaling), compared to the $1/\sqrt{N}$ obtained otherwise (the standard quantum limit, or shot-noise limit) [@giovannetti2004; @pirandola2018; @giovannetti2011]. Creating large-scale entanglement in multipartite systems for the purposes of metrology is a difficult problem for a number of reasons. 
There is the difficulty in precisely constructing the required quantum state using realistic quantum operations, the need to protect that quantum state from decoherence and loss throughout the measurement process [@Demkowicz-Dobrzanski:2012xq], and the problem of carrying out an (often large) number of quantum operations on the state with precise control. From a metrology perspective, there is also the issue that many schemes claim to achieve the Heisenberg limit by virtue of quadratic scaling of the Fisher information of the system [@zwierz2010]. While this ensures that there is an observable which has a standard deviation uncertainty that scales as $1/N$ with respect to some resource, it does not specify what that observable is. And even if such an observable is found, it need not be a convenient experimentally measurable quantity. ![Illustration of the state preparation protocol. By attaching a bosonic mode dispersively coupled to a system of $N$ spins, geometric phase gates with built-in dynamical decoupling pulses drive a system to an entangled state ready for use in quantum sensing. []{data-label="fig:cartoon"}](Figure1.pdf) There have been attempts to address these issues in various ways. For example, to mitigate the decoherence issue, recent work has suggested using quantum error correction assisted metrology (see [@Zhou:2018am] and references therein) or phase protected metrology [@Bartlett_2017]. Such workarounds require the ability to perform complex quantum control in the former case or engineered interactions in the latter. In this paper we present a state preparation scheme and measurement protocol using geometric phase gates that address these issues (shown schematically in Fig. \[fig:cartoon\]). The method we use is relatively simple to engineer as it involves only the coupling of an ensemble of qubits to a common bosonic mode, e.g. a cavity or mechanical oscillation, as well as simple global control pulses on the spins and mode.
Unlike previous work, our scheme does not require special engineering of the physical layout of the spins, nor does it require special detunings for adiabatic state preparation, addressability, or direct interaction between the spins. Furthermore, it exceeds the performance of spin squeezing protocols because of the highly nonlinear nature of the geometric phase gates used in our scheme. Another advantage is that due to the geometric nature of the gate, it is completely insensitive to variations or uncertainties in the rate at which the perimeter is traversed. The observable we require is $\hat{J}_z^2$, corresponding to the square of the $z$ component of the collective angular momentum of the spins, and as such is experimentally accessible rather than being an exotic operator that is challenging to arrange in the laboratory. The preparation time scales as $O(N^{5/4})$, which is close to the best-known scaling of $O(N)$ for a (much harder to achieve) fully addressable quantum circuit-based state preparation scheme [@2019arXiv190407358B], and ignoring noise our fidelity actually improves with more spins. Furthermore, our protocol has dynamical decoupling built in, which provides resilience against dephasing during the state preparation, which is the dominant source of noise in many physical implementations. While dynamical decoupling has been considered [@Xu2004] in the context of the M[ø]{}lmer-S[ø]{}rensen geometric gate [@MS1999], our scheme extends this to a highly nonlinear geometric phase gate and a full quantum state preparation algorithm. For plausible assumptions on the form of the system-bath spectral density, we obtain a suppression of the dephasing rate of two orders of magnitude. Our scheme is presented quite generally as qubits coupled to a bosonic mode, and as such is adaptable to a variety of architectures at the forefront of quantum control including NV centres in diamond, trapped ion arrays, Rydberg atoms, and superconducting qubits.
Results ======= We begin by considering a collection of two-level spin half systems, and define the collective raising and lowering angular momentum operators as $J^+=\sum_{j=1}^N \sigma^+_j, J^-=(J^+)^{\dagger}$, and the components of the total angular momentum vector are $J^x=(J^++J^-)/2, J^y=(J^+-J^-)/2i, J^z=\sum_j ({\lvert 0 \rangle}_j{\langle 0 \rvert}-{\lvert 1 \rangle}_j{\langle 1 \rvert})/2$. Dicke states are simultaneous eigenstates of angular momentum $J$ and $J^z$: ${\lvert J=N/2,J^z=M \rangle}$, $M=-J,\ldots, J$. Transition rates between adjacent states in the Dicke ladder are: $$\Gamma_{M\rightarrow M\pm 1}=\Gamma {\langle J,M \rvert}J^{\mp}J^{\pm}{\lvert J,M \rangle}=\Gamma (J\mp M)(J\pm M+1),$$ where $\Gamma$ is the single spin decay rate. At the middle of the Dicke ladder (near $M=0$), these rates are $O(N)$ times faster than for $N$ independent spins and the Dicke state ${\lvert J,0 \rangle}$ is referred to as superradiant when emitting or, in the reciprocal process, as superabsorptive. By suitable reservoir engineering, superabsorption can be exploited for photon detection and energy harvesting [@Higgins:2014qq]. More generally, Dicke states can be used for metrology. Consider the measurement of a field which generates a collective rotation of an ensemble of spins described by a unitary evolution $U(\eta)=e^{-i \eta J^y}$. Given a measurement operator $O$ on the system, the single shot estimation of the parameter $\eta$ has variance $$(\Delta \eta)^2=\frac{(\Delta O(\eta))^2}{|\partial_{\eta}\langle O(\eta)\rangle|^2}.
\label{eqDeltaBetaDefinition}$$ It has been shown [@Apellaniz_2015] that when the measured observable is $O={J^z}^2$, the parameter variance is $$\begin{aligned} (\Delta \eta)^2&=&\big((\Delta {J^x}^2)^2 f(\eta)+4\langle {J^x}^2\rangle-3\langle {J^y}^2\rangle-2\langle {J^z}^2\rangle \\ &&\times(1+\langle {J^x}^2\rangle)+6\langle J^z{J^x}^2J^z\rangle\big)(4(\langle {J^x}^2\rangle-\langle {J^z}^2\rangle)^2)^{-1}\end{aligned}$$ with $f(\eta)=\frac{(\Delta {J^z}^2)^2}{(\Delta {J^x}^2)^2\tan^2(\eta)}+\tan^2(\eta)$. When the initial state is the Dicke state ${\lvert J,0 \rangle}$, the uncertainty in the measured angle is minimized at $\eta_{\rm min}=0$ such that the quantum Cramér-Rao bound is saturated: $$(\Delta \eta)^2=\frac{2}{N(N+2)}. \label{QCRB}$$ In experimental implementations with access to the linear collective observable $J^z$, the quadratic operator expectation value can be estimated as a classical average over $p$ experiments: $E[\langle{J^z}^2\rangle]=\sum_{k=1}^p \frac{M(k)^2}{p}$, where $M(k)$ is the outcome of the $k$th measurement of $J^z$ [@Lucke:2011xd]. Note that it is not essential that we know the exact direction of the field, e.g. that it is aligned along the $y$ axis. The scheme is capable of detecting a field that is only known to lie perpendicular to a defined quantization axis $\hat{z}$.
To see this, suppose a field is aligned with an angle $\delta$ in the $\hat{x}-\hat{y}$ plane, meaning the unitary is given by $$\begin{aligned} U(\eta) &=& \exp \left[ i\eta(J^x \sin \delta + J^y\cos\delta) \right] \\ &=& \exp (i\delta J^z) \exp(i\eta J^y) \exp(-i\delta J^z).\end{aligned}$$ We now measure the variance of our observable ${J^z}^2$ on our initial state $|J, M=0\rangle$ as before, with $$(\Delta {J^z}^2)^2 = \langle {J^z}^4 \rangle - \langle {J^z}^2 \rangle^2,$$ where for any power $s$ $$\begin{aligned} \langle {J^z}^s \rangle &=& \langle J, J^z=0 | U^{\dagger} (\eta) \, {J^z}^s \, U(\eta) | J, J^z=0 \rangle \nonumber \\ &=& \langle J, J^z=0 | e^{-i \eta J^y} {J^z}^s e^{i \eta J^y} | J, J^z=0 \rangle.\end{aligned}$$ Our measured observable is $O={J^z}^2$, and the associated precision is given by Eq. (\[eqDeltaBetaDefinition\]), independent of $\delta$. The best known quantum algorithm for deterministically preparing a Dicke state ${\lvert J,M \rangle}$ requires $O((N/2+M)N)$ gates and has a circuit depth $O(N)$ [@2019arXiv190407358B]. This complexity applies even for a linear nearest neighbour quantum computer architecture, but that algorithm requires a universal gate set and full addressability. There are also non-circuit based strategies. The proposal in Ref. [@PhysRevA.95.013845] suggests a way to generate Dicke states in the ultra-strong coupling regime of circuit QED systems that does not require addressability by using selective resonant interactions at different couplings in order to transfer excitations one by one to the spin ensemble. However, it becomes difficult to scale up while satisfying the large detuning constraint required. Another strategy is to use interactions between the spins for state preparation. In the proposal of Ref. [@Higgins:2014qq], a chain of dipole-dipole interacting spins is engineered in a ring geometry that provides a nonlinear first order energy shift in the Dicke ladder.
This spectral distinguishability allows for Dicke state preparation using chirped excitation pulses and/or measurement and feedback control. However, the dipole-dipole interaction does not conserve total angular momentum so transitions outside the Dicke space occur, and resolving transitions for a large number of spins is challenging. In contrast, our geometric phase gate (GPG) based approach for preparing Dicke states has depth $O(N^{5/4})$ and requires no direct coupling between spins, no addressability, and uses only global rotations and semi-classical control on an external bosonic mode with no special field detunings required. In our setup (see Fig. \[fig:cavity\_and\_gate\]) we assume $N$ spins with homogeneous energy splittings described by a free Hamiltonian $H_0=\omega_0 J^z$ (setting $\hbar\equiv 1$), which can be controlled by semi-classical fields performing global rotations generated by $J^x,J^y$. Additionally, we assume the ensemble is coupled to a single quantized bosonic mode, with creation and annihilation operators satisfying the equal time commutator $[a,a^{\dagger}]=1$. Our scheme requires a dispersive coupling between the $N$ spins and the bosonic mode of the form $$V=g a^{\dagger}a J^z.$$ We assume $g>0$ but the case $g<0$ follows easily as described below. By complementing this interaction with field displacement operators on a quantized bosonic mode it is possible to generate a GPG which can produce many body entanglement between the spins while in the end being disentangled from the mode. The GPG makes use of two basic operators [@Jiang:2008jo], the displacement operator $D(\alpha)=e^{\alpha a^{\dagger}-\alpha^{\ast}a}$ and the rotation operator $R(\theta)=e^{i\theta a^{\dagger}a}$ which satisfy the relations: $D(\beta)D(\alpha)=e^{i\Im (\beta \alpha^{\ast})}D(\alpha+\beta)$, and $R(\theta)D(\alpha)R(-\theta)=D(\alpha e^{i\theta})$.
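These two operator identities are easy to verify numerically on a truncated Fock space; the minimal sketch below (helper names `disp` and `rot` are ours, assuming NumPy) applies both sides to the vacuum state:

```python
import numpy as np

def disp(alpha, dim):
    # truncated displacement operator D(alpha) = exp(alpha a^dag - conj(alpha) a)
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)           # annihilation operator
    H = 1j * (alpha * a.conj().T - np.conj(alpha) * a)   # Hermitian, so D = e^{-iH}
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def rot(theta, dim):
    # phase-space rotation R(theta) = exp(i theta a^dag a)
    return np.diag(np.exp(1j * theta * np.arange(dim)))

dim = 60
alpha, beta, theta = 0.6 + 0.2j, -0.3 + 0.5j, 0.9
vac = np.zeros(dim); vac[0] = 1.0

# D(beta) D(alpha) = exp(i Im(beta alpha*)) D(alpha + beta)
err1 = np.abs(disp(beta, dim) @ disp(alpha, dim) @ vac
              - np.exp(1j * np.imag(beta * np.conj(alpha)))
              * (disp(alpha + beta, dim) @ vac)).max()

# R(theta) D(alpha) R(-theta) = D(alpha e^{i theta})
err2 = np.abs(rot(theta, dim) @ disp(alpha, dim) @ rot(-theta, dim) @ vac
              - disp(alpha * np.exp(1j * theta), dim) @ vac).max()

print(err1, err2)   # both vanish up to Fock-space truncation error
```

For coherent amplitudes well below $\sqrt{\texttt{dim}}$ the truncation error is negligible, so both residuals are at machine-precision level.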
Furthermore, we have the relations for an operator $A$ acting on a system other than the mode, $D(\alpha e^{i\theta A})=R(\theta A)D(\alpha)R(-\theta A)$, and $R(\theta A)=e^{i\theta A\otimes a^{\dagger}a}$. For our purposes the rotation operator will be generated by the dispersive coupling over a time $t$: $R(-\theta J^z)=e^{-i Vt}$ for $\theta=g t$. Putting these primitives together, one can realize an evolution which performs a closed loop in the mode phase space: $$\begin{aligned} U_{GPG}(\theta,\phi,\chi)&=&D(-\beta)R(\theta J^z)D(-\alpha)R(-\theta J^z) \nonumber \\ &&\times \,\, D(\beta)R(\theta J^z)D(\alpha)R(-\theta J^z) \nonumber \\ &=&e^{-i 2 \chi\sin(\theta J^z+\phi)} \label{seq}\end{aligned}$$ where $\phi=\arg(\alpha)-\arg(\beta)$ and $\chi=|\alpha\beta|$, shown schematically in Fig. \[fig:cavity\_and\_gate\](b). The geometric paths taken for different Dicke states are illustrated in Fig. \[fig:GPG\_schematic\_example\]. It is interesting to note that the controllable parameters enter the effective evolution in a highly nonlinear way. While this makes the analysis less straightforward than e.g. the M[ø]{}lmer-S[ø]{}rensen gate, which is quadratic in collective spin operators, nonetheless we can solve the control problem analytically. ![(a) Ensemble of spin qubits that are to be used for field sensing. In preparing the Dicke state, the spins interact dispersively at a rate $g$ with a single mode, which itself decays at a rate $\kappa$. Here a cavity is depicted but it could be any quantized bosonic mode, e.g. a motional harmonic oscillator. (b) Steps involved in the geometric phase gate (GPG). (c) Phase space of the bosonic mode showing all the GPGs, which can be applied in any order, used to build the unitary $U_s$ for $70$ spins. The dispersive interaction angles $\theta$ are indicated by the shading of the parallelograms.
For $U_w$ all the GPGs are equal sized squares in phase space.[]{data-label="fig:cavity_and_gate"}](Figure2.pdf){width="8.6cm"} ![Example of the geometric phase gate (steps 1 to 7) acting on $N$ (even) spins. Through simple rotations of the spins, as well as displacements of and dispersive coupling to a continuous mode, arbitrary Dicke states on the spins can be created. The mode may start in an arbitrary mixed state, but for the sake of example we consider the vacuum. The movement of the mode depends on the spin state and entanglement between the mode and the spins is generated. Steps $1,3,5,7$ are mode displacements each by a distance $|\alpha|$, and steps $2,4,6$ are dispersive interactions which enact rotations in phase space by an angle $M\theta$. In this example $\theta=\pi$. Before and after steps $2$ and $6$, the spins are inverted, by which our protocol gains a natural decoupling from noise. At the end, the mode returns to its original state and disentangles from the spins, but the spin states have picked up a relative phase equal to twice the (oriented) area traversed in phase space. Here states ${\lvert J,M \rangle}$ with $M$ odd (even) acquire a phase $2|\alpha|^2\cos(M\theta)=\pm 2|\alpha|^2$, which corresponds to a many body entangling gate $U_{GPG}(\pi,\pi/2,|\alpha|^2)=e^{-i2|\alpha|^2\prod \sigma^z_j}$. []{data-label="fig:GPG_schematic_example"}](Figure3.pdf){width="\columnwidth"} The system and the mode are decoupled at the end of the GPG cycle. Also, if the mode begins in the vacuum state, it ends in the vacuum state and the first operation $R(-\theta J^z)$ in Eq. (\[seq\]) is not needed. However, as explained below, it can be useful to include the first step as free evolution, in order to negate the total free evolution and to suppress dephasing errors. In the GPG it is necessary to evolve by both $R(\theta J^z)$ and $R(-\theta J^z)$.
This can be done by conjugating with a global flip of the spins $R(\theta J^z)=e^{-i\pi J^x}R(-\theta J^z)e^{i\pi J^x}$, implying that the GPG can be generated regardless of the sign of the dispersive coupling strength $g$. Furthermore, because $R(\pm\theta J^z)$ commutes with $H_0$ at all steps, this conjugation will cancel the free evolution accumulated during the GPG. If the displacement operators are fast compared to $1/\omega_0,1/g$ then the total time for the GPG is $t_{GPG}=4\theta/g$. We assume the number of spins $N$ is even, although the protocol can easily be adapted to prepare Dicke states for odd $N$ as described below. Consider $N/2$ sequential applications of the GPG: $$\begin{aligned} W(\ell)&=&\prod_{k=1}^{N/2}U_{GPG}(\theta_k,\phi_k(\ell),\chi) \nonumber \\ &=&\sum_{M=-J}^J e^{-i2\chi\sum_{k=1}^{N/2} \sin(\theta_k M+\phi_k(\ell))}{\lvert J,M \rangle}{\langle J,M \rvert},\end{aligned}$$ with $\ell=0,\ldots, N$. Choosing the parameters $$\theta_k=\frac{2\pi k}{N+1},\quad \phi_k(\ell)=\frac{2\pi k (N/2-\ell)}{N+1}+\frac{\pi}{2},\quad \chi=\frac{\pi}{N+1}, \label{angles}$$ and using the identity $$\frac{2}{N+1}\sum_{k=1}^{N/2} \cos(\frac{2\pi k (M+N/2-\ell) }{N+1})=\delta_{\ell,M+N/2}-\frac{1}{N+1}, \label{sumcos}$$ the unitary, up to a global phase, is $$W(\ell)=e^{-i\pi {\lvert J,\ell-N/2 \rangle}{\langle J,\ell-N/2 \rvert}}$$ meaning it applies a $\pi$ phase shift on the symmetric state with $\ell$ excitations. For $N$ odd we have the sum $$\frac{1}{N+1}\sum_{k=1}^{N} \cos(\frac{2\pi k (M+N/2-\ell) }{N+1})=\delta_{\ell,M+N/2}-\frac{1}{N+1},$$ so we can use $N$ GPGs with the same angles $\theta_k,\phi_k(\ell)$ as above but with $\chi=\frac{\pi}{2(N+1)}$.
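Both the closed-loop phase of Eq. (\[seq\]) and the selective $\pi$ phase produced by the angles of Eq. (\[angles\]) can be checked numerically; the sketch below (helper names ours, assuming NumPy) uses a truncated Fock space for the former and works directly on the diagonal phases for the latter:

```python
import numpy as np

def disp(alpha, dim):
    # truncated displacement operator D(alpha) = exp(alpha a^dag - conj(alpha) a)
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)
    H = 1j * (alpha * a.conj().T - np.conj(alpha) * a)   # Hermitian, D = e^{-iH}
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def rot(t, dim):
    # R(t) = exp(i t a^dag a)
    return np.diag(np.exp(1j * t * np.arange(dim)))

# closed loop of Eq. (seq), restricted to the J^z eigenspace with eigenvalue M
dim, alpha, beta, theta, M = 60, 0.7, 0.4j, 1.3, 2
U = (disp(-beta, dim) @ rot(theta * M, dim) @ disp(-alpha, dim) @ rot(-theta * M, dim)
     @ disp(beta, dim) @ rot(theta * M, dim) @ disp(alpha, dim) @ rot(-theta * M, dim))
phi, chi = np.angle(alpha) - np.angle(beta), abs(alpha * beta)
vac = np.zeros(dim); vac[0] = 1.0
loop_err = np.abs(U @ vac - np.exp(-2j * chi * np.sin(theta * M + phi)) * vac).max()

# N/2 GPGs with the angles of Eq. (angles): a pi phase on |J, ell - N/2> alone
def W_phases(N, ell):
    k = np.arange(1, N // 2 + 1)
    th = 2 * np.pi * k / (N + 1)
    ph = 2 * np.pi * k * (N // 2 - ell) / (N + 1) + np.pi / 2
    Ms = np.arange(-(N // 2), N // 2 + 1)
    return np.exp(-2j * (np.pi / (N + 1)) * np.sin(np.outer(Ms, th) + ph).sum(axis=1))

N, ell = 8, 3
p = W_phases(N, ell)
p = p / p[(ell + 1) % (N + 1)]   # divide out the global phase
# p should now be +1 on every Dicke state except -1 at index ell (M = ell - N/2)
```

The first check confirms that the mode disentangles after the loop, and the second reproduces the selective sign flip of $W(\ell)$ for every choice of $\ell$.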
Now define an initial state which is easily prepared by starting with all spins down and performing a collective $J^y$ rotation ${\lvert s \rangle}=e^{iJ^y\pi/2}{\lvert J,-J \rangle}$ and the target Dicke state ${\lvert w \rangle}={\lvert J,0 \rangle}.$ We will make use of the operators $U_w=e^{-i\pi {\lvert w \rangle}{\langle w \rvert}}=W(N/2)$ and $U_s=e^{-i\pi {\lvert s \rangle}{\langle s \rvert}}=e^{iJ^y\pi/2}W(0) e^{-iJ^y\pi/2}.$ In total the operators $U_w$ and $U_s$ each use $N/2$ GPGs. The orbit of the initial state ${\lvert s \rangle}$ under the operators $U_w$ and $U_s$, is restricted to a subspace spanned by the orthonormal states ${\lvert w \rangle}$ and ${\lvert s' \rangle}=\frac{{\lvert s \rangle}-{\lvert w \rangle}{\langle w \rvert}s\rangle}{\sqrt{1-|{\langle w \rvert}s\rangle|^2}}.$ Specifically, $U_w$ is a reflection across ${\lvert s' \rangle}$ and $U_s$ is a reflection through ${\lvert s \rangle}$ in this subspace exactly as in Grover’s algorithm. The composite pulse is one Grover step $U_G=U_sU_w$. Geometrically, relative to the state ${\lvert s' \rangle}$, the initial state ${\lvert s \rangle}$ is rotated by an angle $\delta/2$ toward ${\lvert w \rangle}$, where $\delta=2 \sin^{-1}(|{\langle w \rvert}s\rangle|),$ and after each Grover step is rotated a further angle $\delta$ toward the target. The optimal number of Grover iterations to reach the target is $\#G=\Big\lfloor{\frac{\pi}{4|{\langle w \rvert}s\rangle|}}\Big\rfloor$ where the relevant overlap is $$\begin{aligned} {\langle w \rvert}s\rangle &=& {\langle J,J^z=0 \rvert}e^{iJ^y\pi/2}{\lvert J,J^z=-J \rangle} \nonumber \\ &=& d^{J}_{0,-J}(-\frac{\pi}{2})=2^{-J}\sqrt{(2J)!}/{J!},\end{aligned}$$ where $d^{J}_{M',M}(\theta)={\langle J,M' \rvert}e^{-iJ^y\theta}{\lvert J,M \rangle}$ are the Wigner (small) d-matrix elements. For $J\gg 1$, using $x!\approx x^x e^{-x}\sqrt{2\pi x}$, we have ${\langle w \rvert}s\rangle\approx (\pi J)^{-1/4}$. 
Then the optimal number of Grover steps is $$\#G=\lfloor \pi^{5/4}N^{1/4}/2^{9/4} \rfloor. \label{Tcount}$$ The fidelity overlap of the output state $\rho$ of the protocol with the target state is $F={\operatorname{Tr}( {\lvert w \rangle}{\langle w \rvert} \rho )}$. For the Grover method it is easily calculated as $$\begin{array}{lll} F&=&|{\langle w \rvert}U_G^{\#G}{\lvert s \rangle}|^2\\ &=&\sin^2((\#G+\frac{1}{2})\delta)\\ &>&1-\sqrt{2/(\pi N)}. \end{array}$$ While the fidelity error falls off at least as fast as $\sqrt{2/(\pi N)}$ for all $N\gg 1$, if $N$ is near a value where the argument in Eq. (\[Tcount\]) is a half integer, i.e. $N=\lceil 32 (2k+1)^4/\pi^5\rceil$ with $k\in \mathbb{Z}$, the error will be much lower. For example, at $N=(10,70,260,700,1552)$ the fidelity error is $(1.84\times 10^{-4},1.57\times 10^{-5},1.68\times 10^{-6},3.65\times 10^{-8},1.92\times 10^{-8})$. The performance of our scheme is shown in Figures \[fig:performance\] and \[fig:DeltaEtaAsFunctionOfN\]. Figure \[fig:performance\](a) shows the probability distribution of the spin populations before and after our scheme is applied in the case of $N=70$, while Figure \[fig:performance\](b) shows the fidelity obtainable as a function of the number of spins $N$. The achievable fidelity is clearly optimized for specific spin values. The effectiveness of our scheme when used for metrology is shown in Figure \[fig:DeltaEtaAsFunctionOfN\], which shows the actual precision $\Delta \eta$ obtainable as a function of $N$, compared to that obtained from both the standard quantum limit as well as the ultimate Cram[é]{}r-Rao bound. The resource cost to prepare the Dicke state by the Grover method is $c\times N^{5/4}$ GPGs, with a constant $c<1$, and each GPG has a dispersive interaction action angle of $\theta=g t=O(1)$, implying the total time for the state preparation is $O(N^{5/4}/g)$.
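The step count and fidelity error above are easy to reproduce; the following sketch (function name ours) evaluates the exact overlap $\langle w|s\rangle=\sqrt{\binom{2J}{J}}/2^J$ and recovers, e.g., the quoted errors $1.84\times 10^{-4}$ at $N=10$ (with $\#G=1$) and $1.57\times 10^{-5}$ at $N=70$ (with $\#G=2$):

```python
import math

def grover_stats(N):
    # overlap <w|s> = 2^{-J} sqrt((2J)!)/J! = sqrt(C(2J,J))/2^J, with J = N/2
    J = N // 2
    overlap = math.sqrt(math.comb(2 * J, J)) / 2.0 ** J
    delta = 2 * math.asin(overlap)              # rotation angle per Grover step
    nG = math.floor(math.pi / (4 * overlap))    # optimal number of Grover steps
    fid = math.sin((nG + 0.5) * delta) ** 2     # final fidelity with the target
    return nG, 1.0 - fid

for N in (10, 70, 260, 700):
    nG, err = grover_stats(N)
    assert err < math.sqrt(2 / (math.pi * N))   # the bound quoted above
    print(N, nG, err)
```

Evaluating larger $N$ this way eventually overflows double precision in the binomial coefficient, so the asymptotic form $\langle w|s\rangle\approx(\pi J)^{-1/4}$ is the better choice there.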
So far we have focused on preparing the state ${\lvert J,0 \rangle}$, but with simple modifications our protocol works for preparing any Dicke state ${\lvert J,M \rangle}$. First use the initial state ${\lvert s \rangle}=e^{i\epsilon_M J^y}{\lvert J,-J \rangle}$, and second substitute the operators $U_w=W(M+N/2)$ and $U_s=e^{i\epsilon_M J^y}W(0)e^{-i\epsilon_M J^y}$ where $\epsilon_M=\cos^{-1}(M/J)$. Now the relevant overlap is $|{\langle w \rvert}s\rangle|=|d^{J}_{M,-J}(-\epsilon_M)|$, and for $J-|M|\gg 1$, $|d^{J}_{M,-J}(-\epsilon)|\approx (\sqrt{\pi J}\sin\epsilon_M)^{-1/2}$ [@doi:10.1063/1.1358305], implying $\#G=O(N^{1/4})$ and hence the same overall depth of the protocol. There will be errors due to decay of the bosonic mode during the operations, as well as decoherence due to environmental coupling to the spins, which will degrade the fidelity. We now address these. [*Mode damping*]{}: We treat the mode as an open quantum system with decay rate $\kappa$. In order to disentangle the spins from the mode, the third and fourth displacement stages of the $k$-th GPG should be modified to $D(-\alpha_k)\rightarrow D(-\alpha_k e^{-\kappa \theta_k/g})$ and $D(-\beta_k)\rightarrow D(-\beta_k e^{-\kappa \theta_k/g})$. For simplicity we choose $|\alpha_k|=|\beta_k|$. ![Performance of our protocol for preparing the Dicke state ${\lvert J,0 \rangle}$. (a) Probability distribution $P(M)$ in state ${\lvert J,M \rangle}$ for the initial state ${\lvert s \rangle}$ and the final state $U_G^2{\lvert s \rangle}$ for $N=70$ spins after two Grover steps. The final fidelity error is $1-F=1-P_{\rm final}(0)=1.57\times 10^{-5}$. (b) Scalable performance at high fidelity. 
Sets of ensemble sizes using the same number of Grover steps, which grow as $N^{1/4}$, are indicated.[]{data-label="fig:performance"}](Figure4.pdf){width="8.6cm"} For an input spin state in the symmetric Dicke space $\rho=\sum_{M,M'}\rho_{M,M'}{\lvert J,M \rangle}{\langle J,M' \rvert}$, the process for the $k$-th GPG with decay of the mode, including the modified displacement operations above, is [@Brennen:09] $$\begin{array}{lll} \mathcal{E}^{(k)}(\rho)&=&U_{GPG}(\theta_k,\phi_k,\chi_k) \big[ \sum_{M,M'}R^{(k)}_{M,M'}\rho_{M,M'} \\ &&{\lvert J,M \rangle}{\langle J,M' \rvert} \big] \times U_{GPG}^{\dagger}(\theta_k,\phi_k,\chi_k) \end{array}$$ where $$\chi_k=|\alpha_k|^2(e^{-3\theta_k/2}+e^{-\theta_k/2})/2$$ and $$R^{(k)}_{M,M'}=e^{-\Gamma_{M,M'}(\theta_k,\phi_k,\alpha_k)}e^{i\Delta_{M,M'}(\theta_k,\phi_k,\alpha_k)}.$$ The factors $\Gamma_{M,M'}$ and $\Delta_{M,M'}$ are given in the Supplementary Material and satisfy $\Gamma_{M,M'}=\Gamma_{M',M}\geq 0$ with $\Gamma_{M,M}=0$ while $\Delta_{M,M'}=-\Delta_{M',M}$. For $M\neq M'$ we find for $\kappa/g\ll 1$: $$\begin{array}{lll} \Gamma_{M,M'}&=& \frac{|\alpha|^2 \frac{\kappa}{g}}{M-M'} \Big(2 \sin (\theta M'+\phi )-\theta M' (\cos (\theta M'+\phi ) \\ &&+\cos (\theta M+\phi )+4)+\theta M \cos (\theta M'+\phi ) \\ && -4 \sin (\theta (M-M')) - 2 \sin (\theta M+\phi) \\ &&+\theta M (\cos (\theta M+\phi ))+4 \theta M\Big) \end{array}$$ $$\begin{array}{lll} \Delta_{M,M'}&=& |\alpha|^2 \theta \frac{\kappa}{g} (-\sin (\theta M')+i \cos (\theta M') \\ &&+\sin (\theta M)-i \cos (\theta M)) (\cos (\theta M'+\theta M+\phi )\\ &&-i \sin (\theta M'+\theta M+\phi )) (i \sin (\theta (M'+M)\\ &&+2 \phi ) + \cos (\theta (M'+M)+2 \phi ) + 1).
\end{array}$$ Now if we adjust $\alpha$ such that on the $k$-th GPG, $\chi_k=\pi/(N+1)$, then we have $$|\alpha_k|^2=\frac{2\pi}{(N+1)(e^{-3\theta_k/2}+e^{-\theta_k/2})}.$$ Because the coherent and decoherent maps for different GPGs commute, the entire sequence that phases a Dicke state according to $W(\ell)$ is: $$\begin{array}{lll} \mathcal{E}(\rho)&=&\mathcal{E}^{(N/2)}\circ\cdots\circ \mathcal{E}^{(1)}(\rho)\\ &=&W(\ell) \sum_{M,M'}\Upsilon_{M,M'}(\ell)\rho_{M,M'}{\lvert J,M \rangle}{\langle J,M' \rvert} W(\ell)^{\dagger} \end{array}$$ This describes ideal evolution followed by a nonlinear dephasing map, where the decoherence factor is $$\begin{array}{lll} \Upsilon_{M,M'}(\ell)&=&\prod_{k=1}^{N/2}R^{(k)}_{M,M'}\\ &=& \exp \bigl[ \sum_{k=1}^{N/2}(-\Gamma_{M,M'}(\theta_k,\phi_k(\ell),\alpha_k) \nonumber \\ && \hspace{1cm} + i\Delta_{M,M'}(\theta_k,\phi_k(\ell),\alpha_k)) \bigr]. \end{array}$$ ![Measurement precision $\Delta \eta$ as a function of number of spins (log-log scale). Shown are the shot noise limit, the quantum Cram[é]{}r-Rao bound Eq. (\[QCRB\]), and this protocol (blue).[]{data-label="fig:DeltaEtaAsFunctionOfN"}](Figure5.pdf){width="8.6cm"} The process fidelity $F_{\mathrm{pro}}(\mathcal{E},U)$ measures how close a quantum operation $\mathcal{E}$ is to the ideal operation $U$ as measured by some suitable metric. The fidelity measure we use is the overlap between the induced Jamiołkowski-Choi state representations of the operations. The process fidelity is readily computed using the fact that the noise map $\mathcal{E}(\rho_S(0))$ commutes with the target unitary $U$. Hence, we can compute the fidelity which measures how close the noisy map $\mathcal{E}^{\prime }(\rho_S(0))=U^{\dagger} \mathcal{E}(\rho_S(0)) U$ is to the ideal operation, i.e. 
the identity operation: $$F_{\mathrm{pro}}(\mathcal{E},U)=F_{\mathrm{pro}}(\mathcal{E}^{\prime },\mathcal{I})=_{S,S^{\prime }}\langle \Phi^{+}|\rho_{\mathcal{E}^{\prime }}|\Phi^{+}\rangle_{S,S^{\prime }},$$ where $$|\Phi^+\rangle_{S,S^{\prime }} = \frac{1}{\sqrt{D}}\sum_{M}{\lvert J,M \rangle}_S\otimes {\lvert J,M \rangle}_{S^{\prime }}.$$ Here we are computing the overlap of the Jamiołkowski-Choi representations of the maps as states in the Hilbert space $\mathcal{H}_S\otimes \mathcal{H}_{S^{\prime }}$ containing our system space and a copy each with dimension $D$: $$\begin{aligned} \rho_{\mathcal{E}^{\prime }} &= \mathcal{I}_{S}\otimes\mathcal{E}^{\prime}_{S^{\prime }}({\lvert \Phi^+ \rangle}_{S,S^{\prime }}) \\ &= \frac{1}{D}\sum_{M,M^{\prime }}\Upsilon_{M,M^{\prime }}(\ell)\,{\lvert M \rangle}_S {\langle M' \rvert}\otimes {\lvert M \rangle}_{S^{\prime }} {\langle M' \rvert}.\end{aligned}$$ Hence $$F_{\mathrm{pro}}(\mathcal{E},W(\ell))=\frac{1}{(N+1)^2}\sum_{M,M'=-J}^J \Upsilon_{M,M^{\prime }}(\ell).$$ For each GPG we can readily find the lower bound on the process fidelity (see Supplementary Material), $$F_{\mathrm{pro}}(\mathcal{E},U_{\rm GPG})>e^{-6\pi|\alpha|^2 \kappa/g}\cos(4\pi|\alpha|^2 \kappa/g). \label{fullfid}$$ Numerically we find $$F_{\mathrm{pro}}(\mathcal{E},W(\ell))>e^{-\pi^2 \kappa/g}.$$ Notably, this fidelity is independent of $N$. [*Dephasing:*]{} We next address spin decoherence. We assume that amplitude damping due to spin relaxation is small by the choice of encoding. This can be accommodated by choosing qubit states with very long decay times either as a result of selection rules, or by being far detuned from fast spin exchange transitions. Hence we will focus on dephasing.
Due to the cyclic evolution during each GPG, there is error tolerance to dephasing because if the interaction strength between the system and environment is small compared to $g$, then the spin flip pulses used between each pair of dispersive gates $R(\theta a^{\dagger}a)$ will echo out this noise to low order. Consider a bath of oscillators that couple bilinearly to the spins described by $H=H_E + H^{\text{global}}_{SE}+ H^{\text{local}}_{SE}$ where the local environmental and coupling Hamiltonians are $$\begin{aligned} H_E & = & \sum_{k}\sum_{j=1}^N \omega_{j,k} b_{j,k}^{\dagger}b_{j,k}+\sum_{k} \omega_{k} c_k^{\dagger}c_k , \\ H^{\text{global}}_{SE} & = & J^z \sum_k (c_k d^*_k+c^{\dagger}_k d_k), \\ H^{\text{local}}_{SE} & = & \sum_k \sum_{j=1}^{N} (b_{j,k} r^*_{j,k} + b^{\dagger}_{j,k} r_{j,k}) \sigma_j^z\end{aligned}$$ where $j$ is the spin index, and the local baths satisfy $[b_{j,k},b^{\dagger}_{j',k'}]=\delta_{j,j'}\delta_{k,k'}$ and the global bath $[c_{k},c^{\dagger}_{k'}]=\delta_{k,k'}$. The interaction $H^{\text{global}}_{SE}$ couples symmetrically to the spins, while $H^{\text{local}}_{SE}$ couples locally, leading to global and local dephasing respectively. For a given input density matrix $\rho(0)$, the output after a total time $T$ has off-diagonal matrix elements that decay as $\rho_{M,M'}(T)=\rho_{M,M'}(0)e^{-(M-M')^2A(T)}$. For the global dephasing map the numbers $M,M'\in [-N/2,N/2]$ are in the collective Dicke basis, while for local dephasing it is with respect to a local basis $M,M'\in[-1/2,1/2]$. Our argument for suppression of dephasing works for both cases. Global dephasing is the most deleterious form of noise when the state has large support over coherences in the Dicke subspace, due to decay rates that scale quadratically in the difference in $M$ number. However, it leaves the total Dicke space, and in particular the target Dicke state, invariant.
In contrast, local dephasing induces coupling outside the Dicke space, but with a rate that is at most linear in $N$. Consider the evolution during the $N/2$ control pulses to realize either of the phasing gates $U_s$ or $U_w$. Assuming Gaussian bath statistics, the effective dephasing rate can be written as the overlap of the noise spectrum $S(\omega)$ and the filter function $|f(\omega)|^2$ (see e.g. [@Agarwal_2010; @wang_liu_2013]): $$A(T)=\frac{1}{2\pi}\int_0^{\infty} d\omega S(\omega)|f(\omega)|^2.$$ For an initial system-bath state $\rho(0)=\rho_S(0)\otimes\rho_B(0)$ with the bath in thermal equilibrium $\rho_B(0)=\prod_{k} (1-e^{-\beta\omega_k})e^{-\beta \omega_k b^{\dagger}_kb_k}$ at inverse temperature $\beta$ ($k_B\equiv 1$), the noise spectrum is $S(\omega)=2\pi (n(\omega)+1/2)I(\omega),$ where $I(\omega)=\sum_k |g_k|^2\delta(\omega-\omega_k)$ is the boson spectral density, and $n(\omega_k)=(e^{\beta \omega_k}-1)^{-1}$ is the thermal occupation number in bath mode $k$. The filter function is obtained from the windowed Fourier transform $f(\omega)=\int_{0}^T C(t)e^{i\omega t}\,dt$, where $C(t)$ is the time-dependent control pulse sequence. In the present case $C(t)$ is a unit sign function that flips every time a collective spin flip is applied: $$C(t)=\left\{\begin{array}{cc}1 & t\in \cup_{k=1}^{N/2} \{[T^{(0)}_k,T^{(1)}_k)\cup [T^{(2)}_k,T^{(3)}_k)\} \\-1 & t\in\cup_{k=1}^{N/2}\{[T^{(1)}_k,T^{(2)}_k)\cup [T^{(3)}_k,T^{(4)}_k)\} \\0 & {\rm otherwise}\end{array}\right.$$ where $T^{(m)}_k=m \theta_k/g+4\sum_{j=1}^{k-1} \theta_j/g$ are the flip times with the duration between pulses growing linearly. The angles are $\theta_k=\frac{2\pi k}{N+1}$ (Eq. \[angles\]), and the total time is $$T=T^{(4)}_{N/2}=\frac{\pi N(N+2)}{g (N+1)}.$$ The explicit form of the filter function is $$\begin{array}{lll} |f(\omega)|^2&=&\frac{1}{\omega^2}\Big|\sum_{k=1}^{N/2}(e^{i\omega T^{(0)}_k}-2e^{i\omega T^{(1)}_k}+2 e^{i\omega T^{(2)}_k}\\ &&-2 e^{i\omega T^{(3)}_k}+e^{i\omega T^{(4)}_k})\Big|^2.
\end{array}$$ ![Suppression of dephasing via dynamical decoupling inherent in the sequence of GPGs used for each of the operators $U_s$ and $U_w$. Solid curves are filter functions using the GPGs. Dashed curves are plots of Eq. (\[filterapprox\]), which is a good approximation for $\omega/g<1/\pi N$. Dot-dashed curves show the bare case without decoupling. Here (red, green, blue) curves correspond to $n=(10,100,1000)$ spins.[]{data-label="fig:5"}](Figure6.pdf){width="8.6cm"} In comparison, consider evolution where no spin flips are applied during the sequence, in which case the *bare* functions are $C^{(0)}(t)=1$ for all $t\in[0,T)$, and $|f^{(0)}(\omega)|^2=4 \sin^2(T\omega/2)/\omega^2$. Results are plotted in Fig. \[fig:5\] and show there is substantial decoupling from the dephasing environment when the spectral density has dominant support in the range $\omega<g/2$. For $2\pi k \omega/g\ll1$, the summands in $f(\omega)$ can be expanded in a Taylor series in $\omega/g$ and to lowest order we find $$g^2|f(\omega)|^2\approx \frac{(\omega/g)^2 \pi^4 N^2(N+2)^2}{9 (N+1)^2}. \label{filterapprox}$$ This approximation is valid for $\omega/g <1/\pi N$, and, as shown in Fig. \[fig:5\], for $1/\pi N<\omega/g< 1/2$ the function is essentially flat with an average value $g^2 \overline{|f(\omega)|^2}\approx 3$ independent of $N$. In the region $1/\pi N<\omega/g<1/2$ the bare filter function is oscillatory and has an average $g^2 \overline{|f^{(0)}(\omega)|^2}\approx 13.63$, while for $\omega/g <1/\pi N$ the quantity $g^2|f^{(0)}(\omega)|^2$ asymptotes to $\frac{\pi^2 N^2(N+2)^2}{(N+1)^2}$. Thus, in the region $\omega/g<1/\pi N$ the ratio determining the reduction factor in the dephasing rate is $\frac{|f(\omega)|^2}{|f^{(0)}(\omega)|^2}=\pi^2\omega^2/(9g^2)$, while for $\omega/g\in[1/\pi N,1/2]$, the reduction factor can be approximated by $\frac{\overline{|f(\omega)|^2}}{\overline{|f^{(0)}(\omega)|^2}}\approx 0.22$, provided the noise spectrum is sufficiently flat there. 
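The quantities above are straightforward to check numerically. The sketch below (plain NumPy; the values of $N$ and $g$ are illustrative choices, not taken from the experiment) builds the flip times $T^{(m)}_k$, evaluates $|f(\omega)|^2$ both from the closed-form sum and by direct segment-wise integration of the piecewise-constant $C(t)$, and includes the bare filter function for comparison:

```python
import numpy as np

# Illustrative parameters (not the experimental values): N spins, coupling g.
N, g = 10, 1.0
thetas = [2 * np.pi * k / (N + 1) for k in range(1, N // 2 + 1)]  # theta_k = 2 pi k / (N+1)

def T(m, k):
    """Flip time T^(m)_k = m*theta_k/g + 4*sum_{j<k} theta_j/g."""
    return m * thetas[k - 1] / g + 4 * sum(thetas[:k - 1]) / g

T_total = T(4, N // 2)  # should equal pi*N*(N+2)/(g*(N+1))

def f2_closed(w):
    """|f(w)|^2 from the closed-form sum over pulse blocks."""
    s = sum(np.exp(1j * w * T(0, k)) - 2 * np.exp(1j * w * T(1, k))
            + 2 * np.exp(1j * w * T(2, k)) - 2 * np.exp(1j * w * T(3, k))
            + np.exp(1j * w * T(4, k)) for k in range(1, N // 2 + 1))
    return abs(s) ** 2 / w ** 2

def f2_direct(w):
    """|f(w)|^2 by integrating the piecewise-constant C(t) segment by segment."""
    val = 0j
    for k in range(1, N // 2 + 1):
        for m, sign in [(0, 1), (1, -1), (2, 1), (3, -1)]:  # C(t) = +1, -1, +1, -1
            val += sign * (np.exp(1j * w * T(m + 1, k)) - np.exp(1j * w * T(m, k))) / (1j * w)
    return abs(val) ** 2

def f2_bare(w):
    """Bare filter function with no spin flips: 4 sin^2(T w / 2) / w^2."""
    return 4 * np.sin(T_total * w / 2) ** 2 / w ** 2
```

Averaging f2_closed and f2_bare over the window $\omega/g\in[1/\pi N,1/2]$ reproduces the qualitative picture above: the decoupled filter sits well below the bare one, and $g^2\,|f^{(0)}(\omega)|^2$ approaches $(gT)^2=\pi^2N^2(N+2)^2/(N+1)^2$ as $\omega\to 0$.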
Further, the aforementioned freedom to apply the GPGs in any order allows room for further improvement. For example, consider coupling to a zero temperature Ohmic bath with noise spectrum $S(\omega)=\alpha \omega e^{-\omega/\omega_c}$ and having cutoff frequency $\omega_c/g=0.1$. For $N=20$, the ratio of the effective decay rate for the linearly ordered sequence of GPGs above to that with no decoupling is $A(T)/A_0(T)=0.0085$. However, by sampling over permutations of the ordering of GPGs we find a sequence [@Note1] achieving $A(T)/A_0(T)=0.0026$. To characterise the performance of our scheme in the presence of both mode decay $\kappa$ and effective global dephasing $A$, we performed numerical simulations of the full protocol using the joint mode-spin system with mode Fock space truncated to $15$ excitations. Results are presented in Figure \[fig:relative\_fidelity\] and show the effectiveness of our protocol when used for metrology; we consider the uncertainty $\Delta \eta$, given a single-shot measurement of ${J^z}^2$ after a collective rotation $\eta$ as defined by Eq. (\[eqDeltaBetaDefinition\]), on an ensemble of size $N=10$. For values of $\gamma/g \lesssim 0.01$ we beat the standard quantum limit, and for $\gamma=0$ closely approach the Cram[é]{}r-Rao bound. ![Performance of the protocol with $70$ spins in the presence of noise. (a) Precision (log-log scale) obtained with a single-shot measurement of ${J^z}^2$ as a function of mode decay for several strengths of global dephasing factors $A(T)$: no global dephasing (blue line), $A(T)=10^{-6}$ (blue dashed), $A(T)=10^{-5}$ (blue dot-dashed), $A(T)=10^{-4}$ (blue dotted). These dephasings correspond to an underlying decoherence rate of $\gamma_{\text{gdp}}= 10^{-4}g$ accumulated over each phasing gate of duration $T$. For a zero temperature Ohmic bath, the corresponding cutoff frequencies are: $\omega_c/g=\{0.003,0.022,0.094\}$. 
This is to be contrasted with performance without dynamical decoupling (black line) with $A_0(T)=0.0223$, e.g. if one were to switch the sign of the dispersive coupling during each GPG rather than flipping the spins. (b) Fidelity error for the same environments as above.[]{data-label="fig:relative_fidelity"}](Figure7.pdf){width="\columnwidth"} Discussion ========== The state preparation method we have described so far has some inherent tolerance to decoherence. However, once the state is prepared, further errors such as qubit loss or dephasing could accumulate while waiting for the accumulation of the measurement signal. Some strategies to address this were recently proposed in Ref. [@ouyang2019], which suggests using superpositions of Dicke states as probe states. The class of states considered there is $${\lvert \varphi_u \rangle}=\frac{1}{\sqrt{2^n}}\sum_{j=0}^n\sqrt{\binom{n}{j}}{\lvert J=\frac{knu}{2},M=kj-J \rangle}.$$ Here the number of spins $N=k\times n\times u$, and the parameters $u$ and $n$ determine the robustness of the states to some number of loss and dephasing errors, respectively, while $k$ is a parameter to scale the number of qubits in the superposition (larger $k$ means better performance). The case $u=1$ tolerates erasure errors; specifically, the state ${\lvert \varphi_1 \rangle}$ has a large quantum Fisher information obeying Heisenberg scaling when the number of erasure errors is less than $n$. We will consider a state that performs well in the presence of one erasure error: $u=1, n=2, k=N/2$, which can be written $${\lvert \varphi_1 \rangle}=\frac{1}{2}({\lvert J,-J \rangle}+\sqrt{2}{\lvert J,0 \rangle}+{\lvert J,J \rangle}).$$ The case $u=2$ tolerates a constant number of dephasing errors. 
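As a bookkeeping check on this parametrization, the coefficients of $|\varphi_u\rangle$ can be generated directly from the sum above; a minimal Python sketch (the value of $N$ is an arbitrary illustration) confirms that $u=1, n=2, k=N/2$ reproduces the three-term state just displayed:

```python
import math

def phi_coeffs(N, u, n):
    """Amplitudes of |phi_u> = 2^{-n/2} sum_j sqrt(C(n,j)) |J = k n u / 2, M = k j - J>,
    with k = N/(n u). Returns a dict mapping (J, M) -> amplitude."""
    k = N // (n * u)
    J = k * n * u / 2
    return {(J, k * j - J): math.sqrt(math.comb(n, j)) / 2 ** (n / 2)
            for j in range(n + 1)}

# u = 1, n = 2, k = N/2: should give (|J,-J> + sqrt(2)|J,0> + |J,J>)/2 with J = N/2.
N = 12
c = phi_coeffs(N, u=1, n=2)
```

The same helper with $u=2, n=1, k=N/2$ gives the two-term equal superposition of $|J,-J\rangle$ and $|J,0\rangle$ used below.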
We will focus on the state with $u=2,n=1,k=N/2$ which tolerates one dephasing error and can be written $${\lvert \varphi_2 \rangle}=\frac{1}{\sqrt{2}}({\lvert J,-J \rangle}+{\lvert J,0 \rangle}).$$ We now describe how to make these states using our protocol. A key ingredient to prepare a superposition of Dicke states is to perform a controlled state preparation. If we introduce an ancilla spin $A$ which can be allowed to couple to the mode when the other spins do not (e.g. by detuning the other spins far away from the dispersive coupling regime), then controlled displacements of the mode can be performed: $$\begin{array}{lll} \Lambda(\beta)&=&{\lvert 0 \rangle}_A{\langle 0 \rvert}\otimes {\bf 1}+{\lvert 1 \rangle}_A{\langle 1 \rvert}\otimes D(\beta)\\ &=&D(\beta/2)R(\pi {\lvert 1 \rangle}_A{\langle 1 \rvert}) D(-\beta/2)R(-\pi{\lvert 1 \rangle}_A{\langle 1 \rvert}). \end{array} \label{contrdis}$$ Here $R(\pi {\lvert 1 \rangle}_A{\langle 1 \rvert})=e^{i \pi a^{\dagger}a {\lvert 1 \rangle}_A{\langle 1 \rvert}}$, meaning only the ancilla state ${\lvert 1 \rangle}_A$ couples to the mode. Now replacing the displacements $D(\beta)$ and $D(-\beta)$ in Eq. (\[seq\]) with the controlled displacements $\Lambda(\beta)$ and $\Lambda(-\beta)$, the effect is a controlled GPG (see also Ref. [@brennen2016]): $$\Lambda(U_{\rm GPG})={\lvert 0 \rangle}_A{\langle 0 \rvert}\otimes {\bf 1}+{\lvert 1 \rangle}_A{\langle 1 \rvert}\otimes U_{\rm GPG}.$$ Thus by simply replacing all instances of GPGs with controlled GPGs we can achieve a controlled Grover step unitary $G$. Note the unitary $e^{iJ^y \pi/2}$ conjugating $W(0)$ in $U_s$ does not need to be controlled, meaning the entire unitary $U_G^{\#G}$ can be made into a controlled unitary $$\Lambda(U_G^{\# G})=({\lvert 0 \rangle}_A{\langle 0 \rvert}\otimes {\bf 1}+{\lvert 1 \rangle}_A{\langle 1 \rvert}\otimes U_G)^{\#G}.$$ This is not quite enough. 
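As an aside, the second line of Eq. (\[contrdis\]) rests on the parity-flip identity $\Pi D(-\beta/2)\Pi = D(\beta/2)$ with $\Pi=e^{i\pi a^{\dagger}a}$, which can be checked numerically on the ancilla-${\lvert 1 \rangle}$ branch. A NumPy/SciPy sketch (the Fock cutoff and the value of $\beta$ are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

dim = 60                                    # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # truncated annihilation operator

def D(beta):
    """Displacement operator D(beta) = exp(beta a^dag - beta* a)."""
    return expm(beta * a.conj().T - np.conj(beta) * a)

parity = np.diag((-1.0) ** np.arange(dim))  # e^{+/- i pi a^dag a} on the |1>_A branch

beta = 0.7 + 0.3j
lhs = D(beta)                                       # target action seen by ancilla |1>
rhs = D(beta / 2) @ parity @ D(-beta / 2) @ parity  # D(b/2) R(pi) D(-b/2) R(-pi) on that branch

# Compare the action on the vacuum column; on the |0>_A branch every factor is trivial.
err = np.linalg.norm(lhs[:, 0] - rhs[:, 0])
```

Because the parity conjugation flips the sign of $a$ exactly even in the truncated space, the two matrix products agree to machine precision.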
The state preparation of Dicke states described above applies $U_G^{\# G}$ to a particular initial state, namely the spin coherent state ${\lvert s \rangle}$. We will also require a way to perform a controlled rotation on all the spins of the form $$\Lambda( e^{i J^y\pi/2})={\lvert 0 \rangle}_A{\langle 0 \rvert}\otimes {\bf 1}+{\lvert 1 \rangle}_A{\langle 1 \rvert}\otimes e^{i J^y\pi/2}.$$ Without direct interactions between the ancilla and the other spins it is not obvious how to do this. However, it is possible to mediate the interaction with the mode by choosing $\phi=0$ and $\theta\ll 1$ in one instance of a controlled GPG. This will give $\Lambda(U_{\rm GPG}(\theta,0,\pi/4\theta))\approx {\lvert 0 \rangle}_A{\langle 0 \rvert}\otimes {\bf 1}+{\lvert 1 \rangle}_A{\langle 1 \rvert}\otimes e^{-i J^z\pi/2}$ where we have approximated $\sin(\theta J^z)\approx \theta J^z$. Note, in order for this to be valid we require $\theta \ll 1/N$ and consequently $\chi=|\alpha|^2\gg N$, i.e. the area of the GPG in phase space needs to grow with $N$, or the gate could be composed into $N$ GPGs each of area $O(1)$. This will consequently incur a loss of fidelity due to mode decay (see Eq. (\[fullfid\])), but no worse than the performance for state preparation without ancilla. The controlled operation is then $$\Lambda( e^{i J^y\pi/2})=e^{-i J^x\pi/2 } \Lambda(U_{\rm GPG}(\theta,0,\pi/4\theta)) e^{i J^x \pi/2}.$$ We can now write the process to prepare the state ${\lvert \varphi_2 \rangle}$: 1. Prepare the product state $\frac{1}{\sqrt{2}}({\lvert 0 \rangle}+{\lvert 1 \rangle})_A\otimes {\lvert J,-J \rangle}$. 2. Apply $e^{-i J^x \pi/2 } \Lambda(U_{\rm GPG}(\theta,0,\pi/4\theta)) e^{i J^x \pi/2}$. 3. Apply $\Lambda(U_G^{\# G})$. This involves $N\times \#G$ instances of $\Lambda(U_{\rm GPG})$ for varying parameters. 4. Measure the ancilla in the ${\lvert \pm_x \rangle}_A$ basis. The outcomes $r=\pm 1$ each occur with probability $1/2$. 
The conditional system state is $$\frac{1}{\sqrt{2}}({\lvert J,-J \rangle}\pm {\lvert J,0 \rangle})$$ 5. Apply the classically controlled product unitary $Z(r)={Z^{(1-r)/2}}^{\otimes N}$. If we assume $N/2$ is odd then $$Z(r)\frac{1}{\sqrt{2}}({\lvert J,-J \rangle}\pm {\lvert J,0 \rangle})={\lvert \varphi_2 \rangle}.$$ To prepare the state ${\lvert \varphi_1 \rangle}$ a similar process can be used. However, rather than the ${\lvert 0 \rangle}_A$ state being correlated with the product state ${\lvert J,-J \rangle}$ we want it correlated with the GHZ state $\frac{1}{\sqrt{2}}({\lvert J,-J \rangle}+{\lvert J,J \rangle})$. Such a state can be prepared using one additional controlled GPG gate. This follows from the observation that $e^{i\frac{J^y\pi}{2}} U_{\rm GPG}(\pi,\pi/2,\frac{\pi}{8})e^{-i\frac{J^y\pi}{2}}{\lvert J,-J \rangle}=\frac{1}{\sqrt{2}}({\lvert J,-J \rangle}+{\lvert J,J \rangle})$. These two processes are summarized by the following circuits: $$\Qcircuit @C=.5em @R=.2em @!R { \lstick{{\lvert 0 \rangle}_A} & \gate{H} & \ctrl{1} & \qw & \ctrl{1} & \gate{H} &\meter \cwx[1] \\ \lstick{{\lvert 0 \rangle}^{\otimes N}} & \gate{e^{i\frac{J^x\pi}{2}}} & \gate{U_{\rm GPG}(\theta,0,\frac{\pi}{2\theta})} & \gate{e^{-i\frac{J^x\pi}{2}}} & \gate{U_G^{\# G}} & \qw & \gate{Z^{\otimes N}} & \rstick{{\lvert \varphi_2 \rangle}} \qw }$$ $$\Qcircuit @C=.5em @R=0em @!R { \lstick{{\lvert 0 \rangle}_A} & \gate{HX} & \qw & \ctrl{1} & \gate{X} & \ctrl{1} & \qw & \ctrl{1} & \gate{H} &\meter \cwx[1] \\ \lstick{{\lvert 0 \rangle}^{\otimes N}} & \gate{e^{i\frac{J^y\pi}{2}}} & \qw & \gate{U_{\rm GPG}(\pi,\pi/2,\frac{\pi}{8})} & \gate{e^{-i\frac{J^y\pi}{2}}e^{-i \frac{J^x\pi}{2}}} & \gate{U_{\rm GPG}(\theta,0,\frac{\pi}{4\theta})} & \gate{e^{i\frac{J^x\pi}{2}}} & \gate{U_G^{\# G}} & \qw & \gate{Z^{\otimes N}} & \rstick{{\lvert \varphi_1 \rangle}} \qw }$$ Overall, the number of GPGs used scales as $O(N^{5/4})$ since $\#G=O(N^{1/4})$, similar to the cost for preparing a single Dicke 
state. Note the above protocol for preparing superpositions of Dicke states has applications outside of metrology, including preparing permutationally invariant quantum codes [@Ruskai]. *Implementations:* The scheme we have presented is amenable to a variety of architectures which allow collective dispersive couplings between spins and an oscillator. These include: trapped Rydberg atoms coupled to a microwave cavity [@Sayrin:2011eu; @garcia2019], trapped ions coupled to a common motional mode [@Pedernales:2015si] or to an optical cavity mode [@PhysRevLett.122.153603], superconducting qubits coupled to microwave resonators [@Wang1087], and NV centres in diamond coupled to a microwave mode inside a superconducting transmission line cavity [@PhysRevLett.118.140502]. One immediate contender for testing our scheme is the platform of Rydberg atoms coupled to microwave cavities. In a recent report [@garcia2019], a group at ETHZ reported the dispersive detection of small atomic Rydberg ensembles coupled to a high-Q microwave cavity (Q-factor $1.7 \times 10^6$). Their numbers suggest a ratio of cavity decay rate to single-atom dispersive coupling strength of $\gamma/g \approx 0.8$ (with $\gamma = 2\pi \times 11.8$ kHz and $g = 2\pi \times 14.3$ kHz). Remarkably, the collective coupling rate they observed in the experiment was on the order of a few MHz. This suggests an additional pathway to improving $\gamma/g$ by orders of magnitude by encoding spins through collective subensembles. Consider an encoding where each spin is itself composed of $n$ physical spins with logical states ${\lvert 0 \rangle}={\lvert j=n/2,-j \rangle}$ and ${\lvert 1 \rangle}={\lvert j=n/2,-j+1 \rangle}$, i.e. the permutationally invariant states of zero or one excitation shared among the $n$ spins. If the spins within each logical qubit interact, e.g. via dipole-dipole interactions, then there will be a dipole-blockade to larger numbers of excitations. 
Hence rotations frequency tuned to the transition energy $E_1-E_0$ will be collective but act only on this qubit subspace. The dispersive interaction strength is meanwhile enhanced by $g\rightarrow g\sqrt{n}$. Since we have assumed that the rotations and dispersive couplings are equivalent on all logical spins, it will be important that the number $n$ is the same, or nearly so, for all logical spins. By virtue of this kind of collective encoding, dispersive coupling with strength $g\approx 2\pi\times 2.2$ MHz was obtained with NV ensembles in diamond bonded onto a transmission line resonator with quality factor $Q\approx 4300$ at the first harmonic frequency $\omega_c=2\pi\times 2.75$ GHz. Microwave cavities with much higher quality factors, e.g. $Q=3\times 10^6$, can be realized [@doi:10.1063/1.4935346] which for the same dispersive coupling would give $\gamma/g\approx 10^{-3}$. Supplemental Information ======================== Gate fidelity with cavity decay {#Sec:CavDec} ------------------------------- Cavity field decay at a rate $\kappa$ acts as a source of error for the many-body interactions which the cavity mode mediates. Consider the joint evolution of the spins and the mode. The coupling of the mode to its environment is treated as irreversible and thus can be described by the standard master equation in Lindblad form. 
The equation of motion for the joint state is $$\begin{aligned} \dot{\rho}(t) & = \mathcal{L}(\rho(t)) \nonumber\\ & = -i[V,\rho(t)]+\frac{\kappa}{2} (2a \rho(t) a^{\dagger}-a^{\dagger} a \rho(t)-\rho(t)a^{\dagger}a).\label{rhodot}\end{aligned}$$ The evolution due to decay conserves the quantum number $J$, and it will be convenient to compute the adjoint action on a joint state of the spins and mode with Heisenberg-evolved operators $e^{\mathcal{L}t}A^{M,M^{\prime }}(0)$ where: $$A^{M,M^{\prime }}(t)\equiv {\lvert J,M \rangle}{\langle J,M' \rvert}\otimes {\lvert \alpha_{M} \rangle}{\langle \beta_{M'} \rvert}(t).$$ The solutions are easily verified to be given by $$\begin{gathered} A^{M,M^{\prime }}(t) = \sum_{n=0}^\infty\frac{b^n_{MM^{\prime }}(t)}{n!} e^{-(ig M+\kappa/2)a^\dagger a t} \\ \times a^nA^{M,M^{\prime }}(0){(a^\dagger)}^ne^{(i g M^{\prime }-\kappa/2) a^\dagger a t} \label{evolveA}\end{gathered}$$ where $$b_{M M^{\prime }}(t)=\frac{\kappa(1-e^{-[\kappa+ig(M-M^{\prime })]t})}{\kappa+ig(M-M^{\prime })}.$$ The evolved state is then $$\rho(t)=e^{\mathcal{L}t}\rho(0)=\sum_{M,M^{\prime }}\rho_{M,M^{\prime }}(0)A^{M,M^{\prime }}(t).$$ \[gpfid\] In order to evaluate the effect of cavity decay during the geometric phase gate, we are particularly interested in the case where initially $A^{M,M^{\prime }}(0)={\lvert M \rangle}{\langle M' \rvert}\otimes|\alpha_{M}\rangle\langle\beta_{M^{\prime }}|$, with $|\alpha_{M}\rangle$, ${\lvert \beta_{M'} \rangle}$ coherent states. This kind of factorization is true at any stage of spin coupling to the field. Using Eq. 
(\[evolveA\]), the sum becomes an exponential and the evolved state is $$\begin{aligned} \rho(t) & = e^{\mathcal{L}t}\rho(0) \nonumber\\ & = \sum_{M,M^{\prime }}e^{d_{M,M'}(t)}\rho_{M,M^{\prime }}(0){\lvert J,M \rangle}{\langle J,M' \rvert} \nonumber\\ & \hspace{0.5in} \otimes |e^{-(igM+\kappa/2)t}\alpha_{M}\rangle\langle e^{-(igM^{\prime }+\kappa/2)t}\beta_{M^{\prime }}|,\label{evolved}\end{aligned}$$ where $$d_{M,M'}(t)=\alpha_{M}\beta_{M^{\prime }}^*b_{M,M^{\prime }}(t) -(|\alpha_{M}|^2+|\beta_{M^{\prime }}|^2){\textstyle\frac{1-e^{-\kappa t}}{2}}.$$ We ignore decay during the displacement stages of the evolution (i.e. we assume these are done quickly relative to the decay rate), and we assume that the system particles do not interact with the field during these steps. For simplicity we evaluate the performance when the cavity begins in the vacuum state, in which case there are seven time steps to consider: $$D(-\beta)e^{-i \tau_5 V}D(-\alpha)e^{i \tau_3 V}D(\beta)e^{-i \tau_1 V}D(\alpha).$$ Let $\tau_5=\tau_3=\tau_1$ so that the periods of spin-field coupling are all equal in duration. In order that the field state return to the vacuum at the end of the sequence, we choose $\alpha^{\prime}=\alpha e^{-\kappa \tau_1},\beta^{\prime}=\beta e^{-\kappa \tau_1}$ for the parameters of the latter two displacement operators. The total sequence then yields the output state: $$\begin{array}{lll} \rho_{\mathrm{out}} &=& \displaystyle{\sum_{M,M^{\prime }}} \rho_{M,M^{\prime }}(0)R_{M,M^{\prime }}{\lvert J,M \rangle}{\langle J,M' \rvert}\otimes {\lvert {\rm vac} \rangle}{\langle {\rm vac} \rvert} \\ &&\times e^{-i2\chi(\sin(\phi+g\tau_1 M)-\sin(\phi+g\tau_1 M^{\prime }))},\end{array}$$ where we defined $R_{M,M^{\prime }}=e^{d_{M,M'}(t_2)+d_{M,M'}(t_4)+d_{M,M'}(t_6)}$ and $\chi=|\alpha \beta |(e^{-3\kappa \tau_1/2}+e^{-\kappa \tau_1/2})/2$. 
This can be interpreted as coherent evolution with an evolution operator $$U=e^{-i2\chi\sin(\phi+\theta J^z)},$$ where $\theta=g\tau_1$, followed by further evolution diagonal in the $\{M\}$ basis and dephasing. Matrix elements diagonal in $M$ are invariant. For simplicity, we assume $|\alpha|=|\beta|$, $g>0$ and write $\theta=g\tau_1$. The factor $R_{M,M'}$ that dictates the deviation from perfect evolution can be written $$R_{M,M'}=e^{-\Gamma_{M,M'}}e^{i\Delta_{M,M'}}$$ where $\Gamma_{M,M'}$ and $\Delta_{M,M'}$ are real, and explicitly are $$\begin{array}{lll} \Gamma_{M,M'}&=&\frac{|\alpha|^2 (M-M') e^{-i (\phi +\theta (M'+M-2 i \frac{\kappa}{g}))}}{2 ((M-M')^2+(\frac{\kappa}{g})^2)}\\ &&\Big(-4 (M-M') e^{i (\theta (M'+M)+\phi )}-4 i \frac{\kappa}{g} e^{2 i \theta M'+\theta \frac{\kappa}{g}+i \phi }+(M'-M+i \frac{\kappa}{g}) e^{\frac{1}{2} \theta (2 i M'+4 i M+\frac{\kappa}{g})+2 i \phi }\\ &&+(-M'+M+i \frac{\kappa}{g}) e^{i \theta M'+2 i \theta M+\frac{3 \theta \frac{\kappa}{g}}{2}+2 i \phi }+(M'-M-i \frac{\kappa}{g}) e^{\frac{1}{2} \theta (4 i M'+2 i M+\frac{\kappa}{g})+2 i \phi }\\ &&+(-M'+M-i \frac{\kappa}{g}) e^{2 i \theta M'+i \theta M+\frac{3 \theta \frac{\kappa}{g}}{2}+2 i \phi }+4 (M-M') e^{i (\phi +\theta (M'+M-2 i \frac{\kappa}{g}))}+e^{\frac{1}{2} \theta (\frac{\kappa}{g}+2 i M)} (M'-M+i \frac{\kappa}{g})\\ &&+e^{\frac{3 \theta \frac{\kappa}{g}}{2}+i \theta M} (-M'+M+i \frac{\kappa}{g})+(M'-M-i \frac{\kappa}{g}) e^{\frac{1}{2} \theta (\frac{\kappa}{g}+2 i M')}+(-M'+M-i \frac{\kappa}{g}) e^{\frac{3 \theta \frac{\kappa}{g}}{2}+i \theta M'}\\ &&+4 i \frac{\kappa}{g} e^{2 i \theta M+\theta \frac{\kappa}{g}+i \phi }\Big) \end{array}$$ $$\begin{array}{lll} \Delta_{M,M'}&=&-\frac{|\alpha|^2(1+e^{i (\theta (M'+M)+2 \phi )})e^{-i \theta (M'+M)-\frac{3 \theta \frac{\kappa}{g}}{2}-i \phi }}{2 ((M-M')^2+(\frac{\kappa}{g})^2)}\\ &&\Big(e^{\theta (\frac{\kappa}{g}+i M)} (2 M^2-(4 M+i \frac{\kappa}{g}) M'+2 (M')^2+i M \frac{\kappa}{g}+(\frac{\kappa}{g})^2)-(2 
M^2+(-4 M+i \frac{\kappa}{g}) M'+2 (M')^2-i M \frac{\kappa}{g}+(\frac{\kappa}{g})^2)\\ &&e^{\theta (\frac{\kappa}{g}+i M')}-e^{i \theta M} (2 M^2-(4 M+i \frac{\kappa}{g}) M'+2 (M')^2+i M \frac{\kappa}{g}+3 (\frac{\kappa}{g})^2)\\ &&+e^{i \theta M'} (2 M^2+(-4 M+i \frac{\kappa}{g}) M'+2 (M')^2-i M \frac{\kappa}{g}+3 (\frac{\kappa}{g})^2)\Big) \end{array}$$ Notice, $\Gamma_{M,M'}=\Gamma_{M',M}$ and $\Gamma_{M,M}=0$ and also $\Delta_{M,M'}=-\Delta_{M',M}$. An expansion up to first order in $\frac{\kappa}{g}$ yields simplified expressions: $$\begin{array}{lll} \Gamma_{M,M'}&=& \frac{|\alpha|^2 \frac{\kappa}{g}}{M-M'} \Big(2 \sin (\theta M'+\phi )-\theta M' (\cos (\theta M'+\phi )+\cos (\theta M+\phi )+4)+\theta M (\cos (\theta M'+\phi ))\\ &&-4 (\sin (\theta (M-M')))-2 (\sin (\theta M+\phi ))+\theta M (\cos (\theta M+\phi ))+4 \theta M\Big) \end{array}$$ $$\begin{array}{lll} \Delta_{M,M'}&=& |\alpha|^2 \theta \frac{\kappa}{g} (-\sin (\theta M')+i (\cos (\theta M'))+\sin (\theta M)-i (\cos (\theta M))) (\cos (\theta M'+\theta M+\phi )\\ &&-i (\sin (\theta M'+\theta M+\phi ))) (i (\sin (\theta (M'+M)+2 \phi ))+\cos (\theta (M'+M)+2 \phi )+1) \end{array}$$ Now, one can check that the decoherence factors are bounded as follows: $\Gamma_{M,M'}\leq |\alpha|^2 6\pi \kappa/g $ and $|\Delta_{M,M'}|\leq |\alpha|^2 4\pi \kappa/g $. 
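The exact solution above can be spot-checked by brute-force integration of the master equation (\[rhodot\]) restricted to a single $J^z$ eigenvalue $M$ (the Hamiltonian then acts as $gMa^{\dagger}a$, consistent with the exponents in Eq. (\[evolveA\])); Eq. (\[evolved\]) predicts that an initial coherent state remains a pure coherent state that spirals inward, $|\alpha\rangle\to|\alpha e^{-(igM+\kappa/2)t}\rangle$. A NumPy sketch (cutoff, rates, and times are illustrative choices):

```python
import numpy as np

dim = 30                           # Fock cutoff (illustrative)
g, kappa, M = 1.0, 0.2, 2          # coupling, decay rate, J^z eigenvalue (illustrative)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)
n_op = a.conj().T @ a
H = g * M * n_op                   # dispersive Hamiltonian on the fixed-M block

def coherent(al):
    """Fock-space amplitudes of the coherent state |al>."""
    c = np.zeros(dim, dtype=complex)
    c[0] = np.exp(-abs(al) ** 2 / 2)
    for n in range(1, dim):
        c[n] = c[n - 1] * al / np.sqrt(n)
    return c

def rhs(rho):
    """Right-hand side of the Lindblad master equation for the decaying mode."""
    return (-1j * (H @ rho - rho @ H)
            + 0.5 * kappa * (2 * a @ rho @ a.conj().T - n_op @ rho - rho @ n_op))

alpha, t_final, steps = 1.0, 1.0, 1000
dt = t_final / steps
psi = coherent(alpha)
rho = np.outer(psi, psi.conj())
for _ in range(steps):             # fixed-step 4th-order Runge-Kutta
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Analytic prediction: still a pure coherent state with shrunken, rotated amplitude.
psi_t = coherent(alpha * np.exp(-(1j * g * M + 0.5 * kappa) * t_final))
fidelity = float(np.real(psi_t.conj() @ rho @ psi_t))
```

The overlap with the predicted coherent state stays at unity up to integration and truncation error, confirming the diagonal ($M=M'$) part of the solution.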
A loose upper bound on precision as a function of fidelity {#Sec:fidelity_estimate} -------------------------------------------------------- The precision of the estimated parameter $\eta$ is expressed as $$\begin{aligned} \nonumber (\Delta \eta)^2=&\big((\Delta {J^x}^2)^2 f(\eta)+4\langle {J^x}^2\rangle-3\langle {J^y}^2\rangle-2\langle {J^z}^2\rangle \\ &\times(1+\langle {J^x}^2\rangle)+6\langle J^z{J^x}^2J^z\rangle\big)(4(\langle {J^x}^2\rangle-\langle {J^z}^2\rangle)^2)^{-1} \label{eq:precision_sq}\end{aligned}$$ To check how the precision scales in relation to the fidelity, $F$, we can calculate the precision assuming an input density matrix, $\rho=a {\lvert J,0 \rangle}{\langle J,0 \rvert} + b \mathbb{1}$, where ${\lvert J,0 \rangle}$ is our ideal Dicke state, $\mathbb{1}$ is the identity matrix of size $(N+1) \times (N+1)$ and $a$ and $b$ are related to fidelity as $a=(1+1/N) F- 1/N$ and $b=(1-F)/N$. We choose this form for the input density matrix so that applying a global dephasing map to the output state of our protocol would make it diagonal in the ${\lvert J,M \rangle}$ basis but keep the population in ${\lvert J,0 \rangle}$ constant. Assuming the diagonal matrix elements (except the ${\lvert J,0 \rangle}{\langle J,0 \rvert}$ entry) are equally weighted is a maximally unbiased assumption. After calculating the variances and expectation values of the angular momentum operators as they appear in Equation (\[eq:precision\_sq\]), then taking the high-fidelity limit $F\rightarrow 1$ and assuming large $N$, the precision is found to be $$(\Delta \eta)^2=2/(N (N+2)) + \sqrt{(1-F)/10}, \label{eq:approximate_prec}$$ where $2/(N (N+2))$ is the Cramér-Rao bound. Numerically we find this approximate form is extremely good for $1-F<10^{-2}$. For our choice of the input density matrix, there appears to be a lower bound to the precision (as a function of $N$) that is set by the fidelity. 
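Eq. (\[eq:approximate\_prec\]) implies a simple fidelity budget: the infidelity term stays below the Cramér-Rao term only while $\sqrt{(1-F)/10}\le 2/(N(N+2))$, i.e. $1-F\le 40/N^2(N+2)^2$. A one-line helper makes the resulting $O(N^{-4})$ requirement concrete (the $N$ values are arbitrary illustrations):

```python
def required_infidelity(N):
    """Largest 1 - F for which the sqrt((1-F)/10) term in the approximate precision
    does not exceed the Cramér-Rao term 2/(N(N+2)): 1 - F <= 10 * (2/(N(N+2)))^2."""
    return 10 * (2 / (N * (N + 2))) ** 2

# Budget shrinks roughly as N^-4: doubling N tightens it by about a factor of 16.
budget = {N: required_infidelity(N) for N in (10, 20, 40, 80)}
```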
If we want the overall expression to fall off as $1/N^2$, i.e., to achieve Heisenberg scaling, then we need the error $1-F$ to scale as $1/N^4$. While this requirement is demanding in terms of performance, it should be noted that we have assumed that all the populations in ${\lvert J,M \rangle},\ M\neq 0$ are equal, when in fact the non-zero terms in the output density matrix are much more concentrated near the target state ${\lvert J,0 \rangle}$ for our protocol. As the precision involves terms like expectation values of ${J^z}^4$, error terms with support on states far away from ${\lvert J,0 \rangle}$ will give large errors. Thus, we are overestimating the error in this case and this should be viewed as a loose upper bound on the precision. W. Wasilewski, K. Jensen, H. Krauter, J. J. Renema, M. V. Balabas, and E. S. Polzik, Phys. Rev. Lett. [**104**]{}, 133601 (2010). J. Taylor [*et al.*]{}, Nat. Phys. [**4**]{}, 810 (2008). W. Wang [*et al.*]{}, Nat. Comm. [**10**]{}, 4382 (2019). V. Giovannetti, S. Lloyd and L. Maccone, Nat. Photonics [**5**]{}, 222 (2011). V. Giovannetti, Science [**306**]{}, 1330 (2004). S. Pirandola, B. R. Bardhan, T. Gehring, C. Weedbrook, and S. Lloyd, Nat. Photonics [**12**]{}, 724 (2018). https://doi.org/10.1038/ncomms2067. M. Zwierz, C. A. P[é]{}rez-Delgado and P. Kok, Phys. Rev. Lett. [**105**]{}, 180402 (2010). https://doi.org/10.1038/s41467-017-02510-3. https://doi.org/10.1088/2058-9565/aa9c56. G. Xu and G. Long, Phys. Rev. A [**90**]{}, 022323 (2014). K. M[ø]{}lmer and A. S[ø]{}rensen, Phys. Rev. Lett. [**82**]{}, 1835 (1999). 
https://doi.org/10.1038/ncomms5705. https://doi.org/10.1088/1367-2630/17/8/083027. https://doi.org/10.1126/science.1208798. https://doi.org/10.1103/PhysRevA.95.013845. https://doi.org/10.1038/nphys943. https://doi.org/10.1063/1.1358305. https://doi.org/10.1088/0031-8949/82/03/038103. https://doi.org/10.1017/CBO9781139034807.016. Y. Ouyang, N. Shettell, and D. Markham, arXiv:1908.02378 (2019). G. K. Brennen, G. Pupillo, E. Rico, T. M. Stace, and D. Vodola, Phys. Rev. Lett. [**117**]{}, 240504 (2016). H. Pollatsek and M. B. Ruskai, Linear Algebra and its Applications [**392**]{}, 255 (2004). https://doi.org/10.1038/nature10376. S. Garcia, M. Stammeier, J. Deiglmayr, F. Merkt, and A. Wallraff, Phys. Rev. Lett. [**123**]{}, 193201 (2019). https://doi.org/10.1038/srep15472. https://doi.org/10.1103/PhysRevLett.122.153603. https://doi.org/10.1126/science.aaf2941. https://doi.org/10.1103/PhysRevLett.118.140502. https://doi.org/10.1063/1.4935346. Acknowledgements ================ We acknowledge helpful discussions with Jason Twamley and Yingkai Ouyang. This research was funded in part by the Australian Research Council Centre of Excellence for Engineered Quantum Systems (Project number CE170100009). Author contributions ==================== GKB and MTJ did analytic modelling and also carried out the numerical simulations. All authors contributed to the theoretical development of this work. Competing interests =================== The authors declare no competing interests.
--- abstract: 'We present an experimental state-independent violation of an inequality for noncontextual theories on single particles. We show that 20 different single-photon states violate an inequality which involves correlations between results of sequential compatible measurements by at least 419 standard deviations. Our results show that, for any physical system, even for a single system, and independent of its state, there is a universal set of tests whose results do not admit a noncontextual interpretation. This sheds new light on the role of quantum mechanics in quantum information processing.' author: - Elias Amselem - 'Magnus R[å]{}dmark' - Mohamed Bourennane - Adán Cabello title: 'State-independent quantum contextuality with single photons' --- The debate on whether quantum mechanics can be completed with hidden variables started in 1935 with an ingenious example proposed by Einstein, Podolsky, and Rosen [@EPR35] (EPR), suggesting that quantum mechanics only gives an incomplete description of nature. Schrödinger pointed out the fundamental role of quantum entanglement in EPR’s example and concluded that entanglement is “[*the*]{} characteristic trait of quantum mechanics” [@Schrodinger35]. For years, this has been a commonly accepted paradigm, stimulated by the impact of the applications of entanglement in quantum communication [@BW92; @BBCJPW93], quantum computation [@RB01], and violations of Bell inequalities [@Bell64; @ADR82; @TBZG98; @WJSWZ98; @RMSIMW01; @GPKBZAZ07; @MMMOM08; @BBGKLLS08]. However, Bohr argued that similar paradoxical examples occur every time we compare different experimental arrangements, without the need of entanglement nor composite systems [@Bohr35]. The Kochen-Specker (KS) theorem [@Specker60; @Bell66; @KS67] illustrates Bohr’s intuition with great precision. 
The KS theorem states that, for every physical system there is always a finite set of tests such that it is impossible to assign them predefined noncontextual results in agreement with the predictions of quantum mechanics [@Specker60; @KS67]. Remarkably, the proof of the KS theorem [@KS67] requires neither a composite system nor any special quantum state: it holds for any physical system with more than two internal levels (otherwise the notion of noncontextuality becomes trivial), independent of its state. It has been discussed for a long time whether or not the KS theorem can be translated into experiments [@CG98; @Meyer99]. Recently, however, quantum contextuality has been tested with single photons [@SZWZ00; @HLZPG03] and single neutrons [@HLBBR03] in specific states. Very recently it has been shown that the KS theorem can be converted into experimentally testable state-dependent [@CFRH08] and state-independent [@Cabello08] violations of inequalities involving correlations between compatible measurements. For single systems, only a state-dependent violation for a specific state of single neutrons has been reported [@BKSSCRH09]. A state-independent violation has been observed only in composite systems of two $^{40}$Ca$^+$ trapped ions [@KZGKGCBR09]. Following the spirit of the original KS theorem, which deals with the problem of hidden variables in single systems, we report the first state-independent violation for single-particle systems. 
Any theory in which the nine observables $A, B, C, a, b, c, \alpha, \beta$, and $\gamma$ have predefined noncontextual outcomes $-1$ or $+1$, must satisfy the following inequality [@Cabello08]: $$\chi \equiv \langle A B C \rangle + \langle a b c \rangle + \langle \alpha \beta \gamma \rangle +\langle A a \alpha \rangle + \langle B b \beta \rangle - \langle C c \gamma \rangle \le 4, \label{second}$$ where $\langle A B C \rangle$ denotes the ensemble average of the product of the three outcomes of measuring the mutually compatible observables $A$, $B$, and $C$. Surprisingly, for any four-dimensional system, there is a set of observables for which the prediction of quantum mechanics is $\chi=6$ for any quantum state of the system [@Cabello08]. The purpose of this experiment is to test this prediction on different quantum states of a single-particle system. A physical system particularly well suited for this purpose is the one comprising a single photon carrying two qubits of quantum information: the first qubit is encoded in the spatial path $s$ of the photon, and the second qubit in the polarization $p$. The quantum states $|0\rangle_s=|t\rangle_s$ and $|1\rangle_s=|r\rangle_s$, where $t$ and $r$ denote the transmitted and reflected paths of the photon, respectively, provide a basis for describing any quantum state of the photon’s spatial path. Similarly, $|0\rangle_p=|H\rangle_p$ and $|1\rangle_p=|V\rangle_p$, where $H$ and $V$ denote horizontal and vertical polarization, respectively, provide a basis for describing any quantum state of the photon’s polarization. 
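The state independence of the prediction $\chi=6$ can be verified directly: for the spatial-path/polarization observables specified in Eq. (\[observables\]) below, each row and column product of the square reduces to plus or minus the identity, so every expectation value in $\chi$ equals $\pm 1$ for *any* state. A NumPy sketch (the random test state is purely for illustration):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
kron = np.kron

# Peres-Mermin square: first tensor factor = spatial-path qubit, second = polarization.
A, B, C = kron(sz, I2), kron(I2, sz), kron(sz, sz)
a, b, c = kron(I2, sx), kron(sx, I2), kron(sx, sx)
al, be, ga = kron(sz, sx), kron(sx, sz), kron(sy, sy)

rng = np.random.default_rng(0)
v = rng.normal(size=4) + 1j * rng.normal(size=4)   # arbitrary pure state
v /= np.linalg.norm(v)

def mean(ops):
    """Expectation value of the product of mutually compatible observables."""
    return np.real(v.conj() @ np.linalg.multi_dot(ops) @ v)

chi = (mean([A, B, C]) + mean([a, b, c]) + mean([al, be, ga])
       + mean([A, a, al]) + mean([B, b, be]) - mean([C, c, ga]))
# chi evaluates to 6 for every state, violating the noncontextual bound 4.
```

The triples multiply to the identity except $Cc\gamma=-\openone$, which is the algebraic source of the state-independent violation.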
A suitable choice of observables giving $\chi = 6$ is the following [@Cabello08]: $$\begin{aligned} &&\;\;A=\sigma_z^s,\;\;\;\;\;\;\;\;\;\;\;\;\;\; B=\sigma_z^p,\;\;\;\;\;\;\;\;\;\;\;\; C=\sigma_z^s \otimes \sigma_z^p, \nonumber \\ &&\;\;a=\sigma_x^p,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; b=\sigma_x^s,\;\;\;\;\;\;\;\;\;\;\;\; c=\sigma_x^s \otimes \sigma_x^p, \nonumber \\ &&\alpha=\sigma_z^s \otimes \sigma_x^p,\;\;\;\;\;\;\; \beta=\sigma_x^s \otimes \sigma_z^p,\;\;\;\;\;\; \gamma=\sigma_y^s \otimes \sigma_y^p, \label{observables}\end{aligned}$$ where $\sigma_z^s$ denotes the Pauli matrix along the $z$ direction of the spatial path qubit, $\sigma_x^p$ denotes the Pauli matrix along the $x$ direction of the polarization qubit, and $\otimes$ denotes tensor product. ![Preparation of the polarization-spatial path encoded states of single photons. The setup consists of a source of $H$-polarized single photons followed by a half wave plate (HWP) and a polarizing beam splitter (PBS), allowing any probability distribution of a photon in the paths $t$ and $r$. The wedge (W) placed in one of the paths adds an arbitrary phase shift between both paths. A HWP and a quarter wave plate (QWP) in each path allow us to rotate the outputs of the PBS to any polarization. Symbol definitions are given at the bottom of Fig. \[Blocks\].[]{data-label="Preparation"}](fig_preparation.eps){width="0.70\linewidth"} ![Devices for measuring the nine observables (\[observables\]). A measurement of $A$ requires only to distinguish between paths $t$ and $r$. For measuring $b$, note that its eigenstates are $(|t\rangle \pm |r\rangle)/ \sqrt{2}$ and they need to be mapped to the paths $t$ and $r$; this is accomplished by interference with the help of an additional $50/50$ beam splitter (BS) and a wedge. The measurements of $a$ and $B$ are standard polarization measurements using a PBS and a HWP. 
Observables $C$, $c$, $\alpha$, $\beta$, and $\gamma$ are the product of a spatial path and a polarization observable $\sigma_i^s \otimes \sigma_j^p$. Each of these observables has a four-dimensional eigenspace, but since the observables need to be rowwise and columnwise compatible, only their common eigenstates can be used for distinguishing the eigenvalues. This implies that $C$, $c$, and $\gamma$ can be implemented as Bell measurements with different distributions of the Bell states. Similarly, $\alpha$ and $\beta$ are Bell measurements preceded by a polarization rotation. In this way $\gamma$ is compatible also with $\alpha$ and $\beta$.[]{data-label="Blocks"}](fig_blocks.eps){width="1.00\linewidth"} To generate polarization-spatial path encoded single-photon states, we used the setup described in Fig. \[Preparation\]. We experimentally tested the value of $\chi$ for 20 different quantum states. It is of utmost importance for the experiment that the measurements of each of the nine observables in (\[observables\]) are context independent [@Cabello08], in the sense that the measurement device used for the measurement of, e.g., $B$ must be the same when $B$ is measured with the compatible observables $A$ and $C$, and when $B$ is measured with $b$ and $\beta$, which are compatible with $B$ but not with $A$ and $C$. For the experiment we used the measurement devices described in Fig. \[Blocks\], which satisfy this requirement. ![Setups for measuring the six sets of observables to test inequality (\[second\]). We explicitly describe the setup for measuring $C$, $A$, and $B$; the description of the other setups is obtained by replacing $C$, $A$, and $B$ with the corresponding observables. The seven boxes are single-observable measuring devices (see Fig. \[Blocks\]). The photon, prepared in a specific state, enters the device for measuring $C$ through the device’s input and exits through one of the two possible outputs. 
A detection of the photon in one of these outputs would make the measurement of the next observable impossible. Instead, we placed, after each of the two outputs of the $C$-measuring device, a device for measuring the second observable, $A$ (we thus used two identical $A$-measuring devices). Similarly, we also placed, after each of the four outputs of the $A$-measuring devices, a device for measuring the third observable, $B$ (we thus used four identical $B$-measuring devices). Note that we need to recreate the eigenstates of the measured observable before the photon enters the next measuring device, since our single-observable measuring devices map eigenstates to a fixed spatial path and polarization. Finally, we placed a single-photon detector (D) after each of the eight outputs of the four $B$-measuring devices. An individual photon passing through the whole arrangement is detected only by one of the eight detectors, which indicates which one of the eight combinations of results for $C$, $A$, and $B$ is obtained.[]{data-label="configurations"}](fig_allobservables.eps){width="1.00\linewidth"} ![State independence of the violation of the inequality $\chi \le 4$. The value of $\chi$ was tested for 20 different quantum states: four pure states with maximum internal entanglement between the spatial path and polarization, which would maximally violate a Clauser-Horne-Shimony-Holt-Bell-like inequality [@CHSH69] (states $|\psi_1\rangle$–$|\psi_4\rangle$), one mixed state with partial internal entanglement which would violate a Clauser-Horne-Shimony-Holt-Bell-like inequality ($\rho_{5}$), one mixed state with partial internal entanglement which would not violate a Clauser-Horne-Shimony-Holt-Bell-like inequality ($\rho_{6}$), one mixed state without internal entanglement according to the Peres-Horodecki criterion [@Peres96; @HHH96] ($\rho_{7}$), 12 pure states without internal entanglement ($|\psi_{8}\rangle$–$|\psi_{19}\rangle$), and a maximally mixed state ($\rho_{20}$). 
The explicit expression of each state is given in Table \[Table\]. The red solid line indicates the classical upper bound. The blue dashed line at $5.4550$ indicates the average value of $\chi$ over all the 16 pure states. []{data-label="Violation"}](fig_violation.eps){width="1.00\linewidth"}

![Correlation measurements of all terms in the inequality (\[second\]) for the states $|\psi_{3}\rangle$ ($a$) and $|\psi_{14}\rangle$ ($b$). The figures show experimentally estimated probabilities for detecting a photon in each of the eight detectors. A photon detection corresponds to certain values ($\pm 1$) for the three measured dichotomic observables. For example, the bar height at ($+++;\alpha A a$) represents the probability to obtain the results $\alpha, A, a=+1$, and similarly ($++-;\alpha A a$) represents $\alpha, A=+1$ and $a=-1$. The expectation values for each measurement are also given.[]{data-label="States"}](fig_states.eps){width="1.00\linewidth"}

  State                                                                                                                   Expectation value         SD
  ----------------------------------------------------------------------------------------------------------------------- ------------------------- --------
  ${\vert \psi_1 \rangle} = \frac{1}{\sqrt{2}}({\vert t \rangle}{\vert H \rangle}+{\vert r \rangle}{\vert V \rangle})$     $5.4366$ $\pm$ $0.0012$   $1169$
  ${\vert \psi_2 \rangle} = \frac{1}{\sqrt{2}}({\vert t \rangle}{\vert H \rangle}-{\vert r \rangle}{\vert V \rangle})$     $5.4393$ $\pm$ $0.0023$   $621$
  ${\vert \psi_3 \rangle} = \frac{1}{\sqrt{2}}({\vert t \rangle}{\vert V \rangle}+{\vert r \rangle}{\vert H \rangle})$     $5.4644$ $\pm$ $0.0029$   $498$
  ${\vert \psi_4 \rangle} = \frac{1}{\sqrt{2}}({\vert t \rangle}{\vert V \rangle}-{\vert r \rangle}{\vert H \rangle})$     $5.4343$ $\pm$ $0.0026$   $561$
  $\rho_{5} = \frac{13}{16} |\psi_1\rangle \langle\psi_1| + \frac{1}{16} \sum_{j=2}^{4} |\psi_j\rangle \langle\psi_j|$      $5.4384$ $\pm$ $0.0010$   $1386$
  $\rho_{6} = \frac{5}{8} |\psi_1\rangle \langle\psi_1| + \frac{1}{8} \sum_{j=2}^{4} |\psi_j\rangle \langle\psi_j|$         $5.4401$ $\pm$ $0.0010$   $1509$
  $\rho_{7} = \frac{7}{16} |\psi_1\rangle \langle\psi_1| + \frac{3}{16} \sum_{j=2}^{4} |\psi_j\rangle \langle\psi_j|$       $5.4419$ $\pm$ $0.0010$   $1433$
  ${\vert \psi_{8} \rangle} = {\vert t \rangle}{\vert H \rangle}$                                                          $5.3774$ $\pm$ $0.0020$   $676$
  ${\vert \psi_{9} \rangle} = {\vert t \rangle}{\vert V \rangle}$                                                          $5.5131$ $\pm$ $0.0032$   $475$
  ${\vert \psi_{10} \rangle} = {\vert r \rangle}{\vert H \rangle}$                                                         $5.4306$ $\pm$ $0.0031$   $465$
  ${\vert \psi_{11} \rangle} = {\vert r \rangle}{\vert V \rangle}$                                                         $5.4554$ $\pm$ $0.0017$   $850$
  ${\vert \psi_{12} \rangle} = \frac{1}{\sqrt{2}}{\vert t \rangle}({\vert H \rangle}+{\vert V \rangle})$                   $5.4139$ $\pm$ $0.0015$   $960$
  ${\vert \psi_{13} \rangle} = \frac{1}{\sqrt{2}}{\vert t \rangle}({\vert H \rangle}+i{\vert V \rangle})$                  $5.4835$ $\pm$ $0.0022$   $667$
  ${\vert \psi_{14} \rangle} = \frac{1}{\sqrt{2}}({\vert t \rangle}+{\vert r \rangle}){\vert H \rangle}$                   $5.5652$ $\pm$ $0.0032$   $489$
  ${\vert \psi_{15} \rangle} = \frac{1}{\sqrt{2}}({\vert t \rangle}+i{\vert r \rangle}){\vert H \rangle}$                  $5.5137$ $\pm$ $0.0036$   $419$
  ${\vert \psi_{16} \rangle} = \frac{1}{2}({\vert t \rangle}+{\vert r \rangle})({\vert H \rangle}+{\vert V \rangle})$      $5.4304$ $\pm$ $0.0014$   $1029$
  ${\vert \psi_{17} \rangle} = \frac{1}{2}({\vert t \rangle}+i{\vert r \rangle})({\vert H \rangle}+{\vert V \rangle})$     $5.2834$ $\pm$ $0.0019$   $674$
  ${\vert \psi_{18} \rangle} = \frac{1}{2}({\vert t \rangle}+{\vert r \rangle})({\vert H \rangle}+i{\vert V \rangle})$     $5.5412$ $\pm$ $0.0032$   $475$
  ${\vert \psi_{19} \rangle} = \frac{1}{2}({\vert t \rangle}+i{\vert r \rangle})({\vert H \rangle}+i{\vert V \rangle})$    $5.4968$ $\pm$ $0.0032$   $462$
  $\rho_{20} = \frac{1}{4} \sum_{j=1}^{4} |\psi_j\rangle\langle\psi_j|$                                                    $5.4437$ $\pm$ $0.0012$   $1229$

  : Experimental values of $\langle CAB \rangle + \langle cba \rangle + \langle \beta \gamma \alpha \rangle + \langle \alpha Aa \rangle + \langle \beta b B \rangle - \langle c \gamma C \rangle$ 
for 20 quantum states. The average value is $5.4550\pm0.0006$ and on average we violate the inequality with $655$ standard deviations (SDs).[]{data-label="Table"} For a sequential measurement of three compatible observables on the same photon, we used the single-observable measuring devices in Fig. \[Blocks\], appropriately arranged as described in Fig. \[configurations\]. Since the predictions of both noncontextual hidden variable theories and quantum mechanics do not depend on the order of the compatible measurements, we chose the most convenient order for each set of observables (e.g., we measured $CBA$ instead of $ABC$). This was usually the configuration which minimized the number of required interferometers and hence maximized the visibility. Specifically, we measured the averages $\langle CAB \rangle$, $\langle c b a \rangle$, $\langle \beta \gamma \alpha \rangle$, $\langle \alpha A a \rangle$, $\langle \beta b B \rangle$, and $\langle c \gamma C \rangle$, as described in Fig. \[configurations\]. Our single-photon source was an attenuated stabilized narrow bandwidth diode laser emitting at 780 nm and offering a long coherence length. The laser was attenuated so that the two-photon coincidences were negligible. The mean photon number per time window was $0.058$. All the interferometers in the experimental setup are based on free space displaced Sagnac interferometers, which possess a very high stability. We have reached a visibility above $99\%$ for phase insensitive interferometers, and a visibility ranging between $90\%$ and $95\%$ for phase sensitive interferometers. Our single-photon detectors were Silicon avalanche photodiodes calibrated to have the same detection efficiency. All single counts were registered using an eight-channel coincidence logic with a time window of $1.7$ ns. To test the prediction of a state-independent violation, we repeated the experiment on 20 quantum states of different purity and entanglement. 
For each pure state, we checked each of the six correlations in inequality (\[second\]) for about $1.7 \times 10^7$ photons. The results for the mixed states were obtained by suitably combining pure state data. Fig. \[Violation\] shows that a state-independent violation of inequality $\chi \le 4$ occurs, with an average value for $\chi$ of $5.4521$. Because of experimental imperfections, the experimental violation of the inequality falls short of the quantum-mechanical prediction for an ideal experiment ($\chi = 6$). The main systematic errors were due to the large number of optical interferometers involved in the measurements and to the imperfect overlap of the light modes and polarization components. The errors were deduced from propagated Poissonian counting statistics of the raw detection events. The number of detected photons was about $1.7 \times 10^6$ per second. The measurement time for each of the six sets of observables was $10$ s for each state. In Fig. \[States\] we also present measurement results for each experimental setup for the maximally entangled state $|\psi_3\rangle$ and the product state $|\psi_{14}\rangle$ defined in Table \[Table\]. Probabilities for each outcome as well as values of the correlations are shown. The overall detection efficiency of the experiment, defined as the ratio of detected to prepared photons, was $\eta = 0.50$. This value was obtained considering that the detection efficiency of the single-photon detectors is $55\%$ and the fiber coupling is $90\%$. Therefore, the fair sampling assumption (i.e., the assumption that detected photons are an unbiased subensemble of the prepared photons) is needed to conclude a violation of the inequality. This is the same assumption adopted in all previous state-dependent experimental violations of classical inequalities with photons [@ADR82; @TBZG98; @WJSWZ98; @GPKBZAZ07; @MMMOM08; @SZWZ00; @HLZPG03] and neutrons [@HLBBR03; @BKSSCRH09]. 
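Each correlation entering inequality (\[second\]) is estimated by weighting each of the eight outcome probabilities with the product of its three $\pm 1$ results. A minimal sketch (ours, with made-up probabilities rather than the measured data; `expectation` is a hypothetical helper):

```python
from itertools import product

def expectation(probs):
    """<XYZ> from the eight outcome probabilities of a triple measurement.

    `probs` maps outcome triples such as (+1, +1, -1) to relative
    frequencies; the expectation is the frequency-weighted product of
    the three +/-1 results.
    """
    return sum(p * r1 * r2 * r3 for (r1, r2, r3), p in probs.items())

# Made-up example: only outcomes whose three results multiply to +1
# carry weight, so the correlation is +1
probs = {outcome: 0.0 for outcome in product((+1, -1), repeat=3)}
probs[(+1, +1, +1)] = 0.5
probs[(-1, -1, +1)] = 0.5
print(expectation(probs))  # 1.0
```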
In conclusion, our results show that experimentally observed outcomes of measurements on single photons cannot be described by noncontextual models. A remarkable feature of this experiment is that the quantum violation of a classical inequality requires neither entangled states nor composite systems. It occurs even for single systems which cannot have entanglement. Moreover, it occurs for any quantum state, even for maximally mixed states, like $\rho_{20}$ in Fig. \[Violation\], which are usually considered “classical” states. This shows that entanglement is not the only feature of quantum mechanics that distinguishes the theory from classical physics; consequently, entanglement might not be the only resource for quantum information processing. Quantum contextuality of single quantum systems submitted to a sequence of compatible measurements might be an equally powerful, simpler and more fundamental resource. We thank Y. Hasegawa and J.-Å. Larsson for comments, and acknowledge support by the Swedish Research Council (VR), the Spanish MCI Project No. FIS2008-05596, and the Junta de Andalucía Excellence Project No. P06-FQM-02243. [00]{} A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. [**47**]{}, 777 (1935). E. Schrödinger, Proc. Cambridge Philos. Soc. [**31**]{}, 555 (1935). C. H. Bennett and S. J. Wiesner, Phys. Rev. Lett. [**69**]{}, 2881 (1992). C. H. Bennett [*et al.*]{}, Phys. Rev. Lett. [**70**]{}, 1895 (1993). R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. [**86**]{}, 5188 (2001). J. S. Bell, Physics (Long Island City, N.Y.) [**1**]{}, 195 (1964). A. Aspect, J. Dalibard, and G. Roger, Phys. Rev. Lett. [**49**]{}, 1804 (1982). W. Tittel, J. Brendel, H. Zbinden, and N. Gisin, Phys. Rev. Lett. [**81**]{}, 3563 (1998). G. Weihs [*et al.*]{}, Phys. Rev. Lett. [**81**]{}, 5039 (1998). M. A. Rowe [*et al.*]{}, Nature (London) [**409**]{}, 791 (2001). S. Gröblacher [*et al.*]{}, Nature (London) [**446**]{}, 871 (2007). C. Branciard [*et al.*]{}, Nat. Phys. 
[**4**]{}, 681 (2008). D. N. Matsukevich [*et al.*]{}, Phys. Rev. Lett. [**100**]{}, 150404 (2008). N. Bohr, Phys. Rev. [**48**]{}, 696 (1935). E. Specker, Dialectica [**14**]{}, 239 (1960). J. S. Bell, Rev. Mod. Phys. [**38**]{}, 447 (1966). S. Kochen and E. P. Specker, J. Math. Mech. [**17**]{}, 59 (1967). A. Cabello and G. García-Alcaine, Phys. Rev. Lett. [**80**]{}, 1797 (1998). D. A. Meyer, Phys. Rev. Lett. [**83**]{}, 3751 (1999). C. Simon, M. Żukowski, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. [**85**]{}, 1783 (2000). Y.-F. Huang [*et al.*]{}, Phys. Rev. Lett. [**90**]{}, 250401 (2003). Y. Hasegawa [*et al.*]{}, Nature (London) [**425**]{}, 45 (2003). A. Cabello, S. Filipp, H. Rauch, and Y. Hasegawa, Phys. Rev. Lett. [**100**]{}, 130404 (2008). A. Cabello, Phys. Rev. Lett. [**101**]{}, 210401 (2008). H. Bartosik [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 040403 (2009). G. Kirchmair [*et al.*]{}, Nature (London) [**460**]{}, 494 (2009). J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. [**23**]{}, 880 (1969). A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996). M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A [**223**]{}, 1 (1996).
--- abstract: 'Prosodic cues in conversational speech aid listeners in discerning a message. We investigate whether acoustic cues in spoken dialogue can be used to identify the importance of individual words to the meaning of a conversation turn. Individuals who are Deaf and Hard of Hearing often rely on real-time captions in live meetings. Word error rate, a traditional metric for evaluating automatic speech recognition (ASR), fails to capture that some words are more important for a system to transcribe correctly than others. We present and evaluate neural architectures that use acoustic features for 3-class word importance prediction. Our model performs competitively against state-of-the-art text-based word-importance prediction models, and it demonstrates particular benefits when operating on imperfect ASR output.' author: - | Sushant Kafle, Cecilia O. Alm, Matt Huenerfauth\ Rochester Institute of Technology, Rochester NY\ [{sxk5664,matt.huenerfauth,coagla}@rit.edu]{} bibliography: - 'naaclhlt2019.bib' title: | Modeling Acoustic-Prosodic Cues\ for Word Importance Prediction in Spoken Dialogues --- Introduction {#sec:introduction} ============ Not all words are equally important to the meaning of a spoken message. Identifying the importance of words is useful for a variety of tasks including text classification and summarization [@hong2014improving; @yih2007multi]. Considering the relative importance of words can also be valuable when evaluating the quality of output of an automatic speech recognition (ASR) system for specific tasks, such as caption generation for Deaf and Hard of Hearing (DHH) participants in spoken meetings [@kafle2016]. As described by @berke2018 , interlocutors may submit audio of individual utterances through a mobile device to a remote ASR system, with the text output appearing on an app for DHH users. With ASR being applied to new tasks such as this, it is increasingly important to evaluate ASR output effectively. 
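The limitation of a uniform error metric can be illustrated with a toy computation (our sketch; the utterance and the `wer` helper are invented for illustration): a standard edit-distance WER scores the loss of a filler word and the loss of a meaning-carrying negation identically.

```python
def wer(ref, hyp):
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = ref.split(), hyp.split()
    # Standard Levenshtein dynamic program over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref)

# Both hypotheses make exactly one deletion, so WER is identical even
# though dropping "not" changes the meaning far more than dropping "um"
print(wer("um i can not attend", "i can not attend"))  # 0.2
print(wer("um i can not attend", "um i can attend"))   # 0.2
```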
Traditional Word Error Rate (WER)-based evaluation assumes that all word transcription errors equally impact the quality of the ASR output for a user. However, this is less helpful for various applications [@mccowan2004use; @morris2004and]. In particular, @kafle2016 found that metrics with differential weighting of errors based on word importance correlate better with human judgment than WER does for the automatic captioning task. However, prior models based on text features for word importance identification [@kafle2017; @sheikh2016learning] face challenges when applied to conversational speech: - **Difference from Formal Texts**: Unlike formal texts, conversational transcripts may lack capitalization or punctuation, use informal grammatical structures, or contain disfluencies (e.g. incomplete words or edits, hesitations, repetitions), filler words, or more frequent out-of-vocabulary (and invented) words [@mckeown2005text]. - **Availability and Reliability**: Text transcripts of spoken conversations require a human transcriptionist or an ASR system, but ASR transcription is not always reliable or even feasible, especially for noisy environments, nonstandard language use, or low-resource languages, etc. ![Example of conversational transcribed text, *right where you move from*, that is difficult to disambiguate without prosody. The intended sentence structure was: *Right! Where you move from?*[]{data-label="fig:speechinfo"}](img/speech-info_v2.png){width="48.00000%"} While spoken messages include prosodic cues that focus a listener’s attention on the most important parts of the message [@frazier2006prosodic], such information may be omitted from a text transcript, as in Figure \[fig:speechinfo\], in which the speaker pauses after “right" (suggesting a boundary) and uses rising intonation on “from" (suggesting a question). Moreover, there are application scenarios where transcripts of spoken messages are not always available or fully reliable. 
In such cases, models based on a speech signal (without a text transcript) might be preferred. With this motivation, we investigate modeling acoustic-prosodic cues for predicting the importance of words to the meaning of a spoken dialogue. Our goal is to explore the versatility of speech-based (text-independent) features for word importance modeling. In this work, we frame the task of word importance prediction as sequence labeling and utilize a bi-directional Long Short-Term Memory (LSTM)-based neural architecture for context modeling on speech. Related Work ============ Many researchers have considered how to identify the importance of a word and have proposed methods for this task. Popular methods include frequency-based unsupervised measures of importance, such as Term Frequency-Inverse Document Frequency (TF-IDF), and word co-occurrence measures [@hacohen2005automatic; @matsuo2004keyword], which are primarily used for extracting relevant keywords from text documents. Other supervised measures of word importance have been proposed [@liu2011; @liu2004text; @hulth2003improved; @sheeba2012improved; @kafle2017] for various applications. Closest to our current work, researchers in [@kafle2017] described a neural network-based model for capturing the importance of a word at the sentence level. Their setup differed from traditional importance estimation strategies for document-level keyword-extraction, which had treated each *word* as a *term* in a document such that all *words* identified by a *term* received a uniform importance score, without regard to context. Similar to our application use-case, the model proposed by @kafle2017 identified word importance at a more granular level, i.e. sentence- or utterance-level. However, their model operated on human-generated transcripts of text. Since we focus on real-time captioning applications, we prefer a model that can operate without such human-produced transcripts, as discussed in Section \[sec:introduction\]. 
Previous researchers have modeled prosodic cues in speech for various applications [@tran2017joint; @brenier2005detection; @xie2009integrating]. For instance, in automatic prominence detection, researchers predict regions of speech with relatively more spoken stress [@wang2007acoustic; @brenier2005detection; @tamburini2003prosodic]. Identification of prominence aids automatically identifying content words [@wang2007acoustic], a crucial sub-task of spoken language understanding [@beckman2000tagging; @mishra2012word]. Moreover, researchers have investigated modeling prosodic patterns in spoken messages to identify syntactic relationships among words [@price1991use; @tran2017joint]. In particular, @tran2017joint demonstrated the effectiveness of speech-based features in improving the constituent parsing of conversational speech texts. In other work, researchers investigated prosodic events to identify important segments in speech, useful for producing a generic summary of the recordings of meetings [@xie2009integrating; @murray2005extractive]. At the same time, prosodic cues are also challenging in that they serve a range of linguistic functions and convey affect. We investigate models applied to spoken messages at a dialogue-turn level, for predicting the importance of words for understanding an utterance. Word Importance Prediction {#sec:model-architecture} ========================== For the task of word importance prediction, we formulate a sequence labeling architecture that takes as input a spoken dialogue turn utterance with word-level timestamps[^1], and assigns an importance label to every spoken word in the turn using a bi-directional LSTM architecture [@huang2015bidirectional; @lample2016neural]. 
![image](img/speech-feature-represent.png){width="80.00000%"} $$\begin{aligned} \label{eq:bi_lstm_1} {\overrightarrow{h_t}} = LSTM(s_t, {\overrightarrow{h_{t-1}}}) \\ \label{eq:bi_lstm_2} {\overleftarrow{h_t}} = LSTM(s_t, {\overleftarrow{h_{t-1}}})\end{aligned}$$ The word-level timestamp information is used to generate an acoustic-prosodic representation for each word ($s_t$) from the speech signal. Two LSTM units, moving in opposite directions through these word units ($s_t$) in an utterance, are then used for constructing a context-aware representation for every word. Each LSTM unit takes as input the representation of the word ($s_t$), along with the hidden state from the previous time step, and each outputs a new hidden state. At each time step, the hidden representations from both LSTMs are concatenated $h_t=[{\overrightarrow{h_t}};{\overleftarrow{h_t}}]$, in order to obtain a contextualized representation for each word. This representation is next passed through a projection layer (details below) to produce the final prediction for a word. Importance as Ordinal Classification {#sec:projection_layers} ------------------------------------ We define word importance prediction as the task of classifying each word into one of several importance classes, e.g., high importance (<span style="font-variant:small-caps;">hi</span>), medium importance (<span style="font-variant:small-caps;">mi</span>) and low importance (<span style="font-variant:small-caps;">li</span>) (details in Section \[sec:exp-setup\]). These importance class labels have a natural *ordering* such that the cost of misclassification is not uniform, e.g., misclassifying the <span style="font-variant:small-caps;">hi</span> class as the <span style="font-variant:small-caps;">li</span> class (or vice-versa) has a higher error cost than misclassifying <span style="font-variant:small-caps;">hi</span> as <span style="font-variant:small-caps;">mi</span>. 
Considering this ordinal nature of the importance class labels, we investigate different projection layers for output prediction: a softmax layer for making local importance prediction (<span style="font-variant:small-caps;">softmax</span>), a relaxed softmax tailored for ordinal classification (<span style="font-variant:small-caps;">ord</span>), and a linear-chain conditional random field (<span style="font-variant:small-caps;">crf</span>) for making a conditioned decision on the whole sequence.\ **Softmax Layer**. For the <span style="font-variant:small-caps;">softmax</span>-layer, the model predicts a normalized distribution over all possible labels ($L$) for every word conditioned on the hidden vector ($h_t$). **Relaxed Softmax Layer**. In contrast, the <span style="font-variant:small-caps;">ord</span>-layer uses a standard sigmoid projection for every output label candidate, without subjecting it to normalization. The intuition is that rather than learning to predict one label per word, the model predicts multiple labels. For a word with label $l \in L$, all other labels ordinally less than $l$ are also predicted. Both the softmax and the relaxed-softmax models are trained to minimize the categorical cross-entropy, which is equivalent to minimizing the negative log-probability of the correct labels. However, they differ in how they make the final prediction: Unlike the <span style="font-variant:small-caps;">softmax</span> layer which considers the most probable label for prediction, the <span style="font-variant:small-caps;">ord</span>-layer uses a special “scanning" strategy [@ordinal] – where for each word, the candidate labels are scanned from low to high (ordinal rank), until the score from a label is smaller than a threshold (usually 0.5) or no labels remain. The last scanned label with score greater than the threshold is selected as the output. **CRF Layer**. 
The <span style="font-variant:small-caps;">crf</span>-layer explores the possible dependence between subsequent importance labels of words. With this architecture, the network searches for the optimal path through all possible label sequences to make the prediction. The model is then optimized by maximizing the score of the correct sequence of labels, while minimizing the probability of all other possible sequences. Considering each of these different projection layers, we investigate different models for the word importance prediction task. Section \[sec:acoustic\_feature\_representation\] describes our architecture for acoustic-prosodic feature representation at the word level, and Sections \[sec:exp-setup\] and \[sec:results\] describe our experimental setup and subsequent evaluations. Acoustic-Prosodic Feature Representation {#sec:acoustic_feature_representation} ======================================== Similar to familiar feature-vector representations of words in text, e.g., word2vec [@word2vec] or GloVe [@glove], various researchers have investigated vector representations of words based on speech. In addition to capturing acoustic-phonetic properties of speech [@he2016multi; @chung2016audio], some recent work on acoustic embeddings has investigated encoding semantic properties of a word directly from speech [@speech2vec]. In a similar way, our work investigates a speech-based feature representation strategy that considers prosodic features of speech at a sub-word level, to learn a word-level representation for the task of importance prediction in spoken dialogue. 
Sub-word Feature Extraction {#acoustic-feats} --------------------------- We examined four categories of features that have been previously considered in computational models of prosody, including: pitch-related features (10), energy features (11), voicing features (3) and spoken-lexical features (6): $\bullet$ **Pitch (<span style="font-variant:small-caps;">freq</span>) and Energy (<span style="font-variant:small-caps;">eng</span>) Features:** Pitch and energy features have been found effective for modeling intonation and detecting emphasized regions of speech [@brenier2005detection]. From the pitch and energy contours of the speech, we extracted: minimum, time of minimum, maximum, time of maximum, mean, median, range, slope, standard deviation and skewness. We also extracted RMS energy from a mid-range frequency band (500-2000 Hz), which has been shown to be useful for detecting prominence of syllables in speech [@tamburini2003prosodic]. $\bullet$ **Spoken-lexical Features (<span style="font-variant:small-caps;">lex</span>):** We examined spoken-lexical features, including word-level spoken language features such as duration of the spoken word, the position of the word in the utterance, and duration of silence before the word. We also estimated the number of syllables spoken in a word, using the methodology of @de2009praat . Further, we considered the per-word average syllable duration and the per-word articulation rate of the speaker (number of syllables per second). $\bullet$ **Voicing Features (<span style="font-variant:small-caps;">voc</span>):** As a measure of voice quality, we investigated spectral-tilt, which is represented as (H1 - H2), i.e. the difference between the amplitudes of the first harmonic (H1) and the second harmonic (H2) in the Fourier Spectrum. The spectral-tilt measure has been shown to be effective in characterizing glottal constriction [@voicequal06], which is important in distinguishing voicing characteristics, e.g. 
whisper [@itoh2001acoustic]. We also examined other voicing measures, e.g. Harmonics-to-Noise Ratio and Voiced Unvoiced Ratio. In total, we extracted 30 features using Praat [@praat], as listed above. Further, we included speaker-normalized (<span style="font-variant:small-caps;">znorm</span>) versions of the features. Altogether, we had a total of 60 speech-based features extracted from sub-word units. Sub-word to Word-level Representation ------------------------------------- The acoustic features listed above were extracted from a 50-ms sliding window over each word region with a 10-ms overlap. In our model, each word was represented as a sequence of these sub-word features with varying lengths, as shown in Figure \[fig-feat-represent\]. To get a feature representation for a word, we utilized a bi-directional Recurrent Neural Network (RNN) layer on top of the sub-word features. The spoken-lexical features were then concatenated to this word-level feature representation to get our final feature vectors. For this task, we utilized Gated Recurrent Units (GRUs) [@gru] as our RNN cell, rather than LSTM units, due to better performance observed during our initial analysis. Experimental Setup {#sec:exp-setup} ================== We utilized a portion of the Switchboard corpus [@switchboard] that had been manually annotated with word importance scores, as a part of the Word Importance Annotation project [@kafle2017]. That annotation covers 25,048 utterances spoken by 44 different English speakers, containing word-level timestamp information along with a numeric score (in the range of \[0, 1\]) assigned to each word by the annotators. These numeric importance scores have three natural ordinal ranges [\[0 - 0.3), \[0.3, 0.6), \[0.6, 1\]]{} that the annotators had used during the annotation to indicate the importance of a word in understanding an utterance. 
The ordinal ranges represent low importance (<span style="font-variant:small-caps;">li</span>), medium importance (<span style="font-variant:small-caps;">mi</span>) and high importance (<span style="font-variant:small-caps;">hi</span>) of words, respectively. Our models were trained and evaluated using this data, treating the task as an ordinal classification problem with the labels ordered as (<span style="font-variant:small-caps;">li</span> $<$ <span style="font-variant:small-caps;">mi</span> $<$ <span style="font-variant:small-caps;">hi</span>). We created an 80%, 10% and 10% split of our data for training, validation, and testing. The prediction performance of our model was primarily evaluated using the Root Mean Square (RMS) measure, to account for the ordinal nature of the labels. Additionally, our evaluation includes F-score and accuracy results to measure classification performance. As our baseline, we used various text-based importance prediction models trained and evaluated on the same data split, as described in Section \[sec:comparisontext\]. For training, we explored various architectural parameters to find the best-working setup for our models: our input layer of GRU cells, used as the word-based speech representation, had a dimension of 64; the LSTM units, used for generating contextualized representations of a spoken word, had a dimension of 128. We used the Adam optimizer with an initial learning rate of $0.001$ for training. Each training batch had a maximum of 20 dialogue-turn utterances, and the model was trained until no improvement was observed in 7 consecutive iterations. Experiments {#sec:results} =========== Tables \[tbl:projection\_eval\], \[tbl:speech\_ablation\] and \[tbl:asr\_eval\] summarize the performance of our models on the word importance prediction task. The performance scores reported in the tables are averaged across 5 different trials, to account for possible bias due to random initialization of the model.
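The label scheme and metric just described can be sketched in a few lines; the function names and the integer encoding (0, 1, 2) are our own conventions, not the paper's.

```python
def importance_class(score):
    """Map a numeric importance score in [0, 1] to the three ordinal
    classes used in the annotation: LI = [0, 0.3), MI = [0.3, 0.6),
    HI = [0.6, 1].  Returns the class index 0, 1 or 2 (LI < MI < HI)."""
    return 0 if score < 0.3 else (1 if score < 0.6 else 2)


def rms_error(predicted, reference):
    """Root Mean Square distance between predicted and reference class
    indices.  Unlike plain accuracy, it penalizes an LI <-> HI
    confusion (distance 2) more than an adjacent one (distance 1),
    matching the ordinal nature of the labels."""
    sq = [(p - r) ** 2 for p, r in zip(predicted, reference)]
    return (sum(sq) / len(sq)) ** 0.5
```

This is why RMS, rather than a purely categorical metric, is the primary evaluation measure for this task.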
Comparison of the Projection Layers ----------------------------------- We compared the efficacy of the learning architecture’s three projection layers (Section \[sec:projection\_layers\]) by training them separately and comparing their performance on the test corpus. Table \[tbl:projection\_eval\] summarizes the results of this evaluation.\ : Performance of our speech-based models on the test data under different projection layers. Best performing scores are highlighted in **bold**.[]{data-label="tbl:projection_eval"} **Results and Analysis**: The <span style="font-variant:small-caps;">lstm-softmax</span>-based and <span style="font-variant:small-caps;">lstm-crf</span>-based projection layers had nearly identical performance; in comparison, the <span style="font-variant:small-caps;">lstm-ord</span> model performed better, with a significantly lower RMS score than the other two models. This suggests the utility of the ordinal constraint present in the <span style="font-variant:small-caps;">ord</span>-based model for word importance classification. Ablation Study on Speech Features --------------------------------- To compare the effect of different categories of speech features on the performance of our model, we evaluated variations of the model by removing one feature group at a time during training. Table \[tbl:speech\_ablation\] summarizes the results of the experiment.\ : Speech feature ablation study. The minus sign indicates the feature group removed from the model during training.
Markers ($\star$ and $\dagger$) indicate the biggest and the second-biggest change in model performance for each metric, respectively.[]{data-label="tbl:speech_ablation"} **Results and Analysis:** Omitting speaker-based normalization (<span style="font-variant:small-caps;">znorm</span>) features and omitting spoken-lexical features (<span style="font-variant:small-caps;">lex</span>) resulted in the greatest increases in the overall RMS error (+5.5% and +4.8% relative increase in RMS, respectively) – suggesting the discriminative importance of these features for word importance prediction. Further, our results indicated the importance of the energy-based (<span style="font-variant:small-caps;">eng</span>) features, whose removal resulted in a substantial drop (-2.4% relative decrease) in the accuracy of the model. Comparison with the Text-based Models {#sec:comparisontext} ------------------------------------- In this analysis, we compare our best-performing speech-based model with a state-of-the-art word importance prediction model based on text features; this prior text-based model did not utilize any acoustic or prosodic information about the speech signal. The baseline text-based word importance prediction model used in our analysis is described in @kafle2017 , and it uses pre-trained word embeddings and bi-directional LSTM units, with a CRF layer on top, to make a prediction for each word. As discussed in Section \[sec:introduction\], human transcriptions are difficult to obtain in some applications, e.g. real-time conversational settings. Realistically, text-based models need to rely on ASR systems for transcription, which will contain some errors. Thus, we compare our speech-based model and this prior text-based model on two different types of transcripts: manually generated or ASR generated. We processed the original speech recording for each segment of the corpus with an ASR system to produce an automatic transcription.
To simulate different word error rate (WER) levels in the transcript, we also artificially injected white noise into the original speech recording and then processed it again with our ASR system. Specifically, we utilized Google Cloud Speech[^2] ASR with WER$\approx25\%$ on our test data (without the addition of noise) and WER$\approx30\%$ after noise was inserted. Given our interest in generating automatic captions for DHH users in a live meeting on a turn-by-turn basis (Section 1), we provided the ASR system with the recording for each dialogue-turn individually, which may partially explain these somewhat high WER scores. The automatically generated transcripts were then aligned with the reference transcript to compare the importance scores. Insertion errors automatically received a label of low importance (<span style="font-variant:small-caps;">li</span>). The WER for each ASR system was computed by performing a word-to-word comparison, without any pre-processing (e.g., removal of filler words). : Comparison of our speech-based model with a prior text-based model, under different word error rate conditions.[]{data-label="tbl:asr_eval"} **Results and Analysis:** Given the significant lexical information available to the text-based model, it would be natural to expect it to achieve higher scores than a model based only on acoustic-prosodic features. As expected, Table \[tbl:asr\_eval\] reveals that when operating on perfect human-generated transcripts (with zero recognition errors), the text-based model outperformed our speech-based model. However, when operating on ASR transcripts (including recognition errors), the speech-based models were competitive in performance with the text-based models. In particular, prior work has found that WER of $\approx30\%$ is typical for modern ASR in many real-world settings or without good-quality microphones [@lasecki2012real; @barker2017chime].
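The word-to-word WER computation mentioned above amounts to a word-level Levenshtein distance normalized by the reference length; here is a generic sketch (not the paper's exact tooling):

```python
def word_error_rate(reference, hypothesis):
    """WER by a word-to-word comparison without any pre-processing:
    minimum number of substitutions, insertions and deletions (edit
    distance over words) divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)
```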
When operating on such ASR output, the RMS error of the speech-based model and the text-based model were comparable. Conclusion ========== Motivated by recent work on evaluating the accuracy of automatic speech recognition systems for real-time captioning for Deaf and Hard of Hearing (DHH) users [@kafle2017], we investigated how to predict the importance of a word to the overall meaning of a spoken conversation turn. In contrast to prior work, which had depended on text-based features, we have proposed a neural architecture for modeling prosodic cues in spoken messages, for predicting word importance. Our text-independent speech model had an F-score of $56$ in a 3-class word importance classification task. Although a text-based model utilizing pre-trained word representations had better performance, acquisition of accurate text transcripts of speech conversations is impractical for some applications. When utilizing popular ASR systems to automatically generate speech transcripts as input for text-based models, we found that model performance decreased significantly. Given the potential we observed for acoustic-prosodic features to predict word importance, continued work involves combining both text- and speech-based features for the task of word importance prediction. Acknowledgements ================ This material was based on work supported by the Department of Health and Human Services under Award No. 90DPCP0002-01-00, by a Google Faculty Research Award, and by the National Technical Institute of the Deaf (NTID). [^1]: For the purposes of accurately evaluating the efficacy of speech-based features for word importance, we currently make use of high-quality human-annotated word-level timestamp information in our train/evaluation corpus; in the future, speech tokenization could be automated. [^2]: <https://cloud.google.com/Speech_API>
--- abstract: | Laplacian matrices of graphs arise in large-scale computational applications such as semi-supervised machine learning; spectral clustering of images, genetic data and web pages; transportation network flows; electrical resistor circuits; and elliptic partial differential equations discretized on unstructured grids with finite elements. A Lean Algebraic Multigrid (LAMG) solver of the symmetric linear system $Ax=b$ is presented, where $A$ is a graph Laplacian. LAMG’s run time and storage are empirically demonstrated to scale linearly with the number of edges. LAMG consists of a setup phase during which a sequence of increasingly-coarser Laplacian systems is constructed, and an iterative solve phase using multigrid cycles. General graphs pose algorithmic challenges not encountered in traditional multigrid applications. LAMG combines a lean piecewise-constant interpolation, judicious node aggregation based on a new node proximity measure (the affinity), and an energy correction of coarse-level systems. This results in fast convergence and substantial setup and memory savings. A serial LAMG implementation scaled linearly for a diverse set of real-world graphs with up to edges, with no parameter tuning. LAMG was more robust than the UMFPACK direct solver and Combinatorial Multigrid (CMG), although CMG was faster than LAMG on average. Our methodology is extensible to eigenproblems and other graph computations. author: - 'Oren E. Livne [^1]' - 'Achi Brandt [^2]' bibliography: - 'lamg.bib' title: | LEAN ALGEBRAIC MULTIGRID (LAMG):\ FAST GRAPH LAPLACIAN LINEAR SOLVER --- *Dedicated to J. Brahms’ Symphony No. 1 in C minor, Op. 68* Linear-scaling numerical linear solvers, graph Laplacian, aggregation-based algebraic multigrid, piecewise-constant interpolation operator, high-performance computing. 65M55, 65F10, 65F50, 05C50, 68R10, 90C06, 90C35. 
Introduction ============ Let $G=(\cN,\cE,w)$ be a connected weighted undirected graph, where $\cN$ is a set of $n$ nodes, $\cE$ is a set of $m$ edges, and $w: \cE \rightarrow \Real^{+}$ is a weight function. The Laplacian matrix $\bA_{n \times n}$ is naturally defined by the [*quadratic energy*]{} $$E(\bx) := \bx^{T} \bA \bx = \sum_{(u,v) \in \cE} w_{uv} \left( x_u - x_v \right)^2\,, \qquad \bx \in \Real^{\cN}\,, \label{energy}$$ where $\bx^T$ denotes the transpose of $\bx$. In matrix form, $$\bA = \left( a_{uv} \right)_{u,v \in \cN}\,, \qquad a_{uv} := \begin{cases} \sum_{v' \in \cN_u} w_{uv'}\,, & u=v\,,\\ -w_{uv}\,, & v \in \cN_u := \{v': (u,v') \in \cE\}\,,\\ 0\,, & \text{otherwise}. \end{cases} \label{lap}$$ ![A $5$-node graph and its corresponding Laplacian matrix.[]{data-label="simple_graph"}](figures/graph.eps){height="1in"} $\left( \begin{tabular}{rrrrr} 8 & -1 & -1 & -1 & -5 \\ -1 & 2 & 0 & -1 & 0 \\ -1 & 0 & 3 & -2 & 0 \\ -1 & -1 & -2 & 4 & 0 \\ -5 & 0 & 0 & 0 & 5 \end{tabular} \right)$ $\bA$ is Symmetric Positive Semi-definite (SPS), and has zero row sums and $2m+n$ non-zeros. Typically, $m \ll n^2$ and $\bA$ is sparse. Our approach also handles some SPS Laplacian matrices corresponding to [*negative edge weights*]{}, such as high-order and anisotropic grid discretizations [@alg_distance_anis; @max_weight_basis]; those are discussed in §\[negative\_weights\]. Since $G$ is connected, $\bA$’s null space is spanned by the vector of ones $\bu$ (a disconnected graph can be decomposed into its components in $O(m)$ time [@sedgewick; @tarjan]). We consider the nonsingular compatible linear system [@trot pp. 185–186] \[linsys\] $$\begin{aligned} &&\bA \bx = \bb \label{system} \\ &&\bu^{T} \bx = 0\,, \label{xcompat}\end{aligned}$$ where $\bb \in \Real^{\cN}$ is a given zero-sum vector, and $\bx \in \Real^{\cN}$ is the vector of unknowns. Our goal is to develop an iterative numerical solver of (\[linsys\]) that [*requires $O(m)$ storage and $O(m \loget)$ operations to generate an $\ep$-accurate solution, for graphs arising in real-world applications.*]{} The hidden constants should be small (in the hundreds, not millions).
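Definition (\[lap\]) is easy to exercise in code. The sketch below builds a Laplacian from a weighted edge list and reconstructs the $5$-node example of Fig. \[simple\_graph\]; the edge weights are read off the displayed matrix, and the helper names are ours.

```python
def graph_laplacian(n, edges):
    """Assemble the graph Laplacian: a_uv = -w_uv for each edge, and
    a_uu = sum of the weights incident to u, so every row sums to
    zero.  `edges` holds (u, v, w) triples with 0-based indices."""
    A = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        A[u][v] -= w
        A[v][u] -= w
        A[u][u] += w
        A[v][v] += w
    return A


def energy(A, x):
    """Quadratic energy E(x) = x^T A x."""
    n = len(A)
    return sum(A[u][v] * x[u] * x[v] for u in range(n) for v in range(n))


# Edge list read off the 5-node example matrix shown above.
edges5 = [(0, 1, 1.0), (0, 2, 1.0), (0, 3, 1.0), (0, 4, 5.0),
          (1, 3, 1.0), (2, 3, 2.0)]
A5 = graph_laplacian(5, edges5)
```

For a Laplacian, the matrix form of the energy agrees with the edge-sum form $\sum_{(u,v)} w_{uv}(x_u - x_v)^2$, which the test below checks on a sample vector.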
The solver should require a smaller cost to re-solve the system for multiple $\bb$’s – a useful feature for time-dependent and other applications. Importantly, we are interested in [*good empirical performance*]{} (bounded hidden constants over a diverse set of test instances), and do not consider the problem of designing an algorithm with provably linear complexity in any graph [@icm10 Problem 5, p. 18]. While proofs are important, they often provide unrealistic bounds or no bounds at all on the hidden constants that can be attained in practice. Applications ------------ The linear system (\[linsys\]) is fundamental to many applications; see Spielman’s review [@icm10 §2] for more details: Elliptic Partial Differential Equations (PDEs) discretized on unstructured grids by finite elements within a fluid dynamics simulation [@boman; @fischer]. Interior-point methods for network flow linear programming [@DS08; @FG07]. Electrical flow through a resistor network $G$. Additionally, $\bA \bx = \bb$ is [*a stepping stone toward the eigenproblem.*]{} Our multilevel methodology can be extended to compute the smallest eigenpairs of $\bA$ with minor adaptations (cf. §\[eigenvalue\]). The Laplacian eigenproblem is central to graph regression and classification in machine learning [@hongkong_class; @zhu_learning], spectral clustering of images, graph embedding [@radu], and dimension reduction for genetic ancestry discovery [@ancestry]. Of particular interest is the Fiedler value – the smallest non-zero eigenvalue of $\bA$, which measures the algebraic connectivity of $G$ [@chung §1.1] and is related to minimum cuts [@ding]. Although we believe it is preferable to develop multiscale strategies for the original formulations of these problems, as demonstrated by the works [@class04; @clustering06; @alg_distance; @nature] and graph partitioning packages [@chaco], a fast black-box eigensolver is a practical alternative. 
Related Work ------------ There are two main approaches to solving (\[linsys\]): direct, leading to an exact solution (up to round-off errors); and iterative, which typically requires a one-time setup cost, followed by a solve phase that produces successive approximations $\tbx$ to $\bx$ to achieve $\ep$-accuracy, namely, $$\| \tbx - \bx \|_{\bA} \leq \ep\, \| \bx \|_{\bA}\,, \qquad \| \by \|_{\bA} := \left( \by^{T} \bA \by \right)^{1/2}. \label{epaccuracy}$$ ### Direct Methods The Cholesky factorization with a clever elimination order can be applied to $\bA$. A permutation matrix $\bP$ is chosen and the factorization $\bP^{T} \bA \bP = \bL \bL^{T}$ constructed so that the lower triangular $\bL$ is as sparse as possible, using Minimum or Approximate Minimum Degree Ordering [@add; @md]. Except for simple graphs, direct algorithms do not scale, requiring $O(n^{1.5})$ operations for planar graphs [@nested_dissection; @generalized_nested_dissection] and $O(n^3)$ in general. Alternatively, fast matrix inversion can be performed in $O(n^{2.376})$ or combined with Cholesky, yet yields similar complexities [@icm10 §3.1]. ### Iterative Methods: Graph Theoretic These are variants of the preconditioned conjugate gradient method [@gvl §10.3] that achieve (\[epaccuracy\]) in $O(\sqrt{\kappa(\bA \bB^{-1})}\, \loget)$ iterations for a preconditioner $\bB$; $\kappa$ is the finite condition number [@icm10 §3.3]. Spielman and Teng (S-T) [@st06] and subsequent works [@koutis] have been focusing on multilevel graph-sparsifying preconditioners. The S-T setup builds increasingly smaller graphs, alternating between partial Cholesky and ultra-sparsification steps, in which the graph is partitioned into sets of high-conductance nodes without removing too many edges. The complexity is a near-linear $O(m \log^2 n \log(1/\ep))$, guaranteed for any symmetric diagonally-dominant $\bA$. Unfortunately, no implementation is available yet, nor is there a guarantee on the size of the hidden constant.
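For contrast with the multigrid approach developed below, a minimal (unpreconditioned, $\bB = \bI$) conjugate gradient solver for the compatible singular system (\[linsys\]) can be sketched as follows; this is textbook CG kept on the zero-mean subspace, not the S-T or CMG preconditioned variants.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Plain CG for A x = b with A a graph Laplacian and b zero-sum.
    On the zero-mean subspace A is positive definite, and all CG
    iterates stay zero-sum automatically when started from x = 0."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    x = [0.0] * n
    r = list(b)          # initial residual b - A*0
    p = list(r)
    rr = dot(r, r)
    for _ in range(max_iter):
        if rr ** 0.5 < tol:
            break
        Ap = matvec(p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    mean = sum(x) / n
    return [xi - mean for xi in x]   # enforce u^T x = 0

# 4-node path Laplacian and a compatible right-hand side.
A4 = [[1.0, -1.0, 0.0, 0.0], [-1.0, 2.0, -1.0, 0.0],
      [0.0, -1.0, 2.0, -1.0], [0.0, 0.0, -1.0, 1.0]]
x4 = conjugate_gradient(A4, [1.0, 0.0, 0.0, -1.0])
```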
### Iterative Methods: AMG Algebraic Multigrid (AMG) is a class of high-performance linear solvers, originating in the early 1980s [@guide §1.1], [@geodetic], [@rs86] and under active development. During setup, AMG recursively constructs a multi-level hierarchy of increasingly coarser graphs by examining matrix entries, without relying on geometric information. The solve phase consists of multigrid cycles. AMG can be employed either as a solver or a preconditioner [@trot App. A]. Open-source parallel implementations include Hypre [@hypre] and Trilinos-PETSc [@trilinos]. In classical AMG, the coarse set is a subset of $\cN$; alternatively, aggregation AMG [@trot App. A.9],[@braess; @blatt; @notay_etna] and smoothed aggregation [@sa] define coarse nodes as aggregates of fine nodes. AMG mainly targets discretized PDEs, where it has been successful [@fischer]. Advanced techniques have ventured to widen its scope by increasing the interpolation accuracy. These include Bootstrap AMG [@yes §17.2], [@bamg], adaptive smoothed aggregation [@asa], and interpolation energy minimization [@olson]. While these methods approach linear scaling for more systems, their complexity cannot be controlled in general graphs (cf. §\[interpolation\_caliber\]). At the same time, accelerated aggregation AMG has become a hot research topic. A crude [*caliber-1*]{} (piecewise-constant) interpolation is employed between levels to reduce runtime and memory costs, at the expense of a slower cycle that is subsequently accelerated. Notay [@notay_etna; @notay_nonsym] aggregated nodes based on matrix entries and applied multilevel CG acceleration with a large cycle index to obtain a near-optimal solver for convection-diffusion M-matrices, but the method was limited to those grid graphs. Caliber-1 interpolation was tested for a single graph by Bolten et al. within the bootstrap framework [@bamg_markov], but their setup cost was large and required parameter tuning.
The present work aims at generalizing AMG to graph Laplacians and addresses peculiarities not encountered in traditional AMG applications. To the best of our knowledge, the only other solver targeting general topologies is Combinatorial Multigrid (CMG) [@cmg], a hybrid graph-theoretic-AMG preconditioner that partitions nodes into high-conductance aggregates, similarly to S-T. CMG outperformed classical AMG for a set of 3-D image segmentation applications. In our experiments over a much larger graph collection, CMG and LAMG had comparable average solve speeds, yet LAMG’s performance was much more robust, with almost no outliers. Our Contribution ---------------- We present Lean Algebraic Multigrid (LAMG): a practical graph Laplacian linear solver. A LAMG implementation scaled linearly for a set of real-world graphs with up to edges, ranging from computational fluid dynamics to web, biological and social networks. Specifically, the setup phase required on average $\setupmvm$ Matrix-Vector Multiplications (MVMs) and $\storageperedge m$ storage bytes, and the average solve time was $\approx \solvemvm \log(1/\ep)$ MVMs per right-hand side. The standard deviations were small with only three outliers. LAMG was more robust than the UMFPACK direct solver and CMG, although CMG was faster on average (§\[smorgasbord\]). Our methodology is extensible beyond the scope of S-T and CMG, to non-diagonally-dominant (§\[negative\_weights\]), eigenvalue, and nonlinear problems (§\[eigenvalue\]). LAMG is an accelerated caliber-1 aggregation-AMG algorithm that builds upon the state-of-the-art AMG variants, yet introduces four new ideas essential to attaining optimal efficiency in general graphs: [*Lean methodology*]{} (§\[lean\]). LAMG advocates using [*minimalistic over sophisticated AMG components*]{}. No parameter tuning should be required. 
In particular, we apply caliber-1 interpolation between levels, constructed using relaxed Test Vectors (TVs), but without bootstrapping them as in the papers [@bamg_markov; @bamg]. Fast asymptotic convergence is achieved by the following three ideas. [*Low-degree Elimination*]{} (§\[elimination\]). Like the S-T method [@st06], we eliminate low-degree nodes prior to each aggregation. However, the role of elimination here is different: [*it removes the effectively-1-D part of the graph*]{}, thereby eliminating extreme tradeoffs between complexity and accuracy in aggregating the remaining graph. Thus, unlike S-T, our elimination need not strictly reduce the number of edges (allowing us to eliminate nodes of higher degrees) nor be exact (making it also useful in the eigenproblem). The elimination and aggregation are in fact specializations of the same coarsening scheme (§\[eigenvalue\]). [*Affinity*]{} (§\[affinity\]). Aggregation is based on a new [*normalized relaxation-based node proximity*]{} heuristic. Ron et al. [@alg_distance] were the first to use TVs to measure algebraic distance between nodes, but our measure is effective for a wider variety of graph structures. The affinity admits a statistical interpretation and approximates the diffusion distance [@diffusion_maps]. In contrast, the S-T algorithm strives to create high-conductance aggregates [@st06; @koutis_conductance]. [*Energy-corrected Aggregation*]{} (§\[aggregation\]). Recognizing that caliber-1 interpolation leads to an energy inflation of the coarse-level system (§\[inflation\]), our aggregation also [*reuses test vectors to minimize coarse-to-fine energy ratios*]{}. We offer two alternative energy corrections to accelerate the solution cycle: a flat correction to the Galerkin operator, reminiscent of Braess’s work [@braess], yet resulting in superior efficiency; and an adaptive correction to the solution via [*multilevel iterate recombination*]{} [@trot §7.8.2] that is even more efficient.
Following an AMG prelude in §\[basics\], we discuss each of the main ideas in §\[ideas\]. They are integrated into the complete LAMG algorithm in §\[algorithm\] (cf. the expanded arXiv e-print [@lamg_arxiv] for implementation details). Our development methodology emphasizes learning from examples: we studied instances for which the original design was slow, in order to derive general aggregation rules. Testing over a large collection ensured that new rules did not spoil previous successes. The results are presented in §\[results\]. Coarsening improvements and extensions to the eigenproblem and other graph computational problems are outlined in §\[extensions\]. Algebraic Multigrid Basics {#basics} ========================== Relaxation methods for $\bA\bx=\bb$ such as Jacobi or Gauss-Seidel slow down asymptotically. Yet after only several sweeps, the [*error*]{} $\bee := \bx-\tbx$ in the approximation $\tbx$ to $\bx$ becomes [*algebraically smooth*]{}: its normalized residuals are much smaller than its magnitude [@guide §1.1] (assuming its mean has been subtracted out). In AMG, these errors are approximated by an interpolation $\bP_{n \times n_c}$ from a coarse subspace: $\bee \approx \bP \bee^c$, where $\bee^c$ is a coarse vector of size $n_c$. Recognizing that $\bA\bx=\bb$ corresponds to the quadratic minimization $$\bx = \arg\min_{\by} f(\by)\,, \qquad f(\by) := \frac12\, \by^{T} \bA \by - \by^{T} \bb\,, \label{quad}$$ the variational correction scheme [@geodetic] seeks the optimal correction in the energy norm, $$\bee^c = \arg\min_{\by^c} f\left( \tbx + \bP \by^c \right). \label{min_ec}$$ The resulting two-level cycle (except for determining $\bx$’s mean) is: (1) Perform 2-3 relaxation sweeps on $\bA\bx=\bb$, resulting in $\tbx$. (2) Compute an approximation $\tbee^c$ to the solution $\bee^c$ of $$\bA^c \bee^c = \br^c\,, \qquad \bA^c := \bP^{T} \bA \bP\,, \quad \br^c := \bP^{T} \left( \bb - \bA \tbx \right). \label{galerkin}$$ (3) Correct the fine-level solution: $$\tbx \leftarrow \tbx + \bP \tbee^c. \label{correction}$$ Eq. (\[galerkin\]), called Galerkin coarsening, is a smaller linear system.
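The three-step cycle above can be sketched directly in numpy: Gauss-Seidel pre-relaxation, the Galerkin coarse system (which is singular but compatible, solved here by least squares), and the coarse-grid correction. The path graph, the pairwise aggregation and all parameter values are illustrative choices for the sketch, not part of LAMG proper.

```python
import numpy as np

def two_level_cycle(A, b, x, P, nu=3):
    """One variational two-level cycle for a graph Laplacian system
    A x = b: relaxation, Galerkin coarse-grid correction with a
    caliber-1 interpolation P, and solution update."""
    n = len(b)
    for _ in range(nu):                       # 1. Gauss-Seidel sweeps
        for u in range(n):
            x[u] = (b[u] - A[u] @ x + A[u, u] * x[u]) / A[u, u]
    Ac = P.T @ A @ P                          # 2. Galerkin operator
    rc = P.T @ (b - A @ x)                    #    coarse residual (zero-sum)
    ec = np.linalg.lstsq(Ac, rc, rcond=None)[0]
    x += P @ ec                               # 3. correction
    return x

# Path-graph Laplacian on 8 nodes, unit weights.
n = 8
A = 2.0 * np.eye(n)
A[0, 0] = A[-1, -1] = 1.0
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = -1.0
# Caliber-1 (piecewise-constant) interpolation: aggregate node pairs.
P = np.zeros((n, n // 2))
for i in range(n):
    P[i, i // 2] = 1.0
```

Repeating the cycle drives the residual down geometrically; the coarse operator `P.T @ A @ P` is again a (singular) Laplacian, so the coarse solve must respect compatibility, which `lstsq` handles via the minimum-norm solution.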
In the multilevel cycle, it is recursively solved using $\gamma$ two-level cycles, where the [*cycle index*]{} $\gamma$ is an input parameter. Our interpolation $\bP$ is full-rank with unit row sums (see §\[exten\]), for which it is easy to verify that $\bA^c$ is also a connected graph Laplacian. Overall, LAMG constructs a hierarchy of $L$ increasingly coarser Laplacian systems (“levels”) $\bA^l \bx^l=\bb^l$, $l=1,\dots,L$, the finest being the original system $\bA^1:=\bA, \bb^1:=\bb$. The setup phase depends on $\bA$ only, and produces $\{(\bA^l,\bP^l)\}_{l=2}^L$, where $\bA^l$ is $n_l \times n_l$ and $\bP^l$ is the $n_{l-1} \times n_l$ interpolation matrix from level $l$ to $l-1$. We denote by $G^l=(\cN^l,\cE^l)$ the graph corresponding to $\bA^l$. LAMG: Main Ideas {#ideas} ================ Our description refers to a single coarsening stage ($\bA \rightarrow \bA^c$) and applies to each pair of levels. Lean Methodology {#lean} ---------------- ### Relaxation {#relax} Our choice is Gauss-Seidel (GS) relaxation, defined by the successive updates [@guide §1.1] $$u=1,\dots,n: \quad x_u \leftarrow \frac{b_u - \sum_{v \neq u} a_{uv} x_v}{a_{uu}}\,. \label{gs}$$ We picked GS because it is an effective smoother in SPS systems [@guide §1] and does not require parameter tuning (such as the Jacobi relaxation damping parameter [@bamg_markov]). ### Interpolation Caliber {#interpolation_caliber} Textbook multigrid convergence for the Poisson equation requires that the interpolation of corrections $\bP$ be second-order [@mg_theory §3.3]. The analogous AMG theory implies a similar condition on the interpolation accuracy of low-energy errors. While a piecewise-constant $\bP$ is acceptable in a two-level cycle, it is insufficient for V-cycles ($\gamma=1$); W-cycles ($\gamma=2$) are faster but costly [@trot p. 471],[@notay_etna]. Constructing a second-order $\bP$ is already challenging in grid graphs [@bamg; @sa; @olson].
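The claim above that a unit-row-sum caliber-1 $\bP$ yields a Galerkin operator $\bP^{T}\bA\bP$ that is again a graph Laplacian can be checked on a toy example; the plain-Python helpers and the 4-node path are illustrative only.

```python
def aggregate_interpolation(n, aggregates):
    """Caliber-1 (piecewise-constant) interpolation: P[u][c] = 1 iff
    fine node u belongs to coarse aggregate c, giving unit row sums
    by construction."""
    P = [[0.0] * len(aggregates) for _ in range(n)]
    for c, agg in enumerate(aggregates):
        for u in agg:
            P[u][c] = 1.0
    return P


def galerkin(A, P):
    """Coarse Galerkin operator A^c = P^T A P, in plain Python."""
    n, nc = len(P), len(P[0])
    AP = [[sum(A[u][v] * P[v][c] for v in range(n)) for c in range(nc)]
          for u in range(n)]
    return [[sum(P[u][i] * AP[u][j] for u in range(n)) for j in range(nc)]
            for i in range(nc)]


# 4-node path graph, aggregated into the pairs {0,1} and {2,3}.
A_path = [[1.0, -1.0, 0.0, 0.0], [-1.0, 2.0, -1.0, 0.0],
          [0.0, -1.0, 2.0, -1.0], [0.0, 0.0, -1.0, 1.0]]
P_pair = aggregate_interpolation(4, [[0, 1], [2, 3]])
Ac = galerkin(A_path, P_pair)
```

Internal edges of an aggregate cancel, and the single cross edge between the two aggregates survives, so the coarse matrix is the Laplacian of a 2-node graph with unit edge weight.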
We argue that it is infeasible in general graphs: (a) the graph’s effective dimension $d$ is unknown; had it been known, the required interpolation [*caliber*]{}, i.e., the number of coarse nodes used to interpolate a fine node, would grow with $d$ and result in unbounded interpolation complexity; (b) identifying a proper interpolation set (whose “convex hull” contains the fine node) is a complex and costly process [@bamg]; (c) the Galerkin coarse operator $\bP^{T} \bA \bP$ fills in considerably; cf. §\[fillin\]. In contrast, LAMG employs a lean caliber-1 (piecewise-constant) $\bP$, equivalent to an [*aggregation*]{} of the nodes into coarse-level aggregates, and corrects the energy of the coarse-level Galerkin [*operator*]{} to maintain good convergence (in practice, the correction is applied to the coarse right-hand side). This could not have been achieved within the variational setting of §\[basics\], which only permits modifying $\bP$. Here the barrier to fast convergence is the coarse-to-fine operator energy ratio. Our contribution is an algorithm that yields a small energy ratio, which translates into optimal efficiency; cf. §\[aggregation\]. The compatible relaxation performance predictor [@cr_etna; @cr_oren; @cr_james], [@guide §§14.2–14.3] is irrelevant for low interpolation accuracy; the energy ratio is a better predictor. ### Managing Fill-in {#fillin} Frequently, the coarse-level matrices in AMG hierarchies become increasingly dense. This is a result of a poor aggregation, a high-caliber $\bP$, or both: many fine nodes whose neighbor sets are disjoint are aggregated, creating additional edges among coarse-level aggregates. This renders the ideally-accurate interpolation irrelevant, because the actual cycle [*efficiency*]{} (error reduction per unit work) is small even if convergence is rapid. While fill-in is often controllable in grid graphs because their coarsening is still local, it is detrimental in non-local graphs.
LAMG’s interpolation is designed to minimize fill-in. Heuristically, the sparser $\bP$, the sparser $\bP^{T} \bA \bP$. We further indirectly control fill-in via our affinity criterion (§\[affinity\]), which tends to aggregate nodes that share many neighbors. The cycle work is also restrained by a fractional cycle index [@guide §6.2] between 1 and 2; cf. §§\[setup\],\[solve\]. Occasionally, the interpolation caliber may be slightly increased as long as the number of coarse edges does not become too large; see §\[improvements\]. ### Utilizing Extenuating Circumstances {#exten} Specific properties of graph Laplacians are exploited to simplify the LAMG construction. Since $\bA$ has zero row sums, its null-space consists of constant vectors. A fundamental AMG assumption is that [*all near*]{}-null-space errors can be fitted by a [*single*]{} interpolation from a coarse level [@guide p. 8]. Here it implies that the interpolation weights can be set a priori to $1$. The unit-weight assumption is easily verified for Laplacians with bounded node degrees [@trot p. 439]; in the most interesting applications of this work, however, the node degree is unbounded, and a proof of this conjecture is an open problem. Some graph locales are effectively one-dimensional: many nodes have degree 1–2. Such nodes can be quickly eliminated, similarly to [@koutis] (§\[elimination\]). In other graphs, Gauss-Seidel is an efficient solver and no coarsening is required. These include complete graphs, star graphs, expander graphs [@icm10 §1], and certain classes of random graphs. More generally, if GS converges fast for a subset of the nodes, they can all be aggregated together (§\[agg\_algorithm\]). Low-degree Elimination {#elimination} ---------------------- We first attempt to eliminate from $\cN$ an independent set $\cF$ of nodes $u$ of degree $|\cE_u| \leq 4$.
The set is identified by initially marking all nodes as “eligible”; we then sweep through the nodes, adding each eligible low-degree node to $\cF$ and marking its neighbors as ineligible [@lamg_arxiv Algorithm 1]. Eliminating a node connects all its neighbors; therefore, $\cF$-nodes of degree $\leq 3$ do not increase $m$. When $|\cE_u| = 4$, $m$ might be increased by at most $2$. However, we assume that this is unlikely to happen for many $\cF$-nodes and eliminate these nodes as well (in practice, we have observed that the neighbors are already connected prior to elimination). Eliminating larger degrees results in an impractical fill-in. Let $\cC := \cN \backslash \cF$. $\bA \bx = \bb$ reduces to the Schur complement system $$\bA^c \bx_{\cC} = \bb^c\,, \qquad \bA^c := \bP^{T} \bA \bP\,, \quad \bb^c := \bP^{T} \bb\,, \quad \bP := \bPi \left( -\bA_{\cF\cC}^{T} \bA_{\cF\cF}^{-1}, \, \bI_{\cC} \right)^{T}\,, \label{elim}$$ where $\bPi$ is a permutation matrix such that $\bPi^{T} \bx$ lists all the $\cF$ node values, then all the $\cC$ node values. (\[elim\]) is a smaller Laplacian system for which we perform further elimination rounds, until $|\cF|$ becomes small [@lamg_arxiv Algorithm 2]. The purpose of elimination is to reduce $n$ while incurring a small fill-in, and to remove the 1-D part of the graph, which cannot be effectively coarsened by the energy-corrected aggregation of §\[aggregation\]. Note that $\bP$’s caliber is larger than $1$ here. Viewed as a full approximation scheme (§\[eigenvalue\]), (\[elim\]) could be generalized to a [*non-exact*]{} elimination for approximating the lowest eigenvectors of $\bA$, instead of forming a nonlinear Schur complement [@amls]. Additionally, a larger set of [*loosely-coupled*]{} nodes could also be eliminated: if $\bA_{\cF\cF}$ is strongly diagonally-dominant, its inverse can be approximated by a few Jacobi relaxations. The two-level convergence rate and fill-in would need to be kept in check in that case. We plan to pursue these generalizations in future research.
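A dense sketch of one elimination round: for an independent set $\cF$, $\bA_{\cF\cF}$ is diagonal, so the Schur complement $\bA_{\cC\cC} - \bA_{\cC\cF} \bA_{\cF\cF}^{-1} \bA_{\cF\cC}$ is cheap to form. The 3-node path example checks that the reduced system is again a Laplacian (two unit conductances in series merge into an edge of weight $1/2$); all helper names are ours.

```python
def eliminate(A, b, F):
    """One exact elimination round, dense sketch: F is an independent
    set, so A_FF is diagonal and trivially inverted.  Returns the
    reduced Schur-complement system over C (the complement of F) and
    the index list C."""
    n = len(A)
    Fset = set(F)
    C = [u for u in range(n) if u not in Fset]
    AFF_diag = [A[u][u] for u in F]
    AFC = [[A[u][v] for v in C] for u in F]    # equals A_CF^T by symmetry
    Ac = [[A[C[i]][C[j]] - sum(AFC[k][i] * AFC[k][j] / AFF_diag[k]
                               for k in range(len(F)))
           for j in range(len(C))] for i in range(len(C))]
    bc = [b[C[i]] - sum(AFC[k][i] * b[F[k]] / AFF_diag[k]
                        for k in range(len(F)))
          for i in range(len(C))]
    return Ac, bc, C


# 3-node path 0 - 1 - 2 with unit weights; eliminate the middle node.
A3 = [[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]]
Ac3, bc3, C3 = eliminate(A3, [1.0, 0.0, -1.0], F=[1])
```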
We hereafter denote the coarse system by $\bA \bx = \bb$ (either (\[system\]), if $q=0$, or (\[elim\])), which is further coarsened by caliber-1 aggregation in §§\[affinity\]–\[aggregation\].

Affinity
--------

The construction of an effective aggregate set hinges upon defining which nodes in $\cN$ are “proximal”, i.e., nodes whose values are strongly coupled in all smooth (i.e., low-energy) vectors [@trot p. 473]. Table \[prox\_def\] lists three definitions.

  ------------------------------------ ----------------------------------------------------------------------------------------------------------------------------------------------------------------
  Classical AMG [@rs86]                $1 - |w_{uv}| / \max\left\{ \max_s |w_{us}|, \max_s |w_{sv}| \right\}$
  Algebraic Distance [@alg_distance]   $ \max_{k=1}^K \left| x_u^{(k)} - x_v^{(k)} \right|$
  Affinity (LAMG)                      $c_{uv} := 1 - \left|\left(X_u,X_v \right)\right|^2/\left(\left(X_u,X_u \right) \left(X_v,X_v \right)\right)$, where $(X_u,X_v):= \sum_{k=1}^K x^{(k)}_u x^{(k)}_v$
  ------------------------------------ ----------------------------------------------------------------------------------------------------------------------------------------------------------------

  : Comparison of node proximity measures. Nodes are defined as “close” when the measure is smaller than a threshold. $\{\bx^{(k)}\}_{k=1}^K$ is a set of relaxed test vectors; see the text.[]{data-label="prox_def"}

------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------ ![Unweighted graph instances that present aggregation difficulties. (a) A 2-D grid with an extra link. (b) Two connected hubs. A hub is a high-degree node.[]{data-label="two_examples"}](figures/grid_extra_link.eps "fig:"){height="1.25in"} ![Unweighted graph instances that present aggregation difficulties. (a) A 2-D grid with an extra link.
(b) Two connected hubs. A hub is a high-degree node.[]{data-label="two_examples"}](figures/two_suns.eps "fig:"){height="1.2in"} (a) (b) ------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------

### Existing Proximity Measures

Classical AMG defines proximity based on edge weights (Table \[prox\_def\], top row). While this has worked well for coarsening discretized scalar elliptic PDEs, it leads to wrong aggregation decisions in non-local graphs. In a grid graph with an extra link between distant nodes $u$, $v$ (Fig. \[two\_examples\]a), $u$ and $v$ become proximal and may be aggregated. Unless $w_{uv}$ is outstandingly large, this is undesirable because $u$ and $v$ belong to unrelated milieus of the grid. This problem is overcome by the algebraic distance measure introduced by Ron et al. [@alg_distance] (Table \[prox\_def\], middle row; a related definition is used in [@alg_distance_anis]). Insofar as coarsening concerns the space of smooth error vectors $\bx$, nodes $u$ and $v$ should be aggregated only if $\bx_u$ and $\bx_v$ are highly correlated for all such $\bx$. A set of $K$ Test Vectors (TVs) $\bx^{(1)},\dots,\bx^{(K)}$ is generated – a sample of this error space [@yes §17.2], [@alg_distance]. Each TV is the result of applying $\nu$ relaxation sweeps to $\bA \bx = \bzero$, starting from $\mathrm{random}[-1,1]$. However, Ron et al.’s definition falls prey to a graph containing two connected high-degree nodes $u$ and $v$ (“hubs”; Fig. \[two\_examples\]b).
For each $k$, the value $x^{(k)}_u$ is an average over a large neighborhood of random node values whose size increases with the number of sweeps, hence is small. Similarly, $x^{(k)}_v$ is small, so every such $u$ and $v$ turns out proximal, even though they may be distant.

### The New Proximity Measure

LAMG’s proximity measure, the [*affinity*]{} (Table \[prox\_def\], bottom), also relies on TVs, but is scale-invariant, and correctly assesses both Fig. \[two\_examples\]a and b as well as many other constellations. The affinity $c_{uv}$ between $u$ and $v$ is defined as the goodness of fitting the linear model $x_u \approx p\, x_v$ to TV values:

$$c_{uv} := 1 - \frac{\left|\left( X_u, X_v \right)\right|^2}{\left( X_u, X_u \right)\left( X_v, X_v \right)}\,,\quad \left( X, Y \right) := \sum_{k=1}^K x^{(k)} y^{(k)}\,,\quad X_u := \left(x^{(1)}_u,\dots,x^{(K)}_u\right). \label{cuv}$$

$c_{uu} = 0$, $0 \leq c_{uv} \leq 1$ and $c_{uv}=c_{vu}$. The affinity measures [*distance*]{}: the smaller $c_{uv}$, the closer $u$ and $v$. In the $d$-D discretized Laplace operator on a grid, $c_{uv}$ is related to the [*geometric distance*]{} between the gridpoints corresponding to $u$ and $v$. In general graphs, $c_{uv}$ is an alternative definition of the algebraic distance [@alg_distance], and approximates the [*diffusion distance*]{} at a short time $\nu$ [@diffusion_maps].

### Statistical Interpretation

$x_u$ can be thought of as a random variable; $c_{uv}$ is the Fraction of Variance Unexplained of linearly regressing $x_u$ on $x_v$ using the TV samples $X_u$ and $X_v$ [@regression]. We make several observations: $c_{uv}$ is invariant to scaling $X_u$ and $X_v$. This is vital in the two-hub case (Fig. \[two\_examples\]b). Rather than subtracting the sample means $\overline{X}_u := \frac{1}{K}\sum_{k=1}^K x^{(k)}_u$ and $\overline{X}_v$ from $X_u$ and $X_v$, respectively, as in the standard statistical definition, we use their exact means over all error vectors. These are zero, since each $X_u$ is a linear combination of some initial $X_w$’s, each of which has a zero mean.
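To make the definition concrete, here is a small NumPy sketch (a dense toy stand-in, not the LAMG implementation) that generates relaxed TVs on a path graph and evaluates affinities via (\[cuv\]); it also checks that the affinity equals the fraction of $(X_u,X_u)$ left unexplained by the best single-coefficient fit:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_seidel(A, x, b, sweeps):
    """In-place Gauss-Seidel sweeps for A x = b."""
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] += (b[i] - A[i] @ x) / A[i, i]
    return x

def affinity(Xu, Xv):
    """c_uv = 1 - |(Xu,Xv)|^2 / ((Xu,Xu)(Xv,Xv))."""
    return 1.0 - (Xu @ Xv) ** 2 / ((Xu @ Xu) * (Xv @ Xv))

# Path-graph Laplacian (1-D grid) and K = 4 TVs smoothed by nu = 3
# Gauss-Seidel sweeps, the parameters quoted in the setup phase.
n, K, nu = 32, 4, 3
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0                     # zero row sums
X = np.stack([gauss_seidel(A, rng.uniform(-1, 1, n), np.zeros(n), nu)
              for _ in range(K)], axis=1)     # row u holds (x_u^(1),...,x_u^(K))

u, v = 10, 11
c_uv = affinity(X[u], X[v])
assert 0.0 <= c_uv <= 1.0 and affinity(X[u], X[u]) < 1e-12

# The affinity equals the normalized least-squares residual of the
# caliber-1 fit x_u ~ p x_v, with minimizer p_hat = (X_u,X_v)/(X_v,X_v).
p_hat = (X[u] @ X[v]) / (X[v] @ X[v])
residual = np.sum((X[u] - p_hat * X[v]) ** 2) / np.sum(X[u] ** 2)
assert np.isclose(c_uv, residual)
```

The last identity is elementary algebra, so it holds for any test-vector sample; only the relaxation routine and graph above are illustrative choices.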
A weighted inner product $(X_u,X_v)$ could be used to account for TV variances, but since all TVs have the same level of smoothness, i.e., comparable normalized residuals [@yes §17.2], [@bamg], they were assigned equal weights. A few test vectors $K$ and smoothing sweeps $\nu$ suffice to obtain a good enough $c_{uv}$ estimate, which guides a coarsening of the node set by a modest factor of 2–3 (cf. §\[agg\_algorithm\]).

### Interpolation Accuracy

Bootstrap AMG [@yes §17.2] defines a general-caliber $\bP$ using a least-squares fit to TVs. In our case,

$$c_{uv} = \min_{p} \frac{\left\| X_u - p\, X_v \right\|_2^2}{\left\| X_u \right\|_2^2} \label{interp_accuracy}$$

relates the affinity to the accuracy of the caliber-$1$ interpolation formula $x_u = p\, x_v$ for TVs. As a byproduct, we obtain the interpolation coefficient $\hat{p} = (X_u,X_v)/(X_v,X_v)$. Eq. (\[cuv\]) also works for non-zero-sum matrices, such as restricted and normalized Laplacians [@icm10 §2], where $p_{uv}$ is set to $\hat{p}$ (see also §§\[improvements\],\[other\]). For the Laplacian, $\hat{p}$ is abandoned in favor of $p_{uv}=1$ (cf. §\[exten\]). In the Helmholtz equation, $c_{uv}$ is large for all $u,v$, indicating that all nodes are distant and that no single aggregate set can yield fast AMG convergence (indeed, multiple coarse grids are required to restore textbook multigrid efficiency [@wave_accuracy]).

Aggregation
-----------

Aggregation levels refer to the two-level method of §\[basics\] with a caliber-$1$ interpolation. $\bP$ is equivalent to partitioning $\cN$ into $n_c$ non-overlapping [*aggregates*]{} $\{\cT_U\}_{U \in \cN^c}$; $\cT_U$ is the set of $\bee_u$ interpolated from $\bee^c_U$, and $\cN^c := \{1,\dots,n_c\}$. Each aggregate consists of a [*seed*]{} node and zero or more [*associate*]{} nodes [@lamg_arxiv Fig. 3.2]. This section explains the technical details of aggregate selection, and may be of interest to multigrid experts. Other readers may wish to skip it and assess the quality of our aggregation decisions via the numerical experiments in §\[results\].
### Aggregation Rules

Intuitively, nodes should be aggregated together if their values are “close”; ideal aggregates have strong internal affinities and weaker external affinities. To this end, we formulated five rules:

1.  Each node can be associated with one seed.
2.  A seed cannot be associated.
3.  Aggregate together nodes with smaller affinities before larger affinities.
4.  Favor aggregates with small energy ratios (§\[energy\_correction\]).
5.  A hub node should be a seed.

Rules 1 and 2 prevent an associate from being transitively associated with a distant seed. Otherwise, long chains might be aggregated together, creating aggregates with weak internal connections and very large energy ratios. Rule 3 favors strongly-connected aggregates. Rule 4 has a dual purpose: (a) Ultimately, the energy ratio determines the AMG asymptotic convergence factor; hence, this rule ensures good convergence. (b) Affinities are based on local information (relaxed TVs); their quantitative value becomes fuzzier as nodes grow apart [@alg_distance §5]. Since small energy ratios usually dictate small aggregates, affinities are thus used only for [*local*]{} aggregation decisions. Rule 5 avoids costly aggregation decisions due to traversing the large neighbor sets of a hub, which would increase the computational work (cf. §\[energy\_correction\]). On the other hand, aggressively coarsening a large clique into a single node is desirable, as all its nodes are strongly correlated. Thus hubs are defined as [*locally-high degree*]{} nodes.[^3] A typical coarsening ratio in our algorithm ranges between $.3$ and $.5$.

### Aggregation Algorithm {#agg_algorithm}

The algorithm requires the cycle index $\gamma$ as an input. Each node is marked as a seed, associate or undecided. First, hubs are identified and marked as seeds. Second, edges with very small $|w_{uv}|$ are discarded during aggregation.
Since relaxation converges fast at the nodes that become disconnected, they need not be coarsened at all; however, to keep the coarse-level matrix a Laplacian, we aggregate all of them into a single (dummy) aggregate. All other nodes are marked as undecided. Aggregation is performed in $r$ stages: aggregate sets $S_1,\dots,S_r$ are generated such that each $S_i$-aggregate is contained in some $S_{i+1}$-aggregate. The set whose coarsening ratio $\alpha:=|S_i|/n$ is closest to $\amax := .7/\gamma$ is selected as the final set, so that the total cycle work would be bounded by $\approx 1 + \gamma \amax + (\gamma \amax)^2 + \dots \approx \frac{10}{3}$ finest-level units, had the same fill-in occurred at all levels (this is a practical guideline for selecting a good coarse set; there exist many sets that yield similar cycle complexities). In our code, at most two stages are performed per coarsening level in the cycle. In each stage, we scan undecided nodes $u$ and decide whether to aggregate each one with a neighbor $s$ that is either an existing seed, or an undecided node that thereby becomes a new seed. $s$ is the non-associate neighbor of $u$ with the smallest affinity $c_{us}$. At the end of the last stage, still-undecided nodes are converted to seeds. The complete algorithm is described in the arXiv e-print [@lamg_arxiv §3.4.3].

### Energy Inflation {#inflation}

The Galerkin coarse-level correction $\bee^c$ (Eq. (\[min\_ec\])) is the best approximation to a smooth error $\bee$ in the energy norm. Braess [@braess] noted that this does not guarantee a good approximation in the $l_2$ norm. For example, if $\bee$ is a piecewise linear function in a path graph (1-D grid with $w \equiv 1$) coarsened by aggregates of size two, $\bP \bee^c$ is constant on each aggregate and matches $\bee$’s slope across aggregates, resulting in about half the fine-level magnitude. See Fig. \[path\]a.
------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------ ![The Galerkin correction $P \bee^c$ to a piecewise linear error $\bee$ in a path graph for (a) uniform 1:2 aggregation and (b) variable aggregate size. The meshsize $h>0$ is arbitrary.[]{data-label="path"}](figures/path_0.5.eps "fig:"){height="1.5in"} ![The Galerkin correction $P \bee^c$ to a piecewise linear error $\bee$ in a path graph for (a) uniform 1:2 aggregation and (b) variable aggregate size. The meshsize $h>0$ is arbitrary.[]{data-label="path"}](figures/path_variable.eps "fig:"){height="1.5in"} (a) (b) ------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------

An equivalent and more useful observation is that the energy of $\bP \bT \bee$ is twice as large as $\bee$’s, where $\bT \bee$ is some coarse representation of $\bee$, say,

$$\left(\bT \bee\right)_U := \frac{1}{|\cT_U|} \sum_{u \in \cT_U} e_u\,,\quad U \in \cN^c. \label{t}$$

$\bT$ is called the [*aggregate type*]{} operator [@cr_james §2]. (\[galerkin\]) can be rewritten as

$$\min_{\by^c} \left\{ \frac12\, (\by^c)^{T} \bA^c\, \by^c - (\by^c)^{T} \br^c \right\}\,,\qquad \br^c := \bP^{T} \bA\, \bee\,.$$
\[min\_ec\_r\] For an ideal interpolation $\bP$ that satisfies $\bP \bT \bee = \bee$, (\[min\_ec\_r\]) is minimized by $\by^c = \bT \bee$. A caliber-1 interpolation $\bP$ still satisfies $\bP \bT \bee \approx \bee$, but the first term in (\[min\_ec\_r\]) is multiplied by the [*energy inflation factor*]{}

$$q(\bee) := \frac{E^c(\bT \bee)}{E(\bee)} = \frac{(\bT \bee)^{T} \bA^c\, \bT \bee}{\bee^{T} \bA\, \bee}\,,\qquad E^c(\bx^c) := \frac12\, (\bx^c)^{T} \bA^c\, \bx^c\,. \label{q}$$

Now (\[min\_ec\_r\]) is minimized by $\by^c \approx q^{-1}\bT \bee$. As $\bee$ is not significantly changed by relaxation, its two-level Asymptotic Convergence Factor (ACF) will be $\rho \approx 1 - 1/q$. In Fig. \[path\]a, $q \approx 2$ and $\rho \approx .5$. Several inflation remedies can be pursued:

-   Increase $\bP$’s caliber and accuracy. This leads to fill-in troubles (§\[interpolation\_caliber\]).
-   Accept an inferior two-level ACF of $1-1/q$ and increase the cycle index $\gamma$ to maintain it in a multilevel cycle. Unfortunately, not only does this increase complexity, the examples in §\[energy\_correction\] demonstrate that $q$ can be arbitrarily large. This ACF cannot be improved by additional smoothing steps either, because it is governed by smooth mode convergence.
-   Correct the coarse level operator $\bA^c$ to match the fine level operator’s energy during the setup phase. This option is considered in §\[energy\_correction\].
-   Modify the coarse level correction $\bP \bee^c$ to match the fine level error $\bee$ during the solve phase. This option is pursued in §\[adaptive\].

### Flat Energy Correction {#energy_correction}

\[mu\] In this scheme, (\[galerkin\]) is modified to

$$\bP^{T} \bA \bP\, \bx^c = \mu\, \bP^{T} \left( \bb - \bA \bx \right). \label{galerkin2}$$

The key question is how to choose $\mu$. Motivated by Fig. \[path\]a and its two-dimensional analogue, Braess used $\mu=1.8$, but his V-cycle convergence for 2-D grid graphs was mesh-independent only if a fixed number of levels were used per cycle, and if AMG was used as a preconditioner.
In fact, no [*predetermined global*]{} factor exists that fits all error [*corrections*]{} in scenarios such as Fig. \[path\]b, because the coarse-level solution depends on all local inflation ratios, which vary among graph nodes. On the other hand, a [*local energy*]{} correction factor $\mu$ does exist. Indeed, the fine-level and coarse-level quadratic energies are separable into nodal energies: \[nodal\] $$\begin{aligned}
E(\bx) &=& \sum_{u \in \cN} E_u(\bx)\,,\quad\quad\,\, E_u(\bx) := -\frac12 \sum_{v: v \not = u} a_{uv} \left(x_u-x_v\right)^2\,, \label{nodal_fine} \\
E^c(\bx^c) &=& \sum_{U \in \cN^c} E^c_U(\bx^c)\,,\quad E^c_U(\bx^c) := -\frac12 \sum_{V: V \not = U} a^c_{UV} \left(x^c_U-x^c_V\right)^2\,. \label{nodal_coarse}\end{aligned}$$ Here $E_u(\bx)$ is the nodal energy at node $u$, and $E^c_U(\bx^c)$ is the nodal energy at aggregate $U$. The [*local inflation factor*]{} at aggregate $U$ is defined by

$$q_U(\bx) := \frac{E^c_U(\bT \bx)}{\sum_{u \in \cT_U} E_u(\bx)}\,. \label{q_local}$$

In principle, a [*local*]{} $\mu_U$ can be designed using our TVs to at least partially offset $q_U$; unfortunately, new difficulties arise (cf. §\[other\_corrections\]). Thus we chose to still scale the right-hand side by a global $\mu$, but [*modify the aggregation so that $q_U(\bx) \lessapprox Q$ for all smooth vectors $\bx$ and all $U \in \cN^c$*]{}, where $Q>1$ is a parameter. Under this condition a global factor is effective, whose optimal value minimizes the overall convergence factor:

$$\mu_{\text{opt}} = \operatorname{argmin}_{\mu}\, \max_{1 \leq q \leq Q} \left| 1 - \frac{\mu}{q} \right| = \operatorname{argmin}_{\mu}\, \max\left\{ \left| 1 - \mu \right|,\, \left| 1 - \frac{\mu}{Q} \right| \right\} = \frac{2Q}{Q+1}\,. \label{optimal_mu}$$

In LAMG, we shoot for $Q=2$; hence $\mu_{\text{opt}}=\frac43$, and the expected ACF of smooth errors is $(Q-1)/(Q+1)=\frac13$. The worst energy ratio $Q := \max_{\bx} q_U(\bx)$ varies considerably with aggregate size, shape and alignment. Fig. \[grid2d\] depicts four constellations that may arise in an unweighted 2-D grid graph. (While these examples do not represent every scenario that can occur in graphs, they provide necessary conditions under which the algorithm must work.)
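The minimax value in (\[optimal\_mu\]) can be checked by brute force. A short sketch (assuming, as above, a smooth-error reduction factor of $|1-\mu/q|$ for inflation ratios $q \in [1,Q]$):

```python
import numpy as np

# Smooth-error reduction factor |1 - mu/q| for inflation ratios q in [1, Q].
Q = 2.0
mus = np.linspace(0.5, 2.0, 3001)
qs = np.linspace(1.0, Q, 1001)

# Worst-case reduction factor over q, for each candidate mu.
worst = np.max(np.abs(1.0 - mus[:, None] / qs[None, :]), axis=1)

mu_opt = mus[np.argmin(worst)]
assert abs(mu_opt - 2 * Q / (Q + 1)) < 1e-3          # mu_opt = 4/3 for Q = 2
assert abs(worst.min() - (Q - 1) / (Q + 1)) < 1e-3   # worst-case ACF = 1/3
```

The optimum balances the two extremes: $|1-\mu|$ at $q=1$ against $|1-\mu/Q|$ at $q=Q$, which yields $\mu_{\text{opt}} = 2Q/(Q+1)$ analytically.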
Limiting the aggregate size to $2$, for instance, would not prevent case (d), whose $d$-dimensional analogue yields an unbounded $Q=d+1$. Fortuitously, we already possess the tool to signal and avoid bad aggregates: test vectors. The algorithm of §\[agg\_algorithm\] is modified so that $u$ is only aggregated with a seed $s$ [*if the local energy ratios of all test vectors are sufficiently small*]{}. [cc]{} ![Coarsening patterns. (a) 1:2 semi-coarsening. The energy ratio of aggregate $\{5,6\}$ depends on $\{x_u\}_{u=1}^{10}$. (b) 1:3 semi-coarsening. (c) 1:2 full coarsening. (d) Staggered semi-coarsening.[]{data-label="grid2d"}](figures/grid2d_semi.eps "fig:"){height="1in"} & ![Coarsening patterns. (a) 1:2 semi-coarsening. The energy ratio of aggregate $\{5,6\}$ depends on $\{x_u\}_{u=1}^{10}$. (b) 1:3 semi-coarsening. (c) 1:2 full coarsening. (d) Staggered semi-coarsening.[]{data-label="grid2d"}](figures/grid2d_13.eps "fig:"){height="1in"}\ (a) $Q=2$ & (b) $Q=3$\ \ ![Coarsening patterns. (a) 1:2 semi-coarsening. The energy ratio of aggregate $\{5,6\}$ depends on $\{x_u\}_{u=1}^{10}$. (b) 1:3 semi-coarsening. (c) 1:2 full coarsening. (d) Staggered semi-coarsening.[]{data-label="grid2d"}](figures/grid2d_full.eps "fig:"){height="1in"} & ![Coarsening patterns. (a) 1:2 semi-coarsening. The energy ratio of aggregate $\{5,6\}$ depends on $\{x_u\}_{u=1}^{10}$. (b) 1:3 semi-coarsening. (c) 1:2 full coarsening. (d) Staggered semi-coarsening.[]{data-label="grid2d"}](figures/grid2d_staggered.eps "fig:"){height="1in"}\ (c) $Q=2$ & (d) $Q=3$\ Specifically, we compare the nodal energy $E_u$ before and after aggregation for each TV. Note that the nodal energy (\[nodal\_fine\]) is a quadratic in $x_u$ and $\{x_v\}_{v \in \cE_u}$. Define $$E_u(\bx;y) := \frac12 a_{uu} y^2 - B_u(\bx) y + C_u(\bx)\,, B_u(\bx) := \sum_{v \in \cE_u} w_{uv} x_v\,, C_u(\bx) := \frac12 \sum_{v \in \cE_u} w_{uv} x_v^2\,,$$ so $E_u(\bx) = E_u(\bx;x_u)$. 
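Because $E_u(\bx;y)$ is an explicit quadratic, such per-node energy comparisons are cheap. A minimal sketch (helper names are illustrative; the ratio shown is the aggregated-to-relaxed nodal energy) reproduces the factor-$2$ inflation of the 1:2 path-graph coarsening:

```python
import numpy as np

def nodal_energy_terms(A, x, u):
    """Coefficients of the quadratic E_u(x; y) = a_uu y^2 / 2 - B_u y + C_u,
    where w_uv = -a_uv are the edge weights at node u."""
    w = -np.delete(A[u], u)
    xv = np.delete(x, u)
    return A[u, u], w @ xv, 0.5 * w @ xv**2

def inflation_estimate(A, X, u, s):
    """Aggregated-to-relaxed nodal energy ratio at u, maximized over the
    test vectors (columns of X), for a tentative aggregation of u with s."""
    ratios = []
    for x in X.T:
        a, B, C = nodal_energy_terms(A, x, u)
        E = lambda y: 0.5 * a * y**2 - B * y + C
        relaxed = E(B / a)      # energy after a temporary relaxation at u
        aggregated = E(x[s])    # energy with x_u replaced by x_s
        ratios.append(aggregated / relaxed)
    return max(ratios)

# Path-graph Laplacian and a single linear (smooth) "test vector":
# aggregating two adjacent interior nodes inflates the nodal energy by 2.
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0
X = np.arange(n, dtype=float)[:, None]
assert np.isclose(inflation_estimate(A, X, 2, 3), 2.0)
```

With relaxed random TVs in place of the linear vector, the same routine yields the per-seed estimates that gate the aggregation decision described next.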
The energy inflation that would occur upon aggregating $u$ with a seed $s$ is estimated by

$$q_{us} := \max_{1 \leq k \leq K} \frac{E_u\left(\bx^{(k)};\, x^{(k)}_s\right)}{E_u\left(\bx^{(k)};\, B_u(\bx^{(k)})/a_{uu}\right)}\,. \label{qut}$$

The numerator is the energy obtained when $x^{(k)}_u$ is set to $x^{(k)}_s$, simulating the caliber-1 aggregation; more accurate coarse-level energy estimates could be used, but we have not pursued them in the lean spirit of LAMG. The denominator is the local energy after a temporary relaxation step is performed at $u$ (since the coarse-level correction is executed on a relaxed iterate during the cycle, this is the energy it aims to approximate; the papers [@bamg_markov; @rbamg] use a similar idea). We aggregate $u$ with the seed $s$ whose $c_{us}$ is minimal among all seeds $t$ with $q_{ut} \leq 2.5$; if none exist, $u$ is not aggregated at all. (Ratios slightly greater than the target $Q=2$ are accepted because TVs also contain high-energy modes, for which strict ratios are neither attainable nor necessary.) The complexity of the aggregation decision is $O(K |\cE_u|)$ [@lamg_arxiv §3.5.4]. Low-degree elimination (§\[elimination\]) is advantageous because (a) it largely prevents worst case 1-D scenarios such as Fig. \[path\]b, where it is impossible to obtain low energy ratios without excessively increasing the coarsening ratio; (b) it increases the number of neighbors of $u$ and the chance of locating a seed $s$ with small energy inflation.

### Iterate Recombination {#adaptive}

Instead of fixing $\mu$ by (\[galerkin2\]), an effectively-[*adaptive energy correction*]{} is obtained by modifying the correction to smooth errors during the solution cycle. Let $l$ be any level such that $l+1$ is an aggregation level. When the cycle switches from level $l-1$ to level $l$, $\vartheta$ sub-cycles are applied to $\bA^l \bx^l = \bb^l$, where $\vartheta$ is $1$ or $2$.
We save the iterates $\bx^l_i$ obtained after the pre-relaxation of sub-cycle $i$, and, before switching back to level $l-1$, replace the final iterate $\bx^l$ by

$$\by^l = \bx^l + \alpha_1 \left( \bx^l_1 - \bx^l \right) + \dots + \alpha_{\vartheta} \left( \bx^l_{\vartheta} - \bx^l \right), \label{recomb}$$

where $\{\alpha_i\}_{i=1}^{\vartheta}$ are chosen so that $\|\bb^l - \bA^l \by^l\|_2$ is minimized (this is an $n_l \times \vartheta$ least-squares problem solved in $O(n_l)$ time). This [*iterate recombination*]{} [@trot §7.8.2] diminishes smooth errors $\bx^l_i-\bx^l$ that were not eliminated by $\kth{(l+1)}$-level corrections. Since the initial [*residuals*]{} obtained after interpolation from level $l+1$ are not smooth, the residual minimization is only effective after $\bx^l_i - \bx^l$ is smoothed. To maximize iterate smoothness, we perform more post- than pre-smoothing relaxations. The optimal splitting turned out to be a (1,2)-cycle; cf. §\[algorithm\]. This acceleration is superior to CG because it is performed at all levels. Iterate recombination at coarse levels has been long recognized as an effective tool in the multigrid literature [@trot Remark 7.8.5]. In LAMG, recombination occurs more frequently at coarser levels because $\gamma>1$. Notay’s K-cycle [@notay_etna; @notay_nonsym] employs a similar multilevel CG acceleration, albeit with a much larger cycle index (up to $\gamma=4$), which increases the solver’s complexity. The aggregation is still modified here as in §\[energy\_correction\], to ensure small energy ratios and maximum reduction in the residual norm after recombination.

The LAMG Algorithm {#algorithm}
==================

Setup Phase {#setup}
-----------

The setup flow is depicted in Fig. \[flowchart\]. Its sole input is the cycle index $\gamma \geq 1$ to be employed at most levels of subsequent solution cycles. In our program, $\gamma=1.5$; this choice is discussed in §\[solve\].
The original problem ($l=1$) is repeatedly coarsened by either elimination or caliber-1 aggregation until the number of nodes drops below $150$, or until relaxation converges rapidly. We employ $K=4$ TVs at the finest level, and increase $K$ up to $10$ at coarser levels. Each TV is smoothed by $\nu=3$ relaxation sweeps. ![LAMG setup phase flowchart.[]{data-label="flowchart"}](figures/setup.eps){height="1.3in"}

Solve Phase {#solve}
-----------

The solve phase consists of multigrid cycles [@guide §1.4]. Each $l<L$ is assigned a cycle index $\gamma^l$ and pre- and post-relaxation sweep numbers $\nu^l_1,\nu^l_2$. If level $l+1$ is the result of elimination, $\gamma^l=1$ and $\nu^l_1=\nu^l_2=0$; otherwise,

$$\gamma^l := \begin{cases} \gamma\,, & |\cE^l| > .1\,|\cE|\,,\\ \min\left\{ 2,\; .7\,|\cE^l|/|\cE^{l+1}| \right\}\,, & \text{otherwise}\,, \end{cases} \qquad \nu_1^l = 1\,,\quad \nu_2^l = 2\,. \label{cycle_index}$$

At fine levels, $\gamma=1.5$ is employed. This value is theoretically marginal for attaining a bounded multilevel ACF if the smoothest error two-level ACF is $\approx \frac13$ [@guide §6.2], as implied by (\[optimal\_mu\]) for $Q=2$. Notwithstanding, worst-case energy ratios occur infrequently, and in practice a smaller ACF is obtained. This issue is further diminished by the adaptive energy correction. At coarse levels, $\gamma^l$ is increased to maximize error reduction while incurring a bounded work increase. Three relaxation sweeps per level provide adequate smoothing, especially in light of the coarse-level correction’s crudeness. The coarsest problem is solved by relaxation (if it is fast) or a direct solver on an augmented system [@lamg_arxiv §3.6.3]. Finally, (\[xcompat\]) is enforced by subtracting the mean of $\bx$ from $\bx$ at the end of the cycle. ![A four-level cycle. A boxed number denotes a number of relaxations. $C$: the coarsest-level solver. $M$: subtracting the iterate mean. Down-arrows: right-hand side coarsening (\[elim\]) or (\[galerkin2\]). Up-arrows: coarse-level corrections [@lamg_arxiv Eq. (3.4a)] or (\[correction\]).
$R_{\vartheta}$: a $(\vartheta+1)$-iterate recombination (\[recomb\]). Iterates are saved at the black dots before coarsening.[]{data-label="cycle"}](figures/cycle.eps){height="1.85in"}

All cycle parameters are fixed: no fine tuning or parameter optimization is required for specific graphs. The total cycle work is equivalent to about $10$ relaxations.

Numerical Results {#results}
=================

We provide supporting evidence for LAMG’s practical efficiency for a wide range of graphs.

Smorgasbord
-----------

An object-oriented MATLAB 7.13 (R2011b) serial LAMG implementation was developed and is freely available online [@lamg_code]. The time-intensive functions were implemented in C and compiled with the MEX compiler [@matlab_davis]. It was tested on a diverse set of real-world graphs with up to $4.7 \times 10^7$ edges, collected from The University of Florida Sparse Matrix collection (UF) [@uf_collection], C. Walshaw’s graph partitioning archive [@walshaw], I. Safro’s MLogA results archive at Argonne National Laboratory [@mloga], and the FTP site of the DIMACS Implementation Challenges [@dimacs]. Graphs originated from a plethora of applications: airplane and car finite-element meshes; RF electrical circuits; combinatorial optimization; model reduction benchmarks; social networks; and web and biological networks. If the graph was directed, it was converted to undirected by summing the weights of both directions between each pair of nodes. Then, if it contained a large negative edge weight with $w_{uv} < -10^{-5} \sum_{v' \in \cE_u} |w_{uv'}|$, all weights were made positive by taking their absolute values. Finally, the Laplacian matrix $\bA$ was formed and used. Runs were performed on Beagle, a 150 teraflops, 18,000-core Cray XE6 supercomputer at The University of Chicago (we only took advantage of parallelism by dividing the collection into equal parts, each of which ran on a single AMD node with 2.2 GHz CPU and 32GB RAM). For each graph, a zero-sum random $\bb$ was generated.
LAMG setup was performed, followed by a linear solve that started from a random guess and proceeded until the residual $l_2$-norm was reduced by a factor of $10^{10}$. Six performance measures were computed:

-   [*Setup time per edge $\tsetup$*]{}.
-   [*Solve time per edge per significant figure $\tsolve$*]{}. If the residual norm after $i$ iterations was $r_i$ and $p$ iterations were executed, $\tsolve := t/(m \log_{10}(r_0/r_p))$, where $t$ was the solve time.
-   [*Total time per edge $\ttotal = \tsetup + 10 \tsolve$*]{} to solve $\bA \bx=\bb$ to $10$ significant figures for a single $\bb$.
-   [*Storage per edge*]{}.
-   [*Asymptotic convergence factor*]{}, estimated by $(r_p/r_0)^{1/p}$.
-   [*Percentage spent on setup, $\tsetup/\ttotal$*]{}.

LAMG scaled linearly with graph size: both $\tsetup$ and $\tsolve$ were approximately constant (Fig. \[times\]a; Table \[times\_avg\]). Times were measured in terms of the most basic sparse matrix operation: a matrix-vector multiplication (MVM), because even MVM time scaled slightly superlinearly for $m \geq 5 \times 10^6$ due to loss of memory locality in the MATLAB compressed-column format [^4]. In wall clock time, the total time per edge was $5.6 \times 10^{-6}$ seconds on average, i.e., LAMG performed a linear solve to $10$ significant figures at $178,000$ edges per second. The LAMG hierarchy required the equivalent of storing $\approx 4 m$ edges in memory (Fig. \[times\]b). Adaptive energy correction provided a $20\%$ speed up over a flat $\mu=\frac43$ and was thus employed in all reported experiments. The ACF was better than the expected $.33$ for flat correction (§\[energy\_correction\]). We compared LAMG with MATLAB’s direct solver (the ’$\backslash$’ operator). Since the direct solver ran out of memory for many graphs with over $10^5$ edges, we did not include it in the plots.
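The measures above translate directly into code; a small sketch (function and variable names are illustrative, not taken from the LAMG package):

```python
import math

def performance_measures(t_setup, t_solve, m, residuals):
    """Per-edge measures from a residual-norm history r_0, ..., r_p
    (times in any consistent unit, e.g. MVMs or seconds)."""
    r0, rp, p = residuals[0], residuals[-1], len(residuals) - 1
    tsetup = t_setup / m                            # setup time per edge
    tsolve = t_solve / (m * math.log10(r0 / rp))    # per edge per significant figure
    ttotal = tsetup + 10.0 * tsolve                 # 10 significant figures
    acf = (rp / r0) ** (1.0 / p)                    # asymptotic convergence factor
    return tsetup, tsolve, ttotal, acf

# A run that reduces the residual by a factor of 10 per iteration:
res = [10.0 ** -i for i in range(11)]               # r_0 = 1, ..., r_10 = 1e-10
tsetup, tsolve, ttotal, acf = performance_measures(200.0, 300.0, 1.0e6, res)
assert abs(acf - 0.1) < 1e-12
assert abs(tsolve - 3.0e-5) < 1e-12
```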
LAMG aims at robustness for a wide variety of graphs, and should be compared against solvers that do not often break down or require tuning, even if they are faster for a subset of the graphs (in analogy, many graphs could be solved much faster with a tailored geometric multigrid or classical AMG algorithm). An advantage of iterative solvers over direct solvers is their tunable solution accuracy $\ep$. Since $\bA$’s entries often incur measurement or modeling errors in applications, it does not make sense to solve $\bA\bx=\bb$ to more than 2–3 significant figures. Furthermore, a [*single*]{} multigrid cycle is typically sufficient to solve nonlinear as well as time-dependent problems to the level of discretization errors [@guide Chaps. 7,15]. We also compared LAMG against an implementation of CMG, a hybrid graph-theoretic-AMG solver [@cmg]. Since CMG did not yet run on the Cray architecture, experiments were performed on a smaller 64-bit Dell Inspiron 580 (3.2 GHz CPU; 8GB RAM) for $2668$ graphs with up to $10^7$ edges. Both algorithms successfully solved all graphs. Solve times were similar, while LAMG’s setup time was three times larger than CMG’s. On the other hand, LAMG was much more robust than CMG: it had only 3 outliers whose solve time was large, as opposed to 26 CMG outliers whose relative magnitude was much larger (4 in setup and 22 in solve; see Fig. \[times\]c-d and Table \[outliers\]).
![(a) LAMG setup time (blue) and solve time (red) per edge on Beagle (up to $4.7 \times 10^7$ edges). (b) LAMG storage per edge on Beagle. (c) LAMG setup and solve time per edge on a Dell Inspiron (up to $10^7$ edges). (d) CMG setup and solve time per edge on a Dell Inspiron.[]{data-label="times"}](figures/lamg_breakdown_beagle.eps "fig:"){height="1.7in"} ![](figures/lamg_storage_beagle.eps "fig:"){height="1.7in"} ![](figures/lamg_breakdown.eps "fig:"){height="1.7in"} ![](figures/cmg_breakdown.eps "fig:"){height="1.7in"}

  ----------- ---------- ------------------- ---------- ------------------- ---------- -------------------
                     LAMG (Beagle)                   LAMG (Dell)                    CMG (Dell)
                Median     Mean $\pm$ Std.     Median     Mean $\pm$ Std.     Median     Mean $\pm$ Std.
  $\ttotal$    $482.9$    $585.4 \pm 406$     $558.1$    $680.5 \pm 497$     $342.2$    $582.6 \pm 1088$
  $\tsetup$    $199.6$    $222.5 \pm 108$     $234.9$    $248.4 \pm 113$     $66.3$     $89.3 \pm 111$
  $\tsolve$    $27.3$     $36.3 \pm 33$       $31.6$     $43.2 \pm 42.3$     $25.5$     $47.9 \pm 106$
  ACF          $.107$     $.128 \pm .12$      $.112$     $.132 \pm .11$      $.500$     $.495 \pm .21$
  %Setup       $43.8\%$   $43.7\% \pm 13\%$   $42.4\%$   $43.4\% \pm 13\%$   $21.5\%$   $22.9\% \pm 12\%$
  ----------- ---------- ------------------- ---------- ------------------- ---------- -------------------

  : Left column: median and mean LAMG performance on the Beagle Cray for $892$ graphs with $50,000 \leq m \leq 4.7 \times 10^7$. Middle and right: LAMG vs. CMG performance on a Dell Inspiron for $794$ graphs with $50,000 \leq m \leq 10^7$.
Times are measured in matrix-vector multiplications.[]{data-label="times_avg"}

  -------------------------- ----------- ----------- ---------------- ----------- ----------------- ----------------- -----------------
                                                                   LAMG                                       CMG
  Name                        $n$         $m$         ACF              $\tsetup$   $\tsolve$         $\tsetup$         $\tsolve$
  Ill-conditioned Stokes      $20896$     $87010$     $\mathbf{.66}$   $420$       $\mathbf{323}$    $66$              $25$
  Large basis                 $440020$    $2560040$   $\mathbf{.88}$   $322$       $\mathbf{502}$    $70$              $107$
  RF circuit simulation       $4690002$   $6251251$   $\mathbf{.72}$   $312$       $\mathbf{525}$    $65$              $304$
  Law citation network        $925340$    $6675561$   $.24$            $169$       $15$              $152$             $\mathbf{2037}$
  Berkeley-Stanford web       $512501$    $3480880$   $.17$            $168$       $18$              $\mathbf{1585}$   $126$
  Molecule pseudopotential    $268096$    $8833823$   $.13$            $167$       $21$              $\mathbf{1879}$   $41$
  -------------------------- ----------- ----------- ---------------- ----------- ----------------- ----------------- -----------------

  : Top section: the three LAMG outliers. Bottom section: three of CMG’s 26 outliers. Times are measured in matrix-vector multiplications.[]{data-label="outliers"}

![The four finest aggregation levels $G^1$, $G^3$, $G^5$, $G^7$ for the UF 2-D airfoil finite-element planar graph [AG-Monien/airfoil1-dual]{}. Graphs were drawn using GraphViz with the SFDP algorithm [@graphviz].[]{data-label="airfoil_graphs"}](figures/airfoil_1.eps "fig:"){height="1.35in"} ![](figures/airfoil_3.eps "fig:"){height="1.35in"} ![](figures/airfoil_5.eps "fig:"){height="1.35in"} ![](figures/airfoil_7.eps "fig:"){height="1.35in"}

![The four finest levels $G^1$ through $G^4$ for the UF Harvard 500 non-planar web graph [@matlab_moler].[]{data-label="harvard500_graphs"}](figures/harvard500_level1.eps "fig:"){width="1.7in" height="1.35in"} ![](figures/harvard500_level2.eps "fig:"){width="1.7in" height="1.35in"} ![](figures/harvard500_level3.eps "fig:"){width="1.7in" height="1.35in"} ![](figures/harvard500_level4.eps "fig:"){width="1.7in" height="1.2in"}

### LAMG’s Outliers

The three solve-time outliers (Table \[outliers\]) were characterized by a large portion of small edge weights, which were carried over to all coarse matrices and increased coarsening ratios. Since LAMG’s work was controlled by decreasing $\gamma$ via (\[cycle\_index\]), a slower cycle resulted. We plan to improve those cases in the future by appropriately ignoring weak edges at each level; cf. §\[improvements\].

Grids with Negative Weights {#negative_weights}
---------------------------

Unlike CMG, LAMG is not restricted to diagonally-dominant systems, and can also be applied to some graphs with negative edge weights $w_{uv}$, as long as the Laplacian matrix is (or is very close to being) positive semi-definite. To demonstrate this capability, we tested LAMG on the following SPS 2-D grid Laplacians, whose stencils are depicted in Fig.
\[negative\_stencils\]:

(a) The standard 5-point finite-difference discretization of $U_{xx} + U_{yy}$ on the unit square with Neumann boundary conditions.

(b) The 13-point $\kth{4}$-order finite-difference stencil of $U_{xx} + U_{yy}$.

(c) The discretized anisotropic-rotated Laplace operator $$(\cos^2 \alpha + \ep \sin^2 \alpha)\, U_{xx} + (1-\ep) \sin(2\alpha)\, U_{xy} + (\ep \cos^2 \alpha + \sin^2 \alpha)\, U_{yy},$$ \[anis\_rot\] with $\alpha=-\pi/4$, $\ep=10^{-2}$, standard 5-point stencils of $U_{xx}$, $U_{yy}$, and an alignment-agnostic cross-term $$U_{xy} \approx \frac{1}{4 h^2} \left[ \begin{tabular}{rrr} -1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{tabular} \right]\,,$$ where $h$ is the grid meshsize. Neumann boundary conditions were used.

(d) The same as (c), but aligning $U_{xy}$ with the northeast and southwest neighbors: $$U_{xy} \approx \frac{1}{2 h^2} \left[ \begin{tabular}{rrr} 0 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 0 \end{tabular} \right]\,.$$

[Fig. \[negative\_stencils\]: the stencils of problems (a)-(d). Panels (a) and (b) are $$\left[ \begin{tabular}{rrr} & -1 & \\ -1 & 4 & -1 \\ & -1 & \end{tabular} \right] \quad \textrm{and} \quad \left[ \begin{tabular}{rrrrr} & & 1 & & \\ & & -16 & & \\ 1 & -16 & 60 & -16 & 1 \\ & & -16 & & \\ & & 1 & & \end{tabular} \right],$$ respectively; panels (c) and (d) show the corresponding 9-point stencils of the anisotropic-rotated operator.]

Problems (c) and (d) are bad discretizations that do not align with the characteristic direction of (\[anis\_rot\]), and are considered hard for AMG [@alg_distance_anis]. Performance figures for the Dell Inspiron are given in Table \[results\_negative\].
  Problem                        $m$         $L$    ACF                %Setup   $\ttotal$
  ------------------------------ ----------- ------ ------------------ -------- -----------------
  (a) 5-point                    $2095104$   $19$   $0.216$            $29\%$   $902$
  (b) 13-point $\kth{4}$ order   $4188160$   $20$   $0.262$            $22\%$   $1355$
  (c) Anis. rot. agnostic        $4188162$   $19$   $\mathbf{0.816}$   $5\%$    $\mathbf{5453}$
  (d) Anis. rot. misaligned      $3141633$   $20$   $\mathbf{0.870}$   $4\%$    $\mathbf{8136}$
  ------------------------------ ----------- ------ ------------------ -------- -----------------

  : LAMG performance for grid graphs on a $1024 \times 1024$ grid with $n=1048576$ nodes.[]{data-label="results_negative"}

LAMG exhibited mesh-independent convergence and run time in all cases and scaled linearly with grid size, although its convergence was much slower for cases (c) and (d), whose negative edge weights are more significant. Compared with the Bootstrap AMG method [@alg_distance_anis], which focused on accurately finding the characteristic directions without sparing setup costs and presented only two-level experiments, LAMG is a full multi-level method with a far shorter setup time, although its ACF could also be significantly reduced using bootstrap tools. These results are certainly preliminary.

Lean Geometric Multigrid {#lmg}
------------------------

Higher performance for the Poisson equation discretized on a uniform grid can be obtained by standard 1:2 coarsening in every dimension at all levels and Gauss-Seidel relaxation in red-black ordering [@guide §3.6]. LAMG then reduces to [*Lean Geometric Multigrid*]{}: a standard multigrid cycle with index $\gamma=1.5$, first-order transfers and energy-corrected coarsening. Since the energy ratio is $2$ for all error modes, a flat correction $\mu=2$ is employed in (\[galerkin2\]).
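The red-black sweep at the heart of this lean cycle is short enough to sketch directly. The following is my own illustrative Python (not the paper's implementation), using a periodic grid so the 5-point stencil needs no boundary cases:

```python
# Sketch (assumption: periodic BC, unit weights) of red-black Gauss-Seidel
# relaxation for the 2-D 5-point Poisson stencil  -u_xx - u_yy = f.
import numpy as np

def rb_gauss_seidel(u, f, h, sweeps=1):
    """In-place red-black Gauss-Seidel for the periodic 5-point Laplacian."""
    n = u.shape[0]
    i, j = np.indices(u.shape)
    for _ in range(sweeps):
        for color in (0, 1):                    # red points first, then black
            m = (i + j) % 2 == color
            nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[m] = 0.25 * (nb[m] + h * h * f[m])
    return u

n, h = 32, 1.0 / 32
f = np.zeros((n, n))
u = np.random.default_rng(0).standard_normal((n, n))
r0 = np.abs(u - u.mean()).max()                 # initial error size
rb_gauss_seidel(u, f, h, sweeps=50)
print(np.abs(u - u.mean()).max() < 0.5 * r0)    # error damped toward a constant
```

For a zero right-hand side the sweep rapidly damps the oscillatory error components, which is the smoothing property the lean cycle relies on.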
For the 2-D [*periodic*]{} Poisson problem, this cycle turns out to be a record-breaking Poisson solver in terms of asymptotic efficiency: it achieves a convergence factor of $.5$ per unit work, versus $.67$ for the classical multigrid V(1,1) cycle with linear interpolation and second-order full weighting [@lamg_arxiv §4.1]. For other boundary conditions, finding the right $\mu$ is not as easy; while supplementary local relaxations near boundaries theoretically ensure attaining the two-level rates [@guide §5], it would be more beneficial to study the performance of adaptive energy correction in geometric LAMG.

Future Research {#extensions}
===============

Enhancements and adaptations of the LAMG approach to related computational problems are outlined below.

Coarsening Improvements {#improvements}
-----------------------

The LAMG algorithm of §\[algorithm\] is by no means final and may be improved in various ways. The average setup time could be reduced by employing classical AMG with no test vectors, and switching to the LAMG strategy only when the former fails. In graphs with many weak edges (such as the outliers in Table \[outliers\]), efficiency may be increased by temporarily ignoring them in the Galerkin operator computation, yet keeping track of their total contribution to each aggregate’s stencil. If a level is reached at which this total is no longer small compared with the aggregate’s other edge weights, the ignored edges are reactivated. Currently, a node $u$ can only be aggregated with a direct neighbor $s$. In some problems, $u$’s second-degree neighbors should also be searched to ensure a good aggregation. For instance, in the anisotropic-rotated problem of Fig. \[negative\_stencils\]d, $u$ should be aggregated along the characteristic direction, i.e., with its southeast or northwest neighbor, neither of which is contained in $\cE_u$. If no small energy ratio can be found, or if subsequent cycle convergence is slow, isolated bottleneck nodes can be de-aggregated.
Alternatively, one can increase the interpolation caliber at these troublesome nodes, provided that this does not substantially increase the total number of coarse edges. Adaptive local relaxation sweeps may improve efficiency in various problems such as PDEs with structural singularities [@bai]. Additionally, user-defined parameters could be supplied to treat special graph families more efficiently. For instance, if node coordinates are available, they can be used to generate smoother initial TVs than the default random initial guess. Optimizing the coarsening is most advantageous when $\bA \bx=\bb$ is solved for multiple $\bb$’s, since a larger setup cost is then tolerable. Such is the case in time-dependent problems [@fischer].

Local Energy Correction {#other_corrections}
-----------------------

Instead of a flat $\mu=\frac43$ factor in (\[galerkin2\]), one can apply different $\mu$’s to different aggregates. We experimented with several energy correction schemes, some based on fitting the coarse nodal energies of TVs to their fine counterparts (cf. (\[nodal\_coarse\])). While this can dramatically curtail energy inflation, care must be taken to avert over-fitting, which ultimately makes the coarse-level correction operator unstable. Analogously, one can define a local adaptive $\mu$ in the iterate recombination (§\[adaptive\]) at each level $l$, provided that it is properly smoothed [@lamg_arxiv §5.3]. It is unclear whether the extra work would be justified in either case. It may turn out useful for problems re-solved for many $\bb$ vectors, or when a larger setup overhead is tolerable.

Other Linear Systems {#other}
--------------------

The LAMG caliber-1 aggregation can be applied to matrices with non-zero row sums, except that the interpolation weights are no longer $1$. The affinity definition (\[cuv\]) remains intact, while the corresponding $\bP$ entry is set to $p_{uv} := (X_u,X_v)/(X_v,X_v)$ (see (\[interp\_accuracy\])).
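As a toy illustration of these two formulas (my own code, not the paper's; the affinity expression is reproduced here from the earlier definition (\[cuv\]), and `X[:, u]` stacks the values of all TVs at node $u$):

```python
# Sketch: affinity and caliber-1 interpolation weight computed from test vectors.
import numpy as np

def affinity(X, u, v):
    """Affinity c_uv = |(X_u,X_v)|^2 / ((X_u,X_u)(X_v,X_v)) between nodes u, v."""
    xu, xv = X[:, u], X[:, v]
    return np.dot(xu, xv) ** 2 / (np.dot(xu, xu) * np.dot(xv, xv))

def interp_weight(X, u, v):
    """Caliber-1 interpolation weight p_uv = (X_u, X_v) / (X_v, X_v)."""
    xu, xv = X[:, u], X[:, v]
    return np.dot(xu, xv) / np.dot(xv, xv)

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 5))      # 8 test vectors sampled on 5 nodes
X[:, 1] = 2.0 * X[:, 0]              # node 1 behaves as a scaled copy of node 0
print(affinity(X, 0, 1))             # perfectly correlated nodes: affinity ~1
print(interp_weight(X, 0, 1))        # the weight recovers the 0.5 scaling
```

Note that the weight is no longer $1$ as soon as the TVs see different magnitudes at the two nodes, which is exactly the non-zero row sum situation described above.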
Normally, relaxed TVs yield an accurate enough $p_{uv}$; in problems with almost-zero modes, e.g., the QCD gauge Laplacian, TVs may need to be improved by a bootstrap cycle [@bamg]. Further research should be conducted for negative-weight graphs such as the high-order finite element and anisotropic grid graphs of §\[negative\_weights\]. The reported convergence factors can be improved by producing bootstrapped TVs via applying multilevel cycles to $\bA \bx=\bzero$. The cycle is far more powerful than plain relaxation in damping smooth characteristic components, which should lead to more meaningful algebraic distances and to correct anisotropic coarsening in a second setup round (much larger spacing in the characteristic direction and no coarsening in the cross-characteristic direction). The bootstrap procedure should be useful in many other graphs.

LAMG Eigensolver {#eigenvalue}
----------------

The LAMG hierarchy can be combined with the Full Approximation Scheme (FAS) [@guide Chap. 8] to find the $K$ lowest eigenpairs of $\bA$, similarly to the work [@eis]. We perform the variable substitution $\bx^c = \bee^c + \bR \tbx$, transforming the coarse equation (\[galerkin\]) into $$\bA^c \bx^c = \bP^{T} \bb + \btau\,, \qquad \btau := \bA^c \bR \tbx - \bP^{T} \bA \tbx\,,$$ \[galerkin\_fas\] followed by the fine-level correction $\tbx \leftarrow \tbx + \bP (\tbx^c - \bR \tbx)$. The elimination and aggregation are both special cases of (\[galerkin\_fas\]), with $\bR \tbx := \tbx_{\cC}$ and $\bR=\bzero$, respectively. The analogue of (\[galerkin\]) for coarsening $(\bA-\lambda_k \bI) \bx_k = \bzero$ is $$\left( \bA^c - \lambda_k \bB^c \right) \bx^c_k = \btau_k\,, \qquad \btau_k = \left( \bA^c - \lambda_k \bB^c \right) \bR \tbx_k - \bP^{T} \left( \bA - \lambda_k \bI \right) \tbx_k\,,$$ \[galerkin\_eigen\] where $\bB^c := \bP^{T} \bP$ is the coarse mass matrix. Thus a separate affine term appears in the coarse equation of each approximate eigenvector $\bx^c_k$, $k=1,\dots,K$.
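For completeness, the affine term in (\[galerkin\_fas\]) can be recovered in two lines (my own reconstruction sketch, writing $\btau$ for the affine term): starting from the Galerkin correction equation $\bA^c \bee^c = \bP^{T}(\bb - \bA \tbx)$ and substituting $\bee^c = \bx^c - \bR \tbx$,

```latex
\begin{aligned}
\bA^c \bx^c &= \bA^c \bee^c + \bA^c \bR \tbx \\
            &= \bP^{T} (\bb - \bA \tbx) + \bA^c \bR \tbx \\
            &= \bP^{T} \bb + \left( \bA^c \bR \tbx - \bP^{T} \bA \tbx \right),
\end{aligned}
```

so $\btau$ is precisely the difference between the coarse operator applied to the restricted iterate and the restriction of the fine operator applied to the iterate.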
In particular, the elimination of §\[elimination\] becomes approximate, yet (\[galerkin\_eigen\]) remains linear in $\lambda_k$, as opposed to the exact non-linear Schur complement formed by the AMLS method [@amls]. Gauss-Seidel may be replaced by Kaczmarz relaxation at very coarse levels to prevent the divergence of smooth error modes [@eis]. Alternatively, one can incorporate the LAMG linear solver into a Rayleigh quotient iteration [@gvl §8.2],[@lobpcg]. However, FAS is attractive because it also applies to general nonlinear problems [@guide §8], e.g., quadratic and linear programming. Conclusion ========== Laplacian matrices underlie a plethora of graph computational applications ranging from genetic data clustering to social networks to fluid dynamics. To the best of our knowledge, the presented algorithm, Lean Algebraic Multigrid (LAMG), is the first graph Laplacian linear solver whose empirical performance approaches linear scaling for a wide variety of real-world graphs. Combinatorial Multigrid was also quite successful, performing faster on average, yet with many more outliers. The LAMG approach can also be generalized to non-diagonally-dominant, eigenvalue and nonlinear problems. Acknowledgments =============== The authors wish to thank the referees for their fruitful comments, Ioannis Koutis, Tim Davis, David Gleich and Audrey Fu for useful discussions, Lorenzo Pesce for his help with porting LAMG to the Beagle Cray, and Dan Spielman for algorithmic discussions as well as LaTeX typesetting advice. [^1]: The University of Chicago, Department of Human Genetics, 920 E. 58th St. CLSC 431F, Chicago, IL 60637. Tel: +1-773-702-5898. Email: [[email protected]]{} [^2]: The Weizmann Institute of Science, Department of Mathematics and Computer Science, POB 26 Rehovot 76100, Israel. Tel. +972-8-934-3545. 
Email: [[email protected]]{} [^3]: In our code, a hub is a node whose degree is significantly larger than a weighted mean of its neighbors’ degrees [@communities]: $|\cE_u| \geq 8 \sum_{v \in \cE_u} |w_{uv}| |\cE_v|/\sum_{v \in \cE_u} |w_{uv}|$. [^4]: T. Davis, private communication.
--- author: - 'Rebecca Krall,' - 'Francis-Yan Cyr-Racine,' - and Cora Dvorkin bibliography: - 'dmdr\_ref.bib' title: 'Wandering in the Lyman-alpha Forest: A Study of Dark Matter-Dark Radiation Interactions' --- Introduction ============ Dark matter (DM) forms the gravitational backbone upon which baryonic matter accretes to form galaxies and clusters. The cold dark matter (CDM) paradigm [@Davis:1985rj; @Blumenthal:1984bp; @Blumenthal:1982mv; @1981ApJ...250..423D] has so far been extremely successful at describing the large-scale distribution of galaxies and the structure of the anisotropies in the cosmic microwave background (CMB). Detailed observations of the CMB [@Aghanim:2015xee; @Ade:2015xua] have provided us with an exquisite snapshot of the Universe as it stood about 380,000 years after the Big Bang. At that time, the data show that the Universe was mostly smooth and homogeneous except for very small density fluctuations. In our current understanding of structure formation, these small perturbations form the seeds which eventually evolve, through the influence of gravity, into all the rich structure we observe in the Universe today. If this scenario is correct, CMB measurements can be used to predict the properties of structure in the low-redshift universe. By comparing these predictions to the actual observations of the large-scale structure (LSS) of the Universe, we can thus test the consistency of the standard structure formation paradigm based on CDM. Such comparison is often phrased in terms of the quantity $\sigma_8$, which stands for the amplitude of matter fluctuations at scales of $8h^{-1}$ Mpc. Estimates of the value of $\sigma_8$ from recent LSS measurements based on the Sunyaev-Zeldovich (SZ) cluster mass function [@Ade:2013lmv; @Ade:2015fva] and weak gravitational lensing [@Heymans:2013fya; @2017MNRAS.467.3024L] appear in tension with $\Lambda$CDM predictions based on fits to CMB and baryon acoustic oscillation (BAO) data. 
While the tension is likely caused by systematics in the data, it could also be the result of new physics related to DM. One approach to reconcile the CMB with the discrepant LSS measurements is to suppress the growth of structure by coupling DM to some form of dark radiation (DR) [@Goldberg:1986nk; @HOLDOM198665; @1992ApJ...398..407G; @1992ApJ...398...43C; @Boehm:2001hm; @Boehm:2000gq; @Foot:2003jt; @Foot:2002iy; @Foot:2004wz; @Boehm:2004th; @Foot:2004pa; @Feng:2008mu; @Ackerman:2008gi; @Feng:2009mn; @ArkaniHamed:2008qn; @Kaplan:2009de; @2010PhRvD..81h3522B; @Kaplan:2011yj; @Behbahani:2010xa; @Das:2012aa; @Hooper:2012cw; @Aarssen:2012fx; @Cline:2012is; @Tulin:2013teo; @Tulin:2012wi; @Baldi:2012ua; @Dvorkin:2013cea; @Cyr-Racine:2013ab; @Cline:2013zca; @Chu:2014lja; @Cline:2013pca; @Cyr-Racine:2013fsa; @Bringmann:2013vra; @Archidiacono:2014nda; @2014PhLB..739...62K; @Choquette:2015mca; @Chacko:2015noa; @Buen-Abad:2015ova; @2016PhRvD..93l3527C; @Chacko:2016kgg; @Kamada:2016qjo; @Ko:2016fcd; @Ko:2016uft; @Ko:2017uyb; @Foot:2016wvj; @Foot:2014uba; @Foot:2013vna; @Foot:2011ve]. In several of these models, achieving a sufficient suppression of structure on scales probed by $\sigma_8$ tends to introduce further problems on smaller scales. However, a specific category [@Buen-Abad:2015ova; @Lesgourgues:2015wza; @Chacko:2016kgg; @Ko:2016fcd; @Ko:2016uft] of models, in which the interaction rate between DM and DR tracks the Hubble rate during the radiation-dominated era, has the potential to address the tension without introducing new problems. The key ingredient of these models is the presence of a DM-DR scattering amplitude scaling as $|\mathcal{M}(q)|\propto 1/q^2$, where $q$ is the momentum transferred in a collision. For instance, Ref. [@Buen-Abad:2015ova] realizes this by introducing a non-abelian massless gauge boson coupling to DM.
In this scenario, the DR forms a tightly-coupled perfect fluid which provides a weak drag force on DM before matter-radiation equality, hence slowing the growth of structure on scales entering the causal horizon before that epoch. Ref. [@Lesgourgues:2015wza] then used this non-abelian interacting DM model to reanalyze the apparently discrepant CMB and LSS data, finding that the model could alleviate the $\sigma_8$ tension between these datasets. Taken at face value, their analysis implies a statistically significant detection of DM-DR interaction, with an improvement of the $-2\Delta\ln\mathcal{L}$ statistic of 11.4 relative to $\Lambda$CDM. We note that this difference is mostly driven by data from the Planck SZ clusters [@Ade:2013lmv; @Ade:2015fva]. In this paper, we revisit the analysis performed in Ref. [@Lesgourgues:2015wza] by adding Lyman-$\alpha$ forest flux power spectrum data from the Sloan Digital Sky Survey (SDSS) [@McDonald:2004eu; @McDonald:2004xn]. We find that Lyman-$\alpha$ data disfavor the low values of $\sigma_8$ preferred by the Planck SZ clusters, hence reducing the statistical significance of the evidence for DM-DR interaction. A summary of the DM-DR interaction model is presented in Section \[sec:model\]. In Section \[sec:analysis\], we describe in detail our Bayesian analysis of the CMB, BAO, LSS, and Lyman-$\alpha$ data in light of the DM-DR interaction model. The results from this analysis are presented in Section \[sec:results\]. In Section \[sec:forecasts\], we perform a Fisher forecast to determine the projected constraints on the parameters of the DM-DR model from a combination of the Large Synoptic Survey Telescope (LSST) photometric survey and the next generation CMB-S4 experiment. We finally summarize our results and their implications in Section \[sec:discussion\].
Dark matter interaction model {#sec:model} ============================= For the type of model considered in this work, the standard $\Lambda$CDM scenario is extended by adding a massless DR component capable of scattering with the nonrelativistic DM at early times. As in the model proposed in Ref. [@Buen-Abad:2015ova], we treat the DR as a perfect fluid with no viscosity and a speed of sound $c_{\rm s}^2$ = 1/3. The DR energy density is parameterized as an effective number of neutrino species $$\Delta N_{\textrm{fluid}}=N_{\textrm{dr}}\left(\frac{T_{\textrm{dr}}}{T_\nu}\right)^4\times\left\{\begin{array}{ll}\frac{8}{7}\textrm{ (bosonic)}\\ 1 \textrm{ (fermionic)}, \end{array} \right.$$ where $T_{\rm dr}$ is the temperature of the DR, $T_\nu$ is the temperature of the Standard Model (SM) neutrinos, and $N_{\rm dr}$ is the total number of DR species. We focus here on models where the interaction between DM and DR is mediated by a massless particle, hence leading to a scattering amplitude scaling as $|\mathcal{M}(q)|\propto 1/q^2$, where $q$ is the momentum transfer. For such interactions, the linearized collision term between DM and DR can be computed as described in Ref. [@2016PhRvD..93l3527C; @Binder:2016pnr], and results in a drag force on the DM by the DR quantified by the momentum-transfer rate $\Gamma$. Specifically, a non-relativistic DM particle with velocity $\vec{v}$ traveling through the thermal DR bath experiences an acceleration $\dot{\vec{v}}=-a\Gamma\vec{v}$, where the overdot corresponds to the derivative with respect to conformal time, and $a$ is the scale factor. For DM-DR interactions mediated by a massless particle, $\Gamma$ scales as $T_{\rm dr}^2$. 
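As a quick numerical sanity check on this drag law (a toy sketch of mine, not from the paper: $a\Gamma$ is held artificially constant, whereas in the full model both factors evolve), forward-Euler integration reproduces the expected exponential decay of the DM velocity:

```python
# Toy check of dv/dtau = -aGamma * v in schematic units (my own sketch).
import math

def v_drag(v0, aGamma, tau, steps=100000):
    """Forward-Euler integration of dv/dtau = -aGamma * v (aGamma held fixed)."""
    v, dt = v0, tau / steps
    for _ in range(steps):
        v -= aGamma * v * dt
    return v

v_num = v_drag(1.0, 0.5, 2.0)
v_exact = math.exp(-0.5 * 2.0)       # v(tau) = v0 exp(-aGamma * tau)
print(abs(v_num - v_exact) < 1e-3)   # True: Euler matches the exponential decay
```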
Given the value of the momentum-transfer rate today ($\Gamma_0$), its value at another time is then simply given by $$\label{eq:gamma} \Gamma=\Gamma_0\left(\frac{T}{T_0}\right)^2,$$ where $T$ is the photon temperature, $T_0$ is the CMB temperature today, and the above scaling is valid as long as no entropy dump occurs in the dark sector. A key feature of such models is that the momentum-transfer rate has the same temperature dependence as the Hubble rate during radiation domination. This implies that $\Gamma$ tracks the Hubble expansion rate until the epoch of matter-radiation equality and DM kinetic decoupling is thus significantly delayed compared to the standard CDM scenario, leading to a suppression of structures on small scales. Furthermore, the self-interacting nature of the DR implies that its impact on the CMB and structure formation is significantly different than standard free-streaming neutrinos [@Bashinsky:2003tk; @Hou:2011ec; @2016JCAP...01..007B]. As an example, Ref. [@Buen-Abad:2015ova] achieves the $T^2$ scaling by having a non-Abelian DM component transforming in the fundamental representation of a dark $SU(N)$ gauge group. Associated with the $SU(N)$ dark symmetry are $N^2-1$ dark gluons forming the massless DR bath. In the early universe at temperatures above the DM mass, the dark gluons are taken to be in thermal equilibrium with the SM bath through their interactions with DM, which is taken to carry some electroweak charge. At temperatures on the order of the DM mass, DM freeze-out occurs, the dark gluons decouple from SM, and $T_{\rm dr}$ begins to evolve independently from the SM temperature. After DM freeze-out, the dark gluons form a self-interacting tightly-coupled fluid interacting with the DM. 
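The background quantities introduced above ($\Delta N_{\rm fluid}$ and the ratio $\Gamma/H$) are simple enough to check numerically. The following sketch is my own, with purely illustrative normalizations (`Gamma0`, `H0r` and the temperature units are not the paper's values):

```python
# Sketch: Delta N_fluid parameterization, and the fact that Gamma ~ T^2 keeps a
# constant ratio to H during radiation domination (H ~ T^2 there).

def delta_N_fluid(N_dr, T_ratio, bosonic=True):
    """Effective neutrino number of N_dr DR species with T_dr/T_nu = T_ratio."""
    return N_dr * T_ratio ** 4 * (8.0 / 7.0 if bosonic else 1.0)

def Gamma(T, Gamma0=1.0, T0=1.0):
    """Momentum-transfer rate, Gamma = Gamma_0 (T/T_0)^2."""
    return Gamma0 * (T / T0) ** 2

def H_rad(T, H0r=5.0, T0=1.0):
    """Hubble rate deep in radiation domination, H proportional to T^2."""
    return H0r * (T / T0) ** 2

# One bosonic species at the neutrino temperature contributes 8/7; a dark
# sector colder by a factor of ~2 is suppressed by 2^4 = 16.
print(delta_N_fluid(1, 1.0), delta_N_fluid(1, 0.5))
# Gamma/H is temperature-independent before matter-radiation equality:
print(Gamma(1e4) / H_rad(1e4), Gamma(1e6) / H_rad(1e6))
```

The constancy of $\Gamma/H$ in the last line is the delayed-kinetic-decoupling mechanism described in the text.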
The evolution of DM and DR perturbations in the Newtonian gauge is governed by the set of equations [@Ma:1995ey] $$\begin{aligned} \dot{\delta}_{\textrm{dm}}&=-\theta_{\textrm{dm}}+3\dot{\phi},\\ \dot{\theta}_{\textrm{dm}}&= -\frac{\dot{a}}{a}\theta_{\textrm{dm}}+a\Gamma(\theta_{\textrm{dr}}- \theta_{\textrm{dm}}) + k^2\psi,\\ \dot{\delta}_{\textrm{dr}}&=-\frac{4}{3}\theta_{\textrm{dr}}+4\dot{\phi},\\ \dot{\theta}_{\textrm{dr}}&= k^2\frac{\delta_{\textrm{dr}}}{4}+\frac{3}{4}\frac{\rho_{\textrm{dm}}}{\rho_{\textrm{dr}}}a\Gamma(\theta_{\textrm{dm}}-\theta_{\textrm{dr}}) + k^2\psi,\end{aligned}$$ where $k$ is the Fourier wavenumber, $\Gamma$ is defined in Eq. , and where $\delta_{ \textrm{dm} }$ ($\delta_{ \textrm{dr} }$) and $\theta_{ \textrm{dm} }$ ($\theta_{ \textrm{dr} }$) are the DM (DR) density and velocity divergence perturbations, respectively. The average energy densities of DM and DR are $\rho_\textrm{dm}$ and $\rho_\textrm{dr}$, and the scalar metric perturbations in the conformal Newtonian gauge are $\phi$ and $\psi$. We note that the self-interacting nature of the DR ensures that the higher multipoles of the DR Boltzmann hierarchy remain negligible throughout the history of the Universe. Relative to the $\Lambda$CDM paradigm, the DM-DR interaction models considered here have two additional parameters: $\Gamma_0$ and $\Delta N_{\rm fluid}$. We now perform a quantitative analysis of such models. Analysis {#sec:analysis} ======== We implemented the DM-DR model in the Boltzmann code CLASS [@Lesgourgues:2011re; @Blas:2011rf], and we performed several Markov Chain Monte Carlo (MCMC) likelihood analyses for our 8-parameter DM-DR interaction model, as well as for the standard $\Lambda$CDM scenario. We utilize the same data sets used in Ref. [@Lesgourgues:2015wza], namely: CMB, BAO and LSS (see below for details). 
In addition to those data sets, we also include for the first time in this context constraints on the matter power spectrum from the Lyman-$\alpha$ forest flux power spectrum measurements from the SDSS. More specifically, we use the following cosmological data sets:

- **CMB:** Planck 2015 temperature + low-$\ell$ polarization data [@Aghanim:2015xee].

- **BAO:** measurements of the acoustic-scale distance ratio $D_V /r_{\textrm{drag}}$ at $z = 0.106$ by 6dFGS [@2011MNRAS.416.3017B], at $z = 0.15$ by SDSS MGS [@Ross:2014qpa], at $z = 0.32$ by BOSS-LOWZ [@Anderson:2013zyy], and anisotropic BAO measurements at $z = 0.57$ by BOSS-CMASS-DR11 [@Anderson:2013zyy].

- **LSS:** Planck 2015 lensing likelihood [@Ade:2015zua], the constraint $\sigma_8(\Omega_{\rm{m}}/0.27)^{0.46}= 0.774\pm0.040$ (68% CL) from the weak lensing survey CFHTLenS [@Heymans:2013fya], and the constraint $\sigma_8(\Omega_{\rm{m}}/0.27)^{0.30}= 0.782\pm0.010$ (68% CL) from the Planck SZ cluster mass function [@Ade:2013lmv].

- **Ly$\alpha$:** measurements of the Lyman-$\alpha$ flux power spectrum from the SDSS [@McDonald:2004eu; @McDonald:2004xn].

For our likelihood evaluation, we modified the publicly available code <span style="font-variant:small-caps;">MontePython</span> [@Audren:2012wb] to include the Lyman-$\alpha$ forest likelihood from the SDSS. We performed MCMC runs for four different combinations of these datasets, for both $\Lambda$CDM and for the DM-DR interaction model. These combinations are: CMB+BAO, CMB+BAO+LSS, CMB+BAO+Ly$\alpha$, and CMB+BAO+LSS+Ly$\alpha$.
The parameters of our model are $\left\lbrace \Omega_{\rm b}h^2, \Omega_{\rm dm}h^2, \theta, \ln{(10^{10}A_{\rm s})}, n_{\rm s}, \tau, \Delta N_{\textrm{fluid}}, {10^7\Gamma_0}\right\rbrace$, where $\Omega_{\rm b}h^2$ and $\Omega_{\rm dm}h^2$ are the physical baryon and DM densities, respectively, $\theta$ is the angular size of the horizon at recombination, $A_{\rm s}$ is the amplitude of the initial curvature power spectrum at $k=0.05$ Mpc$^{-1}$, $n_{\rm s}$ is the spectral index, $\tau$ is the optical depth to reionization, $\Delta N_{\textrm{fluid}}$ is the effective number of DR species, and $\Gamma_0$ is the current value of the momentum-transfer rate between the DM and the DR. We set two massless and one massive neutrino species with a total mass of $0.06$ eV, and with an effective neutrino number of $N_{\rm{eff}} = 3.046$. The primordial helium abundance is inferred from standard Big Bang Nucleosynthesis, as a function of $\Omega_{\rm b}$, following Ref. [@Hamann:2007sb]. All of these parameters have flat priors. For the interaction rate we set the lower bound on the prior to $\Gamma_0 \geq 0$, while for the optical depth to reionization, we set the lower bound of the prior to $\tau\geq 0.04$. We also impose the prior $\Delta N_{\textrm{fluid}} \geq 0.07$ since this is the smallest allowed value if the dark sector decoupled from the SM near the weak scale. While there are ways around this limit (for instance if the dark sector is never in thermal equilibrium with the SM), we focus on the broad class of models respecting this bound. As an example, the non-Abelian DM-DR model [@Buen-Abad:2015ova] predicts discrete values for $\Delta N_{\rm{fluid}}=0.07(N^2-1)$, while a model where DM couples to a massless dark photon that also interacts with $N_{\rm f}$ massless fermions predicts $\Delta N_{\rm{fluid}}=0.07(1+\frac{7}{4}N_{\rm f})$. Results {#sec:results} ======= The parameter confidence regions from our analysis are listed in Table \[table:results\]. 
The bottom row gives the $-2\Delta\ln\mathcal{L}$ values between the DM-DR interaction model and the standard 6-parameter $\Lambda$CDM paradigm. We also display in Figure \[fig:1d\] the marginalized posterior probability distributions for the parameter set $\{\Omega_{\rm b}h^2, \Omega_{\rm dm}h^2, \tau, \ln(10^{10} A_{\rm s}), n_{\rm s}, H_0, \sigma_8, \Delta N_{\textrm{fluid}},10^7\Gamma_0\}$. We observe that the constraints on the dark matter and baryon densities, the Hubble constant, the amplitude and tilt of the primordial spectrum of fluctuations, and on the optical depth to reionization are fairly robust from one data set combination to the next. As in Ref. [@Lesgourgues:2015wza], we find that the inclusion of LSS data favors a nonvanishing value of $\Gamma_0$, with an improvement of the log likelihood given by $-2\Delta\ln\mathcal{L}\simeq -12$. With the inclusion of Lyman-$\alpha$ forest data, we find that the DM-DR model still improves the fit relative to $\Lambda$CDM, but now the improvement becomes marginal with $-2\Delta\ln\mathcal{L}\simeq-5.8$. In addition, the Lyman-$\alpha$ data pull the preferred value of $\Gamma_0$ down by $\sim$35%, indicating that this latter data set does not favor the kind of matter power spectrum suppression (and corresponding small value of $\sigma_8$) necessary to fit the Planck SZ cluster data.
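Since the DM-DR model adds two parameters ($\Delta N_{\textrm{fluid}}$ and $\Gamma_0$) to the six of $\Lambda$CDM, the quoted $-2\Delta\ln\mathcal{L}$ values can be translated into a rough Gaussian significance via Wilks' theorem. The short sketch below is not part of the paper's pipeline: it ignores the boundary of the prior at $\Gamma_0=0$, so it should be read only as an order-of-magnitude guide.

```python
import math
from statistics import NormalDist

def rough_significance(delta_chi2):
    """Translate a -2*Delta(ln L) improvement for 2 extra parameters into an
    approximate Gaussian 'n-sigma' via Wilks' theorem. For 2 degrees of
    freedom the chi^2 survival function is exactly exp(-x/2)."""
    p = math.exp(-delta_chi2 / 2.0)          # two-parameter chi^2 p-value
    return NormalDist().inv_cdf(1.0 - p / 2.0)  # two-sided Gaussian equivalent

# Values quoted in the text (CMB+BAO+LSS and CMB+BAO+LSS+Lya, respectively):
sig_lss = rough_significance(12.0)   # roughly 3 sigma
sig_lya = rough_significance(5.8)    # just below 2 sigma
```

Under these simplifications, the drop from $-2\Delta\ln\mathcal{L}\simeq-12$ to $\simeq-5.8$ corresponds to the preference falling from roughly $3\sigma$ to just below $2\sigma$.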
| Parameter | CMB+BAO | CMB+BAO+LSS | CMB+BAO+Ly$\alpha$ | CMB+BAO+LSS+Ly$\alpha$ |
|---|---|---|---|---|
| $100\Omega_{\rm b}h^2$ | $2.235^{+0.023}_{-0.026}$ | $2.219^{+0.023}_{-0.025}$ | $2.243\pm 0.024$ | $2.234\pm 0.024$ |
| $\Omega_{\textrm{dm}}h^2$ | $0.1246^{+0.0021}_{-0.0040}$ | $0.1249^{+0.0023}_{-0.0047}$ | $0.1244^{+0.0021}_{-0.0036}$ | $0.1241^{+0.0024}_{-0.0045}$ |
| $\Delta N_{\textrm{fluid}}$ | $<0.33$ (68%) | $<0.34$ (68%) | $<0.31$ (68%) | $<0.37$ (68%) |
| $\Delta N_{\textrm{fluid}}$ | $<0.60$ (95%) | $<0.67$ (95%) | $<0.55$ (95%) | $<0.65$ (95%) |
| $10^7\Gamma_0$ \[Mpc$^{-1}$\] | $<1.64$ (95%) | $1.82\pm 0.46$ | $<0.94$ (95%) | $1.19\pm 0.39$ |
| $H_0$ \[km/s/Mpc\] | $69.15^{+0.82}_{-1.3}$ | $69.11^{+0.86}_{-1.5}$ | $69.16^{+0.84}_{-1.2}$ | $69.72^{+0.86}_{-1.3}$ |
| $\ln(10^{10}A_{\rm s})$ | $3.098^{+0.034}_{-0.038}$ | $3.088\pm 0.027$ | $3.115\pm 0.035$ | $3.076\pm 0.027$ |
| $n_{\rm s}$ | $0.9712\pm 0.0050$ | $0.9741^{+0.0050}_{-0.0056}$ | $0.9704\pm 0.0049$ | $0.9750\pm 0.0051$ |
| $\tau$ | $0.083^{+0.017}_{-0.019}$ | $0.076\pm 0.014$ | $0.092\pm 0.018$ | $0.072\pm 0.014$ |
| $\sigma_8$ | $0.811^{+0.025}_{-0.020}$ | $0.761\pm 0.012$ | $0.832\pm 0.017$ | $0.7778\pm 0.0097$ |
| $-2\Delta\ln\mathcal{L}/\Lambda$CDM | 1.5 | $-11.98$ | $-0.22$ | $-5.84$ |

: Mean values and confidence intervals for parameters of the DM-DR interaction model. Unless otherwise noted, we display the 68% confidence interval.
The bottom row in the table lists the improvement in $-2\Delta\ln\mathcal{L}$ relative to $\Lambda$CDM.[]{data-label="table:results"} ![Marginalized posterior probability distributions for the most relevant parameters of the DM-DR interaction model for CMB and BAO data (black), combined with Lyman-$\alpha$ (red), LSS (purple), and Lyman-$\alpha$ and LSS (green).[]{data-label="fig:1d"}](Post_1D_DM-DR.pdf){width="85.00000%"} We illustrate the effect of the Lyman-$\alpha$ data in Figure \[fig:mpk\], where we display the matter power spectra for the best-fit parameters for each data set combination at $z=3$. This figure indeed shows that the matter power spectra for the best-fit parameters of the data set combinations that include LSS data lie outside of the Lyman-$\alpha$ $95$% confidence interval (indicated with the black error bar). The addition of the Lyman-$\alpha$ data does raise the amplitude of the matter power spectrum, but it still falls significantly below the value preferred by the CMB+BAO+Ly$\alpha$ data combination. Clearly, a significant reduction of the Lyman-$\alpha$ error bar could potentially rule out the DM-DR interaction model as a solution to the discrepancy between Planck SZ data and the CMB. We estimate that the measurement error must be reduced by $\sim$60% in order to exclude the DM-DR model at 3-$\sigma$.[^1] We further discuss possible improvements of the Lyman-$\alpha$ constraints in Section \[sec:discussion\]. ![The matter power spectra for the best-fit parameters for each data set combination at $z=3$. The data point corresponds to the best-fit amplitude using Lyman-$\alpha$ data from Ref. [@McDonald:2004xn]. The gray band shows the range of linear matter power spectra slopes at $k = 1.03$ h/Mpc that are allowed at the 95% CL limit. The error bar corresponds to the 95% confidence region on the amplitude.
All the solid lines show the power spectrum for the DM-DR interaction model, while the dashed line illustrates a $\Lambda$CDM model.[]{data-label="fig:mpk"}](figure_1.pdf){width="80.00000%"} It is instructive to look at how the different data likelihoods change when the DM-DR interactions are introduced. We display in Table \[table:likelihood\] the likelihood decomposition for the best-fit model for the data set combinations CMB+BAO+LSS and CMB+BAO+LSS+Ly$\alpha$, with and without the DM-DR interaction. The values in the table show that a significant portion of the likelihood improvement when including the DM-DR interaction (relative to $\Lambda$CDM) comes from the Planck SZ data set, with minor contributions coming from Planck lensing and CFHTLenS. The Planck SZ data impose a tight constraint on $\sigma_8$ of $\sigma_8(\Omega_{\rm{m}}/0.27)^{0.30}= 0.782\pm0.010$, which pulls the fit toward lower $\sigma_8$ values. However, we also observe that introducing the DM-DR interactions worsens the fit to the Lyman-$\alpha$ data.

| Likelihood | CMB+BAO+LSS ($\Lambda$CDM) | CMB+BAO+LSS (DM-DR) | CMB+BAO+LSS+Ly$\alpha$ ($\Lambda$CDM) | CMB+BAO+LSS+Ly$\alpha$ (DM-DR) |
|---|---|---|---|---|
| Planck CMB | 5635.5 | 5634.8 | 5634.2 | 5634.8 |
| BAO | 3.4 | 2.2 | 2.4 | 2.3 |
| Ly$\alpha$ | -- | -- | 97.3 | 100.1 |
| Planck SZ | 3.9 | 1.1 | 6.4 | 1.7 |
| CFHTLenS | 0.7 | 0.5 | 1.1 | 0.5 |
| Planck lensing | 7.3 | 6.2 | 6.7 | 5.6 |
| Total $-\ln(\mathcal{L})$ | 5650.8 | 5644.8 | 5748.1 | 5745.0 |

: Likelihood decomposition for the best-fit $\Lambda$CDM and DM-DR models for two different combinations of data sets.[]{data-label="table:likelihood"}

In Figure \[fig:2d\], we show the joint marginalized probability contours for the six standard cosmological parameters, in addition to $\Delta N_{\rm{fluid}}$, $\Gamma_0$, and $\sigma_8$.
Most notable are the correlations between $H_0$ and $\Delta N_{\rm{fluid}}$, $\sigma_8$ and $\Gamma_0$, and $\Omega_{\rm dm}h^2$ and $\Delta N_{\rm{fluid}}$. The correlation between $H_0$ and $\Delta N_{\rm{fluid}}$ arises from the fact that an increase in radiation density contributes to the expansion rate of the universe. For $\sigma_8$ and $\Gamma_0$, the DM-DR interaction damps the matter power spectrum, and thus an increase in $\Gamma_0$ is strongly correlated with a decrease in $\sigma_8$. Since the epoch of matter-radiation equality is well-determined by CMB data, an increase of $\Delta N_{\rm{fluid}}$ must be compensated by a corresponding increase in $\Omega_{\rm dm}h^2$. ![The joint marginalized probability contours for the DM-DR model, for CMB and BAO data (black), combined with Lyman-$\alpha$ (red), LSS (purple), and Lyman-$\alpha$ and LSS (green). We display the 68% and 95% confidence regions.[]{data-label="fig:2d"}](triangle_plot_DM-DR.pdf){width="\textwidth"}

Forecasts {#sec:forecasts}
=========

Given that the current cosmological data sets do not have enough statistical power at the relevant scales to rule out with high significance the best-fit cosmology with DM-DR interactions favored by LSS data, we investigate how well future surveys will be able to test this hypothesis. In this section, we use galaxy clustering as a tracer of matter fluctuations at intermediate scales between those probed by the CMB and the Lyman-$\alpha$ data, and we forecast the constraints on the DM-DR model from the photometric redshift survey expected from the Large Synoptic Survey Telescope (LSST) [@Abell:2009aa], combined with expected observations of the CMB coming from the proposed CMB-S4 next generation CMB experiment [@Abazajian:2016yjj]. In our analysis, constraints from the CMB are taken into account by adding to the Fisher matrix for LSST a Fisher matrix for CMB-S4.
We take as the fiducial model the mean values for the six standard cosmological parameters and the 95% CL values for $\Gamma_0$ and $\Delta N_{\textrm{fluid}}$ from our MCMC analysis above, with CMB+BAO+LSS+Ly$\alpha$ data (see Table \[table:results\] above). To forecast the parameter errors from LSST, we use the specifications in Ref. [@Chen:2016vvw]. Specifically, the Fisher matrix for a galaxy survey is given by [@Kaiser:1987qv; @Peacock1992] $$\label{eq:fisher_gal_clustering} F_{ij}=\int_{-1}^1\int_{k_{\rm{min}}}^{k_{\rm{max}}}\frac{\partial\ln P_g(\bf{k})}{\partial\theta_i}\frac{\partial\ln P_g(\bf{k})}{\partial\theta_j}V_{\rm{eff}}(k,\mu)\frac{k^2dkd\mu}{2(2\pi)^2},$$ where the $\theta_i$ are the parameters of the model, and $P_g$ is the redshift-space galaxy power spectrum, which can be determined from the matter power spectrum through $$\label{eq:galaxyps} P_g(k,\mu)=[1+\beta\mu^2]^2b^2\hat{P}(k)e^{-c^2\sigma_z^2k^2\mu^2/H^2}.$$ Here $b$ denotes the linear galaxy bias, $\beta = f/b$ with $f$ being the growth function (which we approximate as $\Omega_{\rm m}^{0.56}$ [@Chen:2016vvw]), $\hat{P}$ is the smoothed matter power spectrum, $H$ is the Hubble rate, and $\mu$ is the cosine of the angle of the wavevector with respect to the line of sight. Additionally, $\sigma_z$ takes into account the accuracy in the redshifts $\sigma_{0\gamma z}$ and the intrinsic galaxy velocity dispersion $\sigma_{0v}$, and it is given by: $$\sigma_z^2=(1+z)^2[\sigma_{0v}^2+\sigma_{0\gamma z}^2].$$ For this analysis we take $\sigma_{0v}=400 \rm{km/s}/c$, and $\sigma_{0\gamma z}=0.04$, as in Ref. [@Chen:2016vvw]. The matter power spectrum in Eq.  
is smoothed with a Gaussian window function $$\hat{P}({\bf k})=\int d^3 {\bf k'}\, P({\bf k'})|W(|{\bf k}-{\bf k'}|)|^2,$$ where the window function is given by $$|W(k)|^2=\frac{1}{(2\pi\sigma_W^2)^{3/2}}\exp\left(-\frac{k^2}{2\sigma_W^2}\right),$$ and the width of the window is $$\sigma_W=\frac{\sqrt{2\ln2}}{2\pi}k_{\rm{min}}.$$ The value of $k_{\rm{min}}$ for each redshift bin is $2\pi(3V/4\pi)^{-1/3}$, with $V$ the volume of the survey. We use the seven redshift bins and their corresponding $k_{\rm{min}}$, $k_{\rm{max}}$, and number density of galaxies as defined in Table 2 of Ref. [@Chen:2016vvw]. The biases used in the redshift bins are $\{1.053, 1.125, 1.126, 1.243, 1.243, 1.292, 1.497, 1.491\}$, which are the bias values used for LSST in Ref. [@Chen:2016vvw]. The effective survey volume is $$\label{eq:surveyvolume} V_{\rm{eff}}(k,\mu) = \int\left[\frac{n({\bf r})P_g(k,\mu)}{n({\bf r})P_g(k,\mu)+1}\right]^2d^3r\simeq \left[\frac{\bar{n}P_g(k,\mu)}{\bar{n}P_g(k,\mu)+1}\right]^2 V,$$ in which $\bar{n}$ is the mean number density of galaxies. The volume for a survey over a fraction of the sky $f_{\rm sky}$ is $$V=\frac{4\pi}{3}\times f_{\rm{sky}}[d_c(z_{\rm{max}})^3-d_c(z_{\rm{min}})^3], \label{eq:volume}$$ where $d_c(z)$ is the comoving distance. We take $\bar{n}=\{0.154, 0.104, 0.064, 0.036, 0.017, 0.007, 0.002\}\,[h^3\mathrm{Mpc}^{-3}]$ in each of the redshift bins. For each redshift bin, the volume $V$ is computed with Eq. , with $f_{\rm{sky}}$=0.58, and $$d_c(z) = \int_0^z\frac{c}{H(z')}dz'.$$ We combine the above galaxy clustering forecast with future constraints expected from the proposed CMB-S4 experiment.
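The forecast ingredients above can be collected into a short numerical sketch. The power-law $P(k)$, the flat-$\Lambda$CDM expansion rate, and the growth-rate value passed in below are illustrative assumptions for the sketch, not the paper's actual inputs (in practice $P(k)$ comes from a Boltzmann code).

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def sigma_z(z, sigma_0v=400.0 / C_KMS, sigma_0gz=0.04):
    # sigma_z^2 = (1+z)^2 [sigma_0v^2 + sigma_0gz^2], sigma_0v = 400 km/s over c
    return (1.0 + z) * math.sqrt(sigma_0v ** 2 + sigma_0gz ** 2)

def galaxy_power(k, mu, z, pk, b, f, c_over_H):
    """P_g(k,mu) = (1 + beta mu^2)^2 b^2 P(k) exp(-c^2 sigma_z^2 k^2 mu^2 / H^2).
    `pk` is a callable for the (smoothed) matter spectrum; `c_over_H` must
    carry the same length units as 1/k."""
    beta = f / b
    damping = math.exp(-(c_over_H * sigma_z(z) * k * mu) ** 2)
    return (1.0 + beta * mu ** 2) ** 2 * b ** 2 * pk(k) * damping

def comoving_distance(z, H0=69.7, Om=0.3, n=2000):
    """d_c(z) = int_0^z c dz'/H(z') for flat LCDM (trapezoidal rule), in Mpc."""
    E = lambda zp: H0 * math.sqrt(Om * (1.0 + zp) ** 3 + 1.0 - Om)
    dz = z / n
    vals = [C_KMS / E(i * dz) for i in range(n + 1)]
    return dz * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

def shell_volume(z_min, z_max, f_sky=0.58):
    """V = (4 pi / 3) f_sky [d_c(z_max)^3 - d_c(z_min)^3]."""
    return (4.0 * math.pi / 3.0) * f_sky * (
        comoving_distance(z_max) ** 3 - comoving_distance(z_min) ** 3)

def v_eff(nbar, Pg, V):
    """Effective volume for a constant mean galaxy density nbar."""
    x = nbar * Pg
    return (x / (1.0 + x)) ** 2 * V

# Illustrative numbers for a z ~ 1 bin (hypothetical toy spectrum):
V = shell_volume(0.8, 1.2)
Pg = galaxy_power(0.1, 0.5, 1.0, lambda kk: 1.0e4 * kk ** -1.5,
                  b=1.243, f=0.55, c_over_H=3000.0)
```

With these pieces, the Fisher integrand of the galaxy-clustering forecast is just $(\partial\ln P_g/\partial\theta_i)(\partial\ln P_g/\partial\theta_j)\,V_{\rm eff}\,k^2/(2(2\pi)^2)$, integrated over $k$ and $\mu$ for each redshift bin.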
In this case, the Fisher matrix takes the form $$\label{eq:fisher_CMB} F_{ij}=\sum_l \frac{\partial \vec{C}^{T}_l}{\partial\theta_i}{\bf C}_l^{-1}\frac{\partial \vec{C}_l}{\partial\theta_j},$$ where $\vec{C}_l = \{ C_l^{TT}, C_l^{EE}, C_l^{TE} \}$, and the elements of the covariance matrix ${\bf C}$ are given by: $${\bf C}(C_l^{\alpha\beta},C_l^{\gamma\delta})=\frac{1}{(2l+1)f_{\rm sky}}\left[(C_l^{\alpha\gamma}+N_l^{\alpha\gamma})(C_l^{\beta\delta}+N_l^{\beta\delta})+(C_l^{\alpha\delta}+N_l^{\alpha\delta})(C_l^{\beta\gamma}+N_l^{\beta\gamma})\right],$$ where $\alpha,\beta,\gamma,\delta$ are $T,E$, and $f_{\rm sky}$ is the fractional area of sky used. We model the detector noise as: $$N_l^{\alpha\beta}=\delta_{\alpha\beta}\Delta^2_\alpha \exp\left(\frac{l(l+1)\theta^2_{\rm FWHM}}{8\ln 2}\right),$$ where $\Delta_\alpha$ is the map sensitivity in $\mu$K-arcmin, $\theta_{\rm FWHM}$ is the beam width, and $\delta_{\alpha\beta}$ is the Kronecker delta. For CMB-S4, we use $f_{\rm sky}=0.4$, $\Delta_{T}=1 \mu$K-arcmin, $\Delta_{E}=1.4 \mu$K-arcmin, and $\theta_{\rm FWHM}=3'$ [@Abazajian:2016yjj]. We consider $l\geq 30$, and set $l_{\rm max}=3000$. The $\theta_i$ in our Fisher matrix are the cosmological parameters $\{\Omega_{\rm b}h^2$, $\Omega_{\rm dm} h^2$, $\Delta N_{\rm{fluid}}$, $10^7\Gamma_0$, $H_0$, $\ln(10^{10}A_{\rm s})$, $n_{\rm s}\}$. When considering LSST, we add to the previous array of parameters the nuisance parameter $\sigma_{0v}$ and a bias $b_i$ for each bin. Table \[tab:fisher\] displays the expected improvement from the proposed CMB-S4 experiment in combination with the future LSST survey for the DM-DR cosmological model. As can be seen there, we obtain an error on $\Delta N_{\rm fluid}$ for the combination of LSST and CMB-S4 of $\sigma(\Delta N_{\rm fluid})=0.011$, while for the interaction rate it is $\sigma(10^7\Gamma_0)=0.069$ Mpc$^{-1}$.
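The noise model and per-multipole covariance above are simple enough to write out directly. In the sketch below the $C_\ell$ values fed in are placeholders (in a real forecast they come from a Boltzmann code); only the algebra of the $3\times3$ covariance of $(C_\ell^{TT}, C_\ell^{EE}, C_\ell^{TE})$ is being illustrated.

```python
import math

ARCMIN = math.pi / (180.0 * 60.0)  # arcmin -> radians

def noise_cl(l, delta_arcmin, theta_fwhm_arcmin=3.0):
    """N_l = Delta^2 exp(l(l+1) theta_FWHM^2 / (8 ln 2)), converting the
    map sensitivity Delta and beam width theta from arcmin to radians."""
    delta = delta_arcmin * ARCMIN
    theta = theta_fwhm_arcmin * ARCMIN
    return delta ** 2 * math.exp(l * (l + 1) * theta ** 2 / (8.0 * math.log(2.0)))

def cl_covariance(l, cl, f_sky=0.4, dT=1.0, dE=1.4):
    """3x3 covariance of (C_l^TT, C_l^EE, C_l^TE) for a single multipole,
    Cov(ab,cd) = [(C+N)^ac (C+N)^bd + (C+N)^ad (C+N)^bc] / ((2l+1) f_sky).
    `cl` is a dict with keys 'TT', 'EE', 'TE'; the TE noise is zero."""
    S = {'TT': cl['TT'] + noise_cl(l, dT),
         'EE': cl['EE'] + noise_cl(l, dE),
         'TE': cl['TE']}
    def s(a, b):
        return S[a + b] if a + b in S else S[b + a]
    pairs = [('T', 'T'), ('E', 'E'), ('T', 'E')]
    norm = 1.0 / ((2 * l + 1) * f_sky)
    return [[norm * (s(a, c) * s(b, d) + s(a, d) * s(b, c)) for (c, d) in pairs]
            for (a, b) in pairs]

# Placeholder spectra for one multipole:
M = cl_covariance(100, {'TT': 1.0e3, 'EE': 50.0, 'TE': 0.0})
```

Summing $(\partial\vec{C}_l/\partial\theta_i)^T {\bf C}_l^{-1} (\partial\vec{C}_l/\partial\theta_j)$ over $30 \le l \le 3000$ with numerical derivatives of the spectra then assembles the CMB-S4 Fisher matrix.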
| Parameter | fiducial | CMB-S4 | CMB-S4 + LSST |
|---|---|---|---|
| $100\Omega_{\rm b}h^2$ | 2.2380 | $\pm 0.0050$ | $\pm 0.0032$ |
| $\Omega_{\textrm{dm}}h^2$ | 0.12400 | $\pm 0.00134$ | $\pm 0.00034$ |
| $\Delta N_{\textrm{fluid}}$ | 0.079 | $\pm 0.053$ | $\pm 0.011$ |
| $10^7\Gamma_0$ \[Mpc$^{-1}$\] | 1.148 | $\pm 0.323$ | $\pm 0.069$ |
| $H_0$ \[km/s/Mpc\] | 69.79 | $\pm 0.48$ | $\pm 0.20$ |
| $\ln(10^{10}A_{\rm s})$ | 3.0750 | $\pm 0.0161$ | $\pm 0.0093$ |
| $n_{\rm s}$ | 0.9754 | $\pm 0.0036$ | $\pm 0.0022$ |

: Forecasted DM-DR 68% parameter constraints for CMB-S4 and an LSST-like photometric survey. []{data-label="tab:fisher"}

The forecasted constraints on the model parameters show that a combination of CMB-S4 and LSST data could provide a bound on $10^7 \Gamma_0$ that is about a factor of $\sim$6 better than the constraints from our MCMC analysis using current data. This implies that a value of $10^7 \Gamma_0\sim1$ Mpc$^{-1}$ could be detected with high significance if this model were indeed describing the universe we live in. We caution, however, that the forecasted value of $\sigma(10^7\Gamma_0)$ depends quite strongly on the fiducial value of $\Gamma_0$ used in the analysis, and that it might be difficult to exclude much smaller values of $\Gamma_0$. We also find that the constraint on $\Delta N_{\rm fluid}$ will be improved by nearly an order of magnitude compared to current bounds (see Refs. [@2016JCAP...01..007B; @Brust:2017nmv]), hence severely restricting the presence of fluid-like DR in the early Universe.

Discussion {#sec:discussion}
==========

In light of previous results [@Lesgourgues:2015wza], we have re-examined the evidence for DM-DR interaction within current cosmological data.
We find that when adding to the CMB, BAO and LSS data another tracer of the matter fluctuations (i.e., the Lyman-$\alpha$ flux power spectrum measurements from the SDSS), the significance for the DM-DR model decreases from $-2\Delta\ln\mathcal{L}\simeq-12$ to $-2\Delta\ln\mathcal{L}\simeq-6$ relative to $\Lambda$CDM, making the evidence for this model marginal. Since most of the improvement to the total likelihood comes from considering data sets (Planck CMB vs. Planck SZ) that are already in tension within the $\Lambda$CDM paradigm, it is not surprising that adding another data set which is not in tension with $\Lambda$CDM decreases the significance. The fact that the Lyman-$\alpha$ data show no preference for matter power spectrum suppression while at the same time being fully consistent with the Planck CMB data indicates that the nonvanishing value of $\Gamma_0$ favored by the Planck SZ data might be caused by systematics. We caution, however, that further analyses are necessary to reach a definitive conclusion, especially since other probes show a slight preference for a low value of $\sigma_8$ (see e.g. Ref. [@2017MNRAS.467.3024L]). The main reason why the LSS data used in this analysis (following Ref. [@Lesgourgues:2015wza] for direct comparison) strongly favor the presence of nonvanishing DM-DR interactions is that the information from Planck SZ and CFHTLenS is introduced as a direct Gaussian prior on $\sigma_8$. This direct constraint on what is essentially a derived cosmological parameter exacerbates the need for new DM physics. The ultimate analysis would be, instead, to go back to the actual measurement (e.g. SZ cluster mass function or weak lensing shear correlation function) and perform the analysis directly in that space using the DM-DR interaction model. We leave such analysis to future work.
In addition, we advocate the need to verify and test for systematics in each of these data sets before reaching any definite conclusion about the need for new physics. Since current data sets do not have enough sensitivity to confirm or rule out the presence of DM-DR interaction, we have performed a Fisher forecast to test the constraining power of upcoming observations on the DM-DR interaction model. Using galaxy clustering measurements from the LSST photometric survey and CMB measurements from Stage-IV experiments, we find that constraints on the parameter $10^7 \Gamma_0$ should improve by a factor of $\sim$6 compared to constraints from current data. Also, our analysis shows that the constraints on $\Delta N_{\rm fluid}$ could be improved by an order of magnitude compared to current constraints. In this work, we have used the Lyman-$\alpha$ measurements from Refs. [@McDonald:2004eu; @McDonald:2004xn], which constrain the amplitude of the matter power spectrum around $k\sim1h/$Mpc. While more recent Lyman-$\alpha$ forest measurements exist (see e.g. [@2017PhRvD..96b3522I]), the absence of a likelihood code to compare the interacting DM-DR model predictions with these data makes including them in our analysis difficult. Given the tight constraints that these newer Lyman-$\alpha$ data put on warm DM models, it is possible that they could further constrain, and even rule out, the type of interacting DM-DR model considered in this work. As a rough guide, assuming that the inferred mean value of the matter power spectrum at $k\sim1h/$Mpc from the Lyman-$\alpha$ measurement stays the same, we estimate that a $\sim$60% reduction of the error bar could exclude the DM-DR interaction model at the 3-$\sigma$ level.
[^1]: This improvement is determined by shrinking the error bars of the Lyman-$\alpha$ measurement at $k=1.03\, h/\rm{Mpc}$, while keeping the mean value fixed, until $P(k=1.03\,h/\mathrm{Mpc})$ for the best-fit parameters with CMB+BAO+LSS+Ly$\alpha$ data lies 3-$\sigma$ away from the mean.
--- abstract: 'The present paper reports an inductor-free realization of Chua’s circuit, which is designed by suitably cascading a single amplifier biquad based active band pass filter with a Chua’s diode. The system has been mathematically modeled with three-coupled first-order autonomous nonlinear differential equations. It has been shown through numerical simulations of the mathematical model and hardware experiments, that the circuit emulates the behaviors of a classical Chua’s circuit, e.g., fixed point behavior, limit cycle oscillation, period doubling cascade, chaotic spiral attractors, chaotic double scrolls and boundary crisis. The occurrence of chaotic oscillation has been established through experimental power spectrum, and quantified with the dynamical measure like Lyapunov exponents.' author: - Tanmoy Banerjee date: 'Received: date / Accepted: date' title: 'Single amplifier biquad based inductor-free Chua’s circuit' ---

Introduction {#intro}
============

The design of chaotic electronic circuits has offered a great challenge to the research community for the last three decades [@og]-[@ramos]. The motivation for designing a chaotic electronic circuit comes mainly from two facts: first, one can ‘observe’ chaos, and can also control the dynamics of the circuit by simply changing the physically accessible parameters of the circuit, e.g., resistors, capacitors, voltage levels, etc.; second, there is a multitude of applications of chaotic electronic oscillators, ranging from chaotic electronic communication to cryptography [@setti]-[@banerjee1]. The Chua’s circuit is the first autonomous electronic circuit in which a chaotic waveform was observed experimentally, established numerically, and proven theoretically [@chua]-[@ken3st]. Moreover, it established that chaos is not a mathematical abstraction or numerical artifact, but a very much realizable phenomenon.
After the advent of the chaotic Chua’s circuit, a large number of works have been reported on different methods of realization of this circuit. All of these realizations are mainly centered around the following goals: inductor-free realization of Chua’s circuit, and realization of the Chua’s diode. The reason behind the inductor-free realization lies in the fact that the presence of an inductor makes the circuit bulky, unsuitable for IC design, less robust, etc. In the inductor-free realization, the inductor in Chua’s circuit is replaced by a general impedance converter (GIC) that requires at least two op-amps. Another approach to this end is the Wien-bridge based Chua’s circuit variant [@morgul], where a Wien-bridge oscillator is cascaded properly with the Chua’s diode. In this context, Refs. [@kengeneric] and [@kengeneric1] report a general algorithm for designing Chua’s circuits, in which it has been shown that a sinusoidal oscillator can be converted into a Chua’s circuit by incorporating a proper type of nonlinearity. Different realizations of the Chua’s diode, which is the only locally active nonlinearity in the circuit, have been reported, e.g., Chua’s diode using VOAs [@kenrobust], [@rocha1], CFOAs [@kenel], IC realization of the Chua’s diode [@ic], etc. Also, a four-element Chua’s circuit has been reported in Ref. [@bar], which is the minimum-component Chua’s circuit to date. A detailed overview of Chua’s circuit implementations can be found in Refs. [@for] and [@kil]. Recently, an inductor-free realization of the Chua’s circuit based on electronic analogy has been reported [@rocha1], [@rocha2], which provides the advantage of exploring the system dynamics in the negative parameter space (i.e., negative values of $\alpha$ and $\beta$ of a classical Chua’s circuit [@chua]). The present paper reports an inductor-free realization of Chua’s circuit, in which we have properly cascaded a single amplifier biquad based active band pass filter with a Chua’s diode.
The system has been mathematically modeled with three-coupled first-order autonomous nonlinear differential equations. It has been shown through numerical solutions of the mathematical model and real-world hardware experiments that the circuit shows all the behaviors of a classical Chua’s circuit, e.g., fixed point, limit cycle formation, period doubling cascade, chaotic spiral attractors, double scrolls, and boundary crisis. The occurrence of chaos has been established through Lyapunov exponents and experimental power spectra. The paper is organized in the following manner: the next section describes the proposed circuit and its mathematical modeling. Numerical simulations and computations of nonlinear dynamical measures, e.g., Lyapunov exponents, are reported in Section \[sec:3\]. Section \[sec:4\] gives an account of the experimental results. Finally, Section \[sec:5\] concludes the outcome of the whole study.

Proposed circuit and its mathematical modeling {#sec:2}
==============================================

The proposed circuit is shown in Fig.\[f1\](a). This circuit has two distinct parts: (i) a second order narrow band active bandpass filter (BPF), and (ii) the parallel combination of a grounded capacitor ($C_2$) and a Chua’s diode ($N_R$). The $V_1$ node of the BPF is connected to the parallel combination of the capacitor $C_2$ and the Chua’s diode ($N_R$) through a passive resistor $R$. Note that the inductor-capacitor parallel combination of a classical Chua’s circuit has been replaced by a resonator circuit, which is, in the present circuit, an active BPF; the resonant (or center) frequency of the BPF can be controlled by simply varying the resistors $R_1$ and/or $R_2$ (instead of varying a capacitor or an inductor).
To keep the number of active components low, we have chosen the single amplifier biquad based narrow band active BPF proposed by Deliyannis and Friend [@del], [@frnd], which consists of a single amplifier (in the form of an op-amp), two capacitors having the same value ($C$), and four resistors ($R_1, R_2, R_a, \text{and}\, R_b$). This active BPF is a second order system with the following transfer function: $$H(s) = \frac{-(k+1)s/R_1C}{s^2+(2/R_2C-k/R_1C)s+(1/R_1R_2C^2)}.\label{eq1}$$ Here, $k=R_b/R_a$. A proper choice of $R_a$ and $R_b$ (and hence $k$) makes the coefficient of $s$ in the denominator negative, which in turn brings the poles of the circuit to the right half of the $s$-plane, resulting in a sinusoidal oscillation. The real-time dynamics of the system can be expressed in terms of the following two coupled first-order autonomous differential equations [@bannd]: \[eq1b\] $$\begin{aligned} C\frac{dV_1}{dt} & = &\frac{k}{R_1}V_1-\frac{(2k+1)}{(k+1)R_2}V_0,\\ C\frac{dV_0}{dt} & = &\frac{(k+1)}{R_1}V_1-\frac{2}{R_2}V_0.\end{aligned}$$ Equation (\[eq1b\]) can be written as:\ $$\begin{aligned} \left(\begin{array}{c} \frac{dV_{1}}{dt}\\ \frac{dV_{0}}{dt}\end{array}\right)&=&\left(\begin{array}{cc} k/CR_{1} & -(2k+1)/(k+1)CR_{2}\\ (k+1)/CR_{1} & -2/CR_{2}\end{array}\right)\nonumber\\ &&\times\left(\begin{array}{c} V_{1}\\ V_{0}\end{array}\right)\nonumber\\ &=&\left(\begin{array}{cc} \alpha_{11} & \alpha_{12}\\ \alpha_{21} & \alpha_{22}\end{array}\right)\left(\begin{array}{c} V_{1}\\ V_{0}\end{array}\right)\label{eqn1c}\end{aligned}$$ It can be seen from (\[eqn1c\]) that the BPF can be made to oscillate sinusoidally if one can ensure the following condition: $\alpha_{11}+\alpha_{22}=0$. This in turn gives the condition of oscillation of the sinusoidal oscillator as: $k=2R_1/R_2$; subsequently, the frequency of oscillation is given by $\omega_0=\sqrt{\alpha_{11}\alpha_{22}-\alpha_{12}\alpha_{21}}$.
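The two conditions derived above are easy to check numerically: with $k=2R_1/R_2$ the trace of the state matrix vanishes and the eigenvalues sit on the imaginary axis at $\pm i/(C\sqrt{R_1R_2})$. The component values in the sketch below are illustrative, not the experimental ones.

```python
import cmath
import math

def bpf_matrix(R1, R2, C, k):
    """State matrix of the band-pass filter dynamics (the 2x2 matrix above)."""
    return [[k / (C * R1), -(2 * k + 1) / ((k + 1) * C * R2)],
            [(k + 1) / (C * R1), -2.0 / (C * R2)]]

def eigenvalues_2x2(A):
    """Eigenvalues from the trace and determinant of a 2x2 matrix."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    root = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + root) / 2.0, (tr - root) / 2.0

# Illustrative component values; the oscillation condition is k = 2 R1 / R2.
R1, R2, C = 180.0, 10.0e3, 100.0e-9
k = 2.0 * R1 / R2
lam = eigenvalues_2x2(bpf_matrix(R1, R2, C, k))
omega0 = 1.0 / (C * math.sqrt(R1 * R2))  # predicted oscillation frequency [rad/s]
```

With these values the eigenvalues are purely imaginary to machine precision, and their magnitude matches $1/(C\sqrt{R_1R_2})$.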
Thus the frequency of oscillation is obtained as $\omega_0=1/(C\sqrt{R_1R_2})$. The Chua’s diode is characterized by the function $f(V_2)$, where $V_2$ is the voltage drop across $C_2$. $f(V_2)$ is a three-segment piecewise-linear function with slopes $G_a$ and $G_b$, and break-point voltage $B_p$. The variation of $f(V_2)$ with $V_2$ is shown in Fig.\[f1\](b) (showing only the three-segment region). $f(V_2)$ is defined by: $$\label{eq3} f(V_2)=G_bV_2+\frac{1}{2}(G_a-G_b)(|V_2+B_p|-|V_2-B_p|).$$ In the present work, we have chosen the VOA implementation of the Chua’s diode [@ken3st] as shown in Fig.\[f1\](c). ![(a) Active bandpass filter based Chua’s circuit. (b) Characteristics of the Chua’s diode. (c) VOA implementation of Chua’s diode [@ken3st], [@kenrobust].[]{data-label="f1"}](fig1){width=".45\textwidth"} The dynamics of the proposed circuit can be described by three coupled first-order autonomous nonlinear differential equations in terms of $V_2$, $V_1$ and $V_0$ (Fig. \[f1\]): \[eq2\] $$\begin{aligned} C_2\frac{dV_2}{dt} & = &\frac{V_1-V_2}{R}-f(V_2),\\ C\frac{dV_1}{dt} & = &-\frac{k}{R}V_2+\frac{k}{R^{'}}V_1-\frac{(2k+1)}{(k+1)R_2}V_0,\\ C\frac{dV_0}{dt} & = &-\frac{(k+1)}{R}V_2+\frac{(k+1)}{R^{'}}V_1-\frac{2}{R_2}V_0.\end{aligned}$$ where $R^{'}=R_1R/(R_1+R)$.\ Eq. (\[eq2\]) has been written in the following dimensionless form using the dimensionless quantities $\tau=t/RC$, $\dot{u}=\frac{du}{d\tau}$ ($u\equiv x,y,z$), $x=V_2/B_p$, $y=V_1/B_p$, $z=V_0/B_p$, $r_1=R/R_1$, $r_2=R/R_2$, $\alpha=C/C_2$: \[eq4\] $$\begin{aligned} \dot{x}& = &\alpha[y-h(x)],\\ \dot{y}& = &-kx+k(r_1+1)y-\frac{(2k+1)r_2}{(k+1)}z,\\ \dot{z}& = &-(k+1)x+(k+1)(r_1+1)y-2r_2z.\end{aligned}$$ Here, $h(x)$ is defined as $$\begin{aligned} \label{eq5} h(x)\equiv & x+f(x)\nonumber\\ = & m_1x+0.5(m_0-m_1)(|x+1|-|x-1|),\end{aligned}$$ where we have defined the following parameters: $m_0=(RG_a+1)$, and $m_1=(RG_b+1)$.
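For readers who want to reproduce the dynamics, a minimal sketch of the dimensionless system and its piecewise-linear nonlinearity is given below, with a plain RK4 stepper. The parameter values follow the spiral-chaos regime reported later in the paper ($r_1=16.35$, $r_2=0.2$, $k=0.04$, $\alpha=20$, $m_0=-1/7$, $m_1=2/7$), while the initial condition is an arbitrary choice.

```python
def h(x, m0=-1.0 / 7.0, m1=2.0 / 7.0):
    """Three-segment piecewise-linear nonlinearity h(x) of the model."""
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

def rhs(state, r1=16.35, r2=0.2, k=0.04, alpha=20.0):
    """Right-hand side of the dimensionless system (x, y, z)."""
    x, y, z = state
    dx = alpha * (y - h(x))
    dy = -k * x + k * (r1 + 1.0) * y - (2.0 * k + 1.0) * r2 / (k + 1.0) * z
    dz = -(k + 1.0) * x + (k + 1.0) * (r1 + 1.0) * y - 2.0 * r2 * z
    return dx, dy, dz

def rk4_step(f, s, dt):
    """One fourth-order Runge-Kutta step for a tuple-valued state."""
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# Short run from a point near the origin (transient included):
s = (0.1, 0.0, 0.0)
for _ in range(20000):
    s = rk4_step(rhs, s, 0.001)
```

Feeding the resulting $(x, y)$ samples to any plotting tool reproduces the phase-plane portraits; sweeping `r1` reproduces the bifurcation sequence.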
Numerical studies: Phase plane plots, Bifurcation diagrams and Lyapunov exponents {#sec:3}
=================================================================================

Numerical integration has been carried out on (\[eq4\]) using the fourth-order Runge–Kutta algorithm with step size $h=0.001$. We have chosen the resistor $R_1$ as the control parameter (remembering that $R_1$ controls the gain and center frequency of the BPF); thus, in the numerical simulations, $r_1$ acts as the control parameter, keeping the other parameters fixed at: $r_2=0.2$, $k=0.04$, $\alpha=20$, $m_0=-1/7$, and $m_1=2/7$. Note that these values of $m_0$ and $m_1$ are the ones generally used in a classical Chua’s circuit. It has been observed that, with increasing $r_1$, for $r_1<15.08$ the circuit shows a fixed point. For $r_1 \ge 15.08$, the fixed point loses its stability through a Hopf bifurcation, and a stable limit cycle emerges. At $r_1=15.54$, the limit cycle of period-1 becomes unstable and a period-2 (P2) cycle appears. Further period doublings occur at $r_1=15.72$ (P2 to P4), and $r_1=15.76$ (P4 to P8). Through this period doubling cascade, the circuit enters into the regime of spiral chaos at $r_1=15.78$. With further increase of $r_1$, at $r_1=17.02$, the circuit shows the emergence of a double scroll attractor. Finally, the system equations show diverging behavior beyond $r_1=24.81$, indicating a boundary crisis. The phase plane representation (in the $y-x$ plane) for different $r_1$ is shown in Fig.\[fphyx\], which shows the following characteristics: period-1 ($r_1=15.5$), period-2 ($r_1=15.6$), period-4 ($r_1=15.75$), period-8 ($r_1=15.77$), spiral chaos ($r_1=16.35$), double scroll ($r_1=17.59$). The phase plane plots, for the same parameters, in the $z-y$ plane are shown in Fig.\[fphzyx\](a)-(e); also, the same double scroll chaotic attractor in the $z-x$ plane is shown in Fig.\[fphzyx\](f). These observations can be summarized through a bifurcation diagram with $r_1$ as the control parameter.
Bifurcation diagrams are obtained by using a Poincaré section [@ahnay] at $y=0.1$ with $\frac{dy}{dt}<0$, excluding the transients. Fig.\[fbif\] (upper and middle trace) shows bifurcation diagrams in $x$ and $z$, respectively, for different $r_1$. Clearly, it shows a period doubling route to chaos. Further, it shows the presence of a period-3 window at $r_1=15.93$. Among the other periodic windows, the most prominent is a period-5 window near $r_1=18$ interspersed in the double scroll chaos. ![Phase plane representation (in $y-x$ plane) for different $r_1$: (a)$r_1=15.5$ (period-1), (b)$r_1=15.6$ (period-2), (c)$r_1=15.75$ (period-4), (d) $r_1=15.77$ (period-8), (e)$r_1=16.35$(spiral chaos), (f)$r_1=17.59$ (double scroll). ($r_2=0.2$, $k=0.04$, $\alpha=20$, $m_0=-1/7$, and $m_1=2/7$)[]{data-label="fphyx"}](fig2){width=".48\textwidth"} ![Phase plane plots in $z-y$ plane (a)-(e), and $z-x$ plane (f) for different values of $r_1$. Parameter values are same as Fig.\[fphyx\].[]{data-label="fphzyx"}](fig3){width=".47\textwidth"} ![Bifurcation diagram of $x$ (upper trace) and $z$ (middle trace) with $r_1$ as a control parameter. Lower trace shows the Largest Lyapunov exponent ($\lambda_{max}$) with $r_1$. ($r_2=0.2$, $k=0.04$, $\alpha=20$, $m_0=-1/7$, and $m_1=2/7$).[]{data-label="fbif"}](fig4a "fig:"){width=".45\textwidth"} ![Bifurcation diagram of $x$ (upper trace) and $z$ (middle trace) with $r_1$ as a control parameter. Lower trace shows the Largest Lyapunov exponent ($\lambda_{max}$) with $r_1$. ($r_2=0.2$, $k=0.04$, $\alpha=20$, $m_0=-1/7$, and $m_1=2/7$).[]{data-label="fbif"}](fig4b "fig:"){width=".45\textwidth"} ![Bifurcation diagram of $x$ (upper trace) and $z$ (middle trace) with $r_1$ as a control parameter. Lower trace shows the Largest Lyapunov exponent ($\lambda_{max}$) with $r_1$.
($r_2=0.2$, $k=0.04$, $\alpha=20$, $m_0=-1/7$, and $m_1=2/7$).[]{data-label="fbif"}](fig4c "fig:"){width=".45\textwidth"} For a quantitative measure of the chaos generated by the circuit model, Lyapunov exponents have been computed using the algorithm proposed in [@wolf]. Fig. \[fbif\] (lower trace) shows the largest Lyapunov exponent as a function of $r_1$. It agrees with the bifurcation diagram. The presence of a positive Lyapunov exponent and of strange attractors (such as the double scroll attractor) confirms the occurrence of chaotic behavior in the circuit. Experimental results {#sec:4} ==================== The proposed circuit has been implemented in hardware on a breadboard. Since the circuit needs three VOAs, we have chosen the IC TL074 (quad JFET op-amp) for the present purpose, with a $\pm12$ volt power supply. The BPF has been constructed with $C=100$ nF, $R_a=1$ k[[$\mathrm{\Omega}$]{}]{}, $R_b=56$ [[$\mathrm{\Omega}$]{}]{}, and $R_2=10$ k[[$\mathrm{\Omega}$]{}]{}. Chua’s diode is constructed with the following parameters [@ken3st], [@kenrobust]: $R_3=2.2$ k[$\mathrm{\Omega}$]{}, $R_4=220$ [$\mathrm{\Omega}$]{}, $R_5=220$ [$\mathrm{\Omega}$]{}, $R_6=3.3$ k[$\mathrm{\Omega}$]{}, $R_7=22$ k[$\mathrm{\Omega}$]{}, and $R_8=22$ k[$\mathrm{\Omega}$]{}. The grounded capacitor value is taken as $C_2=5$ nF. The coupling resistor $R$ is fixed at $1.3$ k[[$\mathrm{\Omega}$]{}]{} (approx.) using a $2$ k[[$\mathrm{\Omega}$]{}]{} POT. To explore the dynamics of the circuit we have varied the resistor $R_1$ through a $1$ k[[$\mathrm{\Omega}$]{}]{} POT. All the potentiometers are thousand-turn precision POTs. Capacitors and resistors have 5% tolerances. For $R_1>200$ [[$\mathrm{\Omega}$]{}]{} the circuit shows a fixed dc value (the equilibrium point of the circuit). For $R_1\le 200$ [[$\mathrm{\Omega}$]{}]{} a stable limit cycle has been observed with frequency $1955$ Hz. At $R_1=185$ [[$\mathrm{\Omega}$]{}]{} (approx.) the period-1 limit cycle loses its stability and a period-2 oscillation emerges.
A period-4 behavior has been observed at $R_1=177$ [[$\mathrm{\Omega}$]{}]{} (approx.), and period-8 is found for $R_1=175$ [[$\mathrm{\Omega}$]{}]{} (approx.). Further decrease of $R_1$ results in spiral chaos in the circuit (at $R_1=170$ [$\mathrm{\Omega}$]{} (approx.)). The double scroll attractor is observed at $R_1=122$ [[$\mathrm{\Omega}$]{}]{} (approx.). The circuit shows a large limit cycle for $R_1\le68$ [[$\mathrm{\Omega}$]{}]{}, which indicates the occurrence of a boundary crisis: the active resistor eventually becomes passive, i.e., for a large voltage across its terminals the $i-v$ characteristic of Chua’s diode is no longer a three-segment curve but a five-segment curve. The two additional segments (situated in the two outer regions) show positive resistance behavior, i.e., the instantaneous power consumption becomes positive there [@ken3st]. All the above mentioned behaviors (except the large limit cycle) are shown in Fig.\[fexpt1\] (in the $V_1$-$V_2$ space) and Fig.\[fexpt2\] (in the $V_0$-$V_1$ and $V_0$-$V_2$ spaces), which depict the experimental phase plane plots recorded on a real-time oscilloscope (Aplab make, two channel, 60 MHz). Apart from the experimental phase plane plots, another very useful tool for exploring the real circuit behavior over a range of circuit parameters is the experimental bifurcation diagram [@bus], [@braz]. To capture the behavior of the proposed circuit over a range of values of $R_1$, the experimental bifurcation diagram of $V_0$ has been plotted taking $R_1$ as the control parameter (Fig.\[fexptbif\]). For seventy different values of $R_1$ in the range from $195$ [[$\mathrm{\Omega}$]{}]{} to $120$ [[$\mathrm{\Omega}$]{}]{}, we have acquired the experimental time series data of $V_0$ using an Agilent make Infinium digital storage oscilloscope (5000 data points per set) and extracted the Poincaré section using the [*local minima*]{} of the time series data. Plotting these points for all the values of $R_1$ gives the experimental bifurcation diagram.
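The local-minima sampling used above to build the experimental bifurcation diagram can be sketched as follows; this is an illustrative reconstruction, not the authors' actual processing script, and the function names are hypothetical.

```python
import numpy as np

# Illustrative reconstruction of the local-minima Poincare sampling
# described in the text; names are hypothetical.
def local_minima(v):
    # Keep samples strictly below their left neighbour and not above
    # their right neighbour (interior points only).
    v = np.asarray(v, dtype=float)
    mask = (v[1:-1] < v[:-2]) & (v[1:-1] <= v[2:])
    return v[1:-1][mask]

def experimental_bifurcation(series_by_r1):
    # series_by_r1: dict mapping each R1 value to its recorded V0 time series.
    # Returns (R1, V0 local minimum) pairs, ready for scatter plotting.
    points = []
    for r1, v in series_by_r1.items():
        for m in local_minima(v):
            points.append((r1, m))
    return points
```

With noisy experimental data, a light smoothing of the series before extracting minima reduces spurious points of the kind that blur the experimental diagram.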
The schematic difference between the numerical bifurcation diagram (Fig.\[fbif\] (middle trace)) and the experimental bifurcation diagram arises from the difference in the sampling schemes employed in finding the Poincaré section. However, they agree with each other qualitatively, e.g., both diagrams show a period doubling route to chaos. The blurring of points in the experimental bifurcation diagram is due to the inherent circuit noise reflected in the experimental time series data. Also, since the data had to be acquired manually, the resolution of the plot is lower than that of the numerical bifurcation diagram. Time series data and the corresponding FFTs of $V_2$ are measured using an Agilent make Infinium digital storage oscilloscope (500 MHz, sampling rate 1 Gs/s) and are shown in Fig.\[fex3\]. Fig.\[fex3\](a) (upper trace: time series, lower trace: FFT) shows the period-1 oscillation with frequency $1955$ Hz; Fig.\[fex3\](b) and (c) show the results for spiral chaos and double scroll chaos, respectively. The power spectra of Fig.\[fex3\](b) and (c) (lower traces) are continuous and broad in nature, indicating the occurrence of chaotic oscillations. It can be seen that the experimental results agree with the numerical simulations of the mathematical model of the circuit. ![The oscilloscope trace of experimentally obtained phase-plane plots in the $V_1$-$V_2$ space (a) Period-1 at $R_1=195$ [[$\mathrm{\Omega}$]{}]{} (b) Period-2 at $R_1=180$ [[$\mathrm{\Omega}$]{}]{} (c) Period-4 at $R_1=176$ [[$\mathrm{\Omega}$]{}]{} (d) Period-8 at $R_1=175$ [[$\mathrm{\Omega}$]{}]{} (e) Spiral chaos at $R_1=150$ [[$\mathrm{\Omega}$]{}]{}. (f) Double scroll attractor at $R_1=120$ [[$\mathrm{\Omega}$]{}]{}. (Other parameters are: BPF: $C=100$ nF, $R_a=1$ k[[$\mathrm{\Omega}$]{}]{}, $R_b=56$ [[$\mathrm{\Omega}$]{}]{}, and $R_2=10$ k[[$\mathrm{\Omega}$]{}]{}.
Chua’s diode: $R_3=2.2$ k[$\mathrm{\Omega}$]{}, $R_4=220$ [$\mathrm{\Omega}$]{}, $R_5=220$ [$\mathrm{\Omega}$]{}, $R_6=3.3$ k[$\mathrm{\Omega}$]{}, $R_7=22$ k[$\mathrm{\Omega}$]{}, and $R_8=22$ k[$\mathrm{\Omega}$]{}. The grounded capacitor $C_2=5$ nF. $R=1.3$ k[$\mathrm{\Omega}$]{}). (a)-(e) $V_1$ ($x$-axis): $0.1$ V/div, $V_2$ ($y$-axis): $1$ V/div. (f) $V_1$ ($x$-axis): $0.3$ V/div, $V_2$ ($y$-axis): $2$ V/div.[]{data-label="fexpt1"}](fig5){width=".47\textwidth"} ![The oscilloscope trace of experimentally obtained phase-plane plots in the (a)-(e) $V_0$-$V_1$ space ((a)-(d) $V_0$ ($x$-axis): $1$ V/div, $V_1$ ($y$-axis): $0.1$ V/div. (e) $V_0$ ($x$-axis): $2$ V/div, $V_1$ ($y$-axis): $0.3$ V/div.), and (f) $V_0$-$V_2$ space ($V_0$ ($x$-axis): $2$ V/div, $V_2$ ($y$-axis): $2$ V/div.); Parameter values are the same as used in Fig.\[fexpt1\].[]{data-label="fexpt2"}](fig6){width=".47\textwidth"} ![Experimental bifurcation diagram of $V_0$ (V) with $R_1$ ([[$\mathrm{\Omega}$]{}]{}) as the control parameter. Other parameters are the same as in Fig.\[fexpt1\].[]{data-label="fexptbif"}](fig7){width=".47\textwidth"} ![Experimental time series and FFT of $V_2$ for (a) Period-1 oscillation (frequency $1955$ Hz) (b) spiral chaotic oscillation and (c) double scroll chaos (circuit parameters are the same as described in Fig.\[fexpt1\]). (Frequency span of the FFT: $25$ kHz).[]{data-label="fex3"}](fig8a "fig:"){width=".45\textwidth"} ![Experimental time series and FFT of $V_2$ for (a) Period-1 oscillation (frequency $1955$ Hz) (b) spiral chaotic oscillation and (c) double scroll chaos (circuit parameters are the same as described in Fig.\[fexpt1\]). (Frequency span of the FFT: $25$ kHz).[]{data-label="fex3"}](fig8b "fig:"){width=".45\textwidth"} ![Experimental time series and FFT of $V_2$ for (a) Period-1 oscillation (frequency $1955$ Hz) (b) spiral chaotic oscillation and (c) double scroll chaos (circuit parameters are the same as described in Fig.\[fexpt1\]).
(Frequency span of the FFT: $25$ kHz).[]{data-label="fex3"}](fig8c "fig:"){width=".45\textwidth"} Conclusion {#sec:5} ========== In this paper we have reported a single amplifier biquad based, inductor-free implementation of Chua’s circuit, in which an active band pass filter is suitably cascaded with a Chua’s diode. The proposed circuit has been modeled mathematically by three coupled first-order autonomous nonlinear differential equations. Numerical simulations of the mathematical model of the circuit and real world hardware experiments confirm that the proposed circuit shows the same behaviors as a classical Chua’s circuit (e.g., fixed point behavior, limit cycle formation through a Hopf bifurcation, period doubling, spiral chaotic attractor, double scroll attractor, and boundary crisis). The presence of chaos has been established through the Lyapunov exponents and the experimental power spectrum. As the circuit is inductor-free, it has all the advantages of an inductor-free circuit (e.g., suitability for IC design, robustness, etc.). Further, instead of varying an inductor or a capacitor, we have used a single resistor ($R_1$) as the control parameter to observe all the complex behaviors of the circuit. Since the circuit offers a large number of possible control parameters (viz. $R_2$, $R$ and $\alpha$), one can use these parameters to observe different behaviors of the circuit; however, the basic bifurcation route to chaos remains the same for each of them. The BPF circuit we have used is narrow band (unlike the Wien-bridge based circuit [@morgul], where the frequency selective network is not narrow band), thus one has more control over the center frequency of the circuit. The present circuit can be suitably designed (with high frequency op-amps) to generate chaotic oscillations in a high frequency region, and may also be useful for chaos based communication systems. The author is indebted to Prof. B.C. Sarkar (Dept.
of Physics, The University of Burdwan, India) for the useful suggestions and insightful discussions. Also, the author would like to thank the anonymous reviewers for their useful suggestions. [99]{} Ogorzalek, M.J.: Chaos and Complexity in nonlinear electronic circuits. World Scientific Series on Nonlinear Science, Series A - Vol. 22 (1997) Ramos, J.S.: Introduction to nonlinear dynamics of electronic systems: tutorial. Nonlinear Dyn. 44, 3-14 (2006) Kennedy, M.P., Rovatti, R., Setti, G. (eds.): Chaotic Electronics in Telecommunications. CRC Press, Florida (2000) Banerjee, T., Sarkar, B.C.: Chaos, intermittency and control of bifurcation in a ZC2-DPLL. Int. J. Electron. 96(7), 717-731 (2009) Matsumoto, T., Chua, L.O., Komuro, M.: The double scroll. IEEE Trans. Circuits Syst. 32, 797-818 (1985) Kennedy, M.P.: Three steps to chaos-Part II: A Chua’s circuit primer. IEEE Trans. Circuits Syst. I 40(10), 640-656 (1993) Morgul, O.: Inductorless realization of Chua’s oscillator. Electron. Lett. 31, 1424-1430 (1995) Rocha, R., Medrano-T., R.O.: An inductor-free realization of the Chua’s circuit based on electronic analogy. Nonlinear Dyn. 56(4), 389-400 (2009) Elwakil, A.S., Kennedy, M.P.: Chua’s circuit decomposition: a systematic design approach for chaotic oscillators. Journal of the Franklin Institute 337, 251-265 (2000) Elwakil, A.S., Kennedy, M.P.: Generic RC realizations of Chua’s circuit. Int. J. Bifurcation Chaos 10, 1981-1985 (2000) Kennedy, M.P.: Robust op-amp realization of Chua’s circuit. Frequenz 46, 66-80 (1992) Elwakil, A.S., Kennedy, M.P.: Improved implementation of Chua’s chaotic oscillator using current feedback op-amp. IEEE Trans. Circuits Syst. I 47, 289-306 (2000) Cruz, J.M., Chua, L.O.: A CMOS IC nonlinear resistor for Chua’s circuit. IEEE Trans. Circuits Syst. I 39, 985-995 (1992) Barboza, R., Chua, L.O.: The four-element Chua’s circuit. Int. J.
Bifurcation and Chaos 18, 943-955 (2008) Fortuna, L., Frasca, M., Xibilia, M.G.: Chua’s circuit implementations: yesterday, today, and tomorrow. World Scientific Series on Nonlinear Science, Series A - Vol. 65 (2009) Kilic, R.: A comparative study on realization of Chua’s circuit: hybrid realizations of Chua’s circuit combining the circuit topologies proposed for Chua’s diode and inductor elements. Int. J. Bifurcation and Chaos 13, 1475-1493 (2003) Rocha, R., Andrucioli, G.L.D., Medrano-T., R.O.: Experimental characterization of nonlinear systems: a real-time evaluation of the analogous Chua’s circuit behavior. Nonlinear Dyn. 62(1-2), 237-251 (2010) Deliyannis, T.: High-Q factor circuit with reduced sensitivity. Electron. Lett. 4(26), 577-678 (1968) Friend, J.J.: A single operational-amplifier biquadratic filter section. In: IEEE Int. Symp. Circuit Theory, pp. 189-190 (1970) Banerjee, T., Karmakar, B., Sarkar, B.C.: Single amplifier biquad based autonomous electronic oscillators for chaos generation. Nonlinear Dyn. 62, 859-866 (2010) Nayfeh, A.H., Balachandran, B.: Applied Nonlinear Dynamics: Analytical, Computational, and Experimental Methods. Wiley, New York (1995) Wolf, A., Swift, J.B., Swinney, H.L., Vastano, J.A.: Determining Lyapunov Exponents from a Time Series. Physica D 16, 285-317 (1985) Buscarino, A., Fortuna, L., Frasca, M., Sciuto, G.: Coupled Inductors-Based Chaotic Colpitts Oscillator. Int. J. Bifurcation and Chaos 2, 569-574 (2011) Viana, E.R., Rubinger, R.M., Albuquerque, H.A., de Oliveira, A.G., Ribeiro, G.M.: High resolution parameter space of an experimental chaotic circuit. Chaos 20, 023110 (2010)
--- abstract: 'The existence of a solution to the two dimensional incompressible Euler equations in singular domains was established in \[Gérard-Varet and Lacave, The 2D Euler equation on singular domains, submitted\]. The present work is about the uniqueness of such a solution when the domain is the exterior or the interior of a simply connected set with corners, although the velocity blows up near these corners. In the exterior of a curve with two end-points, it is shown in \[Lacave, Two Dimensional Incompressible Ideal Flow Around a Thin Obstacle Tending to a Curve, Ann. IHP, Anl **26** (2009), 1121-1148\] that this solution has some interesting properties, such as being a special vortex sheet. Therefore, we prove the uniqueness, whereas the problem of general vortex sheets is open.' address: | Université Paris-Diderot (Paris 7)\ Institut de Mathématiques de Jussieu\ UMR 7586 - CNRS\ 175 rue du Chevaleret\ 75013 Paris\ France author: - 'C. Lacave' title: Uniqueness for two dimensional incompressible ideal flow on singular domains --- Introduction ============ The motion of a two dimensional flow can be described by the velocity $u(t,x) = (u_1,u_2)$ and the pressure $p$.
Concerning incompressible ideal flow in an open set ${\Omega}$, the pair $(u,p)$ verifies the Euler equations: $$\label{Euler} \left\{ \begin{aligned} {\partial}_t u + u \cdot {\nabla}u + {\nabla}p & = 0, \quad t > 0, x \in \Omega \\ {{\rm div}\,}u & = 0, \quad t > 0, x \in \Omega \end{aligned} \right.$$ endowed with an initial condition and an impermeability condition at the boundary ${\partial}\Omega$: $$\label{conditions} u\vert_{t=0} = u_0, \quad u \cdot \hat n\vert_{{\partial}\Omega} = 0.$$ The vorticity ${\omega}$, defined by $${\omega}:= {{\rm curl}\,}u = {\partial}_1 u_2 - {\partial}_2 u_1,$$ plays a crucial role in the study of the ideal flow, thanks to the transport equation governing it: $$\label{transport} {\partial}_t {\omega}+ u\cdot {\nabla}{\omega}= 0 .$$ When ${\Omega}$ and $u_0$ are smooth, the well-posedness of system - has of course been the matter of many works. Starting from the paper of Wolibner in bounded domains [@Wo], McGrath treated the case of the full plane [@MG], and finally Kikuchi studied exterior domains [@kiku]. In the case where the vorticity is only assumed to be bounded, existence and uniqueness of a weak solution have been established by Yudovich in [@yudo]. We note that the well-posedness result of Yudovich applies to smooth bounded domains, and to unbounded ones under further decay assumptions. We stress that all the above studies require ${\partial}\Omega$ to be at least $C^{1,1}$. Roughly, the reason is the following: due to the non-local character of the Euler equations, these works rely on global in space estimates of $u$ in terms of $\omega$. These estimates [*up to the boundary*]{} involve Biot and Savart type kernels, corresponding to operators such as ${\nabla}\Delta^{-1}$. Unfortunately, such operators are known to behave badly in general non-smooth domains. This explains why well-posedness results are restricted to regular domains. However, the case of a singular obstacle is physically relevant.
For example, the study of the perturbation created by a plane wing remains a key issue in determining the safety time between two landings at large airports. Without solving the question of uniqueness, Taylor established in [@taylor] the existence of a global weak solution of - in a bounded, possibly nonsmooth, convex domain. He used the fact that the convexity of $\Omega$ implies that the solution $v$ of the Dirichlet problem $$\Delta v = f \: \mbox{ in } \: \Omega, \quad v\vert_{{\partial}\Omega} = 0$$ belongs to $H^2(\Omega)$ when the source term $f$ belongs to $L^2(\Omega)$, irrespective of the domain regularity. Nevertheless, this interesting result still leaves aside many situations of practical interest, notably flows around irregular obstacles. Recently, the article [@lac_euler] gave such a result in the exterior of a $C^2$ Jordan arc, where it is noted that the velocity blows up near the end-points of the arc. In particular, it shows that the previous property of the Dirichlet problem is false in domains with some bad corners. The question of the existence of global weak solutions is now solved for a large class of singular domains in [@GV_lac]. The authors therein considered two kinds of domains: any open bounded domain from which one removes a fixed (possibly zero) number of closed sets with positive capacity, and any exterior domain of one connected closed set with positive capacity. [*Our goal here is to prove that such a solution is unique if the domain is bounded, simply connected with some corners, or if it is the complementary of a closed simply connected bounded set with some corners. We prove the uniqueness for an initial vorticity which is bounded, compactly supported in ${\Omega}$ and has a definite sign*]{}. More precisely, we consider two kinds of domains. On one hand, we denote by $\Omega$ a bounded, simply connected open set, such that ${\partial}{\Omega}$ has a finite number of corners $z_i$ with angles ${\alpha}_i$ (i.e.
locally, ${\Omega}$ coincides with the sector $\{z_i+(r\cos {\theta}, r\sin {\theta}); r>0, {\theta}_i<{\theta}<{\theta}_i+{\alpha}_i\}$). On the other hand, we denote by ${\Omega}:={{\mathbb R}}^2\setminus {{\mathcal C}}$, where ${{\mathcal C}}$ is a bounded, simply connected closed set, such that ${\partial}{\Omega}$ has a finite number of corners. To define a global weak solution to the Euler equations, let us point out that the space $L^2(\Omega)$ is not suitable for weak solutions in unbounded domains. Working with square integrable velocities in exterior domains is too restrictive (indeed, $u$ behaves in general like $1/|x|$ at infinity), so we consider initial data satisfying $$\label{typeinitialdata} u_0 \in L^2_{{\operatorname{{loc}}}}(\overline{\Omega}), \quad u_0 \rightarrow 0 \: \mbox{ as } \: |x| \rightarrow +\infty, \quad {{\rm curl}\,}u_0 \in L^\infty_c (\Omega), \quad {{\rm div}\,}u_0 = 0, \quad u_0 \cdot \hat n\vert_{{\partial}\Omega} = 0.$$ Note that the divergence free condition and this last impermeability condition have to be understood in the weak sense: for any $\varphi \in C^{1}_c(\overline{{\Omega}})$, $$\label{imperm} \int_\Omega u_0 \cdot {\nabla}\varphi = - \int_\Omega {{\rm div}\,}u_0 \, \varphi = 0.$$ Let us stress that this set of initial data is large: we will show later that for any function ${\omega}_0 \in L^\infty_c(\Omega)$, there exists $u_0$ verifying with ${{\rm curl}\,}u_0= {\omega}_0$.
Similarly, the weak form of the divergence free and tangency conditions on the Euler solution $u$ will read: $$\label{imperm2} \forall \varphi \in \mathcal{D}\left([0,+\infty); C^{1}_c({{\mathbb R}}^2)\right), \quad \int_{{{\mathbb R}}^+} \int_\Omega u \cdot {\nabla}\varphi = 0.$$ Finally, the weak form of the momentum equation on $u$ will read: $$\label{Eulerweak} \forall \, \varphi \in \mathcal{D}\left([0, +\infty[ \times \Omega\right) \text{ with } {{\rm div}\,}\varphi = 0, \quad \int_0^{\infty} \int_\Omega \left( u \cdot {\partial}_t \varphi + (u \otimes u) : {\nabla}\varphi \right) = -\int_\Omega u_0 \cdot \varphi(0, \cdot) .$$ For ${\Omega}$ an open bounded simply connected domain, or ${\Omega}$ the complementary of a compact simply connected domain $\mathcal{C}$, we get the existence of a weak solution from [@GV_lac]: \[theorem1\] Assume that $u_0$ verifies . Then there exists $$u \in L^\infty_{{\operatorname{{loc}}}}({{\mathbb R}}^+; L^2_{{\operatorname{{loc}}}} (\overline{\Omega})), \: {{\rm curl}\,}u \in L^\infty({{\mathbb R}}^+;L^1 \cap L^\infty(\Omega)),$$ which is a global weak solution of - in the sense of and . In a few words, this existence result follows from a compactness argument, performed on a sequence of solutions $u_n$ of the Euler equations on a sequence of approximating domains $\Omega_n$. A key ingredient of the proof is the so-called $\Gamma$-convergence of $\Omega_n$ to $\Omega$ (see [@GV_lac] for the details). The main result of this article concerns the uniqueness of global weak solutions, when the initial vorticity has a definite sign. \[main 1\] Let $\Omega$ be a bounded, simply connected open set, such that ${\partial}{\Omega}$ has a finite number of corners with angles greater than $\pi/2$, and let $u_0$ verify .
If ${{\rm curl}\,}u_0$ is non-positive (respectively non-negative), then there exists a unique global weak solution of the Euler equations on ${\Omega}$ verifying $$u \in L^\infty_{{\operatorname{{loc}}}}({{\mathbb R}}^+; L^2_{{\operatorname{{loc}}}} (\Omega)), \: {{\rm curl}\,}u \in L^\infty({{\mathbb R}}^+;L^1 \cap L^\infty(\Omega)).$$ In exterior domains, the vorticity is not sufficient to uniquely determine the velocity. We need the circulation around ${{\mathcal C}}$. As we will see in Subsection \[sect : exist\], for $u_0$ verifying , we can define the initial circulation: $${\gamma}_0:= \oint_{{\partial}{{\mathcal C}}} u_0\cdot \hat \tau\, ds.$$ Conversely, let us mention that we can fix the vorticity and the circulation independently: we will show that for any function ${\omega}_0 \in L^\infty_c(\Omega)$ and any real number ${\gamma}\in {{\mathbb R}}$, there exists a unique $u_0$ verifying with ${{\rm curl}\,}u_0= {\omega}_0$ and with circulation around $\mathcal{C}$ equal to ${\gamma}$. Assuming a sign condition on ${\gamma}_0$, we will prove a uniqueness theorem in exterior domains. \[main 2\] Let ${\Omega}:={{\mathbb R}}^2\setminus {{\mathcal C}}$, where ${{\mathcal C}}$ is a compact, simply connected set, such that ${\partial}{\Omega}$ has a finite number of corners with angles greater than $\pi/2$. Let $u_0$ verify .
If ${{\rm curl}\,}u_0$ is non-positive and ${\gamma}_0\geq -\int {{\rm curl}\,}u_0$ (respectively ${{\rm curl}\,}u_0$ non-negative and ${\gamma}_0 \leq -\int {{\rm curl}\,}u_0$), then there exists a unique global weak solution of the Euler equations on ${\Omega}$, verifying $$u \in L^\infty_{{\operatorname{{loc}}}}({{\mathbb R}}^+; L^2_{{\operatorname{{loc}}}} (\Omega)), \: {{\rm curl}\,}u \in L^\infty({{\mathbb R}}^+;L^1 \cap L^\infty(\Omega)).$$ In particular, we will also prove that the velocity blows up near the obtuse corners: if ${\partial}{\Omega}$ admits at $z_0$ a corner of angle ${\alpha}$, then the velocity behaves near $z_0$ like $\dfrac{1}{|x-z_0|^{1-\frac{\pi}{{\alpha}}}}$. We recover that in the case where $\mathcal{C}$ is a Jordan arc (see [@lac_euler]) the velocity blows up like the inverse of the square root of the distance near the end-points (${\alpha}=2\pi$). Therefore, we manage to adapt the Yudovich proof to a velocity (and a velocity gradient) which is not bounded up to the boundary. The key here is to use the sign condition in Theorems \[main 1\] and \[main 2\] in order to prove that the vorticity never meets the boundary. For the sake of clarity, we assume that ${\partial}{\Omega}$ is locally a corner, but we can replace a corner by a singular point, where the jump of the tangent angle is equal to ${\alpha}$ (see Remark \[DT loc\]). The remainder of this work is organized in six sections. We introduce in Section \[sect : 2\] the biholomorphism ${{\mathcal T}}$ and the Biot-Savart law (the law giving the velocity in terms of the vorticity) in the interior or the exterior of one simply connected domain. We will recall the existence of weak solutions in this section, and derive some formulations (on the vorticity and on extensions to ${{\mathbb R}}^2$).
We will take advantage of this section to show that the weak solution is a renormalized solution in the sense of DiPerna-Lions [@dip-li], which will allow us to prove that the $L^p$ norm of the vorticity for $p\in [1,\infty]$, the total mass $\int_{{\Omega}} {\omega}(t,\cdot)$ and the circulation of the velocity around ${{\mathcal C}}$ are conserved quantities. Let us mention that the explicit form of the Biot-Savart law is one of the keys of this work, and it explains why ${\Omega}$ is assumed to be the interior or the exterior of a simply connected domain. This law will read $$u(t,x) = D{{\mathcal T}}(x)^T R[{\omega}]$$ where $R[{\omega}]$ is an integral operator. Using classical elliptic theory, we will obtain the exact behavior of the biholomorphism ${{\mathcal T}}$ near the corners, and then the behavior of the velocity. We note that the blow-up is stronger if the angle ${\alpha}$ is bigger. Unfortunately, the subsequent study sometimes requires good estimates on the integral operator $R[{\omega}]$, which are possible only if we assume that all the angles ${\alpha}_i$ are greater than $\pi/2$ (namely in Proposition \[biot est\] to prove that $R[{\omega}]$ is bounded, in Lemma \[ortho\] to establish the equation verified by the extended functions, in Lemma \[W11\] to use the renormalization theory). Section \[sect : 3\] is the central part of this paper: we will prove that the support of ${\omega}$ never meets the boundary if we assume that the characteristics corresponding to exist and are differentiable. The idea is to introduce a good Liapounov function, which blows up if the trajectories meet the boundary. Next, we will establish some estimates implying that this Liapounov energy is bounded, which will give the result. Although we cannot say that the characteristics are regular for weak solutions, this computation gives us an excellent intuition.
In light of this proof, we rigorously prove in Section \[sect : 4\], thanks to the renormalization theory, that we have the same property, even if we do not consider the characteristics. Finally, we prove Theorems \[main 1\]-\[main 2\] in Section \[sect : 5\]. We will introduce $v:= K_{{{\mathbb R}}^2} * {\omega}$, where $K_{{{\mathbb R}}^2}$ is the Biot-Savart kernel in the full plane. Since ${\omega}$ does not meet the boundary, we have ${{\rm div}\,}v = {{\rm curl}\,}v \equiv 0$ in a neighborhood of the boundary, i.e. $v$ is harmonic therein. This provides in particular a control of its $L^\infty$ norm (as well as the $L^\infty$ norm of its gradient) by its $L^2$ norm. Although the total velocity is not bounded near the boundary, only integrable, this argument allows us to derive a Gronwall-type estimate, as Yudovich did. Therefore, the fact that the support of the vorticity stays far from the boundary will imply the uniqueness result. This idea was already used in [@lac_miot], in the case of one Dirac mass in the vorticity. In that article, the Euler equations in ${{\mathbb R}}^2$ are considered when the initial vorticity is composed of a regular part $L^\infty_c$ and a Dirac mass. The equation is called the mixed Euler/point vortex system, derived in [@mar_pul]. When trajectories exist, it is proved that they do not meet the point vortex in [@mar_pul] if the point vortex moves under the influence of the regular part, and in [@mar] if the Dirac is fixed. The method is also based on Liapounov functions. An important issue in [@lac_miot] is to generalize this result when trajectories are not regular. The Lagrangian formulation gives us a helpful intuition, which is the reason why we first choose to present the proof of uniqueness assuming the differentiability of trajectories (Section \[sect : 3\]).
Moreover, proving in Section \[sect : 4\] that the vorticity never meets the boundary, we state that the “weak” Lagrangian flow coming from the renormalization theory evolves in the area far from the corners. As the velocity is regular enough in this region, we can conclude that the flow is actually classical and regular. Section \[sect : technical\] is devoted to the proofs of some technical lemmas. We finish this article with Section \[sect : 6\], containing some final comments. In the exterior of the plane wing, we will try to give a mathematical justification of the Kutta-Joukowski condition. In the exterior of the Jordan arc (see [@lac_euler]), we will draw a parallel with the vortex sheet problem. We will also give some explanations about the sign assumptions in the main theorems. We warn the reader that we generally write the proofs in the case of exterior domains. In this kind of domain, we have to take care of integrability at infinity, to control the size of the support of the vorticity, and we also have to consider harmonic vector fields and circulations of velocities around ${{\mathcal C}}$. The proofs in the case of bounded domains are strictly easier, without additional arguments. We will sometimes make some remarks about that. Biot-Savart law and existence {#sect : 2} ============================= As in [@ift_lop_euler; @lac_euler; @lac_small], the crucial assumption is that we work in dimension two outside (or inside) one simply connected domain. Identifying ${{\mathbb R}}^2$ with the complex plane ${{\mathbb C}}$, there exists a biholomorphism ${{\mathcal T}}$ mapping ${\Omega}$ to the exterior (resp. to the interior) of the unit disk. Thanks to this biholomorphism, we will obtain an explicit formula for the Biot-Savart law: the law giving the velocity in terms of the vorticity. This explicit formula will be used to construct the Liapounov function. We give in the following subsection the properties of this Riemann mapping.
Conformal mapping -----------------   Let ${\Omega}$ be as in Theorem \[main 2\] (resp. as in Theorem \[main 1\]); then the Riemann mapping theorem states that there exists a unique biholomorphism ${{\mathcal T}}$ mapping ${\Omega}$ to $\overline{B(0,1)}^c$ (resp. to $B(0,1)$) such that ${{\mathcal T}}(\infty)=\infty$ and ${{\mathcal T}}'(\infty)\in {{\mathbb R}}^+_*$ (resp. ${{\mathcal T}}(z_0)=0$ and ${{\mathcal T}}'(z_0)\in {{\mathbb R}}^+_*$, for some $z_0\in {\Omega}$). We recall that the last two conditions mean $$\mathcal{T}(z) \sim \lambda z \quad \mbox{ as } \: |z| \rightarrow +\infty, \quad \mbox{ for some } \: \lambda> 0.$$ \[grisvard\] Let us assume that ${\partial}{\Omega}$ is a $C^\infty$ Jordan curve, except at a finite number of points $z_1$, $z_2$, ..., $z_n$ where ${\partial}{\Omega}$ admits a corner of angle ${\alpha}_i>\frac{\pi}{2}$ (i.e. ${\Omega}$ coincides locally with the sector $\{z_i+(r\cos {\theta}, r\sin {\theta}); r>0, {\theta}_i<{\theta}<{\theta}_i+{\alpha}_i\}$). Then the biholomorphism ${{\mathcal T}}$ defined above satisfies - ${{\mathcal T}}^{-1}$ and ${{\mathcal T}}$ extend continuously up to the boundary; - $D {{\mathcal T}}^{-1}$ extends continuously up to the boundary, except at the points ${{\mathcal T}}(z_i)$ with ${\alpha}_i<\pi$ where $D {{\mathcal T}}^{-1}$ behaves like $1/|y-{{\mathcal T}}(z_i)|^{1-\dfrac{{\alpha}_i}{\pi}}$; - $D {{\mathcal T}}$ extends continuously up to the boundary, except at the points $z_i$ with ${\alpha}_i>\pi$ where $D {{\mathcal T}}$ behaves like $1/|x-z_i|^{1-\dfrac{\pi}{{\alpha}_i}}$; - $D^2 {{\mathcal T}}$ belongs to $L^p_{{\operatorname{{loc}}}}(\overline{{\Omega}})$ for any $p<4/3$. As ${\partial}{\Omega}$ is $C^{0,{\alpha}}$, the Kellogg-Warschawski theorem (Theorem 3.6 in [@pomm-2]) states directly that ${{\mathcal T}}$ and ${{\mathcal T}}^{-1}$ are continuous up to the boundary.
For the behavior of the derivatives, we use the classical elliptic theory: let $$u(x):= \ln | {{\mathcal T}}(x) |.$$ As ${{\mathcal T}}$ is holomorphic, we have that $${\Delta}u = 0 \text{ in } {\Omega}\text{ and } u=0 \text{ on } {\partial}{\Omega}.$$ To localize near each corner, we introduce a smooth cutoff function $\chi$ supported in a small neighborhood of $z_i$. Therefore, we are exactly in the classical elliptic setting: $$\label{elliptic} {\Delta}(u\chi) = f \in C^\infty \text{ in } O_{i} \text{ and } u\chi=0 \text{ on } {\partial}O_{i},$$ where $O_{i}$ is the sector $\{z_i+(r\cos {\theta}, r\sin {\theta}); r>0, {\theta}_i<{\theta}<{\theta}_i+{\alpha}_i\}$. The standard idea is to compose with $z^{\pi/{\alpha}_i}$ in order to map the sector onto the half-plane, where the solution $g$ of the elliptic problem is smooth. Therefore, we have that $$u\chi = g \circ z^{\pi/{\alpha}_i},$$ which implies that $$\label{approx} {\nabla}u \approx r^{\pi/{\alpha}_i-1} \text{ and } \ {\nabla}^{2} u \approx r^{\pi/{\alpha}_i-2}. $$ More precisely, we use the so-called shift theorem in non-smooth domains (see the preface of [@grisvard]): there exist numbers $c_k$ such that $$u\chi-\sum c_k v_k \in W^{m+2,p}(O_i\cap B(0,R)), \ \forall R>0$$ where the $k$ in the summation ranges over all integers such that $$\pi/{\alpha}_i \leq k\pi/{\alpha}_i <m+2-2/p$$ and with

- $v_k = r^{k\pi/{\alpha}_i} \sin(k\pi {\theta}/{\alpha}_i)$ if $k\pi/{\alpha}_i$ is not an integer;

- $v_k = r^{k\pi/{\alpha}_i} [ \ln r \sin(k\pi {\theta}/{\alpha}_i) + {\theta}\cos(k\pi {\theta}/{\alpha}_i)]$ if $k\pi/{\alpha}_i$ is an integer.

In this theorem, $r$ denotes the distance between $x$ and $z_i$: $r:=|x-z_i|$. We apply it for $m=1$ and $p=2$. As $H^3_{{\operatorname{{loc}}}}({{\mathbb R}}^2)$ embeds in $C^0$, we see again that $u$ is continuous up to the boundary. If $\pi< {\alpha}_i \leq 2\pi$ then $1/2 \leq\pi/{\alpha}_i < 1$, which gives that $\pi/{\alpha}_i$ cannot be an integer.
Then, the shift theorem states that $D(u\chi) -\sum c_k Dv_k$ belongs to $H^2_{{\operatorname{{loc}}}}(\overline{O_i})$, so it belongs to $C^0$. Thanks to the formula for $v_k$, we see that $Du$ is continuous up to the boundary, except near $z_i$ where $Du = {{\mathcal O}}(r^{\pi/{\alpha}_i-1})$. Next, we differentiate once more to obtain that $D^2 (u\chi) -\sum c_k D^2 v_k$ belongs to $H^1_{{\operatorname{{loc}}}}(\overline{O_i})$, so it belongs to $L^p_{{\operatorname{{loc}}}}(\overline{O_i})$ for any $p<\infty$. As $\sum c_k D^2v_k = {{\mathcal O}}(r^{\pi/{\alpha}_i-2})$, with $2-\pi/{\alpha}_i<3/2$, then $D^2 u$ belongs to $L^p_{{\operatorname{{loc}}}}(\overline{O_i})$ for any $p<4/3$. The case ${\alpha}_i = \pi$ is not interesting because we assume that $z_i$ is a singular point. If $\pi/2 < {\alpha}_i <\pi$, then we note that $\pi/{\alpha}_i$ is not an integer and that $k\pi/{\alpha}_i <2$ is obtained only for $k=1$. We apply the above argument to see that $u$ and $Du$ are continuous up to the boundary, and $D^2 u$ belongs to $L^p_{{\operatorname{{loc}}}}(\overline{O_i})$ for any $p<2$. Therefore, the shift theorem establishes rigorously that $u = {{\mathcal O}}(r^{\pi/{\alpha}_i})$ and $Du = {{\mathcal O}}(r^{\pi/{\alpha}_i-1})$ if all the angles are greater than $\pi/2$. We now show that $Du$ and $D{{\mathcal T}}$ have the same behavior. On the one hand, differentiating $u$, we have $${\nabla}u(x)= \frac{{{\mathcal T}}(x)}{|{{\mathcal T}}(x)|^2}D{{\mathcal T}}(x)$$ hence $$\label{ut} | {\nabla}u(x) |_\infty \leq 4 | D{{\mathcal T}}(x) |_\infty$$ where $| A |_\infty = \max |a_{ij}|$. Indeed, by continuity of ${{\mathcal T}}$, we have that $|{{\mathcal T}}(x)|=\sqrt{{{\mathcal T}}_1(x)^2+{{\mathcal T}}_2(x)^2}\geq 1/2$ near the boundary. On the other hand, $$\frac{{{\mathcal T}}(x)}{|{{\mathcal T}}(x)|^2} = {\nabla}u(x) D{{\mathcal T}}(x)^{-1}.$$ By continuity of ${{\mathcal T}}$, there exists a neighborhood of ${\partial}{\Omega}$ such that $|{{\mathcal T}}(x)|\leq 2$.
Then, near the boundary, we have $$\frac12\leq \frac{1}{|{{\mathcal T}}(x)|}\leq 2 \sqrt{2} |{\nabla}u(x)|_\infty |D{{\mathcal T}}(x)^{-1}|_\infty.$$ Moreover, as ${{\mathcal T}}$ is holomorphic, $D{{\mathcal T}}$ is a $2\times 2$ matrix of the form $\begin{pmatrix} a&b\\-b&a \end{pmatrix}$. We deduce from this form that $D{{\mathcal T}}(x)^{-1}=\frac{1}{\det D{{\mathcal T}}(x)} D{{\mathcal T}}(x)^T$. We use that $\det D{{\mathcal T}}(x) = a^2 + b^2 \geq | D{{\mathcal T}}(x) |_\infty^2$ to get $$\label{tu} | D{{\mathcal T}}(x) |_\infty \leq 4\sqrt{2} |{\nabla}u(x)|_\infty.$$ Putting together , and , we can conclude about the behavior of $D{{\mathcal T}}$. Differentiating once more, we obtain the result for $D^2 {{\mathcal T}}$. Finally, as $u = {{\mathcal O}}(r^{\pi/{\alpha}_i})$, we infer that $$| {{\mathcal T}}(x)| = 1 + {{\mathcal O}}(r^{\pi/{\alpha}_i}), \ {{\mathcal T}}(x) = {{\mathcal T}}(z_i) + {{\mathcal O}}(|x-z_i|^{\pi/{\alpha}_i}), \ {{\mathcal T}}^{-1}(y) = z_i + {{\mathcal O}}(|y- {{\mathcal T}}(z_i)|^{{\alpha}_i/\pi}).$$ Next, we use the fact that $D{{\mathcal T}}(x) = {{\mathcal O}}(|x-z_i|^{\pi/{\alpha}_i-1})$ to write $$D{{\mathcal T}}^{-1}(y)= \Bigl(D{{\mathcal T}}({{\mathcal T}}^{-1}(y)) \Bigl)^{-1} = {{\mathcal O}}\Bigl(\frac{1}{(|y- {{\mathcal T}}(z_i)|^{{\alpha}_i/\pi})^{\pi/{\alpha}_i-1}}\Bigl)= {{\mathcal O}}(|y- {{\mathcal T}}(z_i)|^{{\alpha}_i/\pi-1})$$ which ends the proof. We recover the result for the exterior of the curve (see [@lac_euler]): ${\alpha}=2\pi$ gives that $D{{\mathcal T}}$ behaves like $1/\sqrt{|x-z_i|}$. In that paper, we found the behavior of $D{{\mathcal T}}$ thanks to the explicit formula of ${{\mathcal T}}$. The Joukowski function $G(z)=\frac12(z+\frac1z)$ maps the exterior of the unit disk to the exterior of the segment $[(-1,0),(1,0)]$.
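The linear algebra used here for matrices of the form $\begin{pmatrix} a&b\\-b&a \end{pmatrix}$ can be verified directly; a minimal numerical sketch (an illustration only, on random entries $a$, $b$):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    a, b = rng.normal(size=2)
    DT = np.array([[a, b], [-b, a]])              # form of DT for a holomorphic map
    det = np.linalg.det(DT)
    assert np.isclose(det, a**2 + b**2)            # det DT = a^2 + b^2
    assert det >= np.max(np.abs(DT))**2 - 1e-12    # det DT >= |DT|_inf^2
    assert np.allclose(np.linalg.inv(DT), DT.T / det)   # DT^{-1} = DT^T / det
```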
Then, in the case of this segment, ${{\mathcal T}}=G^{-1}$, i.e. ${{\mathcal T}}(z)=z\pm \sqrt{z^2-1}$, and we can compute that $$D{{\mathcal T}}(z) = 1 \pm \frac{z}{\sqrt{z^2-1}}.$$ We also note that $D{{\mathcal T}}$ near a corner (${\alpha}>\pi$) is less singular than around a cusp (in accordance with intuition). \[DT loc\] This kind of theorem will be useful to note that the velocity in the exterior of a square blows up like $1/|x-z_i|^{1/3}$ near the corner. However, the only things that we need in the sequel are:

- there exists $p_0>2$ such that $\det D{{\mathcal T}}^{-1}$ belongs to $L^{p_0}_{{\operatorname{{loc}}}}(\overline{{\Omega}})$: a property that holds true if all the corners $z_i$ have angles ${\alpha}_i$ greater than $\pi/2$ (as in Theorems \[main 1\]-\[main 2\]);

- $D{{\mathcal T}}$ belongs to $L^p_{{\operatorname{{loc}}}}(\overline{{\Omega}})$ for any $p<4$ and $D^2 {{\mathcal T}}$ belongs to $L^p_{{\operatorname{{loc}}}}(\overline{{\Omega}})$ for any $p<4/3$.

Therefore, Theorems \[main 1\]-\[main 2\] can be applied for any simply connected domain (or the exterior of a simply connected set) such that the two previous points hold true. For the sake of clarity, we state the theorems when the boundary is locally a corner at $z_i$, but we can generalize to ${\Omega}$ such that ${\partial}{\Omega}$ is a $C^{1,1}$ Jordan curve except at a finite number of points $z_i$. At these points, we would define $${\alpha}_i := \lim_{s\to 0} \arg ({\Gamma}'(s_i+s), {\Gamma}'(s_i-s)) + \pi,$$ where ${\Gamma}$ is a parametrization of ${\partial}{\Omega}$ and $z_i={\Gamma}(s_i)$. Indeed, up to a smooth change of variables, the Laplace equation in ${\Omega}$ turns into a divergence form elliptic equation in the exterior of a corner, and we would use results related to elliptic equations in polygons, see [@Mazya]. The previous theorem is about the behavior near the obstacle. In the case of an unbounded domain (as in Theorem \[main 2\]), we will need the following proposition about the behavior of ${{\mathcal T}}$ at infinity.
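As an illustration of the ${\alpha}=2\pi$ case, one can check numerically that the derivative of the inverse Joukowski map blows up with the predicted exponent $-1/2$ near the end-point $1$; a minimal sketch using the branch ${{\mathcal T}}(z)=z+\sqrt{z^2-1}$:

```python
import numpy as np

def T(z):
    # inverse Joukowski map: branch sending the exterior of [-1,1] to |w| > 1
    return z + np.sqrt(z**2 - 1 + 0j)

def dT(z):
    # derivative of T
    return 1 + z / np.sqrt(z**2 - 1 + 0j)

r = np.logspace(-8, -5, 40)
z = 1 + r                              # approach the end-point z = 1 on the real axis
# |dT| ~ |z - 1|^{-1/2}, i.e. the alpha = 2*pi case of the theorem
slope = np.polyfit(np.log(r), np.log(np.abs(dT(z))), 1)[0]
assert abs(slope + 0.5) < 1e-2
```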
\[T-inf\] If ${{\mathcal T}}$ is a biholomorphism from ${\Omega}$ to the exterior of the unit disk such that ${{\mathcal T}}(\infty) = \infty$ and ${{\mathcal T}}'(\infty) \in {{\mathbb R}}^*_+$, then there exist $({\beta},\tilde{\beta})\in {{\mathbb R}}^+_*\times {{\mathbb C}}$ and a holomorphic function $h:{\Omega}\to {{\mathbb C}}$ such that $${{\mathcal T}}(z) = {\beta}z+ \tilde {\beta}+ h(z)$$ with $$h(z) = {{\mathcal O}}\Bigl(\frac1{|z|}\Bigl) \text{ and } h'(z)={{\mathcal O}}\Bigl(\frac1{|z|^2}\Bigl), \text{ as } |z|\to \infty.$$ Moreover, ${{\mathcal T}}^{-1}$ admits a similar expansion. We consider $E:= {{\mathcal T}}^{-1}(B(0,2)\setminus B(0,1)) \cup {{\mathcal C}}$, which is an open, bounded, connected, simply connected and smooth subset of the plane. Then, the map $H := {{\mathcal T}}/2$ is a biholomorphism between $E^c$ and $B(0,1)^c$, and we can apply Remark 2.5 of [@lac_euler] to end this proof.

Biot-Savart Law {#sect : biot}
---------------

One of the keys to the study of two-dimensional ideal flow is to work with the vorticity equation, which is a transport equation. For example, in the case of a smooth obstacle, if we have initially ${\omega}_0:={{\rm curl}\,}u_0 \in L^1\cap L^\infty$, then $\| {\omega}(t,\cdot)\|_{L^p}= \| {\omega}_0\|_{L^p}$ for all $t,p$. So, we have some estimates for the vorticity, and the goal is to establish estimates for the velocity. For that, we introduce the Biot-Savart law, which gives the velocity in terms of the vorticity. Another advantage of two dimensions is that we have an explicit formula in the exterior of one obstacle, thanks to complex analysis and the identification of ${{\mathbb R}}^2$ and ${{\mathbb C}}$. Let ${\Omega}$ be the exterior (resp. the interior) of a bounded, closed, connected, simply connected subset of the plane, the boundary of which is a Jordan curve. Let ${{\mathcal T}}$ be a biholomorphism from ${\Omega}$ to $(\overline B(0,1))^c$ (resp.
$B(0,1)$) such that ${{\mathcal T}}(\infty)=\infty$ (resp. ${{\mathcal T}}(z_0)=0$). We denote by $G_{{\Omega}}=G_{{\Omega}} (x,y)$ the Green’s function, whose formula is: $$\label{green} G_{{\Omega}}(x,y)=\frac{1}{2\pi}\ln \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|}$$ writing $x^*=\frac{x}{|x|^2}$. The Green’s function satisfies: $${\Delta}_y G_{{\Omega}}(x,y)={\delta}(y-x) \ \forall x,y\in {\Omega}, \quad G_{{\Omega}}(x,y)=0 \ \forall (x,y)\in {\Omega}\times {\partial}{\Omega}, \quad G_{{\Omega}}(x,y)=G_{{\Omega}}(y,x)\ \forall x,y\in {\Omega}.$$ The kernel of the Biot-Savart law is $K_{{\Omega}}=K_{{\Omega}}(x,y) := {\nabla}_x^\perp G_{{\Omega}}(x,y)$. With $(x_1,x_2)^\perp=\begin{pmatrix} -x_2 \\ x_1\end{pmatrix}$, the explicit formula of $K_{{\Omega}}$ is given by $$K_{{\Omega}}(x,y)=\dfrac{1}{2\pi} D{{\mathcal T}}^T(x)\Bigl(\dfrac{({{\mathcal T}}(x)-{{\mathcal T}}(y))^\perp}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}-\dfrac{({{\mathcal T}}(x)- {{\mathcal T}}(y)^*)^\perp}{|{{\mathcal T}}(x)- {{\mathcal T}}(y)^*|^2}\Bigl)$$ and we introduce the notation $$K_{{\Omega}}[f]=K_{{\Omega}}[f](x):=\int_{{\Omega}} K_{{\Omega}}(x,y)f(y)dy,$$ with $f\in C_c^\infty({{\Omega}})$. In an unbounded domain, we need information on the far-field behavior of $K_{{\Omega}}$. We will use several times the following general relation: $$\label{frac} \Bigl| \frac{a}{|a|^2}-\frac{b}{|b|^2}\Bigl|=\frac{|a-b|}{|a||b|},$$ which can be easily checked by squaring both sides. Using the behavior of $D{{\mathcal T}}$ at infinity (Proposition \[T-inf\]), we obtain for large $|x|$ that $$\label{K-inf} |K_{{\Omega}}[f]|(x)\leq \dfrac{C_2}{|x|^2},$$ where $C_2$ depends on the size of the support of $f$.
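The relation $\bigl| \frac{a}{|a|^2}-\frac{b}{|b|^2}\bigr|=\frac{|a-b|}{|a||b|}$ can also be checked numerically on random test vectors; a minimal sketch (an illustration only):

```python
import numpy as np

def lhs(a, b):
    # | a/|a|^2 - b/|b|^2 |
    return np.linalg.norm(a / np.dot(a, a) - b / np.dot(b, b))

def rhs(a, b):
    # |a - b| / (|a| |b|)
    return np.linalg.norm(a - b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b = rng.normal(size=2), rng.normal(size=2)
    assert np.isclose(lhs(a, b), rhs(a, b))
```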
The vector field $u=K_{{\Omega}}[f]$ is a solution of the elliptic system: $${{\rm div}\,}u =0 \text{ in } {{\Omega}}, \quad {{\rm curl}\,}u =f \text{ in } {{\Omega}}, \quad u\cdot \hat{n}=0 \text{ on } {\partial}{\Omega}, \quad \lim_{|x|\to\infty}|u|=0.$$ If we consider a non-simply connected domain (as in Theorem \[main 2\]), the previous system has several solutions. To determine the solution uniquely, we have to take into account the circulation. Let $\hat{n}$ be the unit normal exterior to ${\Omega}$. In what follows all contour integrals are taken in the counter-clockwise sense, so that $\oint_{{\partial}{{\mathcal C}}} F\cdot \hat {\tau}\, ds=-\oint_{{\partial}{{\mathcal C}}} F\cdot \hat{n}^\perp ds$. Then the harmonic vector field $$H_{{\Omega}} (x)=\frac{1}{2\pi}{\nabla}^\perp \ln |{{\mathcal T}}(x)|= \frac{1}{2\pi} D{{\mathcal T}}^T(x)\frac{{{\mathcal T}}(x)^\perp}{|{{\mathcal T}}(x)|^2}$$ is the unique[^1] vector field satisfying $${{\rm div}\,}H_{{\Omega}} = {{\rm curl}\,}H_{{\Omega}} =0 \text{ in } {{\Omega}}, \quad H_{{\Omega}}\cdot \hat{n}=0 \text{ on } {\partial}{{\mathcal C}}, \quad H_{{\Omega}} (x)\to 0 \text{ as }|x|\to\infty, \quad \oint_{{\partial}{{\mathcal C}}} H_{{\Omega}} \cdot \hat {\tau}\, ds =1.$$ \[1/x\] Using Proposition \[T-inf\], we see that $H_{{\Omega}}(x)=\mathcal{O}(1/|x|)$ at infinity. Therefore, putting together the previous properties, we obtain the existence part of the following. \[biot\] Let ${\omega}\in L^{\infty}_c ({\Omega})$ and ${\gamma}\in {{\mathbb R}}$.
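In the model case where ${\Omega}$ is itself the exterior of the unit disk, ${{\mathcal T}}$ is the identity and the formula above reduces to $H_{{\Omega}}(x)=\frac{x^\perp}{2\pi|x|^2}$; the normalization of the circulation and the $\mathcal{O}(1/|x|)$ decay can then be checked directly (an illustration only, not part of the proofs):

```python
import numpy as np

def H(x):
    # harmonic vector field for the exterior of the unit disk (T = identity)
    return np.array([-x[1], x[0]]) / (2 * np.pi * np.dot(x, x))

R, n = 3.0, 2000
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
pts = R * np.stack([np.cos(theta), np.sin(theta)], axis=1)
tau = np.stack([-np.sin(theta), np.cos(theta)], axis=1)   # unit tangent, counter-clockwise
# Riemann sum of the contour integral of H . tau over the circle of radius R
circ = sum(np.dot(H(p), t) for p, t in zip(pts, tau)) * (2 * np.pi * R / n)
assert abs(circ - 1.0) < 1e-9                  # circulation normalized to 1
assert np.linalg.norm(H(pts[0])) < 1.0 / R     # decay O(1/|x|) at distance R
```

Note that by the same computation the circulation is independent of $R$, in agreement with the fact that $H_{{\Omega}}$ is curl-free in ${\Omega}$.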
If ${\Omega}$ is an open simply connected bounded subset of ${{\mathbb R}}^2$, then there is a unique solution $u$ of $$\left\lbrace\begin{aligned} {{\rm div}\,}u &=0 &\text{ in } {{\Omega}} \\ {{\rm curl}\,}u &= {\omega}&\text{ in } {{\Omega}} \\ u \cdot \hat{n}&=0 &\text{ on } {\partial}{\Omega}\\ \end{aligned}\right.$$ which is given by $$\label{biot bd} u(x) = K_{{\Omega}}[{\omega}](x).$$ If ${{\mathcal C}}$ is a closed simply connected bounded subset of ${{\mathbb R}}^2$ and ${\Omega}= {{\mathbb R}}^2\setminus {{\mathcal C}}$, then there is a unique solution $u$ of $$\left\lbrace\begin{aligned} {{\rm div}\,}u &=0 &\text{ in } {{\Omega}} \\ {{\rm curl}\,}u &= {\omega}&\text{ in } {{\Omega}} \\ u \cdot \hat{n}&=0 &\text{ on } {\partial}{{\mathcal C}}\\ u(x)\to 0 &\text{ as }|x|\to\infty\\ \oint_{{\partial}{{\mathcal C}}} u \cdot \hat {\tau}\, ds &= {\gamma}\end{aligned}\right.$$ which is given by $$\label{biot unbd} u(x) = K_{{\Omega}}[{\omega}](x) + ({\gamma}+\int {\omega}) H_{{\Omega}}(x).$$ Concerning the uniqueness, see e.g. [@kiku Lemma 2.14] (see also [@ift_lop_euler Proposition 2.1]). We take advantage of this explicit formula to give estimates on the kernel. We introduce $$R[{\omega}](x):= \int_{{\Omega}} \Bigl(\dfrac{({{\mathcal T}}(x)-{{\mathcal T}}(y))^\perp}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}-\dfrac{({{\mathcal T}}(x)- {{\mathcal T}}(y)^*)^\perp}{|{{\mathcal T}}(x)- {{\mathcal T}}(y)^*|^2}\Bigl) {\omega}(y)\, dy,$$ so that reads $$u(x) = \frac{1}{2\pi} D{{\mathcal T}}^T(x) \Bigl( R[{\omega}](x)+ ({\gamma}+\int {\omega})\frac{{{\mathcal T}}(x)^\perp}{|{{\mathcal T}}(x)|^2}\Bigl).$$ \[biot est\] Assume that ${\omega}$ belongs to $L^1\cap L^\infty({\Omega})$.
If all the angles of ${\Omega}$ are greater than $\pi/2$, then there exist $(C,a)\in {{\mathbb R}}^+_* \times (0,1/2]$ depending only on the shape of ${\Omega}$ such that $$\|R[{\omega}] \|_{L^\infty({\Omega})} \leq C (\| {\omega}\|_{L^1}^{1/2} \| {\omega}\|_{L^\infty}^{1/2} +\| {\omega}\|_{L^1}^{a} \| {\omega}\|_{L^\infty}^{1-a}+ \| {\omega}\|_{L^1}).$$ Moreover, $R[{\omega}]$ is continuous up to the boundary. In the case where ${{\mathcal C}}$ is a Jordan arc, the uniform bound is proved in [@lac_euler Lemma 4.2] and the continuity in [@lac_euler Proposition 5.7]. The proof here is almost the same, except that we have to take care that $D{{\mathcal T}}^{-1}$ is not bounded if there is an angle less than $\pi$ (see Theorem \[grisvard\]). For completeness, we write the details in Section \[sect : technical\]. In this proof, we can understand why we assume that the angles are greater than $\pi/2$: we need that $\det D {{\mathcal T}}^{-1}$ belongs to $L^{p_0}$ for some $p_0>2$ (see Remark \[DT loc\]).

Existence and properties of weak solutions {#sect : exist}
------------------------------------------

The goal of this subsection is to derive some properties of a weak solution obtained in Theorem \[theorem1\] from [@GV_lac]. We will also establish similar formulations satisfied by extensions to the full plane. [*a) Weak solution in an unbounded domain.*]{} We begin with the hardest case: let ${\Omega}:={{\mathbb R}}^2\setminus {{\mathcal C}}$, where ${{\mathcal C}}$ is a bounded, simply connected closed set, such that ${\partial}{{\mathcal C}}$ is a $C^\infty$ Jordan curve, except at a finite number of points $z_1$, $z_2$, ..., $z_n$ where ${\partial}{\Omega}$ has a corner of angle ${\alpha}_i$. Then there exist some pieces of the boundary which are smooth, implying that the capacity of ${{\mathcal C}}$ is positive (see e.g. [@GV_lac Proposition 6]). Therefore, Theorem \[theorem1\] with our exterior domains is a direct consequence of [@GV_lac Theorem 2].
We know the existence of a global weak solution, and we now investigate some of its features. Let $u_0$ satisfy and let $u$ be a global weak solution of - in the sense of and such that $$u \in L^\infty_{{\operatorname{{loc}}}}({{\mathbb R}}^+; L^2_{{\operatorname{{loc}}}} (\Omega)), \: {\omega}:= {{\rm curl}\,}u \in L^\infty({{\mathbb R}}^+;L^1 \cap L^\infty(\Omega)).$$ As ${\omega}_0:={{\rm curl}\,}u_0$ is compactly supported in ${\Omega}$, we note that we can define the initial circulation. Indeed, let $J$ be a smooth closed Jordan curve in ${\Omega}$ such that $\mathcal{C}$ is included in the bounded component of ${{\mathbb R}}^2 \setminus J$ and ${\operatorname{supp\,}}{\omega}_0$ in the unbounded component. Therefore, we can define the real number $${\gamma}_0 : = \oint_{J} u_0 \cdot \hat {\tau}\, ds.$$ Let us recall that $u_0$ satisfies , so that it belongs to $W^{1,q}_{{\operatorname{{loc}}}}$ for all finite $q$, and the integral on the right-hand side is well defined. Moreover, ${\gamma}_0$ does not depend on the curve separating $\mathcal{C}$ and ${\operatorname{supp\,}}{\omega}_0$ (thanks to the curl-free condition near $\mathcal{C}$). Passing to the limit, we obtain $$\label{g_0} {\gamma}_0 = \oint_{{\partial}{{\mathcal C}}} u_0 \cdot \hat {\tau}\, ds.$$ We have proven in the previous subsection that we can reconstruct the velocity in terms of the vorticity and the circulation: $$u_0(x) = K_{{\Omega}}[{\omega}_0](x) + ({\gamma}_0+\int {\omega}_0) H_{{\Omega}}(x).$$ From the definition of weak solution, we know that the quantities $\| {\omega}(t,\cdot) \|_{L^1\cap L^\infty({\Omega})}$ and $\int {\omega}(t,\cdot)$ are bounded in ${{\mathbb R}}^+$. Moreover, we infer that the circulation $${\gamma}(t): = \oint_{{\partial}{{\mathcal C}}} u(t,\cdot) \cdot \hat {\tau}\, ds$$ is bounded locally in time.
To show this estimate, we first note that the previous integral is well defined by setting $${\gamma}(t): = \oint_{J} u(t,\cdot) \cdot \hat {\tau}\, ds - \int_{A} {\omega}(t,\cdot) \, dx,$$ with $A={\Omega}\cap ($bounded connected component of ${{\mathbb R}}^2 \setminus J)$. Indeed, thanks to the uniqueness part of Proposition \[biot\] applied with this value of the circulation, we see that $u$ can be written as in , and we deduce from Theorem \[grisvard\] and Proposition \[biot est\] that $\oint_{{\partial}{{\mathcal C}}} u(t,\cdot) \cdot \hat {\tau}\, ds$ is well defined. Next, let $K$ be a compact subset of ${\Omega}$. In this subset, we know by the definition of ${{\mathcal T}}$ and Proposition \[biot est\] that $K_{{\Omega}}[{\omega}(t,\cdot)](x)$ is uniformly bounded in ${{\mathbb R}}^+\times K$. Then there exist $C_1, C_2$ such that implies $$C_1 | {\gamma}(t)| \leq \| u(t,\cdot) \|_{L^2(K)} + C_2 + C_1 \| {\omega}(t,\cdot) \|_{L^1({\Omega})} ,$$ for any $t\in {{\mathbb R}}^+$ (we have $C_1=\| H_{{\Omega}} \|_{L^2(K)}$). As $u$ belongs to $L^\infty_{{\operatorname{{loc}}}}({{\mathbb R}}^+;L^2(K))$ (see the definition of weak solution), we deduce that $$\label{g bd} {\gamma}\in L^{\infty}_{{\operatorname{{loc}}}}([0,\infty)).$$ Moreover, putting together this estimate of ${\gamma}$, Remark \[DT loc\] and Proposition \[biot est\], then gives that $$\label{est u} u \in L^{\infty}_{{\operatorname{{loc}}}}([0,\infty);L^p_{{\operatorname{{loc}}}}(\overline{{\Omega}})), \ \forall p<4,$$ which is an improvement compared to the definition of weak solution, because we have control up to the boundary. Let us derive a formulation satisfied by ${\omega}$.
First, we note that for any test function ${\varphi}\in {{\mathcal D}}([0,\infty)\times {\Omega}; {{\mathbb R}})$, ${\psi}:= {\nabla}^\perp {\varphi}$ belongs to the set of admissible test functions, and reads $$\label{tourEulerweak} \forall \, {\varphi}\in {{\mathcal D}}([0,\infty)\times {\Omega}; {{\mathbb R}}), \quad \int_0^{\infty} \int_\Omega \left( {\omega}\cdot {\partial}_t {\varphi}+ {\omega}u \cdot {\nabla}{\varphi}\right) = -\int_\Omega {\omega}_0 {\varphi}(0, \cdot).$$ Then $({\omega},u)$ satisfies the transport equation $$\label{transport*} {\partial}_t {\omega}+ u\cdot {\nabla}{\omega}=0$$ in the sense of distributions in ${\Omega}$. We need a formulation on ${{\mathbb R}}^2$. For that, we denote by $\bar {\omega}$ (respectively $\bar u$) the extension of ${\omega}$ (respectively $u$) to ${{\mathbb R}}^2$ by zero in ${\Omega}^c$. Let us check that it satisfies the transport equation for any test function ${\varphi}\in C^\infty_c({{\mathbb R}}\times {{\mathbb R}}^2)$. \[tour extension\] Let $({\omega},u)$ be a weak solution to the Euler equations in ${\Omega}$. Then the pair of extensions satisfies, in the sense of distributions, $$\left\lbrace \begin{aligned} \label{tour_equa} &{\partial}_t \bar{\omega}+\bar u\cdot {\nabla}\bar{\omega}=0, & \text{ in }{{\mathbb R}}^2\times(0,\infty) \\ & {{\rm div}\,}\bar u=0 \text{ and }{{\rm curl}\,}\bar u=\bar{\omega}+g_{\bar{\omega},{\gamma}}(s){\delta}_{{\partial}{{\mathcal C}}}, &\text{ in }{{\mathbb R}}^2\times[0,\infty) \\ & |\bar u|\to 0, &\text{ as }|x|\to \infty \\ & \bar {\omega}(x,0)=\bar{\omega}_0(x), &\text{ in }{{\mathbb R}}^2.
\end{aligned} \right .$$ where ${\delta}_{{\partial}{{\mathcal C}}}$ is the Dirac function along the curve and with $$\label{g_o_bis} \begin{split} g_{\bar{\omega},{\gamma}}(x)=& u\cdot \hat{{\tau}}\\ =&\Bigl[ \lim_{{\rho}\to 0^+} K_{{\Omega}}[\bar{\omega}](x-{\rho}\hat{n})+({\gamma}+\int\bar {\omega})H_{{\Omega}}(x-{\rho}\hat{n}) \Bigl]\cdot \hat{{\tau}}. \end{split}$$ The third and fourth points are obvious. The second point is a classical computation concerning tangent vector fields: there is no additional term in the divergence, whereas the jump of the tangential velocity appears in the ${{\rm curl}\,}$ (see e.g. the proof of Lemma 5.8 in [@lac_euler]). Concerning the first point, we have to consider the case of a test function whose support meets the boundary. Let ${\varphi}\in C^\infty_c({{\mathbb R}}\times {{\mathbb R}}^2)$. We introduce a non-decreasing function ${\Phi}$ which is equal to 0 if $s\leq 1$ and to 1 if $s\geq 2$. Let $${\Phi}^{{\varepsilon}}(x):= {\Phi}\Bigl(\frac{|{{\mathcal T}}(x)|-1}{{\varepsilon}}\Bigl).$$ We note that

- it is a cutoff function of an ${\varepsilon}$-neighborhood of ${{\mathcal C}}$, because ${{\mathcal T}}$ is continuous up to the boundary (see Theorem \[grisvard\]).

- we have ${\nabla}{\Phi}^{\varepsilon}\cdot H_{{\Omega}}\equiv 0$, because $H_{{\Omega}}(x) = \frac{1}{2\pi}\frac{{\nabla}^\perp |{{\mathcal T}}(x)|}{|{{\mathcal T}}(x)|}$ (see Subsection \[sect : biot\]).

- the Lebesgue measure of the support of ${\nabla}{\Phi}^{\varepsilon}$ is $o(\sqrt{{\varepsilon}})$. Indeed the support of ${\nabla}{\Phi}^{\varepsilon}$ is contained in the subset $\{ x\in{\Omega}_{\varepsilon}| 1+{\varepsilon}\leq |{{\mathcal T}}(x)| \leq 1 + 2{\varepsilon}\}$.
The Lebesgue measure can be estimated thanks to Remark \[DT loc\]: $$\int_{ 1+{\varepsilon}\leq |{{\mathcal T}}(x)| \leq 1 + 2{\varepsilon}}dx=\int_{1+{\varepsilon}\leq |z| \leq 1 + 2{\varepsilon}} |\det(D{{\mathcal T}}^{-1})|(z) dz \leq C\sqrt{{\varepsilon}}\, \|\det(D{{\mathcal T}}^{-1})\|_{L^2(B(0,1+2{\varepsilon})\setminus B(0,1))},$$ where the norm on the right-hand side tends to zero as ${\varepsilon}\to 0$ (by the dominated convergence theorem). Another interesting property is the fact that the velocity is tangent to the boundary whereas ${\nabla}{\Phi}^{\varepsilon}$ is normal. Indeed, we claim the following. \[ortho\] As ${\omega}$ belongs to $L^\infty({{\mathbb R}}^+; L^1\cap L^\infty({\Omega}))$, we have $$u \cdot {\nabla}{\Phi}^{\varepsilon}\to 0 \text{ strongly in }L^1({{\mathbb R}}^2),$$ uniformly in time, as ${\varepsilon}\to 0$. This property is not so obvious, because $|u\cdot {\nabla}{\Phi}^{\varepsilon}| \approx \frac{|D{{\mathcal T}}|^2}{{\varepsilon}} R[{\omega}] {\Phi}'\Bigl(\frac{|{{\mathcal T}}(x)|-1}{{\varepsilon}}\Bigl)$ with $\| {\Phi}'\Bigl(\frac{|{{\mathcal T}}(x)|-1}{{\varepsilon}}\Bigl) \|_{L^1} = O({\varepsilon})$ (in the case where $D{{\mathcal T}}^{-1}$ is bounded) and $D{{\mathcal T}}$ blowing up. The orthogonality argument is crucial here, and we use the explicit formula to show the cancellation effect. This lemma is proved in the case where ${{\mathcal C}}$ is a Jordan arc in [@lac_euler Lemma 4.6]. For the sake of completeness, we give the general proof in Section \[sect : technical\].
As ${\Phi}^{\varepsilon}{\varphi}$ belongs to $C^{\infty}_c({{\mathbb R}}\times {\Omega})$ for any ${\varepsilon}>0$, we can write that $({\omega},u)$ is a weak solution in ${\Omega}$: $$\int_0^\infty\int_{{{\mathbb R}}^2}({\Phi}^{\varepsilon}{\varphi})_t{\omega}\, dxdt +\int_0^\infty \int_{{{\mathbb R}}^2}{\nabla}({\Phi}^{\varepsilon}{\varphi})\cdot u{\omega}\, dxdt+\int_{{{\mathbb R}}^2}({\Phi}^{\varepsilon}{\varphi})(0,x){\omega}_0(x) \,dx=0.$$ As ${\omega}\in L^\infty(L^1\cap L^\infty)$, it is obvious that the first and the third integrals converge to $$\int_0^\infty\int_{{{\mathbb R}}^2}{\varphi}_t \bar {\omega}\,dxdt \text{ and } \int_{{{\mathbb R}}^2}{\varphi}(0,x)\bar {\omega}_0(x) \, dx$$ as ${\varepsilon}\to 0$. Concerning the second integral, we have $$\int_0^\infty \int_{{{\mathbb R}}^2}{\nabla}({\Phi}^{\varepsilon}{\varphi})\cdot u{\omega}\, dxdt = \int_0^\infty \int_{{{\mathbb R}}^2}{\varphi}({\nabla}{\Phi}^{\varepsilon}\cdot u) {\omega}\, dxdt + \int_0^\infty \int_{{{\mathbb R}}^2}{\Phi}^{\varepsilon}{\nabla}{\varphi}\cdot u{\omega}\, dxdt.$$ The first right-hand side term tends to zero because ${\nabla}{\Phi}^{\varepsilon}\cdot u \to 0 $ in $L^1({{\mathbb R}}^2)$ and ${\omega}\in L^\infty({{\mathbb R}}^+\times {{\mathbb R}}^2)$. The second right-hand side term converges to $$\int_0^\infty \int_{{{\mathbb R}}^2} {\nabla}{\varphi}\cdot \bar u\bar {\omega}\, dxdt$$ because $u$ belongs to $L^2({\operatorname{supp\,}}{\varphi}\cap ({{\mathbb R}}^+_*\times \overline{{\Omega}}))$ (see ). Putting together these limits, we obtain that: $$\int_0^\infty\int_{{{\mathbb R}}^2}{\varphi}_t\bar{\omega}\, dxdt +\int_0^\infty \int_{{{\mathbb R}}^2}{\nabla}{\varphi}\cdot \bar u\bar{\omega}\, dxdt+\int_{{{\mathbb R}}^2}{\varphi}(0,x)\bar{\omega}_0(x)\, dx=0,$$ which ends the proof. The goal of the following is to prove that the $L^p$ norms, the total mass of the vorticity and the circulation are conserved quantities.
In a domain with smooth boundaries, the pair $({\omega},u)$ is a strong solution of the transport equation, and the conservation of the previous quantities is classical. The main point here is to remark that, in our case, this pair is a renormalized solution of the transport equation in the sense of DiPerna and Lions (see [@dip-li]). We consider equation as a linear transport equation with given velocity field $\bar{u}$. Our purpose here is to show that if $\bar{\omega}$ solves this linear equation, then so does $\beta(\bar{\omega})$ for a suitable smooth function ${\beta}$. This follows from the theory developed in [@dip-li], where they need that the velocity field belongs to $L_{{\operatorname{{loc}}}}^1\left({{\mathbb R}}^+,W_{{\operatorname{{loc}}}}^{1,1}({{\mathbb R}^2})\right) \cap L^1_{{\operatorname{{loc}}}} \left({{\mathbb R}}^+,L^1({{\mathbb R}}^2)+ L^\infty({{\mathbb R}}^2)\right)$ and that ${{\rm div}\,}u$ is bounded. Let us check that we are in this setting. \[W11\] Let $({\omega},u)$ be a global weak solution in ${\Omega}$. Then $$\bar u \in L^\infty_{{\operatorname{{loc}}}} \left({{\mathbb R}}^+,W_{{\operatorname{{loc}}}}^{1,1}({{\mathbb R}^2})\right)\cap L^\infty_{{\operatorname{{loc}}}} \left({{\mathbb R}}^+,L^1({{\mathbb R}}^2)+ L^\infty({{\mathbb R}}^2)\right).$$ We use the explicit form of the velocity: $u(x)=D{{\mathcal T}}^T(x) f({{\mathcal T}}(x))$, where $f$ looks like the Biot-Savart operator in ${{\mathbb R}}^2$. Therefore, the result follows from the fact that $D{{\mathcal T}}$ belongs to $W^{1,p}_{{\operatorname{{loc}}}}(\overline{{\Omega}})$ for any $p<4/3$ (see Theorem \[grisvard\]), and thanks to Proposition \[biot est\] and the Calderon-Zygmund inequality. The proof is written in [@lac_small] in the case where ${{\mathcal C}}$ is a Jordan arc. We generalize it in Section \[sect : technical\]. Therefore, [@dip-li] implies that $\overline{{\omega}}$ is a renormalized solution. \[renorm1\] Let $\bar u$ be fixed.
Let $\overline{{\omega}}$ be a solution of the linear equation in ${{\mathbb R}}^2$. Let $\beta:{{\mathbb R}}\rightarrow {{\mathbb R}}$ be a smooth function such that $$|\beta'(t)|\leq C(1+ |t|^p),\qquad \forall t\in {{\mathbb R}},$$ for some $p\geq 0$. Then $\beta(\bar{\omega})$ is a solution of in ${{\mathbb R}}^2$ (in the sense of distributions) with initial datum $\beta(\bar{\omega}_0)$. We recall that $\bar u$ denotes the extension of $u$ by zero in ${{\mathcal C}}$, and the previous lemma means that for any ${\Phi}\in C^\infty_c([0,\infty)\times {{\mathbb R}}^2)$ we have $$\label{renorm} \frac{d}{dt} \int_{{{\mathbb R}}^2} \beta({\omega}){\Phi}(t,x)\, dx=\int_{{{\mathbb R}}^2} \beta({\omega}) ({\partial}_t {\Phi}+u\cdot {\nabla}{\Phi})\,dx$$ in the sense of distributions on ${{\mathbb R}}^+$. Now, we state a remark from [@lac_miot] in order to establish some desired properties of ${\omega}$. \[remark : conserv\] (1) Since the right-hand side in belongs to $L_{{\operatorname{{loc}}}}^1({{\mathbb R}}^+)$, the equality holds in $L_{{\operatorname{{loc}}}}^1({{\mathbb R}}^+)$. In this sense, actually still holds when ${\Phi}$ is smooth, bounded and has bounded first derivatives in time and space. In this case, we have to consider smooth functions $\beta$ which in addition satisfy $\beta(0)=0$, so that $\beta({\omega})$ is integrable. This may be proved by approximating ${\Phi}$ by smooth and compactly supported functions ${\Phi}_n$ for which applies, and then letting $n$ go to $+\infty$.\ (2) We apply point (1) with $\beta(t)=t$ and ${\Phi}\equiv 1$, which gives $$\label{om-est-1} \int_{{\Omega}} {\omega}(t,x)\, dx = \int_{{\Omega}} {\omega}_0(x)\, dx \text{ for all }t>0.$$ (3) Let $1\leq p<+\infty$. Approximating $\beta(t)=|t|^p$ by smooth functions and choosing ${\Phi}\equiv 1$ in , we deduce that for a solution ${\omega}$ to , the maps $t\mapsto \|{\omega}(t)\|_{L^p({\Omega})}$ are continuous and constant.
In particular, we have $$\label{om-est-2} \|{\omega}(t)\|_{L^1({\Omega})}+\|{\omega}(t)\|_{L^\infty({\Omega})}\equiv \|{\omega}_0\|_{L^1({\Omega})}+\|{\omega}_0\|_{L^\infty({\Omega})}.$$ In the case of an unbounded domain, we will require that ${\omega}$ stays compactly supported. Specifying our choice for ${\Phi}$ in , we are led to the following. \[compact\_vorticity\] Let ${\omega}$ be a weak solution of such that $${\omega}_0 \text{ is compactly supported in } B(0,R_0)$$ for some positive $R_0$. For any fixed ${{\mathbf T}}^*$, there exists $C>0$ such that $${\omega}(t,\cdot) \text{ is compactly supported in } B(0,R_0+Ct),$$ for any $t\in [0,{{\mathbf T}}^*]$. The main computation of this proof can be found in [@lac_miot] or in [@lac_small]. For the sake of self-containedness, we write the details in Section \[sect : technical\]. Therefore, for ${{\mathbf T}}^*$ fixed, there exists $R_1$ such that the support of the vorticity is included in $B(0,R_1)$ for all $t\in [0,{{\mathbf T}}^*]$. It implies that $u$ is harmonic in $B(0,R_1)^c$ (${{\rm div}\,}u = {{\rm curl}\,}u =0$), and is satisfied in the strong sense on this set. For strong solutions, Kelvin’s circulation theorem can be used, stating that the circulation at infinity is conserved: $${\gamma}(t)+\int {\omega}(t,\cdot) = {\gamma}^\infty(t) \equiv {\gamma}^\infty_0 = {\gamma}_0 + \int {\omega}_0.$$ Using the conservation of the total mass , we obtain that the circulation of the velocity around the obstacle is conserved: $$\label{gamma cons} {\gamma}(t) \equiv {\gamma}_0,\ \forall t \in [0,{{\mathbf T}}^*].$$ [*b) Weak solution in a bounded domain.*]{} The previous part can be adapted easily to the bounded case. In a simply connected domain, we do not consider the circulation: $$u_0(x) = K_{{\Omega}}[{\omega}_0](x).$$ As Proposition \[tour extension\] is about the behavior near the boundary, we can check that we obtain exactly the same result.
\[tour extension bd\] Let $({\omega},u)$ be a weak solution to the Euler equations in a bounded domain ${\Omega}$. Then the pair of extensions satisfies, in the sense of distributions, $$\left\lbrace \begin{aligned} \label{tour_equa_bis bd} &{\partial}_t \bar{\omega}+\bar u\cdot {\nabla}\bar{\omega}=0, & \text{ in }{{\mathbb R}}^2\times(0,\infty) \\ & {{\rm div}\,}\bar u=0 \text{ and }{{\rm curl}\,}\bar u=\bar{\omega}+g_{\bar{\omega}}(s){\delta}_{{\partial}{\Omega}}, &\text{ in }{{\mathbb R}}^2\times[0,\infty) \\ & \bar {\omega}(x,0)=\bar{\omega}_0(x), &\text{ in }{{\mathbb R}}^2. \end{aligned} \right .$$ where ${\delta}_{{\partial}{\Omega}}$ is the Dirac function along the curve and $g_{\bar{\omega}}$ is given by: $$\label{g_o_bis bd} \begin{split} g_{\bar{\omega}}(x)=& -u\cdot \hat{{\tau}}\\ =&-\Bigl[ \lim_{{\rho}\to 0^+} K_{{\Omega}}[\bar{\omega}](x - {\rho}\hat{n}) \Bigl]\cdot \hat{{\tau}} \end{split}$$ Moreover, there is one term fewer than in the unbounded case, so we can also check that $\overline{{\omega}}$ is a renormalized solution and that $$\label{om-est-1 bd} \int_{{\Omega}} {\omega}(t,x)\, dx = \int_{{\Omega}} {\omega}_0(x)\, dx \text{ for all }t>0$$ and $$\label{om-est-2 bd} \|{\omega}(t)\|_{L^1({\Omega})}+\|{\omega}(t)\|_{L^\infty({\Omega})}\equiv \|{\omega}_0\|_{L^1({\Omega})}+\|{\omega}_0\|_{L^\infty({\Omega})}.$$ Liapounov method {#sect : 3} ================ In this section, we present the proof for a Lagrangian solution. When the velocity $u$ is smooth, it gives rise to a flow $\phi_x(t)$ defined by $$\label{i2} \begin{cases} \frac{d}{dt} \phi_x(t)=u\big(t,\phi_x(t)\big) \\ \phi_x(0)=x \in {{\mathbb R}^2}. \end{cases}$$ In view of , we then have $$\label{i3} \frac{d}{dt} {\omega}\big(t,\phi_x(t)\big)\equiv 0,$$ which shows that ${\omega}$ is constant along the characteristics. 
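When $u$ is smooth, this constancy along characteristics can be observed numerically. The velocity field below (a rigid rotation) and the Gaussian vorticity are illustrative choices, not taken from the text; since the profile is radial and the rotation preserves $|x|$, $\omega_0(\phi_x(t))=\omega_0(x)$ exactly, up to the integration error.

```python
import math

def u(x):
    # hypothetical smooth velocity: rigid rotation, div u = 0
    return (-x[1], x[0])

def rk4_step(x, dt):
    # one fourth-order Runge-Kutta step for phi' = u(phi)
    k1 = u(x)
    k2 = u((x[0] + dt/2*k1[0], x[1] + dt/2*k1[1]))
    k3 = u((x[0] + dt/2*k2[0], x[1] + dt/2*k2[1]))
    k4 = u((x[0] + dt*k3[0], x[1] + dt*k3[1]))
    return (x[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def omega0(x):
    # radial initial vorticity: a steady state for this velocity
    return math.exp(-(x[0]**2 + x[1]**2))

x0 = (1.0, 0.5)
phi, dt = x0, 1e-3
for _ in range(2000):          # integrate the flow up to t = 2
    phi = rk4_step(phi, dt)

# omega(t, phi_x(t)) = omega0(x): constant along the characteristics
drift = abs(omega0(phi) - omega0(x0))
print(drift)
```

The drift stays at the size of the RK4 error, illustrating the identity $\frac{d}{dt}\omega(t,\phi_x(t))\equiv 0$ for this particular smooth flow.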
We assume here that these trajectories exist and are differentiable, and we prove by the Liapounov method that the support of the vorticity never meets the boundary ${\partial}{\Omega}$. Although we do not know that the flow is smooth, the following computation is the main idea of this article, and it will be rigorously applied in Section \[sect : 4\]. The Liapounov method to prove this kind of result is used by Marchioro and Pulvirenti in [@mar_pul] in the case of a point vortex which moves under the influence of the regular part of the vorticity, and by Marchioro in [@mar] when the Dirac mass is fixed. In both articles, the authors use the explicit formula of the velocity associated with the Dirac mass centered at $z(t)$: $H(x)=(x-z)^\perp/(2\pi |x-z|^2)$. The geometrical structure is the key of their analysis. Indeed, choosing $L(t) = - \ln |\phi_x(t)-z(t) |$ they have that 1. $L(t) \to \infty$ if and only if the trajectory meets the Dirac point. It is then sufficient to prove that $L'(t)$ stays bounded in order to obtain the result. 2. $H( \phi_x(t)) \cdot( \phi_x(t)-z(t)) \equiv 0$, which implies that the singular term in the velocity does not appear. Therefore, the explicit blow-up in the case of the Dirac point is crucial for two reasons: the symmetry cancellation (point b) and the fact that the primitive of $1/x$ is $\ln x$, which blows up near the origin (point a). In our case, we do not have such an explicit form of the blow-up near the corners, and the primitive of $1/\sqrt{x}$ is $\sqrt{x}$, which is bounded near $0$. The idea is to add a logarithm. When ${{\mathcal C}}$ is a Jordan arc, $|{{\mathcal T}}| \approx 1+\sqrt{z^2-1}$ and we note that $\ln \ln (1+ \sqrt{z^2 -1})$ blows up near the end-points $\pm 1$. However, the drawback of the Liapounov method is that the function is very specific to the case studied. 
For example, this function is different if the Dirac point is fixed or if it moves with the fluid (for more details and explanations, see the discussion on Liapounov functions in Section \[sect : 6\]). We fix $x_0\in {\Omega}$ and we consider $\phi=\phi_{x_0}(t)$ the trajectory which comes from $x_0$ (see ). We denote $$L(t):= -\ln |L_1(t, \phi(t)) |$$ with $L_1$ depending on the geometric properties of ${\Omega}$:\[assump\] 1. if $\Omega$ is a bounded, simply connected open set, such that ${\partial}{\Omega}$ has a finite number of corners with angles greater than $\pi/2$ (as in Theorem \[main 1\]), then we choose $$\label{L1 1} L_1(t,x):= \frac1{2\pi} \int_{{\Omega}} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl){\omega}(t,y) \, dy;$$ 2. if ${\Omega}:={{\mathbb R}}^2\setminus {{\mathcal C}}$, where ${{\mathcal C}}$ is a compact, simply connected set, such that ${\partial}{\Omega}$ has a finite number of corners with angles greater than $\pi/2$ (as in Theorem \[main 2\]), then we choose $$\label{L1} L_1(t,x):= \frac1{2\pi} \int_{{\Omega}} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl){\omega}(t,y) \, dy+ \frac{{\alpha}}{2\pi}\ln |{{\mathcal T}}(x)|,$$ where ${\alpha}:= {\gamma}_0+\int {\omega}_0$. When trajectories exist, it is obvious (without renormalization) that and imply that $$\label{norm om} \| {\omega}(t,\cdot)\|_{L^p} =\| {\omega}_0\|_{L^p} \text{ and } \int_{{\Omega}} {\omega}(t,\cdot) = \int_{{\Omega}} {\omega}_0,\ \forall t>0, \forall p\in[1,\infty].$$ We assume that ${\omega}_0$ is compactly supported, hence included in $B(0,R_0)$ for some $R_0>0$. 
Thanks to Propositions \[T-inf\] and \[biot est\], we see that the velocity $u$ is bounded outside this ball by a constant $C_0$, and give $$\label{support} {\operatorname{supp\,}}{\omega}(t,\cdot) \subset B(0,R_0+C_0 t),\ \forall t\geq 0.$$ We also have that the circulation is conserved. If we assume that ${\omega}_0$ is non-positive, then it follows from that $$\label{signe} {\omega}(t,x) \leq 0,\ \forall t\geq 0,\ \forall x \in {\Omega}.$$ Blow-up of the Liapounov function near the curve. -------------------------------------------------   The first required property is that $L$ goes to infinity iff the trajectory meets the boundary. Next, if we prove that $L$ is bounded, then it will follow that the trajectory stays far away from the boundary. We fix $\mathbf{T}^*>0$ and, using , we set $R_{\mathbf{T}^*}:= R_0+C_0 {\mathbf{T}^*}$, so that ${\operatorname{supp\,}}{\omega}(t,\cdot)\subset B(0,R_{\mathbf{T}^*})$ for all $t\in [0,{\mathbf{T}^*}]$. \[L1 maj\] In both cases (1)-(2), there exists $C_1=C_1({\mathbf{T}^*},{\omega}_0,{\gamma}_0)$ such that $$|L_1(t,x)| \leq C_1||{{\mathcal T}}(x)|-1|^{1/2}, \ \forall x\in B(0,R_{\mathbf{T}^*}), \ \forall t\in [0,{\mathbf{T}^*}].$$ For the sake of brevity, we write the proof in the hardest case, case (2); the other case follows easily. 
Recalling the notation $z^*=z/|z|^2$, we can compute $$\begin{split} \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2} &= 1-\frac{ |{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2-|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}{ |{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}\\ = 1-&\frac{( |{{\mathcal T}}(x)|^2 |{{\mathcal T}}(y)|^2 -2 {{\mathcal T}}(x)\cdot {{\mathcal T}}(y) +1)- (|{{\mathcal T}}(x)|^2 -2{{\mathcal T}}(x)\cdot {{\mathcal T}}(y) + |{{\mathcal T}}(y)|^2)}{ |{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}\\ &= 1- \frac{ (|{{\mathcal T}}(x)|^2-1)(|{{\mathcal T}}(y)|^2-1)}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}. \end{split}$$ Therefore, we have $$\begin{aligned} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|}\Bigl) &=& \frac12 \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}\Bigl) \\ &=& \frac12 \ln\Bigl(1- \frac{ (|{{\mathcal T}}(x)|^2-1)(|{{\mathcal T}}(y)|^2-1)}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}\Bigl),\end{aligned}$$ and we need an estimate of $\ln(1-r)$ when $r\in (0,1)$, because we recall that $|{{\mathcal T}}(z)| > 1$ for any $z\in {\Omega}$. 
It is easy to see (studying the difference of the functions) that $$|\ln(1-r)| = - \ln(1-r)\leq \Bigl(\frac{r}{1-r}\Bigl)^{1/2},\ \forall r\in [0,1).$$ Applying this inequality, we have for any $y\neq x$ that $$\begin{aligned} \Bigl| \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|}\Bigl)\Bigl| &\leq & \frac{1}2 \Bigl(\frac{\frac{ (|{{\mathcal T}}(x)|^2-1)(|{{\mathcal T}}(y)|^2-1)}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}}{\frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2} }\Bigl)^{1/2}\\ &\leq& \frac{1}2\frac{ \sqrt{(|{{\mathcal T}}(x)|^2-1)(|{{\mathcal T}}(y)|^2-1)}}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}.\end{aligned}$$ By continuity of ${{\mathcal T}}$, we denote by $C_{\mathbf{T}^*}$ a constant such that ${{\mathcal T}}(B(0,R_{\mathbf{T}^*}))\subset B(0,C_{\mathbf{T}^*})$. Finally, we apply the previous inequality to $L_1$ and we find for all $x\in B(0,R_{\mathbf{T}^*})$ and $t\in [0,{\mathbf{T}^*}]$: $$\begin{aligned} |L_1(t,x)| & \leq & \frac{C_{\mathbf{T}^*}(C_{\mathbf{T}^*}+1)^{1/2}}{4\pi}(|{{\mathcal T}}(x)|-1)^{1/2} \int_{{\Omega}} \frac{|{\omega}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|} \, dy + \frac{|{\alpha}|}{2\pi}\ln |{{\mathcal T}}(x)|\\ &\leq & \frac{\sqrt{2}C_{\mathbf{T}^*}^{3/2}}{4\pi}(|{{\mathcal T}}(x)|-1)^{1/2} C (\| {\omega}\|_{L^1}^{1/2} \| {\omega}\|_{L^\infty}^{1/2} + \| {\omega}\|_{L^1}^{a} \| {\omega}\|_{L^\infty}^{1-a}) + \frac{|{\alpha}|}{2\pi} ( |{{\mathcal T}}(x)| -1).\end{aligned}$$ For the last inequality, we used a part of Proposition \[biot est\]. As $( |{{\mathcal T}}(x)| -1)\leq C_{\mathbf{T}^*}^{1/2} ( |{{\mathcal T}}(x)| -1)^{1/2}$, the conclusion follows from . Concerning the lower bound for the case (1)-(2), we need some conditions on the sign. 
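Both elementary ingredients of the proof of Lemma \[L1 maj\] can be checked numerically: the algebraic identity behind the rewriting of the quotient (with $z^*=z/|z|^2$ viewed as a complex inversion) and the bound $|\ln(1-r)|\leq\sqrt{r/(1-r)}$ on $[0,1)$. A sketch, with random sample points standing in for ${{\mathcal T}}(x)$, ${{\mathcal T}}(y)$:

```python
import cmath, math, random

def star(z):
    # the inversion z* = z / |z|^2
    return z / abs(z)**2

random.seed(0)
max_err = 0.0
for _ in range(100):
    # random points with modulus > 1, as for T(x), T(y) with x, y in Omega
    a = cmath.rect(1.1 + 3*random.random(), random.uniform(0, 2*math.pi))
    b = cmath.rect(1.1 + 3*random.random(), random.uniform(0, 2*math.pi))
    lhs = abs(a - b)**2 / (abs(a - star(b))**2 * abs(b)**2)
    rhs = 1 - (abs(a)**2 - 1)*(abs(b)**2 - 1) / (abs(a - star(b))**2 * abs(b)**2)
    max_err = max(max_err, abs(lhs - rhs))

# the bound |ln(1-r)| <= sqrt(r/(1-r)) on [0,1)
gap_ok = all(-math.log(1 - k/1000) <= math.sqrt((k/1000)/(1 - k/1000)) + 1e-12
             for k in range(1000))
print(max_err, gap_ok)
```

The identity error stays at machine precision and the logarithmic bound holds on the whole grid, in line with the computation above.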
\[L1 est\] If ${\omega}_0$ is non-positive and ${\gamma}_0\geq - \int {\omega}_0 $ (only in case (2)), then there exists $C_2=C_2({\mathbf{T}^*},{\omega}_0)$ such that $$L_1(t,x) \geq C_2||{{\mathcal T}}(x)|-1|, \ \forall x\in B(0,R_{\mathbf{T}^*}), \ \forall t\in [0,{\mathbf{T}^*}].$$ Again, we write the details in the case (2). We set $r_\infty:= \|{\omega}_0 \|_{L^\infty}$ and $r_1:= \|{\omega}_0 \|_{L^1}$. For $\rho >0$, we set $$V_1:= ({{\mathcal C}}+ B(0,\rho))\cap {\Omega}=\{ x \in {\Omega}; \mathrm{dist}(x,{{\mathcal C}}) < \rho\}, \ V_2:= {\Omega}\setminus V_1.$$ We fix $\rho$ such that the Lebesgue measure of $V_1$ is equal to $r_1/(2r_\infty)$. We deduce from that $$r_1 = \| {\omega}(t,\cdot)\|_{L^1(V_1)} + \| {\omega}(t,\cdot)\|_{L^1(V_2)}$$ with $\| {\omega}(t,\cdot)\|_{L^1(V_1)} \leq r_\infty r_1/(2r_\infty)=r_1/2$, which implies that $\| {\omega}(t,\cdot)\|_{L^1(V_2)} \geq r_1/2$. As the logarithm of the fraction is negative (see the proof of Lemma \[L1 maj\]), we have, under the sign condition, that: $$L_1(t,x) \geq \frac1{2\pi} \int_{V_2} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl){\omega}(y) \, dy.$$ Moreover, thanks to the computation made in the proof of Lemma \[L1 maj\], we have $$\begin{aligned} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|}\Bigl) &=& \frac12 \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}\Bigl) \\ &=& \frac12 \ln\Bigl(1- \frac{ (|{{\mathcal T}}(x)|^2-1)(|{{\mathcal T}}(y)|^2-1)}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}\Bigl)\\ &\leq & - \frac12 \frac{ (|{{\mathcal T}}(x)|^2-1)(|{{\mathcal T}}(y)|^2-1)}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2}\end{aligned}$$ because $\ln(1+x)\leq x$ for any $x>-1$. 
As $\rho >0$ and ${{\mathcal T}}$ is continuous, there exists $C_{\rho}>0 $ such that $|{{\mathcal T}}(y)| \geq 1+ C_{\rho}$, for all $y\in V_2$. Moreover, there exists also $\tilde R_{\mathbf{T}^*}>1$ such that ${{\mathcal T}}(B(0,R_{\mathbf{T}^*}))\subset B(0, \tilde R_{\mathbf{T}^*}).$ Adding the fact that ${\omega}$ is non positive, we have for all $y\in V_2\cap {\operatorname{supp\,}}{\omega}$ and $x\in B(0,R_{\mathbf{T}^*})$ $$\begin{aligned} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|}\Bigl) {\omega}(y) &\geq & \frac12 \frac{ (|{{\mathcal T}}(x)|^2-1)(|{{\mathcal T}}(y)|^2-1)}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2|{{\mathcal T}}(y)|^2} |{\omega}(y)|\\ &\geq & \frac12 \frac{ (|{{\mathcal T}}(x)|-1)(|{{\mathcal T}}(x)|+1) (|{{\mathcal T}}(y)|-1)(|{{\mathcal T}}(y)|+1)}{(|{{\mathcal T}}(x)|+ 1)^2 |{{\mathcal T}}(y)|^2} |{\omega}(y)|\\ &\geq & \frac12 \frac{ (|{{\mathcal T}}(x)|-1) C_{\rho}}{( \tilde R_{\mathbf{T}^*} + 1) \tilde R_{\mathbf{T}^*}} |{\omega}(y)|.\end{aligned}$$ Integrating this last inequality over $V_2$, we obtain that $$\begin{aligned} L_1(t,x) &\geq& \frac1{2\pi} \int_{V_2} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl){\omega}(y) \, dy \geq \frac{ (|{{\mathcal T}}(x)|-1) C_{\rho}}{ 4\pi( \tilde R_{\mathbf{T}^*} + 1) \tilde R_{\mathbf{T}^*}} \|{\omega}\|_{L^1(V_2)} \\ &\geq&\frac{ C_{\rho}}{ 8\pi( \tilde R_{\mathbf{T}^*} + 1) \tilde R_{\mathbf{T}^*}} r_1 (|{{\mathcal T}}(x)|-1),\end{aligned}$$ which ends the proof. 
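The elementary inequality $\ln(1+x)\leq x$ invoked in this lower-bound proof can be verified on a grid; this is only a numerical illustration of a standard convexity fact:

```python
import math

# ln(1+x) <= x for all x > -1, with equality only at x = 0:
# the maximum of ln(1+x) - x over a grid is attained at x = 0, where it vanishes
worst = max(math.log(1 + x) - x for x in (k/1000 for k in range(-999, 5001)))
print(worst)
```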
Multiplying the expression of $L_1$ by $-1$, we can establish the same result with the opposite sign condition: \[rem : L1 est\] If ${\omega}_0$ is non-negative and ${\gamma}_0\leq -\int {\omega}_0 $, then there exists $C_2$ such that $$-L_1(t,x) \geq C_2 ||{{\mathcal T}}(x)|-1|, \ \forall x\in B(0,R_{\mathbf{T}^*}), \ \forall t\in [0,{\mathbf{T}^*}].$$ The following is an obvious consequence of these two lemmas. \[lem : sign L1\] If ${\omega}_0$ is non-positive and ${\gamma}_0\geq -\int {\omega}_0$ (only for (2)), then we have that - $L_1(x) > 0$ for all $x\in {\Omega}$; - $L_1(x) \to 0$ if and only if $x\to {\partial}{\Omega}$. If ${\omega}_0$ is non-negative and ${\gamma}_0\leq - \int {\omega}_0$ (only for (2)), then we have that - $L_1(x) < 0$ for all $x\in {\Omega}$; - $L_1(x) \to 0$ if and only if $x\to {\partial}{\Omega}$. Indeed, $|{{\mathcal T}}(x)| \to 1$ iff $x\to {\partial}{\Omega}$. Estimates of the Liapounov --------------------------   The aim of this part is to prove that the trajectory never meets the obstacle in finite time. In other words, let $x_0 \in {\operatorname{supp\,}}{\omega}_0$ (so that $L_1(0,x_0) \neq 0$) and ${\mathbf{T}^*}>0$; we will prove that $L(t)$ stays bounded in $[0,{\mathbf{T}^*}]$. Then, we differentiate $L$: $$L'(t)= -\Bigl( {\partial}_t L_1(t,\phi (t)) + \phi'(t)\cdot {\nabla}L_1(t,\phi (t)) \Bigl) / L_1(t,\phi (t))$$ and we want to estimate the right-hand side term. As usual, we write the details for the case (2). 
On one hand, we note that $$\begin{aligned} u(t,x) \cdot {\nabla}L_1(t,x) &= &u(t,x) \cdot \Bigl[ \frac1{2\pi} \int_{{\Omega}}\Bigl( \frac{{{\mathcal T}}(x)-{{\mathcal T}}(y)}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}- \frac{{{\mathcal T}}(x)-{{\mathcal T}}(y)^*}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2} \Bigl){\omega}(y) \, dy D{{\mathcal T}}(x) \\ &&+ \frac{{\alpha}}{2\pi} \frac{{{\mathcal T}}(x)}{|{{\mathcal T}}(x)|^2} D{{\mathcal T}}(x)\Bigl] \\ &\equiv &0 \end{aligned}$$ thanks to the explicit formula of $u$ (see ). On the other hand, we use the equation[^2] verified by ${\omega}$ to have $$\begin{aligned} {\partial}_t L_1(t,x) &=& \frac1{2\pi} \int_{{\Omega}} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl){\partial}_t {\omega}(y) \, dy \\ &=&-\frac1{2\pi} \int_{{\Omega}} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl) {{\rm div}\,}(u(y){\omega}(y)) \, dy \\ &=& \frac1{2\pi} \int_{{\Omega}} {\nabla}_y \Bigl[\ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl)\Bigl]\cdot u(y){\omega}(y) \, dy .\end{aligned}$$ Now, we use the symmetry of the Green kernel (see Subsection \[sect : biot\]) $${\nabla}_y \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl) = {\nabla}_y \ln\Bigl( \frac{|{{\mathcal T}}(y)-{{\mathcal T}}(x)|}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)^*||{{\mathcal T}}(x)|} \Bigl)$$ and the explicit formula of $u(y)$ to write $$\begin{split} {\partial}_t L_1(t,x) = \frac1{2\pi} \int_{{\Omega}} \Bigl[ \Bigl( \frac{{{\mathcal T}}(y)-{{\mathcal T}}(x)}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)|^2} &- \frac {{{\mathcal T}}(y)-{{\mathcal T}}(x)^*}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)^*|^2} \Bigl) \\ &D{{\mathcal T}}(y)\frac{1}{2\pi} D{{\mathcal T}}^T(y) \Bigl( R[{\omega}](y)+ 
{\alpha}\frac{{{\mathcal T}}(y)^\perp}{|{{\mathcal T}}(y)|^2}\Bigl) \Bigl] {\omega}(y)\, dy. \end{split}$$ As ${{\mathcal T}}$ is holomorphic, $D{{\mathcal T}}$ is of the form $\begin{pmatrix} a & b \\ -b & a \end{pmatrix}$ and we can check that $D{{\mathcal T}}(y)D{{\mathcal T}}^T(y)=(a^2+b^2)Id=|\det(D{{\mathcal T}})(y)|Id$, so $$\label{dtL1}\begin{split} {\partial}_t L_1(t,x) = \frac1{(2\pi)^2} \int_{{\Omega}} \Bigl[ \Bigl( \frac{{{\mathcal T}}(y)-{{\mathcal T}}(x)}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)|^2} &- \frac {{{\mathcal T}}(y)-{{\mathcal T}}(x)^*}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)^*|^2} \Bigl) \\ & \Bigl( R[{\omega}](y)+ {\alpha}\frac{{{\mathcal T}}(y)^\perp}{|{{\mathcal T}}(y)|^2}\Bigl) \Bigl] |\det(D{{\mathcal T}})(y)| {\omega}(y)\, dy. \end{split}$$ The goal is to estimate ${\partial}_t L_1/L_1$. However, Corollary \[lem : sign L1\] states that $L_1$ goes to zero if and only if $x\to {\partial}{\Omega}$. Then it is important to show that ${\partial}_t L_1$ tends to zero as $x\to {\partial}{\Omega}$, and to prove that it goes to zero faster than $L_1$. We will need the following general lemma. \[technic\] Let $h$ be a bounded function, compactly supported in $B(0,R_h)$ for some $R_h>1$. Then, there exists $C_h=C(\| h \|_{L^\infty},R_h)$ such that $$\int_{D^c}\frac{|h(y)|}{|y-x||y-x^*|}\, dy \leq C_h \Bigl(|\ln (|x|-1)| +|x| \Bigl), \ \forall x \in D^c$$ with the notation $x^*=x/|x|^2$ and $D=B(0,1)$. 
We fix $x\in D^c$ and we denote $$\rho = |x|-1 \text{ and } \rho^*=1-|x^*|=1-\frac{1}{1+{\rho}}=\frac{{\rho}}{1+{\rho}} .$$ We compute $$\int_{D^c}\frac{|h(y)|}{|y-x||y-x^*|}\, dy = \int_{D^c\cap B(x,4{\rho})}\frac{|h(y)|}{|y-x||y-x^*|}\, dy +\int_{D^c\cap B(x,4{\rho})^c}\frac{|h(y)|}{|y-x||y-x^*|}\, dy =: I_1+I_2.$$ For $I_1$, we know that $|y-x^*| \geq |y|-|x^*| \geq {\rho}^*$, hence $$\begin{aligned} I_1 &\leq & \frac1{{\rho}^*} \int_{D^c\cap B(x,4{\rho})}\frac{|h(y)|}{|y-x|}\, dy \leq \frac{\|h\|_{L^\infty}}{{\rho}^*} \int_{B(x,4{\rho})}\frac{1}{|y-x|}\, dy\\ &\leq & \frac{(1+{\rho})\|h\|_{L^\infty}}{{\rho}}2\pi4{\rho}\end{aligned}$$ which gives that $I_1\leq C_1 |x|$. Concerning $I_2$, we note that $$|x-x^*| = {\rho}+ {\rho}^* = {\rho}+ \frac{{\rho}}{1+{\rho}}\leq 2 {\rho}\leq \frac12 |y-x|$$ for any $y\in B(x,4{\rho})^c$. Hence, $$|y-x^*| \geq |y-x| - |x-x^*| \geq \frac12 |y-x|,$$ and we have $$\begin{aligned} I_2 &\leq& \int_{D^c\cap B(x,4{\rho})^c}\frac{2|h(y)|}{|y-x|^2}\, dy\leq 2\|h\|_{L^\infty} \int_0^{2\pi} \int_{4{\rho}}^{|x|+R_h} \frac1r\, drd\theta\\ &\leq& 4\pi \|h\|_{L^\infty} \ln\frac{|x|+R_h}{4{\rho}}\end{aligned}$$ which implies that $I_2 \leq C_2 \Bigl( |\ln(|x|-1) | + \ln \frac{|x|+R_h}{4}\Bigl)$. We conclude because there exists $C_3 = C_3(R_h)$ such that $\ln \frac{|x|+R_h}{4} \leq C_3 |x|$ for any $x\in D^c$. We recall that we have fixed ${\mathbf{T}^*}>0$ and $x_0\in {\operatorname{supp\,}}{\omega}_0$. Using , we denote by $R_{\mathbf{T}^*}:= R_0+C_0 {\mathbf{T}^*}$, such that ${\operatorname{supp\,}}{\omega}(t,\cdot)\subset B(0,R_{\mathbf{T}^*})$ for all $t\in [0,{\mathbf{T}^*}]$. Finally, we estimate ${\partial}_t L_1$ without sign conditions. 
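Two elementary facts used in this subsection can be checked numerically: the matrix identity $D{{\mathcal T}}\,D{{\mathcal T}}^T=|\det(D{{\mathcal T}})|\,Id$ for a holomorphic map, and the distance identity $|x-x^*|=\rho+\rho^*\leq 2\rho$ from the proof of Lemma \[technic\]. The map $T(z)=z+1/z$ below is only a stand-in holomorphic example, not the conformal map of the text:

```python
def T(z):
    # hypothetical holomorphic map (Joukowski-type), smooth away from 0
    return z + 1/z

def jacobian(z, h=1e-6):
    # real Jacobian of (x, y) -> (Re T, Im T) by central differences
    dx = (T(z + h) - T(z - h)) / (2*h)
    dy = (T(z + 1j*h) - T(z - 1j*h)) / (2*h)
    return [[dx.real, dy.real], [dx.imag, dy.imag]]

J = jacobian(1.3 + 0.7j)
a, b = J[0][0], J[0][1]
# Cauchy-Riemann: J = [[a, b], [-b, a]], hence J J^T = (a^2+b^2) Id = |det J| Id
cr_err = max(abs(J[1][0] + b), abs(J[1][1] - a))
det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
prod_err = abs((a*a + b*b) - abs(det))

# distance identity: for |x| = 1 + rho and x* = x/|x|^2,
# |x - x*| = |x| - 1/|x| = rho + rho/(1+rho) <= 2*rho
id_err = 0.0
for k in range(1, 200):
    rho = k/100
    r = 1 + rho
    id_err = max(id_err, abs((r - 1/r) - (rho + rho/(1 + rho))))
    assert r - 1/r <= 2*rho + 1e-12
print(cr_err, prod_err, id_err)
```

All three errors are at the level of finite-difference and rounding noise, confirming the two facts used in the splitting $I_1+I_2$ and in the simplification of ${\partial}_t L_1$.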
\[dtL1 est\] There exists $C_3=C_3({\mathbf{T}^*})$ such that $$|{\partial}_t L_1(t,x) | \leq C_3||{{\mathcal T}}(x)|-1| \Bigl(1+ \Bigl|\ln||{{\mathcal T}}(x)|-1|\Bigl|\Bigl), \ \forall x\in B(0,R_{\mathbf{T}^*}), \ \forall t\in [0,{\mathbf{T}^*}].$$ Using we know that $$\Bigl| \frac{{{\mathcal T}}(y)-{{\mathcal T}}(x)}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)|^2} - \frac {{{\mathcal T}}(y)-{{\mathcal T}}(x)^*}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)^*|^2} \Bigl| = \frac {|{{\mathcal T}}(x)-{{\mathcal T}}(x)^*|}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)| |{{\mathcal T}}(y)-{{\mathcal T}}(x)^*|}.$$ Then, Proposition \[biot est\] and allow us to estimate $$|{\partial}_t L_1(t,x) | \leq C|{{\mathcal T}}(x)-{{\mathcal T}}(x)^*| \int_{{\Omega}} \frac { |{\omega}(y)|}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)| |{{\mathcal T}}(y)-{{\mathcal T}}(x)^*|} |\det(D{{\mathcal T}})(y)|\, dy.$$ On one hand, we have for all $x\in B(0,R_{\mathbf{T}^*})$ $$\begin{aligned} |{{\mathcal T}}(x)-{{\mathcal T}}(x)^*| &=& \frac{\Bigl|{{\mathcal T}}(x)|{{\mathcal T}}(x)|^2 - {{\mathcal T}}(x)\Bigl|}{|{{\mathcal T}}(x)|^2}= \frac{|{{\mathcal T}}(x)|^2 - 1}{|{{\mathcal T}}(x)|}\\ &=&\frac{(|{{\mathcal T}}(x)| - 1)(|{{\mathcal T}}(x)| + 1)}{|{{\mathcal T}}(x)|}\leq 2 (|{{\mathcal T}}(x)| - 1).\end{aligned}$$ On the other hand, we change variables ${\eta}={{\mathcal T}}(y)$ and we compute $$\int_{{\Omega}} \frac { |{\omega}(y)|}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)| |{{\mathcal T}}(y)-{{\mathcal T}}(x)^*|} |\det(D{{\mathcal T}})(y)|\, dy = \int_{D^c} \frac { |{\omega}({{\mathcal T}}^{-1}({\eta}))|}{|{\eta}-{{\mathcal T}}(x)| |{\eta}-{{\mathcal T}}(x)^*|} \, d{\eta}.$$ As $\| {\omega}\circ {{\mathcal T}}^{-1} \|_{L^\infty} = \|{\omega}_0 \|_{L^\infty}$ and as $${\operatorname{supp\,}}{\omega}\circ {{\mathcal T}}^{-1} = {{\mathcal T}}({\operatorname{supp\,}}{\omega}) \subset {{\mathcal T}}(B(0,R_{\mathbf{T}^*}))\subset B(0,\tilde R_{\mathbf{T}^*}),$$ we apply Lemma \[technic\] to establish that $$\int_{{\Omega}} 
\frac { |{\omega}(y)|}{|{{\mathcal T}}(y)-{{\mathcal T}}(x)| |{{\mathcal T}}(y)-{{\mathcal T}}(x)^*|} |\det(D{{\mathcal T}})(y)|\, dy \leq C \Bigl(|\ln (|{{\mathcal T}}(x)|-1)| + \tilde R_{\mathbf{T}^*} \Bigl), \ \forall x \in B(0,R_{\mathbf{T}^*}).$$ This finishes the proof. In the bounded case, there is a subtle difference in the previous proof. We note that $$|{{\mathcal T}}(x)-{{\mathcal T}}(x)^*| =\frac{(1- |{{\mathcal T}}(x)|)(|{{\mathcal T}}(x)| + 1)}{|{{\mathcal T}}(x)|}\leq 2 \frac{(1-|{{\mathcal T}}(x)| )}{|{{\mathcal T}}(x)|}$$ where $|{{\mathcal T}}(x)|$ can go to zero. To fix this problem, we can prove a similar result to Lemma \[technic\]: there exists $C_h=C(\| h \|_{L^\infty})$ such that $$\frac1{|x|} \int_{D}\frac{|h(y)|}{|y-x||y-x^*|}\, dy \leq C_h \Bigl(|\ln (1-|x|)| +1 \Bigl), \ \forall x \in D.$$ Indeed, we can write $|x| |y-x^*| = \Bigl|y |x| - \frac{x}{|x|} \Bigl|$ and, setting ${\rho}:= 1- |x|$, we deduce that: - for $y\in B(x,4{\rho})\cap D$, $ \Bigl|y |x| - \frac{x}{|x|} \Bigl| \geq \Bigl| \frac{x}{|x|} \Bigl| - |y| |x| \geq 1-|x|={\rho}$; - for $y\in B(x,4{\rho})^c\cap D$, $ \Bigl|y |x| - \frac{x}{|x|} \Bigl|^2 - |y-x|^2 = (1-|y|^2)(1-|x|^2) \geq 0$. Using these two inequalities, we follow exactly the proof of Lemma \[technic\], which allows us to establish Lemma \[dtL1 est\] in the bounded case. In light of Lemmas \[L1 est\] and \[dtL1 est\], we see that there is an additional logarithm, so that the bound on $\frac{{\partial}_t L_1}{L_1}$ blows up as $x\to {{\mathcal C}}$. However, the logarithm is exactly what we can estimate by the Gronwall inequality: $L'(t)= \frac{{\partial}_t L_1}{L_1} \approx \ln L_1 = L(t)$. This is the general idea for establishing the main result of this section. \[prop support\] We assume that ${\omega}_0$ is non-positive, compactly supported in ${\Omega}$ and ${\gamma}_0\geq - \int {\omega}_0$. 
Then, for any ${\mathbf{T}^*}>0$, there exists $C_{\mathbf{T}^*}$ such that $$L(t) \leq C_{\mathbf{T}^*},\ \forall x_0\in {\operatorname{supp\,}}{\omega}_0, \ \forall t\in [0,{\mathbf{T}^*}].$$ As the support of ${\omega}_0$ does not intersect ${\partial}{\Omega}$, we have by continuity of ${{\mathcal T}}$ and by Lemma \[L1 est\] that $$L(0) = - \ln L_1(0,x_0) \leq -\ln C_2(|{{\mathcal T}}(x_0)|-1)$$ is bounded uniformly in $x_0 \in {\operatorname{supp\,}}{\omega}_0$. For any $x_0 \in {\operatorname{supp\,}}{\omega}_0$, gives that $\phi(t)\in B(0,R_{{\mathbf{T}^*}})$, for all $t\in [0,{\mathbf{T}^*}]$. Therefore, the computation made at the beginning of this subsection gives $$L'(t) = -{\partial}_t L_1(t,\phi (t)) / L_1(t,\phi (t)).$$ As $L_1$ is positive, we have $$L'(t) = -{\partial}_t L_1(t,\phi (t)) / L_1(t,\phi (t)) \leq |{\partial}_t L_1(t,\phi (t))| / L_1(t,\phi (t)).$$ Lemma \[L1 est\] states that there exists $C_2$ such that $$\label{L 1} L_1(t,\phi (t)) \geq C_2 ( |{{\mathcal T}}(\phi (t))| -1).$$ Moreover, thanks to Lemma \[L1 maj\], it is easy to find $C_4$ such that $$\label{L 2} L_1(t,x) \leq C_4, \ \forall x \in B(0,R_{\mathbf{T}^*})\cap {\Omega}, \ \forall t\in [0,{\mathbf{T}^*}].$$ Finally, we proved in Lemma \[dtL1 est\] that there exists $C_3$ such that $$\label{L 3} |{\partial}_t L_1(t,\phi (t)) | \leq C_3(|{{\mathcal T}}(\phi (t))|-1)\Bigl(1+ |\ln(|{{\mathcal T}}(\phi (t))|-1)|\Bigl).$$ We can easily check that on the interval $(0,\mathrm{e}^{-1})$ the function $x\mapsto x |\ln x|$ coincides with the map $x\mapsto -x \ln x$, which is increasing. 
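Both the monotonicity of $x\mapsto -x\ln x$ on $(0,\mathrm{e}^{-1})$ and the Gronwall-type bound $L'\leq C_5+C_6 L$ that concludes the proof can be checked numerically; the constants below are arbitrary illustrative values, not those of the proof:

```python
import math

# (i) x -> -x ln x is increasing on (0, 1/e)
xs = [j/10000 for j in range(1, 3679)]          # grid inside (0, e^{-1})
vals = [-x*math.log(x) for x in xs]
monotone = all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))

# (ii) Gronwall: any L with L' <= C5 + C6*L satisfies
# L(t) <= (L(0) + C5/C6) * exp(C6*t)
C5, C6, L0 = 0.8, 1.5, 2.0
dt, L, t, ok = 1e-4, L0, 0.0, True
for _ in range(30000):                          # integrate up to t = 3
    slack = 0.3*(1 + math.sin(5*t))             # any nonnegative term
    L += dt*(C5 + C6*L - slack)                 # so that L' <= C5 + C6*L
    t += dt
    ok = ok and (L <= (L0 + C5/C6)*math.exp(C6*t))
print(monotone, ok)
```

The numerically integrated $L$ stays under the exponential envelope for the whole time interval, which is exactly the mechanism used below to bound $L(t)$ on $[0,{\mathbf{T}^*}]$.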
By and , we use the fact that $$0\leq \frac{C_2 ( |{{\mathcal T}}(\phi (t))| -1)}{\mathrm{e} C_4} \leq \frac{L_1(t,\phi (t))}{\mathrm{e} C_4} \leq \mathrm{e}^{-1}$$ to apply this remark on : $$\begin{aligned} |{\partial}_t L_1(t,\phi (t)) |& \leq& C_3(|{{\mathcal T}}(\phi (t))|-1)\Bigl(1+ | \ln \frac{\mathrm{e} C_4}{C_2}|+ |\ln\frac{C_2(|{{\mathcal T}}(\phi (t))|-1)}{\mathrm{e} C_4}|\Bigl) \\ &\leq& C_3(|{{\mathcal T}}(\phi (t))|-1)\Bigl(1+ | \ln \frac{\mathrm{e} C_4}{C_2}| \Bigl) - \frac{\mathrm{e}C_3C_4}{C_2} \frac{C_2 ( |{{\mathcal T}}(\phi (t))| -1)}{\mathrm{e} C_4} \ln\frac{C_2(|{{\mathcal T}}(\phi (t))|-1)}{\mathrm{e} C_4} \\ &\leq& \frac{C_3}{C_2}\Bigl(1+ | \ln \frac{\mathrm{e} C_4}{C_2}| \Bigl) L_1(t,\phi (t)) - \frac{\mathrm{e}C_3C_4}{C_2}\frac{L_1(t,\phi (t))}{\mathrm{e} C_4}\ln\frac{L_1(t,\phi (t))}{\mathrm{e} C_4} \\ &\leq& L_1(t,\phi (t)) (C_5 - C_6 \ln L_1(t,\phi (t))).\end{aligned}$$ As $L_1$ is positive, we finally obtain that $$L'(t) = \frac{-{\partial}_t L_1(t,\phi (t))}{L_1(t,\phi (t))} \leq \frac{|{\partial}_t L_1(t,\phi (t))|}{ L_1(t,\phi (t))} \leq C_5 - C_6 \ln L_1(t,\phi (t)) = C_5 + C_6 L(t).$$ The constants $C_5$ and $C_6$ are uniform for $x_0 \in {\operatorname{supp\,}}{\omega}_0$ and $t\in [0,{\mathbf{T}^*}]$. Gronwall’s lemma gives us that $$L(t) \leq (L(0) + \frac{C_5}{C_6}) \mathrm{e}^{C_6 {\mathbf{T}^*}},\ \forall x_0\in {\operatorname{supp\,}}{\omega}_0, \ \forall t\in [0,{\mathbf{T}^*}].$$ In view of Corollary \[lem : sign L1\], a consequence of this proposition is that the support of ${\omega}(t,\cdot)$ never meets the boundary. As before, we have the same proposition with the opposite sign conditions: \[rem sign\] We assume that the support of ${\omega}_0$ is outside a neighborhood of ${\partial}{\Omega}$, that ${\omega}_0$ is non-negative and ${\gamma}_0\leq - \int {\omega}_0$. 
Then, for any ${\mathbf{T}^*}>0$, there exists $C_{\mathbf{T}^*}$ such that $$L(t) \leq C_{\mathbf{T}^*},\ \forall x_0\in {\operatorname{supp\,}}{\omega}_0, \ \forall t\in [0,{\mathbf{T}^*}].$$ Indeed, replacing everywhere $L_1$ by $-L_1$, the last inequality in the proof would be $$L'(t) = \frac{{\partial}_t L_1(t,\phi (t))}{-L_1(t,\phi (t))} \leq \frac{|{\partial}_t L_1(t,\phi (t))|}{- L_1(t,\phi (t))} \leq C_5 - C_6 \ln(-L_1(t,\phi (t))) = C_5 + C_6 L(t),$$ which allows us to conclude in the same way. Vorticity far from the boundary {#sect : 4} =============================== The role of this section is to apply rigorously the idea of the previous section. In Section \[sect : 3\], we assumed that the flows exist and are regular enough to compute derivatives. However, the solutions considered in Theorems \[main 1\] and \[main 2\] are weak, and such a property is not established in the existence proofs (see [@lac_euler; @GV_lac]). Without considering trajectories, we have proved, thanks to renormalized solutions, that the weak solutions verify the classical estimates: - conservation of the total mass of the vorticity ; - conservation of the $L^p$ norm of the vorticity for $p\in [1,\infty]$ ; - conservation of the circulation (only for exterior domain); - compact support for the vorticity: Proposition \[compact\_vorticity\] (only for exterior domain). We can easily prove that the conservation of the total mass and of the $L^1$ norm of the vorticity implies that $${\omega}_0 \geq 0\ \text{ a.e. in }\ {\Omega}\Longrightarrow {\omega}(t,x) \geq 0,\ \forall t\geq 0, \text{ a.e. in }\ {\Omega}.$$ Thinking of the Liapounov function used in Section \[sect : 3\], we can construct a good test function in order to use the renormalization theory. We now establish the key result for proving the uniqueness. \[constant\_vorticity\_2\] Let ${\omega}$ be a global weak solution of such that ${\omega}_0$ is compactly supported in ${\Omega}$. 
If ${\omega}_0$ is non-positive and ${\gamma}_0\geq -\int {\omega}_0$ (only for exterior domain), then, for any ${\mathbf{T}^*}>0$, there exists a neighborhood $U_{{\mathbf{T}^*}}$ of ${\partial}{{\Omega}}$ such that $$\begin{aligned} {\omega}(t)\equiv 0 \qquad \textrm{on \; \;} U_{{\mathbf{T}^*}},\qquad \forall t\in [0,{\mathbf{T}^*}].\end{aligned}$$ According to Proposition \[compact\_vorticity\], we have $$\label{support 2} {\operatorname{supp\,}}{\omega}(t) \subset B\left(0,R_0+ C_0 t\right), \qquad \forall t\geq 0.$$ We set $R_{\mathbf{T}^*}:= R_0+C_0 {\mathbf{T}^*}$. Thanks to Lemma \[L1 maj\], it is easy to find $C_4$ such that $$\label{L 22} L_1(t,x) \leq C_4, \ \forall x \in B(0,R_{\mathbf{T}^*})\cap {\Omega}, \ \forall t\in [0,{\mathbf{T}^*}].$$ We also deduce from the conservation of the vorticity sign that Corollary \[lem : sign L1\] holds true. We aim to apply with the choice ${\beta}(t)=t^2$ and we set $${\Phi}(t,x)=\chi_0 \left( \frac{-\ln L_1(t,x) + \ln C_4}{R(t)}\right),$$ where $\chi_0:\mathbb{R}\to{{\mathbb R}}^+$ is a smooth function, identically zero for $|x|\leq 1/2$, identically one for $|x|\geq 1$, and increasing on ${{\mathbb R}}^+$, $L_1$ is defined in and $R(t)$ is an increasing continuous function to be determined later on. As $L_1(t,x) \leq C_4$, we have that $-\ln L_1(t,x) + \ln C_4$ is positive for all $x \in B(0,R_{\mathbf{T}^*})\cap {\Omega}$ and all $t\in [0,{\mathbf{T}^*}]$. 
On one hand, Lemma \[L1 est\] states that there exists $C_2$ such that $$\label{L 12} L_1(t,x) \geq C_2 ( |{{\mathcal T}}(x)| -1), \ \forall x \in B(0,R_{\mathbf{T}^*})\cap {\Omega}, \ \forall t\in [0,{\mathbf{T}^*}].$$ Finally, we proved in Lemma \[dtL1 est\] that there exists $C_3$ such that $$\label{L 32} |{\partial}_t L_1(t,x) | \leq C_3(|{{\mathcal T}}(x)|-1)\Bigl(1+ |\ln(|{{\mathcal T}}(x)|-1)|\Bigl), \ \forall x \in B(0,R_{\mathbf{T}^*})\cap {\Omega}, \ \forall t\in [0,{\mathbf{T}^*}].$$ Then, using the fact that $x\mapsto -x \ln x$ is increasing in $[0,\mathrm{e}^{-1}]$ (see the proof of Proposition \[prop support\]) we have that $$\label{L bis} |{\partial}_t L_1(t,x) | \leq L_1(t,x) (C_5-C_6 \ln \frac{L_1(t,x)}{C_4}), \ \forall x \in B(0,R_{\mathbf{T}^*})\cap {\Omega}, \ \forall t\in [0,{\mathbf{T}^*}].$$ On the other hand, we have $$\nabla_x L_1(t,x)=-u^{\bot}(t,x),$$ therefore $$u\cdot \nabla {\Phi}=u\cdot u^{\bot}\frac{\chi_0'}{R L_1}\equiv 0.$$ Besides, $${\partial_t}{\Phi}(t,x)=\Big(\frac{R'(t)}{R^2(t)} \ln \frac{L_1(t,x)}{C_4}-\frac{1}{R} \frac{{\partial}_t L_1(t,x)}{L_1(t,x)}\Big)\chi_0'\left(\frac{-\ln L_1(t,x) + \ln C_4}{R(t)}\right).$$ In view of , this yields for any[^3] $T\in [0,{\mathbf{T}^*}]$ $$\begin{split} \int_{{{\mathbb R}^2}} &{\Phi}(T,x) {\omega}^2(T,x)\,dx -\int_{{{\mathbb R}^2}} {\Phi}(0,x) {\omega}^2_0(x)\,dx\\& =\int_0^T \int_{{{\mathbb R}^2}} {\omega}^2(t,x) \frac{\chi_0'\left(\frac{-\ln L_1(t,x) + \ln C_4}{R}\right)}{R}\left(\frac{R'}{R} \ln \frac{L_1(t,x)}{C_4}-\frac{{\partial}_t L_1(t,x)}{L_1(t,x)}\right)\, dx\, dt. 
\end{split}$$ Since $-\ln \frac{L_1(t,x)}{C_4} \geq 0$, the term $\chi_0'(\frac{-\ln L_1(t,x) + \ln C_4}{R})$ is nonnegative, and it is nonzero only when $\frac{1}{2}\leq \frac{-\ln (L_1(t,x)/C_4)}{R}\leq 1$, so we obtain $$\begin{split} \int_{{{\mathbb R}^2}} {\Phi}(T,x) {\omega}^2(T,x)\,dx -\int_{{{\mathbb R}^2}} {\Phi}(0,x) {\omega}^2_0(x)\,dx& \leq \int_0^T \int_{{{\mathbb R}^2}} {\omega}^2 \frac{\chi_0'}{R}\left(-\frac{R'}{2} +C_5 + C_6 R \right)\, dx\, dt. \end{split}$$ In the last inequality, we have used , which is allowed because ${\operatorname{supp\,}}{\omega}(t)\subset B(0,R_{\mathbf{T}^*})\cap {\Omega}$ for all $t\in [0,{\mathbf{T}^*}]$. We now choose $$R(t)={\lambda}_0 \mathrm{e}^{2C_6 t} -\frac{C_5}{C_6},$$ with ${\lambda}_0$ to be determined later on, so that $$\int_{{{\mathbb R}^2}} {\Phi}(T,x) {\omega}^2(T,x)\,dx \leq \int_{{{\mathbb R}^2}} {\Phi}(0,x) {\omega}^2_0(x)\,dx.$$ Since the support of ${\omega}_0$ does not intersect some neighborhood of ${{\mathcal C}}$, the continuity of ${{\mathcal T}}$ implies that there exists ${\mu}_0>0$ such that ${{\mathcal T}}({\operatorname{supp\,}}{\omega}_0)\subset B(0,\mu_0+1)^c$. Then, $$0\leq -\ln L_1(0,x) + \ln C_4 \leq -\ln \Bigl(C_2 ( |{{\mathcal T}}(x)| -1)\Bigr) + \ln C_4 \leq -\ln (C_2{\mu}_0) + \ln C_4$$ for all $x$ in the support of ${\omega}_0$. We finally choose ${\lambda}_0$ so that $$0<\frac{-\ln (C_2{\mu}_0) + \ln C_4}{{\lambda}_0 -\frac{C_5}{C_6}} \leq \frac{1}{2}.$$ For this choice, we have $${\Phi}(0,x){\omega}_0^2(x)=\chi_0\left(\frac{-\ln L_1(0,x) + \ln C_4}{{\lambda}_0 -\frac{C_5}{C_6}}\right){\omega}_0^2(x)\equiv 0.$$ We deduce that for all $T\in [0,{\mathbf{T}^*}]$, ${\Phi}(T,x){\omega}^2(T,x)\equiv 0$.
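For the reader's convenience, one can check directly that this choice of $R$ cancels the bracket $-\frac{R'}{2}+C_5+C_6R$ appearing in the estimate above (a side computation, not needed elsewhere):

```latex
% With R(t) = \lambda_0 e^{2 C_6 t} - C_5/C_6 we have R'(t) = 2 C_6 \lambda_0 e^{2 C_6 t}, hence
\[
-\frac{R'(t)}{2} + C_5 + C_6 R(t)
  = -C_6{\lambda}_0 \mathrm{e}^{2C_6 t} + C_5
    + C_6{\lambda}_0 \mathrm{e}^{2C_6 t} - C_5 = 0,
\]
% so the right-hand side of the integral inequality vanishes, which gives the
% monotonicity of t \mapsto \int \Phi\,\omega^2\,dx claimed above.
```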
Thanks to Lemma \[L1 maj\], we know that there exists $C_1$ such that $$L_1(T,x) \leq C_1(|{{\mathcal T}}(x)|-1)^{1/2}, \ \forall x\in B(0,R_{\mathbf{T}^*}), \ \forall T\in [0,{\mathbf{T}^*}].$$ Therefore, for any $x\in {{\mathcal T}}^{-1}\Bigl(B(0,1+\mathrm{e}^{-\frac2{C_1} (R({\mathbf{T}^*})-\ln C_4)})\setminus B(0,1)\Bigr)$ and any $T\in[0,{\mathbf{T}^*}]$, we have that $$\begin{aligned} |{{\mathcal T}}(x)|&\leq& 1+\mathrm{e}^{-\frac2{C_1} (R({\mathbf{T}^*})-\ln C_4)}\\ \ln (|{{\mathcal T}}(x)| -1) &\leq & -\frac2{C_1} (R({\mathbf{T}^*})-\ln C_4)\\ -\frac{C_1}2\ln (|{{\mathcal T}}(x)| -1) &\geq & (R({\mathbf{T}^*})-\ln C_4)\end{aligned}$$ which implies that $$\label{ineq1} \frac{-\frac{C_1}2\ln (|{{\mathcal T}}(x)| -1) + \ln C_4}{R({\mathbf{T}^*})}\geq 1.$$ Moreover, for any $x\in B(0,R_{\mathbf{T}^*})$ and $T\in [0,{\mathbf{T}^*}]$ we have that $$\begin{aligned} \ln L_1(T,x)&\leq & \frac{C_1}2 \ln (|{{\mathcal T}}(x)| -1)\\ - \ln L_1(T,x)+\ln C_4 &\geq & -\frac{C_1}2 \ln (|{{\mathcal T}}(x)| -1)+\ln C_4\end{aligned}$$ which gives (using that $R$ is an increasing function and that $- \ln L_1(T,x)+\ln C_4\geq 0$): $$\frac{- \ln L_1(T,x)+\ln C_4}{R(T)} \geq \frac{- \ln L_1(T,x)+\ln C_4}{R({\mathbf{T}^*})} \geq \frac{-\frac{C_1}2\ln (|{{\mathcal T}}(x)| -1) + \ln C_4}{R({\mathbf{T}^*})}.$$ Putting together the last inequality and , ${\Phi}(T,x){\omega}^2(T,x)\equiv 0$ for any $T\in [0,{\mathbf{T}^*}]$ implies that $${\omega}(T,x)\equiv 0,\ \forall x\in {{\mathcal T}}^{-1}\Bigl(B(0,1+\mathrm{e}^{-\frac2{C_1} (R({\mathbf{T}^*})-\ln C_4)})\setminus B(0,1)\Bigr), \ \forall T\in [0,{\mathbf{T}^*}]$$ and the conclusion follows. \[rem : sign\] Of course, as in Remarks \[rem : L1 est\] and \[rem sign\], the previous proposition holds true for the opposite sign condition: ${\omega}_0$ non-negative and ${\gamma}_0 \leq -\int {\omega}_0$. Actually, we can prove Propositions \[compact\_vorticity\] and \[constant\_vorticity\_2\] without the renormalized solutions.
Indeed, since we proved in Remark \[remark : conserv\] that ${\omega}$ keeps a definite sign (thanks to the renormalization theory), we can use ${\omega}$ instead of ${\omega}^2$ in the proofs. In this case, we just need that ${\omega}$ is a weak solution in the sense of distributions. However, we have presented here the proofs with ${\beta}({\omega})={\omega}^2$ in order to extend the theorems in the case where ${\omega}_0$ is constant near the boundary (see Section \[sect : 6\]). Uniqueness of Eulerian solutions {#sect : 5} ================================ Velocity formulation --------------------   In order to follow the proof of Yudovich, we give a velocity formulation[^4] of the extension $\bar u$. We begin by introducing $$v(x):=\int_{{{\mathbb R}}^2} K_{{{\mathbb R}}^{2}}(x-y) \bar{\omega}(y)dy$$ with $K_{{{\mathbb R}}^2}(x)=\frac{1}{2\pi}\frac{x^\perp}{|x|^2}$, the solution in the full plane of $${{\rm div}\,}v =0 \text{ on } {{\mathbb R}}^2, \quad {{\rm curl}\,}v =\bar{\omega}\text{ on } {{\mathbb R}}^2, \quad \lim_{|x|\to\infty}|v|=0.$$ This velocity is bounded, and we denote the perturbation by $w=\bar u-v$, which belongs to $L^\infty_{{\operatorname{{loc}}}}({{\mathbb R}}^+;L^p_{{\operatorname{{loc}}}}({{\mathbb R}}^2))$ for $p<4$, and which verifies $${{\rm div}\,}w =0 \text{ on } {{\mathbb R}}^2, \quad {{\rm curl}\,}w =g_{{\omega},{\gamma}}(s) {\delta}_{{\partial}{\Omega}} \text{ on } {{\mathbb R}}^2, \quad \lim_{|x|\to\infty}|w|=0.$$ We infer that $v$ verifies the following equation: $$\label{vit_equa} \begin{cases} v_t+v\cdot {\nabla}v+ v\cdot {\nabla}w + w\cdot {\nabla}v - v(s)^\perp\tilde g_{v,{\gamma}}(s)\cdot {\delta}_{{\partial}{\Omega}}=-{\nabla}p, & \text{ in }{{\mathbb R}}^2\times(0,\infty) \\ {{\rm div}\,}v=0, & \text{ in }{{\mathbb R}}^2\times(0,\infty) \\ w(x)=\frac{1}{2\pi} \oint_{{\partial}{\Omega}} \frac{(x-s)^\perp}{|x-s|^2}\tilde g_{v,{\gamma}}(s) { ds}, & \text{ in }{{\mathbb R}}^2\times(0,\infty) \\ v(x,0)=K_{{{\mathbb R}}^2}[\bar {\omega}_0], & \text{ in }{{\mathbb R}}^2. \end{cases}$$ with $\tilde g_{v,{\gamma}}:=g_{{{\rm curl}\,}v,{\gamma}}$ (see ). In order to prove the equivalence of (\[tour\_equa\]) and (\[vit\_equa\]) it is sufficient to show that $$\label{equiv} {{\rm curl}\,}[v \cdot {\nabla}w + w \cdot {\nabla}v - v(s)^\perp\tilde g_{v,{\gamma}}(s)\cdot {\delta}_{{\partial}{\Omega}}]={{\rm div}\,}(\bar {\omega}w)$$ for all divergence free fields $v\in W^{1,p}_{{\operatorname{{loc}}}}$, with some $p>2$. Indeed, if (\[equiv\]) holds, then we get for $\bar {\omega}={{\rm curl}\,}v$ $$\begin{aligned} 0 &=& -{{\rm curl}\,}{\nabla}p={{\rm curl}\,}[v_t+ v \cdot {\nabla}v + v \cdot {\nabla}w + w \cdot {\nabla}v - v(s)^\perp\tilde g_{v,{\gamma}}(s)\cdot {\delta}_{{\partial}{\Omega}}] \\ &=& {\partial}_t \bar {\omega}+ v \cdot {\nabla}\bar {\omega}+ w \cdot {\nabla}\bar {\omega}= {\partial}_t \bar {\omega}+ \bar u \cdot {\nabla}\bar {\omega}=0\end{aligned}$$ so relation (\[tour\_equa\]) holds true. Conversely, if (\[tour\_equa\]) holds, then we deduce that the left-hand side of (\[vit\_equa\]) has zero curl, so it must be a gradient. We now prove (\[equiv\]). As $ W^{1,p}_{{\operatorname{{loc}}}} \subset \mathcal{C}^0$, $v(s)$ is well defined. Next, it suffices to prove the equality for smooth $v$, since we can pass to the limit on a subsequence of smooth approximations of $v$ which converges strongly in $W^{1,p}_{{\operatorname{{loc}}}}$ and $\mathcal{C}^0$. Now, it is straightforward to check that, for a $2\times 2$ matrix $A$ with distribution coefficients, we have $${{\rm curl}\,}{{\rm div}\,}A ={{\rm div}\,}\begin{pmatrix} {{\rm curl}\,}C_1 \\ {{\rm curl}\,}C_2 \end{pmatrix}$$ where $C_i$ denotes the $i$-th column of $A$.
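For completeness, this matrix identity can be verified componentwise; here we use the conventions $({{\rm div}\,}A)_i={\partial}_j A_{ij}$ and ${{\rm curl}\,}u={\partial}_1 u_2-{\partial}_2 u_1$, so that $C_1=(A_{11},A_{21})^T$ and $C_2=(A_{12},A_{22})^T$:

```latex
\[
{{\rm curl}\,}{{\rm div}\,}A
  = {\partial}_1({\partial}_1 A_{21}+{\partial}_2 A_{22})
   -{\partial}_2({\partial}_1 A_{11}+{\partial}_2 A_{12}),
\]
\[
{{\rm div}\,}\begin{pmatrix} {{\rm curl}\,}C_1 \\ {{\rm curl}\,}C_2 \end{pmatrix}
  = {\partial}_1({\partial}_1 A_{21}-{\partial}_2 A_{11})
   +{\partial}_2({\partial}_1 A_{22}-{\partial}_2 A_{12}),
\]
```

and the two right-hand sides coincide after commuting the partial derivatives.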
For smooth $v$, we deduce $$\begin{aligned} {{\rm curl}\,}[v \cdot {\nabla}w + w \cdot {\nabla}v] &=& {{\rm curl}\,}{{\rm div}\,}(v\otimes w+w \otimes v)\\ &=& {{\rm div}\,}\begin{pmatrix} {{\rm curl}\,}(vw_1)+{{\rm curl}\,}(wv_1) \\ {{\rm curl}\,}(vw_2)+{{\rm curl}\,}(wv_2) \end{pmatrix} \\ &=& {{\rm div}\,}(w\ {{\rm curl}\,}v+v \cdot {\nabla}^\perp w+ v\ {{\rm curl}\,}w+w \cdot {\nabla}^\perp v).\end{aligned}$$ It is a simple computation to check that $${{\rm div}\,}(v \cdot {\nabla}^\perp w+w \cdot {\nabla}^\perp v) = v \cdot {\nabla}^\perp {{\rm div}\,}w+ w \cdot {\nabla}^\perp {{\rm div}\,}v + {{\rm curl}\,}v\ {{\rm div}\,}w+ {{\rm curl}\,}w\ {{\rm div}\,}v.$$ Taking into account that the fields are divergence free, we can finish by writing $${{\rm curl}\,}[v \cdot {\nabla}w + w \cdot {\nabla}v] = {{\rm div}\,}(w\ {{\rm curl}\,}v +v \tilde g_{v,{\gamma}}(s)\cdot {\delta}_{{\partial}{\Omega}}) = {{\rm div}\,}(w\ {{\rm curl}\,}v) + {{\rm curl}\,}[v(s)^\perp \tilde g_{v,{\gamma}}(s)\cdot {\delta}_{{\partial}{\Omega}}].$$ This proves (\[equiv\]). Proof of Theorems \[main 1\]-\[main 2\] ---------------------------------------   The goal is to adapt the proof of Yudovich: let $u_1$ and $u_2$ be two weak solutions of (Theorem \[theorem1\]) with the same initial data $u_0$ verifying -. We define as above $v_1$, $w_1$ (resp. $v_2$, $w_2$) associated to ${\omega}_1:= {{\rm curl}\,}u_1$ (resp. ${\omega}_2:= {{\rm curl}\,}u_2$) and ${\gamma}_0$ (see and ).
We denote $$\tilde {\omega}:= \bar {\omega}_1 - \bar {\omega}_2$$ where the bar means that we extend by zero outside ${\Omega}$ and $${\tilde{v}}:= v_1-v_2,$$ which verifies $$\label{diff_velocity} \begin{split} {\partial}_t \tilde v + \tilde v\cdot {\nabla}v_1+v_2\cdot {\nabla}\tilde v + {{\rm div}\,}(\tilde v \otimes w_1+& v_2\otimes \tilde w+w_1\otimes \tilde v+\tilde w\otimes v_2)\\ &- (v_1(s)^\perp \tilde g_{\tilde v,0}(s) - \tilde v(s)^\perp \tilde g_{v_2,{\gamma}_0}(s) )\cdot \delta_{{\partial}{\Omega}} =-{\nabla}\tilde p. \end{split}$$ Next, we will multiply by $\tilde v$ and integrate. The difficulty compared with Yudovich's original proof is that we have terms such as $\int_{{{\mathbb R}}^2} |w_1| |\tilde v| |{\nabla}\tilde v|$ with $w_1$ blowing up near the corners. The general idea is to split such an integral into two parts: one over $U$, a small neighborhood of the boundary where the vorticity vanishes (see Proposition \[constant\_vorticity\_2\]), and one over ${{\mathbb R}}^2 \setminus U$, where the velocity $w_1$ is regular. Far from the boundary, we follow what Yudovich did, and near the boundary we compute $$\int_U |w_1| |\tilde v| |{\nabla}\tilde v| \leq \| w_1\|_{L^1(U)} \| \tilde v \|_{L^\infty(U)} \| {\nabla}\tilde v \|_{L^\infty(U)}.$$ Indeed, $w_1$ is integrable near the boundary, and since $\tilde v$ is harmonic in $U$ (${{\rm div}\,}\tilde v = {{\rm curl}\,}\tilde v =0$), we have $$\int_U |w_1| |\tilde v| |{\nabla}\tilde v| \leq C \|\tilde v \|_{L^2(U)}^2$$ which will allow us to conclude by Gronwall's lemma. We see here why Proposition \[constant\_vorticity\_2\] is the main key of the uniqueness proof. This idea was used in [@lac_miot] in order to prove the uniqueness of the vortex-wave system, and we follow the same plan.
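The way harmonicity is exploited can be sketched as follows (a formal version of Lemma \[harm\] below): if $\tilde v$ is harmonic on $U$ and $O$ is an open set such that $B(x,d)\subset U$ for every $x\in O$ and some $d>0$, then the mean-value property and the Cauchy-Schwarz inequality give

```latex
\[
|\tilde v(x)| = \Bigl|\frac{1}{\pi d^2}\int_{B(x,d)} \tilde v(y)\,dy\Bigr|
  \leq \frac{1}{\sqrt{\pi}\,d}\,\|\tilde v\|_{L^2({{\mathbb R}}^2)},
  \qquad \forall x\in O,
\]
% each partial derivative of a harmonic function is harmonic, so the same
% argument on a slightly smaller ball also bounds |\nabla\tilde v| on O, whence
\[
\int_{O} |w_1|\,|\tilde v|\,|{\nabla}\tilde v|\,dx
  \leq \|w_1\|_{L^1(O)}\,\|\tilde v\|_{L^\infty(O)}\,\|{\nabla}\tilde v\|_{L^\infty(O)}
  \leq C\,\|\tilde v\|_{L^2}^2 .
\]
```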
We denote by $W^{1,4}_{\sigma}({{\mathbb R}^2})$ the set of functions belonging to $W^{1,4}({{\mathbb R}^2})$ and which are divergence-free in the sense of distributions, and by $W^{-1,4/3}_{\sigma}({{\mathbb R}^2})$ its dual space. First, we prove that we can multiply by $\tilde v$ and integrate. As a consequence of and , we obtain the following properties for ${\tilde{v}}$. \[prop : cont-velocity\] Let $u_0$ verify , and let $u_1,u_2$ be two weak solutions of with initial condition $u_0$. Let ${\tilde{v}}=v_1-v_2$. Then we have $${\tilde{v}}\in L_{{\operatorname{{loc}}}}^2\left({{\mathbb R}}^+,W^{1,4}_{\sigma}({{\mathbb R}^2})\right),\quad {\partial}_t {\tilde{v}}\in L_{{\operatorname{{loc}}}}^2\left({{\mathbb R}}^+,W^{-1,\frac{4}{3}}_{\sigma}({{\mathbb R}^2})\right).$$ In addition, we have ${\tilde{v}}\in C\left({{\mathbb R}}^+, L^2({{\mathbb R}^2})\right)$ and $$\|\tilde v(T)\|_{L^2({{\mathbb R}^2})}^2=2\int_0^T \langle {\partial}_t \tilde v,\tilde v \rangle_{W^{-1,4/3}_{\sigma},W^{1,4}_{\sigma}}\,ds,\qquad \forall T\in {{\mathbb R}}^+.$$ The proof follows easily from the estimates established in Section \[sect : 2\]. The reader can find the details in Section \[sect : technical\]. Now, we take advantage of the fact that ${\omega}_i$ is equal to zero near ${\partial}{\Omega}$ (Proposition \[constant\_vorticity\_2\]) to derive harmonic regularity estimates on $\tilde v(t)$. \[harm\] Let $\mathbf{T}^*>0$. We assume that ${\omega}_0$ is compactly supported in ${\Omega}$ and satisfies the sign conditions of Proposition \[constant\_vorticity\_2\] (or of Remark \[rem : sign\]). Then, there exists a neighborhood $U_{\mathbf{T}^*}$ of ${\partial}{{\Omega}}$ such that for all $t\leq {{{\mathbf T}}^*}$, $\tilde v(t,\cdot)$ is harmonic on $U_{{{\mathbf T}}^*}$.
In particular, for $O_{{{\mathbf T}}^*}$ an open set such that ${\partial}{\Omega}\Subset O_{{{\mathbf T}}^*} \Subset U_{{{\mathbf T}}^*}$, we have the following estimates: - $\|\tilde v(t,\cdot)\|_{L^\infty(O_{{{\mathbf T}}^*})}\leq C \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)}$, - $ \|{\nabla}\tilde v(t,\cdot)\|_{L^\infty(O_{{{\mathbf T}}^*})}\leq C \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)}$, where $C$ only depends on $O_{{{\mathbf T}}^*}$. The proof is a direct consequence of the mean-value formula (see e.g. the proof of Lemma 3.9 in [@lac_miot]). In order to prepare the Gronwall estimate, we establish the following estimates on $w_1-w_2$. \[est w\] Let $\mathbf{T}^*>0$ and ${\partial}{\Omega}\Subset O_{{{\mathbf T}}^*} \Subset U_{{{\mathbf T}}^*}$ as in Lemma \[harm\]. Then $\tilde w:= w_1-w_2$ verifies the following estimates for any $t\in[0,{{{\mathbf T}}^*}]$: - $\|\tilde w(t,\cdot)\|_{L^2({{\mathbb R}}^2)} \leq 2 \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)}$, - $ \|\tilde w(t,\cdot)\|_{L^\infty(O_{{{\mathbf T}}^*}^c)}\leq C \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)}$, - $ \|{\nabla}\tilde w(t,\cdot)\|_{L^2(O_{{{\mathbf T}}^*}^c)}\leq C \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)}$, where $C$ only depends on $O_{{{\mathbf T}}^*}$. We fix $t\in [0,{{{\mathbf T}}^*}]$ and we denote $\tilde u:= \bar u_1-\bar u_2$.
From the explicit formula and the conservation law, we have that $$\left\lbrace\begin{aligned} {{\rm div}\,}\tilde u &=0 &\text{ on } {\Omega}, \\ {{\rm curl}\,}\tilde u &= \tilde {\omega}&\text{ on } {\Omega}, \\ \tilde u\cdot \hat n &=0 &\text{ on } {\partial}{\Omega}, \\ \int_{{\partial}{\Omega}} \tilde u\cdot \hat {\tau}&=0 &\text{ (only if ${\Omega}$ is an exterior domain)},\\ \lim_{|x|\to\infty}|\tilde u|&=0 &\text{ (only if ${\Omega}$ is an exterior domain)}, \end{aligned}\right.$$ and $$\left\lbrace\begin{aligned} {{\rm div}\,}\tilde v &=0 &\text{ on } {\Omega}, \\ {{\rm curl}\,}\tilde v &= \tilde{\omega}&\text{ on } {\Omega}, \\ \int_{{\partial}{\Omega}} \tilde v\cdot \hat {\tau}&=0 &\text{ (only if ${\Omega}$ is an exterior domain)},\\ \lim_{|x|\to\infty}|\tilde v|&=0 &\text{ (only if ${\Omega}$ is an exterior domain)}. \end{aligned}\right.$$ Indeed, in the case of an exterior domain, $\tilde {\omega}\equiv 0$ on ${{\mathcal C}}$, which implies that the circulation of $\tilde v$ around ${{\mathcal C}}$ is equal to zero. Therefore, we have the following. $\tilde u$ is the orthogonal projection of $\tilde v$ on the set of square-integrable vector fields on ${\Omega}$ which are divergence free and tangent to the boundary. Therefore we have: $$\| \tilde u(t,\cdot) \|_{L^2({\Omega})} \leq \| \tilde v(t,\cdot) \|_{L^2({\Omega})} .$$ This lemma is a classical property of the Leray projector in arbitrary domains (see [@Galdi Theorem 1.1 in Chapter III.1.]).
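At a formal level, the $L^2$ bound of this lemma reflects the orthogonality underlying the Leray decomposition: writing $\tilde v=\tilde u+{\nabla}q$ on ${\Omega}$, with $\tilde u$ divergence free and tangent to the boundary, and integrating by parts freely (the boundary regularity issues are precisely what the quoted result from [@Galdi] handles rigorously):

```latex
\[
\int_{{\Omega}} \tilde u\cdot{\nabla}q\,dx
  = -\int_{{\Omega}} ({{\rm div}\,}\tilde u)\,q\,dx
    +\int_{{\partial}{\Omega}} (\tilde u\cdot \hat n)\,q\,ds
  = 0,
\]
\[
\text{hence}\qquad
\|\tilde v\|_{L^2({\Omega})}^2
  = \|\tilde u\|_{L^2({\Omega})}^2 + \|{\nabla}q\|_{L^2({\Omega})}^2
  \geq \|\tilde u\|_{L^2({\Omega})}^2 .
\]
```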
Then the first point is a direct consequence of this lemma: $$\|\tilde w(t,\cdot)\|_{L^2({{\mathbb R}}^2)} \leq \|\tilde u(t,\cdot)\|_{L^2({\Omega})} + \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)} \leq \|\tilde v(t,\cdot)\|_{L^2({\Omega})} + \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)} \leq 2 \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)}.$$ The second point follows exactly as in Lemma \[harm\]: $\tilde w$ is harmonic in ${\Omega}$, hence there exists $C$ depending on $O_{{{\mathbf T}}^*}$ such that $$\|\tilde w(t,\cdot)\|_{L^\infty(O_{{{\mathbf T}}^*}^c)} \leq C \|\tilde w(t,\cdot)\|_{L^2({\Omega})} \leq 2C \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)}.$$ Another consequence of the mean-value theorem is that $$\|{\nabla}\tilde w(t,\cdot)\|_{L^2(O_{{{\mathbf T}}^*}^c)} \leq C \|\tilde w(t,\cdot)\|_{L^2({\Omega})} \leq 2C \|\tilde v(t,\cdot)\|_{L^2({{\mathbb R}}^2)}.$$ Indeed, there exists $R_1$ such that $\operatorname{dist}({\partial}{\Omega},{\partial}O_{{{\mathbf T}}^*})>R_1$, so that $$\begin{aligned} \|{\nabla}\tilde w(t,x)\|_{L^2(O_{{{\mathbf T}}^*}^c)} &=& \Bigl\| \frac{1}{\pi R_1^2} \int_{B(x,R_1)} {\nabla}\tilde w(t,y)\, dy \Bigl\|_{L^2(O_{{{\mathbf T}}^*}^c)}=\Bigl\| \frac{1}{\pi R_1^2} \int_0^{2\pi} \tilde w(t,x+R_1e^{i{\theta}}) {\nu}\, R_1 d{\theta}\Bigl\|_{L^2(O_{{{\mathbf T}}^*}^c)}\\ &\leq& \int_0^{2\pi}\frac{1}{\pi R_1} \| \tilde w(t,x+R_1e^{i{\theta}}) \|_{L^2(O_{{{\mathbf T}}^*}^c)} \, d{\theta}\leq \frac{2 \|\tilde w(t,\cdot)\|_{L^2({\Omega})}}{R_1}.\end{aligned}$$ We remark that the result from Galdi's book does not require regularity of ${\partial}{\Omega}$ when we consider the $L^2$ norm (thanks to the Hilbert structure). In contrast, for $p\neq 2$, he states that the Leray projector is continuous from $L^p$ to $L^p$ if the boundary ${\partial}{\Omega}$ is $C^2$.
Indeed, in our case we see that $\tilde v$ belongs to $L^p$ for any $p>1$, whereas $\tilde u = \mathbb{P} \tilde v$ does not belong to $L^p({\Omega})$ for some $p>4$ (if there is an angle greater than $\pi$, see Remark \[DT loc\]). We can now adapt Yudovich's proof, as is done in [@lac_miot]. We fix ${{{\mathbf T}}^*}>0$ in order to fix $O_{{{\mathbf T}}^*}$ in Lemmata \[harm\] and \[est w\]. We take smooth and divergence-free functions ${\Phi}_n\in C^\infty_c\left( {{\mathbb R}}^+\times {{\mathbb R}}^2\right)$, converging to $\tilde{v}$ in $L^2_{{\operatorname{{loc}}}}\left({{\mathbb R}}^+,W^{1,4}({{\mathbb R}^2})\right)$, as test functions in , and let $n$ go to $+\infty$. First, we have for all $T\in [0,{{{\mathbf T}}^*}]$ $$\int_0^T \langle {\partial}_t \tilde v,{\Phi}_n \rangle_{W^{-1,4/3}_{\sigma},W^{1,4}_{\sigma}}\, ds\to \int_0^T \langle {\partial}_t \tilde v,\tilde{v} \rangle_{W^{-1,4/3}_{\sigma},W^{1,4}_{\sigma}}\,ds,$$ and we deduce the limit in the other terms from the several bounds for $v_i$ stated in the proof of Proposition \[prop : cont-velocity\]. This yields $$\label{diff} \frac{1}{2}\| \tilde v (T,\cdot)\|_{L^2}^2 =I+J+K,$$ where $$\begin{split} I&=- \int_0^T \int_{{{\mathbb R}}^2} \tilde v \cdot(\tilde v\cdot {\nabla}v_1+v_2\cdot {\nabla}\tilde v)\,dx\,dt,\\ J&= \int_0^T \int_{{{\mathbb R}}^2} (\tilde v \otimes w_1+ v_2\otimes \tilde w+w_1\otimes \tilde v+\tilde w\otimes v_2):{\nabla}\tilde v\,dx\,dt, \\ K&= \int_0^T \int_{{\partial}{\Omega}} v_1(s)^\perp \tilde g_{\tilde v,0}(s) \cdot \tilde v(s)\,ds. \end{split}$$ The goal is to estimate all the terms in the right-hand side in order to obtain a Gronwall-type inequality.
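Before estimating $I$, let us record the exponent bookkeeping that produces the power $2-2/p$ (a side check, not part of the original argument): with $\frac1p+\frac1q=\frac12$ and the interpolation $\|\tilde v\|_{L^q}\leq \|\tilde v\|_{L^2}^{a}\|\tilde v\|_{L^\infty}^{1-a}$, one has $a=\frac{2}{q}=1-\frac2p$, so

```latex
\[
\|\tilde v\|_{L^2}\,\|\tilde v\|_{L^q}\,\|{\nabla}v_1\|_{L^p}
  \;\leq\; Cp\,\|{\omega}_1\|_{L^p}\,
           \|\tilde v\|_{L^\infty}^{2/p}\,
           \|\tilde v\|_{L^2}^{\,2-2/p},
\]
```

and the uniform bounds on $\|\tilde v\|_{L^\infty}$ and on $\|{\omega}_1\|_{L^p}\leq \|{\omega}_1\|_{L^1}^{1/p}\|{\omega}_1\|_{L^\infty}^{1-1/p}$ are absorbed into the constant.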
For the first term $I$ in , we begin by noticing that $$\int_{{{\mathbb R}^2}}(v_2\cdot {\nabla}\tilde v)\cdot \tilde v\,dx=\frac{1}{2}\int_{{{\mathbb R}^2}} v_{2}\cdot{\nabla}|\tilde v|^2\,dx=-\frac{1}{2}\int_{{{\mathbb R}^2}} |\tilde v|^2 {{\rm div}\,}v_2\,dx=0,$$ where we have used that $v_2={{\mathcal O}}(1/|x|)$ and $\tilde v={{\mathcal O}}(1/|x|^2)$ at infinity. Moreover, Hölder's inequality gives $$\left|\int_{{{\mathbb R}^2}}(\tilde v\cdot {\nabla}v_1)\cdot \tilde v\,dx\right|\leq\|\tilde v\|_{L^2}\|\tilde v\|_{L^q}\|{\nabla}v_1\|_{L^p},$$ with $\frac{1}{p}+\frac{1}{q}=\frac{1}{2}$. On one hand, the Calderón-Zygmund inequality states that $\|{\nabla}v_1\|_{L^p}\leq Cp\|{\omega}_1\|_{L^p}$ for $p\geq 2$. On the other hand, we write by interpolation $\|\tilde v\|_{L^q}\leq \|\tilde v\|^a_{L^2}\|\tilde v\|^{1-a}_{L^\infty}$ with $\frac{1}{q}=\frac{a}{2}+\frac{1-a}{\infty}$. We have that $a=1-\frac{2}{p}$, so we are led to $$\label{eq : I} |I| \leq Cp\int_0^T \|\tilde v\|_{L^2}^{2-2/p}\,dt.$$ We now estimate $J$. We have $$\begin{split} \int_{{{\mathbb R}^2}} (\tilde v\otimes w_1): {\nabla}\tilde v\,dx& = \int_{{{\mathbb R}^2}}\sum_{i,j}\tilde v_i w_{1,j}{\partial}_j\tilde v_i\,dx = \frac{1}{2}\sum_i\int_{{{\mathbb R}^2}}\sum_j w_{1,j}{\partial}_j\tilde v_i^2\,dx\\ &=-\frac{1}{2}\sum_i\int_{{{\mathbb R}^2}} \tilde v_i^2 {{\rm div}\,}w_1\,dx=0, \end{split}$$ since $w_1$ is divergence-free, and $$\label{special} \Bigl| \int_0^T \int_{{{\mathbb R}}^2} (w_1\otimes \tilde v):{\nabla}\tilde v\,dx\,dt \Bigl| \leq \Bigl| \int_0^T \int_{O_{{{\mathbf T}}^*}} (w_1\otimes \tilde v ):{\nabla}\tilde v\,dx\,dt \Bigl| + \Bigl| \int_0^T \int_{O_{{{\mathbf T}}^*}^c} (w_1\otimes \tilde v ):{\nabla}\tilde v\,dx\,dt \Bigl|.$$ We perform an integration by parts for the second term in the right-hand side of .
Using that ${{\rm div}\,}{\tilde{v}}=0$, we obtain $$\begin{aligned} \Bigl| \int_0^T \int_{{{\mathbb R}}^2} (w_1\otimes \tilde v) : {\nabla}\tilde v \, dx \, dt \Bigl| &\leq & \Bigl| \int_0^T \int_{O_{{{\mathbf T}}^*}} (w_1\otimes \tilde v ) : {\nabla}\tilde v \, dx \, dt \Bigl|\\ && + \Bigl| - \int_0^T \Bigl(\int_{O_{{{\mathbf T}}^*}^c} (\tilde v \cdot {\nabla}w_1 )\cdot \tilde v \, dx + \int_{{\partial}O_{{{\mathbf T}}^*}} (w_1\cdot {\tilde{v}})({\tilde{v}}\cdot {\nu}) \, ds \Bigl)\, dt \Bigl| \\ &\leq& \int_0^T \|w_1\|_{L^1(O_{{{\mathbf T}}^*})} \|\tilde v\|_{L^\infty(O_{{{\mathbf T}}^*})} \|{\nabla}\tilde v\|_{L^\infty(O_{{{\mathbf T}}^*})} \, dt \\ && + \int_0^T \|{\nabla}w_1\|_{L^\infty(O_{{{\mathbf T}}^*}^c)} \|\tilde v\|^2_{L^2} \, dt\\ && + \int_0^T \| w_1\|_{L^\infty({\partial}O_{{{\mathbf T}}^*})} \|\tilde v\|^2_{L^\infty({\partial}O_{{{\mathbf T}}^*})} | {\partial}O_{{{\mathbf T}}^*}| \, dt.\end{aligned}$$ As we remarked when we introduced $w$, $\|w_1\|_{L^1(O_{{{\mathbf T}}^*})}\leq C$ with $C$ depending only on ${\Omega}$, ${{{\mathbf T}}^*}$ and $u_0$. Moreover, using the harmonicity of $w_1$, we know that $\|{\nabla}w_1\|_{L^\infty(O_{{{\mathbf T}}^*}^c)}$ is bounded by a constant times $\|w_1\|_{L^\infty(V_{{{\mathbf T}}^*}^c)}$, with ${\partial}{\Omega}\Subset V_{{{\mathbf T}}^*} \Subset O_{{{\mathbf T}}^*}$. Using the behavior of $D{{\mathcal T}}$ at infinity (Proposition \[T-inf\]), Proposition \[biot est\], conservation laws , , , then allows us to state that $\| u_1 \|_{L^\infty((0,{{{\mathbf T}}^*})\times V_{{{\mathbf T}}^*}^c)}\leq C_0$ with $C_0$ depending only on ${\Omega}$, ${{{\mathbf T}}^*}$ and $u_0$. As $v_1$ is uniformly bounded, we obtain that $\|{\nabla}w_1\|_{L^\infty(O_{{{\mathbf T}}^*}^c)}$ and $\| w_1\|_{L^\infty({\partial}O_{{{\mathbf T}}^*})}$ are bounded uniformly in $(0,{{{\mathbf T}}^*})$.
Then, according to Lemma \[harm\], this gives $$\Bigl| \int_0^T \int_{{{\mathbb R}^2}} (w_1 \otimes \tilde v):{\nabla}\tilde v\,dx\,dt \Bigl| \ \ \leq\ \ C \int_0^T \|\tilde v\|^2_{L^2} \, dt.$$ In the same way, we obtain by integration by parts $$\begin{aligned} \Bigl| \int_0^T \int_{{{\mathbb R}}^2} (v_2\otimes \tilde w):{\nabla}\tilde v \, dx \, dt \Bigl| &\leq & \Bigl| \int_0^T \int_{O_{{{\mathbf T}}^*}} (v_2\otimes \tilde w):{\nabla}\tilde v\,dx\,dt \Bigl|\\ && + \Bigl|- \int_0^T \Bigl(\int_{O_{{{\mathbf T}}^*}^c} (\tilde w \cdot {\nabla}v_2 )\cdot \tilde v\,dx + \int_{{\partial}O_{{{\mathbf T}}^*}} (v_2\cdot{\tilde{v}})(\tilde w \cdot{\nu}) ds \Bigl)\,dt\Bigl|.\end{aligned}$$ Therefore, $$\begin{aligned} \Bigl| \int_0^T \int_{{{\mathbb R}}^2} (v_2\otimes \tilde w):{\nabla}\tilde v\,dx\,dt \Bigl| &\leq & \int_0^T \|\tilde w\|_{L^2(O_{{{\mathbf T}}^*})} \|v_2\|_{L^2(O_{{{\mathbf T}}^*})} \|{\nabla}\tilde v\|_{L^\infty(O_{{{\mathbf T}}^*})} \, dt \\ &&+ \int_0^T \| \tilde w\|_{L^\infty(O_{{{\mathbf T}}^*}^c)} \|\tilde v\|_{L^2} \|{\nabla}v_2\|_{L^2} \,dt \\ &&+ \int_0^T\| \tilde w\|_{L^\infty({\partial}O_{{{\mathbf T}}^*})} \|\tilde v\|_{L^\infty({\partial}O_{{{\mathbf T}}^*})} \| v_2\|_{L^\infty}|{\partial}O_{{{\mathbf T}}^*}|\,dt.\end{aligned}$$ Using again the Calderón-Zygmund inequality for $v_2$ and Lemmata \[harm\] and \[est w\], we get $$\Bigl| \int_0^T \int_{{{\mathbb R}}^2} (v_2\otimes \tilde w):{\nabla}\tilde v\,dx\,dt \Bigl| \leq C \int_0^T \|\tilde v\|_{L^2}^2 \, dt.$$ A very similar computation yields $$\begin{aligned} \Bigl| \int_0^T \int_{{{\mathbb R}}^2} (\tilde w\otimes v_2): {\nabla}\tilde v\,dx\,dt \Bigl| &\leq& C \int_0^T \left(\|\tilde w\|_{L^2(O_{{{\mathbf T}}^*})}+\| {\nabla}\tilde w\|_{L^2(O_{{{\mathbf T}}^*}^c)} +\| \tilde w\|_{L^\infty({\partial}O_{{{\mathbf T}}^*})}\right) \|\tilde v\|_{L^2}\, dt\\ &\leq& C \int_0^T \|\tilde v\|_{L^2}^2 \, dt.\end{aligned}$$ Therefore, we arrive at $$\label{eq : J} |J|\leq 3C \int_0^T \|\tilde v\|_{L^2}^2\,dt.$$ Finally,
using we write the third term $K$ in as follows: $$\begin{aligned} K &=&\pm \int_0^T \int_{{\partial}{\Omega}} (\tilde u \cdot \hat {\tau}) (v_1^\perp \cdot \tilde v)\,ds\\ &=& \pm \int_0^T \int_{{\Omega}} {{\rm curl}\,}\tilde u(v_1^\perp \cdot \tilde v)\, dx \pm \int_0^T \int_{{\Omega}} \tilde u \cdot {\nabla}^\perp (v_1^\perp \cdot \tilde v)\, dx,\end{aligned}$$ where the sign $\pm$ depends on whether we treat an exterior or an interior domain. Using that ${{\rm curl}\,}\tilde u= {{\rm curl}\,}\tilde v$ in ${\Omega}$, ${{\rm div}\,}\tilde v = 0$ and the behaviors at infinity, we obtain by several integrations by parts: $$\begin{split} \int_{ {\Omega}} {{\rm curl}\,}\tilde u(v_1^\perp \cdot \tilde v)\, dx =& \int_{{\Omega}} {{\rm curl}\,}\tilde v (v_1^\perp \cdot \tilde v)\, dx = \int_{{{\mathbb R}}^2} {{\rm curl}\,}\tilde v (v_1^\perp \cdot \tilde v)\, dx\\ =& \int_{{{\mathbb R}}^2} \Bigl(-v_{1,2} \tilde v_1{\partial}_1 \tilde v_2+v_{1,2} \frac{{\partial}_2 | \tilde v_1|^2}{2}+v_{1,1} \frac{{\partial}_1 | \tilde v_2|^2}{2} - v_{1,1} \tilde v_2 {\partial}_2 \tilde v_1\Bigr)\, dx\\ = \int_{{{\mathbb R}}^2} \Bigl({\partial}_1 v_{1,2} \tilde v_1 \tilde v_2&+ v_{1,2} {\partial}_1 \tilde v_1 \tilde v_2 - {\partial}_2 v_{1,2} \frac{| \tilde v_1|^2}{2}- {\partial}_1 v_{1,1} \frac{ | \tilde v_2|^2}{2} + {\partial}_2 v_{1,1} \tilde v_2 \tilde v_1+ v_{1,1} {\partial}_2\tilde v_2 \tilde v_1\Bigr)\, dx\\ = \int_{{{\mathbb R}}^2} \Bigl({\partial}_1 v_{1,2} \tilde v_1 \tilde v_2&- v_{1,2} \frac{{\partial}_2 |\tilde v_2|^2}{2} - {\partial}_2 v_{1,2} \frac{| \tilde v_1|^2}{2}- {\partial}_1 v_{1,1} \frac{ | \tilde v_2|^2}{2} + {\partial}_2 v_{1,1} \tilde v_2 \tilde v_1 - v_{1,1} \frac{ {\partial}_1 |\tilde v_1|^2}{2} \Bigr)\, dx\\ = \int_{{{\mathbb R}}^2} \Bigl({\partial}_1 v_{1,2} \tilde v_1 \tilde v_2&+ {\partial}_2 v_{1,2} \frac{ |\tilde v_2|^2}{2} - {\partial}_2 v_{1,2} \frac{| \tilde v_1|^2}{2}- {\partial}_1 v_{1,1} \frac{ | \tilde v_2|^2}{2} + {\partial}_2 v_{1,1} \tilde v_2 \tilde v_1 + {\partial}_1 v_{1,1} \frac{ |\tilde v_1|^2}{2} \Bigr)\, dx. \end{split}$$ Hence, $$\Bigl| \int_0^T \int_{{\Omega}} {{\rm curl}\,}\tilde u(v_1^\perp \cdot \tilde v)\, dx \, dt\Bigl| \leq 4 \int_0^T \int_{{{\mathbb R}}^2} |{\nabla}v_1| |\tilde v|^2\, dx \, dt$$ which gives by the Calderón-Zygmund inequality (as for $I$): $$\Bigl| \int_0^T \int_{{\Omega}} {{\rm curl}\,}\tilde u(v_1^\perp \cdot \tilde v)\, dx\, dt \Bigl| \leq Cp\int_0^T \|\tilde v\|_{L^2}^{2-2/p}\,dt.$$ With similar computations, and using Lemmata \[harm\] and \[est w\], we can prove that the second term of $K$ can be treated thanks to: $$\begin{aligned} \int_0^T \int_{{\Omega}} |\tilde u| |{\nabla}v_1| | \tilde v| \, dx\, dt &\leq& Cp\int_0^T \|\tilde v\|_{L^2}^{2-2/p}\,dt \\ \int_0^T \int_{O_{{{\mathbf T}}^*}} |\tilde u| | v_1| |{\nabla}\tilde v| \, dx \, dt&\leq& C \int_0^T \|\tilde v\|_{L^2}^2 \, dt \\ \int_0^T \int_{O_{{{\mathbf T}}^*}^c} |\tilde v| | {\nabla}v_1| |\tilde v| \, dx \, dt&\leq& Cp\int_0^T \|\tilde v\|_{L^2}^{2-2/p}\,dt \\ \int_0^T \int_{O_{{{\mathbf T}}^*}^c} |\tilde w| | {\nabla}v_1| |\tilde v| \, dx \, dt&\leq& C \int_0^T \|\tilde v\|_{L^2}^2 \, dt \\ \int_0^T \int_{O_{{{\mathbf T}}^*}^c} |{\nabla}\tilde w| | v_1| |\tilde v| \, dx \, dt&\leq& C \int_0^T \|\tilde v\|_{L^2}^2 \, dt \\ \int_0^T \int_{{\partial}O_{{{\mathbf T}}^*}} (|\tilde v| + |\tilde w| ) | v_1| |\tilde v| \, dx \, dt&\leq& C \int_0^T \|\tilde v\|_{L^2}^2 \, dt,\end{aligned}$$ which implies that $$\label{eq : K} |K|\leq C \int_0^T \|\tilde v\|_{L^2}^2\,dt+Cp\int_0^T \|\tilde v\|_{L^2}^{2-2/p}\,dt.$$ Therefore, the estimates , and with establish that $$\| \tilde v (T,\cdot)\|_{L^2}^2 \leq C \int_0^T \|\tilde v\|_{L^2}^2\,dt+Cp\int_0^T \|\tilde v\|_{L^2}^{2-2/p}\,dt.$$ As we choose $p>2$ and as $\|\tilde v\|_{L^2} \leq C_0$ for all $t\in [0,{{{\mathbf T}}^*}]$ (see Proposition \[prop : cont-velocity\]), we have $ \|\tilde v\|_{L^2}^{2/p} \leq C_0^{2/p}$ which implies that for $p$ large enough, the previous inequality
gives $$\| \tilde v (T,\cdot)\|_{L^2}^2 \leq 2Cp\int_0^T \|\tilde v\|_{L^2}^{2-2/p}\,dt.$$ Using a Gronwall-like argument, this implies $$\| \tilde v (T,\cdot)\|_{L^2}^2 \leq (2CT)^p,\qquad \forall p\geq 2.$$ Letting $p$ tend to infinity, we conclude that $\| \tilde v (T,\cdot)\|_{L^2} = 0$ for all $T<\min ({{{\mathbf T}}^*}, 1/(2C))$. Finally, we consider the maximal interval of $[0,{{{\mathbf T}}^*}]$ on which $\|\tilde v (T,\cdot)\|_{L^2} \equiv 0$, which is closed by continuity of $\| \tilde v (T,\cdot)\|_{L^2}$. If it is not equal to the whole of $[0,{{{\mathbf T}}^*}]$, we may repeat the proof above, which leads to a contradiction by maximality. Therefore uniqueness holds on $[0,{{{\mathbf T}}^*}]$, and this concludes the proof of Theorems \[main 1\] and \[main 2\]. Indeed, Lemma \[est w\] implies that $\| u_1-u_2 \|_{L^2} \leq \| \tilde w \|_{L^2} + \| \tilde v \|_{L^2} \leq 2 \| \tilde v \|_{L^2}$. Technical results {#sect : technical} ================= We will use several times the following lemma from [@ift]: \[ift\] Let $S\subset{{\mathbb R}}^2$, ${\alpha}\in (0,2)$ and $g:S\to{{\mathbb R}}^+$ be a function belonging to $L^1(S)\cap L^r(S)$, for $r > \frac{2}{2-{\alpha}}$. Then $$\int_S \frac{g(y)}{|x-y|^{\alpha}}dy\leq C\|g\|_{L^1(S)}^{\frac{2-{\alpha}-2/r}{2-2/r}}\|g\|_{L^r(S)}^{\frac{{\alpha}}{2-2/r}}.$$ Proof of Proposition \[biot est\] --------------------------------- We give the proof in the unbounded case (the hardest one).
We decompose $R[{\omega}]$ in two parts: $$R_1(x):= \int_{{\Omega}} \dfrac{({{\mathcal T}}(x)-{{\mathcal T}}(y))^\perp}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2} {\omega}(y)\, dy \text{ and } R_2(x):= \int_{{\Omega}} \dfrac{({{\mathcal T}}(x)- {{\mathcal T}}(y)^*)^\perp}{|{{\mathcal T}}(x)- {{\mathcal T}}(y)^*|^2} {\omega}(y)\, dy.$$ [*a) Estimate and continuity of $R_1$.*]{} Let $z:= {{\mathcal T}}(x)$ and $f({\eta}):= {\omega}({{\mathcal T}}^{-1}({\eta})) |\det (D{{\mathcal T}}^{-1}({\eta})) | {\chi}_{\{|{\eta}|\geq 1\}}$, with ${\chi}_E$ the characteristic function of the set $E$. Making the change of variables ${\eta}= {{\mathcal T}}(y)$, we find $$R_1 ({{\mathcal T}}^{-1}(z) ) = \int_{{{\mathbb R}}^2} \dfrac{(z-{\eta})^\perp}{|z-{\eta}|^2}f({\eta}) \, d{\eta}.$$ Changing variables back, we get $$\|f \|_{L^1({{\mathbb R}}^2)}=\|{\omega}\|_{L^1}.$$ We choose $p_0>2$ such that $\det (D{{\mathcal T}}^{-1})$ belongs to $L^{p_0}_{{\operatorname{{loc}}}}(\overline{{\Omega}})$ (see Remark \[DT loc\]). If all the angles are greater than $\pi$, we can choose $p_0=\infty$ (thanks to Theorem \[grisvard\] and Proposition \[T-inf\]) and we would have $\|f \|_{L^\infty({{\mathbb R}}^2)} \leq C\|{\omega}\|_{L^\infty}.$ However, if there is one angle less than $\pi$, we have to decompose the integral into two parts: $$R_1 ({{\mathcal T}}^{-1}(z) ) = \int_{|{\eta}|\geq 2} \dfrac{(z-{\eta})^\perp}{|z-{\eta}|^2}f({\eta}) \, d{\eta}+ \int_{|{\eta}| \leq 2} \dfrac{(z-{\eta})^\perp}{|z-{\eta}|^2}f({\eta}) \, d{\eta}$$ with $$\|f \|_{L^\infty({{\mathbb R}}^2\setminus B(0,2))} \leq C_1 \|{\omega}\|_{L^\infty}$$ by Proposition \[T-inf\], and $$\|f \|_{L^{p_0}(B(0,2))} \leq C_2 \|{\omega}\|_{L^\infty},$$ by Remark \[DT loc\].
Then we use the classical estimate for the Biot-Savart kernel in ${{\mathbb R}}^2$ (see Lemma \[ift\]): $$\Bigl| \int_{|{\eta}|\geq 2} \dfrac{(z-{\eta})^\perp}{|z-{\eta}|^2}f({\eta}) \, d{\eta}\Bigl| \leq C_0 \|f \|_{L^1({{\mathbb R}}^2\setminus B(0,2))}^{1/2} \|f \|_{L^\infty({{\mathbb R}}^2\setminus B(0,2))}^{1/2} \leq C_4 \| {\omega}\|_{L^1}^{1/2} \| {\omega}\|_{L^\infty}^{1/2}$$ and $$\Bigl| \int_{|{\eta}|\leq 2} \dfrac{(z-{\eta})^\perp}{|z-{\eta}|^2}f({\eta}) \, d{\eta}\Bigl| \leq C_0 \|f \|_{L^1(B(0,2))}^{\frac{p_0-2}{2(p_0-1)}} \|f \|_{L^{p_0}(B(0,2))}^{\frac{p_0}{2(p_0-1)}} \leq C_5 \| {\omega}\|_{L^1}^{\frac{p_0-2}{2(p_0-1)}} \| {\omega}\|_{L^\infty}^{\frac{p_0}{2(p_0-1)}}$$ which gives the uniform estimate $$\|R_1 \|_{L^\infty({\Omega})} \leq C (\| {\omega}\|_{L^1}^{1/2} \| {\omega}\|_{L^\infty}^{1/2} + \| {\omega}\|_{L^1}^{a} \| {\omega}\|_{L^\infty}^{1-a} )$$ with $a=\frac{p_0-2}{2(p_0-1)}$ lying in $(0,1/2]$. Concerning the continuity, we approximate $f \chi_{B(0,2)}$ by $f_n \in C^\infty_c(B(0,2))$ and $f \chi_{B(0,2)^c}$ by $g_n \in C^\infty_c(B(0,2)^c)$ such that $$\|f_n-f\|_{L^1\cap L^{p_0}(B(0,2))} \to 0,\ \|g_n-f\|_{L^1(B(0,2)^c)} \to 0, \ \|g_n\|_{L^\infty} \leq C(f) \text{ as } n\to \infty.$$ As $f_n$ and $g_n$ are smooth, we infer that the functions $$z \mapsto \int_{{{\mathbb R}}^2} \dfrac{{\xi}^\perp}{|{\xi}|^2}f_n(z-{\xi}) \, d{\xi}\text{ and } z \mapsto \int_{{{\mathbb R}}^2} \dfrac{{\xi}^\perp}{|{\xi}|^2}g_n(z-{\xi}) \, d{\xi}$$ are continuous.
Moreover, we deduce from the previous estimates that $$\begin{split} \Bigl\| R_1 &({{\mathcal T}}^{-1}(z)) - \int_{{{\mathbb R}}^2} \dfrac{(z-{\eta})^\perp}{|z-{\eta}|^2}g_n({\eta}) \, d{\eta}- \int_{{{\mathbb R}}^2} \dfrac{(z-{\eta})^\perp}{|z-{\eta}|^2}f_n({\eta}) \, d{\eta}\Bigl\|_{L^\infty(B(0,1)^c)}\\ &\leq C_0 \Bigl(\|f-g_n\|_{L^1(B(0,2)^c)}^{1/2} \|f-g_n \|_{L^\infty(B(0,2)^c)}^{1/2} + \|f-f_n \|_{L^1(B(0,2))}^{\frac{p_0-2}{2(p_0-1)}} \|f-f_n \|_{L^{p_0}(B(0,2))}^{\frac{p_0}{2(p_0-1)}} \Bigl). \end{split}$$ Thanks to the limit $n\to \infty$, we prove the continuity of $R_1 \circ {{\mathcal T}}^{-1}$. Using Theorem \[grisvard\], we conclude that $R_1$ is continuous up to the boundary. [*b) Estimate and continuity of $R_2$.*]{} We use, as before, the notations $f$, $z$ and the change of variables ${\eta}$ $$\begin{aligned} R_2 ({{\mathcal T}}^{-1}(z) ) &=& \int_{|{\eta}|\geq 1}\dfrac{(z-{\eta}^*)^\perp} {|z- {\eta}^*|^2}f({\eta})d{\eta}\\ &=& \int_{|{\eta}|\geq 2}\dfrac{(z-{\eta}^*)^\perp} {|z- {\eta}^*|^2}f({\eta})d{\eta}+ \int_{1\leq |{\eta}|\leq 2}\dfrac{(z-{\eta}^*)^\perp} {|z- {\eta}^*|^2}f({\eta})d{\eta}\\ &:=& R_{21}(z)+R_{22}(z).\end{aligned}$$ If $|{\eta}| \geq 2$, $|z- {\eta}^*|\geq 1/2$ because $|z|\geq 1$ (see the definition of ${{\mathcal T}}$). 
Therefore, we immediately obtain that $$\| R_{21} \|_{L^{\infty}(B(0,1)^c)} \leq 2 \|f\|_{L^{1}(B(0,2)^c)} \leq 2 \| {\omega}\|_{L^1}.$$ The continuity is easier than above: - we approximate $f \chi_{B(0,2)^c}$ by $g_n \in C^\infty_c(B(0,2)^c)$ such that $\|g_n-f\|_{L^1(B(0,2)^c)} \to 0$ as $n\to \infty$; - the function $$z \mapsto \int_{|{\eta}|\geq 2}\dfrac{(z-{\eta}^*)^\perp} {|z- {\eta}^*|^2}g_n({\eta})d{\eta}$$ is continuous up to the boundary ${\partial}B(0,1)$ because $|z- {\eta}^*|\geq 1/2$; - the previous estimates give $$\Bigl\| R_{21}(z)-\int_{|{\eta}|\geq 2}\dfrac{(z-{\eta}^*)^\perp} {|z- {\eta}^*|^2}g_n({\eta})d{\eta}\Bigl\|_{L^\infty(B(0,1)^c)} \leq 2 \|f-g_n \|_{L^{1}(B(0,2)^c)};$$ which gives the continuity of $R_{21}$. Concerning $R_{22}$, we again change variables writing ${\theta}={\eta}^*$, to obtain: $$R_{22} (z) = \int_{1/2 \leq|{\theta}|\leq 1} \frac{(z-{\theta})^\perp}{|z-{\theta}|^2} f({\theta}^*) \frac{d{\theta}}{|{\theta}|^4}.$$ Let $g({\theta}):= \frac{f({\theta}^*)}{|{\theta}|^4}$. As above, we deduce by changing variables back that $$\|g \|_{L^1(1/2\leq|{\theta}|\leq 1)} \leq \|{\omega}\|_{L^1}.$$ It is also easy to see that $$\|g\|_{L^{p_0}(1/2\leq|{\theta}|\leq 1)}\leq 2^{\frac{4(p_0-1)}{p_0}} \| f \|_{L^{p_0}(B(0,2))}\leq C_6 \| {\omega}\|_{L^\infty}.$$ Then, by the classical estimates of the Biot-Savart law in ${{\mathbb R}}^2$, we have $$\| R_{22} \|_{L^\infty(B(0,1)^c)} \leq C \| {\omega}\|_{L^1}^{a} \| {\omega}\|_{L^\infty}^{1-a}.$$ Reasoning as for $R_1$, where we approximate $g$, we get that $R_{22}$ is continuous. The continuity of ${{\mathcal T}}$ allows us to conclude that $R_2$ is continuous up to the boundary, which ends the proof in the case where ${\Omega}$ is unbounded.
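The factor $d{\theta}/|{\theta}|^4$ appearing in the formula for $R_{22}$ is the Jacobian of the inversion ${\theta}\mapsto {\theta}^*={\theta}/|{\theta}|^2$. As an illustrative sanity check (not part of the proof), one can verify by finite differences that $|\det D({\theta}\mapsto{\theta}^*)|=1/|{\theta}|^4$, that the inversion reverses orientation, and that it is an involution.

```python
def star(x, y):
    # inversion with respect to the unit circle: eta -> eta / |eta|^2
    r2 = x * x + y * y
    return (x / r2, y / r2)

def jac_det(f, x, y, h=1e-6):
    # central finite-difference approximation of det Df at (x, y)
    fxp, fxm = f(x + h, y), f(x - h, y)
    fyp, fym = f(x, y + h), f(x, y - h)
    a = (fxp[0] - fxm[0]) / (2 * h); b = (fyp[0] - fym[0]) / (2 * h)
    c = (fxp[1] - fxm[1]) / (2 * h); d = (fyp[1] - fym[1]) / (2 * h)
    return a * d - b * c

x, y = 0.7, -0.4            # an arbitrary point away from the origin
det = jac_det(star, x, y)
xx, yy = star(*star(x, y))  # applying the inversion twice should return (x, y)
```

The sign of the determinant is negative (the reflection along the radial direction), which is why only its absolute value enters the change of variables.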
[**Remark about the bounded case.**]{} Concerning $R_1$, we do not need to decompose the integral in two parts: $$\|f \|_{L^{p_0}(B(0,1))} \leq C_2 \|{\omega}\|_{L^\infty},$$ where $f({\eta}):= {\omega}( {{\mathcal T}}^{-1}({\eta})) |\det (D{{\mathcal T}}^{-1}({\eta})) | {\chi}_{\{|{\eta}|\leq 1\}}$. As for $R_2$, we directly have $$R_2 ({{\mathcal T}}^{-1}(z) ) = \int_{|{\theta}|\geq 1} \frac{(z-{\theta})^\perp}{|z-{\theta}|^2} f({\theta}^*) \frac{d{\theta}}{|{\theta}|^4}$$ and we conclude following the proof concerning $R_{22}$. Proof of Lemma \[ortho\] ------------------------ Using the explicit formula of ${\Phi}$ and , we write $$\begin{aligned} u(x)\cdot {\nabla}{\Phi}^{\varepsilon}(x) & = & u^{\perp}(x) \cdot {\nabla}^\perp{\Phi}^{\varepsilon}(x) \\ & = & -\frac{1}{2\pi{\varepsilon}} {\Phi}'\Bigl(\frac{|{{\mathcal T}}(x)|-1}{{\varepsilon}} \Bigl) \int_{{\Omega}}\Bigl(\dfrac{{{\mathcal T}}(x)-{{\mathcal T}}(y)}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}-\dfrac{{{\mathcal T}}(x)- {{\mathcal T}}(y)^*}{|{{\mathcal T}}(x)- {{\mathcal T}}(y)^*|^2}\Bigl) {\omega}(t,y)\, dy \\ && \times D{{\mathcal T}}(x)D{{\mathcal T}}^T(x)\frac{{{\mathcal T}}(x)^\perp}{|{{\mathcal T}}(x)|}.\end{aligned}$$ As ${{\mathcal T}}$ is holomorphic, $D{{\mathcal T}}$ is of the form $\begin{pmatrix} a & b \\ -b & a \end{pmatrix}$ and we can check that $D{{\mathcal T}}(x)D{{\mathcal T}}^T(x)=(a^2+b^2)Id=|\det(D{{\mathcal T}})(x)|Id$, so $$u(x) \cdot {\nabla}{\Phi}^{\varepsilon}(x) = \frac{{\Phi}'(\frac{|{{\mathcal T}}(x)|-1}{{\varepsilon}})|\det(D{{\mathcal T}})(x)|}{2\pi{\varepsilon}|{{\mathcal T}}(x)|} \int_{{\Omega}}\Bigl(\dfrac{{{\mathcal T}}(y)\cdot {{\mathcal T}}(x)^\perp}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|^2}-\dfrac{ {{\mathcal T}}(y)^* \cdot {{\mathcal T}}(x)^\perp}{|{{\mathcal T}}(x)- {{\mathcal T}}(y)^*|^2}\Bigl) {\omega}(t,y)\ dy.$$ We compute the $L^1$ norm, next we change variables twice ${\eta}={{\mathcal T}}(y)$ and $z={{\mathcal T}}(x)$, to have $$\|u \cdot
{\nabla}{\Phi}^{\varepsilon}\|_{L^1} =\frac{1}{2\pi{\varepsilon}} \int_{|z|\geq 1} \Bigl|{\Phi}'\Bigl(\frac{|z|-1}{{\varepsilon}}\Bigl)\Bigl| \Bigl| \displaystyle\int_{|{\eta}|\geq 1} \Bigl(\dfrac{{\eta}\cdot z^\perp/|z|}{|z-{\eta}|^2}-\dfrac{ {\eta}^* \cdot z^\perp/|z|}{|z-{\eta}^*|^2}\Bigl) f(t,{\eta}) \, d{\eta}\Bigl|dz,$$ where $f(t,{\eta})= {\omega}(t,{{\mathcal T}}^{-1}({\eta})) |\det(D{{\mathcal T}}^{-1})({\eta})|$. Thanks to the definition of ${\Phi}$, we know that $\Bigl\|\frac{1}{{\varepsilon}}{\Phi}'\Bigl(\frac{|z|-1}{{\varepsilon}}\Bigl)\Bigl\|_{L^1}\leq C$. So it is sufficient to prove that $$\label{tronc} \Bigl\| \int_{|{\eta}|\geq 1} \Bigl(\dfrac{{\eta}\cdot z^\perp/|z|}{|z-{\eta}|^2}-\dfrac{ {\eta}^*\cdot z^\perp/|z|}{|z- {\eta}^*|^2}\Bigl)f(t,{\eta})\, d{\eta}\Bigl\|_{L^\infty(1+{\varepsilon}\leq |z|\leq 1+2{\varepsilon})}\to 0$$ as ${\varepsilon}\to 0$, uniformly in time. Let $$A :=\dfrac{{\eta}\cdot z^\perp/|z|}{|z-{\eta}|^2}-\dfrac{ {\eta}^* \cdot z^\perp/|z|}{|z- {\eta}^*|^2}.$$ We compute $$\begin{aligned} A&=& \Bigl(\dfrac{(|z|^2-2 z \cdot {\eta}/|{\eta}|^2+1/|{\eta}|^2)-1/|{\eta}|^2(|z|^2-2z \cdot {\eta}+|{\eta}|^2)}{|z-{\eta}|^2|z- {\eta}^*|^2}\Bigl){\eta}\cdot \frac{z^\perp}{|z|} \\ &=& \frac{(|z|^2-1)(1-1/|{\eta}|^2)}{|z-{\eta}|^2|z- {\eta}^*|^2}{\eta}\cdot \frac{z^\perp}{|z|}.\end{aligned}$$ We now use that $|z|\geq 1$, to write $$|z- {\eta}^*|\geq 1-\frac{1}{|{\eta}|}.$$ Moreover, $| {\eta}^*|\leq 1$ allows to have $$|z- {\eta}^*|\geq |z|-1.$$ We can now estimate $A$ by: $$|A| \leq \frac{(|z|+1)(1+1/|{\eta}|)(|z|-1)^b}{ |z-{\eta}|^2 |z- {\eta}^*|^b} \Bigl| {\eta}\cdot \frac{z^\perp}{|z|}\Bigl|$$ with $0\leq b \leq 1$, to be chosen later. 
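The factorization of $A$ above is purely algebraic: it uses only ${\eta}^*={\eta}/|{\eta}|^2$, so that ${\eta}^*\cdot z^\perp = {\eta}\cdot z^\perp/|{\eta}|^2$ and $|z-{\eta}^*|^2=|z|^2-2z\cdot{\eta}/|{\eta}|^2+1/|{\eta}|^2$. A small numerical check at an arbitrary test point (illustrative only, not part of the proof):

```python
import math

def perp(v):                       # v^perp = (-v2, v1)
    return (-v[1], v[0])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def star(v):                       # v^* = v / |v|^2
    r2 = dot(v, v)
    return (v[0] / r2, v[1] / r2)

z, eta = (1.4, 0.3), (2.0, -1.1)   # arbitrary point with |z| >= 1 and |eta| >= 1
zn, en = math.hypot(*z), math.hypot(*eta)
es = star(eta)
d1 = (z[0] - eta[0], z[1] - eta[1])     # z - eta
d2 = (z[0] - es[0], z[1] - es[1])       # z - eta^*
zp = tuple(c / zn for c in perp(z))     # z^perp / |z|

# A as defined, and A after the factorization derived in the proof
A_direct = dot(eta, zp) / dot(d1, d1) - dot(es, zp) / dot(d2, d2)
A_factored = ((zn ** 2 - 1.0) * (1.0 - 1.0 / en ** 2)
              / (dot(d1, d1) * dot(d2, d2)) * dot(eta, zp))
```

The factor $(|z|^2-1)$ is what produces the decisive smallness $(|z|-1)^b\leq(2{\varepsilon})^b$ on the annulus $1+{\varepsilon}\leq|z|\leq 1+2{\varepsilon}$.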
We remark also that ${\eta}\cdot \frac{z^\perp}{|z|}=({\eta}-z) \cdot \frac{z^\perp}{|z|}$ and the Cauchy-Schwarz inequality gives $$\Bigl|{\eta}\cdot \frac{z^\perp}{|z|}\Bigl|\leq |{\eta}-z|.$$ We now use the fact that $|z|-1 \leq 2{\varepsilon}$, to estimate (\[tronc\]): $$\Bigl| \int_{|{\eta}|\geq 1}A f(t,{\eta}) \, d{\eta}\Bigl|\leq (2+2{\varepsilon})\cdot 2\cdot(2{\varepsilon})^b \int_{|{\eta}|\geq 1} \frac{|f(t,{\eta})|}{|z-{\eta}||z- {\eta}^*|^b}d{\eta},$$ hence, the Hölder inequality gives $$\Bigl| \int_{|{\eta}|\geq 1}A f(t,{\eta}) \, d{\eta}\Bigl|\leq (2+2{\varepsilon})\cdot 2\cdot(2{\varepsilon})^b \Bigl\|\frac{|f(t,{\eta})|^{1/p}}{|z-{\eta}|}\Bigl\|_{L^p} \Bigl\|\frac{|f(t,{\eta})|^{1/q}}{|z- {\eta}^*|^b}\Bigl\|_{L^q}$$ with $1/p+1/q=1$, to be chosen later. In the same way as we estimated $R_2$ in the proof of Proposition \[biot est\], we obtain for $bq=1$: $$\Bigl\|\frac{|f(t,{\eta})|^{1/q}}{|z- {\eta}^*|^b}\Bigl\|_{L^q}=\Bigl(\int_{|{\eta}|\geq 1} \frac{|f(t,{\eta})|}{|z-{\eta}^*|}d{\eta}\Bigl)^{1/q}\leq C_q,$$ where we have used that ${\omega}$ belongs to $L^\infty({{\mathbb R}}^+;L^1\cap L^\infty({\Omega}))$. Now we use Lemma \[ift\] for $f\in L^1\cap L^{p_0}$, with $p_0>2$ and for $f\in L^1\cap L^{\infty}$ (see the proof of Proposition \[biot est\]). Then, we choose $p\in (1,2)$ such that $p_0> \frac{2}{2-p}$ and we follow the estimate of $R_1$ in the proof of Proposition \[biot est\] to obtain: $$\Bigl\|\frac{|f(t,{\eta})|^{1/p}}{|z-{\eta}|}\Bigl\|_{L^p}=\Bigl(\int_{|{\eta}|\geq 1} \frac{|f(t,{\eta})|}{|z-{\eta}|^p}d{\eta}\Bigl)^{1/p} \leq C_p.$$ We have used again that ${\omega}$ belongs to $L^\infty({{\mathbb R}}^+;L^1\cap L^\infty({\Omega}))$. Fixing $p\in (1,2)$ such that $p_0> \frac{2}{2-p}$ gives $q\in (2,\infty)$ and $b \in (0,1/2)$, and it follows that $$\|u \cdot {\nabla}{\Phi}^{\varepsilon}\|_{L^1} \leq C(2+2{\varepsilon})\cdot 2\cdot(2{\varepsilon})^{b}C_{p} C_{q}$$ which tends to zero when ${\varepsilon}$ tends to zero, uniformly in time.
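The exponent bookkeeping at the end of the proof can be packaged as follows. The concrete choice of $p$ in the sketch is ours, for illustration only; any $p\in(1,2)$ with $p_0>2/(2-p)$ works, and the constraints $q\in(2,\infty)$, $bq=1$, $b\in(0,1/2)$ then follow automatically.

```python
def choose_exponents(p0):
    """Pick p in (1, 2) with p0 > 2/(2-p), then its Hoelder conjugate q
    and b = 1/q, as in the proof above. The specific p is one
    convenient option among many."""
    assert p0 > 2.0
    p = 1.0 + 0.5 * (1.0 - 2.0 / p0)   # midpoint of the admissible interval (1, 2 - 2/p0)
    q = p / (p - 1.0)                  # Hoelder conjugate: 1/p + 1/q = 1
    b = 1.0 / q                        # so that b*q = 1
    return p, q, b

p, q, b = choose_exponents(3.0)        # p0 = 3 is a hypothetical integrability exponent
```

For $p_0=3$ this returns $p=7/6$, $q=7$, $b=1/7$, consistent with the constraints used above.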
Proof of Lemma \[W11\] ---------------------- Let $\mathbf{T}>0$ be fixed. We rewrite : $$\begin{aligned} u(x) &=& \frac{1}{2\pi} D{{\mathcal T}}^T (x) \Bigl( \int_{{\Omega}} \Bigl(\frac{{{\mathcal T}}(x)-{{\mathcal T}}(y)}{|{{\mathcal T}}(x) - {{\mathcal T}}(y)|^2}- \frac{{{\mathcal T}}(x)-{{\mathcal T}}(y)^*}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*|^2}\Bigl)^\perp {\omega}(y)\, dy + {\alpha}\frac{{{\mathcal T}}(x)^\perp}{|{{\mathcal T}}(x)|^2}\Bigl)\\ &:= & \frac{1}{2\pi} D{{\mathcal T}}^T(x) h({{\mathcal T}}(x))\end{aligned}$$ where ${\alpha}$ is bounded by $\|{\gamma}\|_{L^\infty([0,\mathbf{T}])} + \| {\omega}\|_{L^\infty(L^1)}$ in $[0,\mathbf{T}]$ (see ). We start by treating $h$. We make the change of variables ${\eta}={{\mathcal T}}(y)$ and obtain $$\begin{aligned} h(z)&=& \int_{B(0,1)^c} \Bigl(\frac{z-{\eta}}{|z-{\eta}|^2}- \frac{z-{\eta}^*}{|z-{\eta}^*|^2}\Bigl)^\perp {\omega}({{\mathcal T}}^{-1}({\eta})) |\det D{{\mathcal T}}^{-1}({\eta})| \, d{\eta}+ {\alpha}\frac{z^\perp}{|z|^2} \\ &=&\int_{B(0,2)^c}\frac{(z-{\eta})^\perp}{|z-{\eta}|^2} f(t,{\eta}) \, d{\eta}+ \int_{B(0,2)\setminus B(0,1)} \frac{(z-{\eta})^\perp}{|z-{\eta}|^2} f(t,{\eta}) \, d{\eta}- \int_{B(0,2)^c}\frac{(z-{\eta}^*)^\perp}{|z-{\eta}^*|^2} f(t,{\eta}) \, d{\eta}\\ &&- \int_{B(0,2)\setminus B(0,1)}\frac{(z-{\eta}^*)^\perp}{|z-{\eta}^*|^2} f(t,{\eta}) \, d{\eta}+ {\alpha}\frac{z^\perp}{|z|^2}\\ &:=& h_1(z)+ h_2(z)-h_3(z)-h_4(z)+{\alpha}h_5(z),\end{aligned}$$ where $f(t,{\eta})={\omega}(t,{{\mathcal T}}^{-1}({\eta})) |\det D{{\mathcal T}}^{-1}({\eta})|$ belongs to $L^\infty(L^1\cap L^{p_0}(B(0,2)\setminus B(0,1)))$ with some $p_0>2$ and to $L^\infty(L^1\cap L^{\infty}(B(0,2)^c))$ (see the proof of Proposition \[biot est\]). As $|z|= |{{\mathcal T}}(x)|\geq 1$, we are looking for estimates in $B(0,1)^c$.
Obviously we have that $$h_5 \text{ belongs to } L^{\infty}(B(0,1)^c) \text{ and } Dh_5 \text{ belongs to } L^{\infty}(B(0,1)^c).$$ Concerning $h_1$, we introduce $f_1:= f \chi_{B(0,2)^c}$ where $\chi_S$ denotes the characteristic function on $S$. Hence $$h_1(z)=\int_{{{\mathbb R}}^2}\frac{(z-{\eta})^\perp}{|z-{\eta}|^2}f_1({\eta}) \, d{\eta}\text{ with } f_1\in L^\infty({{\mathbb R}}^+;L^1({{\mathbb R}}^2)\cap L^{\infty}({{\mathbb R}}^2)).$$ We have used the work made in the proof of Proposition \[biot est\] about the computation of the $L^p$ norm of $f$ in terms of ${\omega}$. The standard estimates on the Biot-Savart kernel in ${{\mathbb R}}^2$ and the Calderon-Zygmund inequality give that $$h_1 \text{ belongs to } L^\infty({{\mathbb R}}^+\times B(0,1)^c) \text{ and } Dh_1 \text{ belongs to } L^\infty({{\mathbb R}}^+; L^{p}(B(0,1)^c)), \ \forall p\in (1,\infty).$$ For $h_2$, the argument is almost the same: we introduce $f_2:= f \chi_{B(0,2)\setminus B(0,1)}$, hence $$h_2(z)=\int_{{{\mathbb R}}^2}\frac{(z-{\eta})^\perp}{|z-{\eta}|^2}f_2({\eta}) \, d{\eta}\text{ with } f_2\in L^\infty({{\mathbb R}}^+;L^1({{\mathbb R}}^2)\cap L^{p_0}({{\mathbb R}}^2)).$$ The standard estimates on the Biot-Savart kernel in ${{\mathbb R}}^2$ and the Calderon-Zygmund inequality give that $$h_2 \text{ belongs to } L^\infty({{\mathbb R}}^+\times B(0,1)^c) \text{ and } Dh_2 \text{ belongs to } L^\infty({{\mathbb R}}^+;L^{p_0}(B(0,1)^c)).$$ For $h_3$, we can remark that for any ${\eta}\in B(0,2)^c$ we have $|z-{\eta}^*|\geq \frac12$.
Therefore, the function $(z,{\eta})\mapsto \frac{(z-{\eta}^*)^\perp}{|z-{\eta}^*|^2} $ is smooth in $B(0,1)^c\times B(0,2)^c$, which gives us, by a classical integration theorem, that $$h_3 \text{ belongs to } L^\infty({{\mathbb R}}^+\times B(0,1)^c) \text{ and } Dh_3 \text{ belongs to } L^\infty({{\mathbb R}}^+\times B(0,1)^c).$$ To treat the last term, we change variables $\theta = {\eta}^*$ $$h_4(z) = \int_{B(0,1)\setminus B(0,1/2)} \frac{(z-\theta)^\perp}{|z-\theta|^2} f(\theta^*) \frac{d\theta}{|\theta|^4}:= \int_{{{\mathbb R}}^2} \frac{(z-\theta)^\perp}{|z-\theta|^2} f_4(\theta)\, d\theta,$$ with $f_4(\theta):=\displaystyle \frac{f(t,\theta^*)}{|\theta|^4} \chi_{B(0,1)\setminus B(0,1/2)}(\theta)$ which belongs to $L^\infty({{\mathbb R}}^+;L^1({{\mathbb R}}^2)\cap L^{p_0}({{\mathbb R}}^2))$. Therefore, standard estimates on Biot-Savart kernel and Calderon-Zygmund inequality give that $$h_4 \text{ belongs to } L^\infty({{\mathbb R}}^+\times B(0,1)^c) \text{ and } Dh_4 \text{ belongs to } L^\infty({{\mathbb R}}^+;L^{p_0}(B(0,1)^c)).$$ Now, we come back to $u$. As $u (x) = \frac{1}{2\pi} D{{\mathcal T}}^T(x) h({{\mathcal T}}(x))$, with $D{{\mathcal T}}$ belonging to $L^1_{{\operatorname{{loc}}}}(\overline{{\Omega}})$ (see Remark \[DT loc\]) and $h\circ {{\mathcal T}}$ uniformly bounded, we have that $$u \text{ belongs to } L^{\infty}([0,\mathbf{T}];L^1_{{\operatorname{{loc}}}} (\overline{{\Omega}})).$$ Adding the bounded behavior of $D{{\mathcal T}}$ at infinity, we have that $$\bar u \text{ belongs to }L^\infty \left([0,\mathbf{T}] ; L^1({{\mathbb R}}^2)+ L^\infty({{\mathbb R}}^2)\right).$$ Moreover, we have $$\begin{aligned} |Du(x)| &\leq& \frac{1}{2\pi}\Bigl( |D^2 {{\mathcal T}}(x)| |h({{\mathcal T}}(x))| +|D{{\mathcal T}}(x)|^2 |(-Dh_3+{\alpha}Dh_5) ({{\mathcal T}}(x))|\Bigl)\\ &&+ |D{{\mathcal T}}(x)|^2 |(Dh_1+ Dh_2 - Dh_4) ({{\mathcal T}}(x))|. 
\end{aligned}$$ For the first right hand side term, we know that $h\circ {{\mathcal T}}$ is uniformly bounded and that $D^2 {{\mathcal T}}$ belongs to $L^1_{{\operatorname{{loc}}}}(\overline{{\Omega}})$ (see Theorem \[grisvard\]). We see that the second right hand side term belongs to $L^{\infty}([0,\mathbf{T}] ; L^1_{{\operatorname{{loc}}}} (\overline{{\Omega}}))$ because $D{{\mathcal T}}$ belongs to $L^{2}_{{\operatorname{{loc}}}}(\overline{{\Omega}})$ and $(-Dh_3+{\alpha}Dh_5)({{\mathcal T}}(x))$ belongs to $L^{\infty}([0,\mathbf{T}] \times \overline{{\Omega}}))$. Concerning the third right hand side term, we use that ${{\mathcal T}}$ holomorphic implies that $D{{\mathcal T}}$ is of the form $\begin{pmatrix} a & b \\ -b & a \end{pmatrix}$. Hence, we easily get that $$|D{{\mathcal T}}(x)|_\infty^2 = (\sup (|a|,|b|))^2 \leq a^2+b^2= |\det D{{\mathcal T}}(x)|.$$ Therefore, changing variables, we have for $i=1,2,4$ and $K$ any compact set of $\overline{{\Omega}}$: $$\| |D{{\mathcal T}}|^2 |Dh_i\circ {{\mathcal T}}| \|_{L^1(K)} \leq \| Dh_i \|_{L^1(\tilde K)}$$ with $\tilde K := {{\mathcal T}}(K)$ a compact set (by the continuity of ${{\mathcal T}}$). Therefore, the estimates obtained for $Dh_i$ allow us to conclude. Proof of Proposition \[compact\_vorticity\] ------------------------------------------- We set $\beta(t)=t^2$ and use with this choice. For ${\Phi}\in \mathcal{D}({{\mathbb R}}^+\times {{\mathbb R}^2})$, we have $$\int_{{{\mathbb R}^2}} {\Phi}(T,x) (\bar{\omega})^2(T,x)\,dx - \int_{{{\mathbb R}^2}} {\Phi}(0,x) (\bar{\omega})^2(0,x)\,dx =\int_0^T\int_{{{\mathbb R}^2}} (\bar{\omega})^2 ({\partial_t}{\Phi}+\bar u\cdot \nabla {\Phi})\,dx \,dt.$$ This is actually an improvement of , in which the equality holds in $L^1_{{\operatorname{{loc}}}}({{\mathbb R}}^+)$.
Indeed, we have ${\partial}_t \bar{\omega}=-{{\rm div}\,}(\bar u \bar{\omega})$ (in the sense of distributions) with $\bar{\omega}\in L^\infty$ and $\bar u \in L^\infty_{{\operatorname{{loc}}}}({{\mathbb R}}^+,L^p_{{\operatorname{{loc}}}}({{\mathbb R}^2}))$ for all $p<4$ (see ), which implies that ${\partial}_t \bar{\omega}$ belongs to $L^1_{{\operatorname{{loc}}}}({{\mathbb R}}^+,W^{-1,p}_{{\operatorname{{loc}}}}({{\mathbb R}^2}))$. Hence, $\bar{\omega}$ belongs to $C({{\mathbb R}}^+,W^{-1,p}_{{\operatorname{{loc}}}}({{\mathbb R}^2}))\subset C_{w}({{\mathbb R}}^+, L^2_{{\operatorname{{loc}}}}({{\mathbb R}}^2))$, where $C_{w} L_{{\operatorname{{loc}}}}^{2}$ stands for the space of maps $f$ such that for any sequence $t_n\to t$, the sequence $f(t_n)$ converges to $f(t)$ weakly in $L^2_{{\operatorname{{loc}}}}$. Since on the other hand $t \mapsto \|\bar {\omega}(t)\|_{L^2}$ is continuous by Remark \[remark : conserv\], we have $\bar{\omega}\in C({{\mathbb R}}^+,L^2({{\mathbb R}^2}))$. Therefore the previous integral equality holds for all $T$. Now, we choose a good test function. We let ${\Phi}_0$ be a non-decreasing function on ${{\mathbb R}}$, which is equal to $1$ for $s\geq 2$ and vanishes for $s\leq 1$ and we set ${\Phi}(t,x)={\Phi}_0(|x|/R(t))$, with $R(t)$ a smooth, positive and increasing function to be determined later on, such that $R(0)=R_0$. For this choice of ${\Phi}$, we have $({\omega}_0(x))^2 {\Phi}(0,x)\equiv 0$. 
We compute then $${\nabla}{\Phi}= \frac{x}{|x|}\frac{{\Phi}_0'}{R(t)}$$ and $${\partial}_t {\Phi}= -\frac{R'(t)}{R^2(t)}|x|{\Phi}_0'.$$ We obtain $$\begin{aligned} \int_{{{\mathbb R}^2}} {\Phi}(T,x) (\bar{\omega})^2(T,x)\,dx & =&\int_0^T \int_{{{\mathbb R}^2}} (\bar{\omega})^2 \frac{{\Phi}_0'(\frac{|x|}{R})}{R}\Bigl( \bar u(x) \cdot \frac{x}{|x|}-\frac{R'}{R}|x|\Bigl)\, dx\, dt\\ &\leq& \int_0^T \int_{{{\mathbb R}^2}} (\bar{\omega})^2 \frac{|{\Phi}_0'|(\frac{|x|}{R})}{R} (C -R')\, dx\, dt,\end{aligned}$$ where $C$ is independent of $t$ and $x$. Indeed, we have that $$u(t,x)=\dfrac{1}{2\pi}D{{\mathcal T}}^T(x) \Bigl(R[{\omega}](x) +({\gamma}+\int {\omega}_0) \frac{{{\mathcal T}}(x)^\perp}{|{{\mathcal T}}(x)|^2} \Bigl)$$ with $ |R[{\omega}]| \leq C_1$ (see Proposition \[biot est\]) and $1/|{{\mathcal T}}(x)|\leq 1$. Using Proposition \[T-inf\], we know that there exists a positive $C_2$ such that $$|D{{\mathcal T}}({x}) | \leq C_2 |{\beta}|, \ \forall |x|\geq R_0.$$ Putting together all these inequalities with , we obtain $$C=\frac{1}{2\pi}C_2 |{\beta}| \Bigl( C_1 + \|{\gamma}\|_{L^\infty([0,\mathbf{T}^*])}+\| {\omega}_0\|_{L^1}\Bigl).$$ Taking $R(t) =R_0 + Ct$, we arrive at $$\int_{{{\mathbb R}^2}} {\Phi}(T,x) (\bar{\omega})^2(T,x)\,dx\leq 0,$$ which ends the proof. Proof of Proposition \[prop : cont-velocity\] --------------------------------------------- By the conservation of the total mass of ${\omega}_i$ , we have that $$\int_{{{\mathbb R}}^2} {\tilde{{\omega}}}(t,\cdot) \equiv 0, \ \forall t\geq 0.$$ Moreover, Proposition \[compact\_vorticity\] states that there exists $C_1({\omega}_0,{\Omega},{\gamma})$ such that ${\omega}_1(t,\cdot)$ and ${\omega}_2(t,\cdot)$ are compactly supported in $B(0,R_0+C_1 t)$. So we first infer that ${\tilde{v}}(t) \in L^2({{\mathbb R}^2})$ for all $t$ (see e.g. [@maj-bert]). 
Using that $\|{\omega}_i\|_{L^1({\Omega})\cap L^\infty({\Omega})} \in L^\infty({{\mathbb R}}^+)$, we even obtain $$\label{bound-velocity} {\tilde{v}}\in L_{{\operatorname{{loc}}}}^{\infty}({{\mathbb R}}^+,L^2({{\mathbb R}^2})).$$ We now turn to the first assertion in Proposition \[prop : cont-velocity\]. By Lemma \[ift\] and the Calderon-Zygmund inequality, we obtain that $v_i=K_{{{\mathbb R}}^2}*\bar {\omega}_i$ belongs to $L^\infty({{\mathbb R}}^+\times {{\mathbb R}^2})$ and its gradient ${\nabla}v_i$ to $L^\infty({{\mathbb R}}^+,L^4({{\mathbb R}^2}))$. On the other hand, since the vorticity ${\omega}_i$ is compactly supported, we have for large $|x|$ $$|v_i(t,x)|\leq \frac{C}{|x|} \int_{{{\mathbb R}^2}} |\bar {\omega}_i(t,y)|\, dy,$$ hence $v_i$ belongs to $L_{{\operatorname{{loc}}}}^{\infty}({{\mathbb R}}^+,L^p({{\mathbb R}^2}))$ for all $p>2$. It follows in particular that $$v_i \in L_{{\operatorname{{loc}}}}^{\infty}({{\mathbb R}}^+,W^{1,4}({{\mathbb R}^2}))$$ and also that $v_i \otimes v_i$ belongs to $L_{{\operatorname{{loc}}}}^\infty(L^{4/3})$. Since $v_i$ is divergence-free, we have $v_i \cdot {\nabla}v_i={{\rm div}\,}(v_i \otimes v_i)$, and so $v_i \cdot {\nabla}v_i \in L_{{\operatorname{{loc}}}}^2\big({{\mathbb R}}^+,W^{-1,\frac{4}{3}}({{\mathbb R}^2})\big)$. Thanks to , we know that $v_i(t)\otimes w_i(t)$ belongs to $L_{{\operatorname{{loc}}}}^{4/3}$. At infinity, we use the explicit formula of $u$ , the compact support of the vorticity and the behavior of ${{\mathcal T}}$ at infinity (Proposition \[T-inf\]) to note that $w_i$ is bounded by $C/|x|$; $v_i$ has the same behavior at infinity, so both factors belong to $L^{8/3}$ outside a large ball, and $v_i\otimes w_i$ belongs there to $L^{4/3}$.
This yields $${{\rm div}\,}(v_i\otimes w_i),\:\:{{\rm div}\,}(w_i\otimes v_i)\in L_{{\operatorname{{loc}}}}^2\big({{\mathbb R}}^+,W^{-1,\frac{4}{3}}({{\mathbb R}^2})\big).$$ Besides, we can infer from the behavior of ${{\mathcal T}}$ on the boundary (Theorem \[grisvard\]) and Proposition \[biot est\] that $\tilde g_{v_i,{\gamma}_0}$, defined in , is uniformly bounded in $L^1({\partial}{\Omega})$. Then we deduce from the embedding of $W^{1,4}({{\mathbb R}^2})$ in $C_0^0({{\mathbb R}^2})$ that $\tilde g_{v_i,{\gamma}_0} \delta_{{\Omega}}$ belongs to $L_{{\operatorname{{loc}}}}^2(W^{-1,\frac{4}{3}})$. Therefore, $v_i \tilde g_{v_i,{\gamma}_0} \delta_{{\Omega}}\in L_{{\operatorname{{loc}}}}^2({{\mathbb R}}^+,W^{-1,\frac{4}{3}}({{\mathbb R}^2})).$ According to , we finally obtain $$\langle {\partial}_t v_i ,{\Phi}\rangle =\langle {\partial}_t v_i -{\nabla}p_i,{\Phi}\rangle \leq C\|{\Phi}\|_{L^2(W^{1,4}_{\sigma})}$$ for all divergence-free smooth vector field ${\Phi}$. This implies that $${\partial}_t v_i \in L_{{\operatorname{{loc}}}}^2\big({{\mathbb R}}^+,W^{-1,4/3}_{\sigma}({{\mathbb R}^2})\big), \quad i=1,2,$$ and the same holds for ${\partial}_t {\tilde{v}}$. Now, since $\tilde v$ belongs to $L^2_{{\operatorname{{loc}}}}\big({{\mathbb R}}^+,W^{1,4}_{\sigma}\big)$, we deduce from and Lemma 1.2 in Chapter III of [@temam] that $\tilde v$ is almost everywhere equal to a function continuous from ${{\mathbb R}}^+$ into $L^2$ and we have in the sense of distributions on ${{\mathbb R}}^+$: $$\frac{d}{dt} \|\tilde v\|_{L^2({{\mathbb R}^2})}^2 = 2 \langle {\partial}_t \tilde v,\tilde v \rangle_{W^{-1,4/3}_{\sigma},W^{1,4}_{\sigma}}.$$ We finally conclude by using the fact that ${\tilde{v}}(0)=0$. 
Final remarks and comments {#sect : 6} ========================== Application: safety time in airports ------------------------------------ One of the crucial issues in the biggest airports is the following: the wings of a plane create (at landing and take-off) an important vortex through which another plane should not pass. History records several crashes due to these turbulences. Consequently, airports decided to impose a safety time between two planes, so that the wake vortex has time to dissipate. However, this safety time has to be redefined because of the “very heavy” new airplanes such as the Airbus A380. Increasing the time between two planes for safety conflicts with the congestion of the main airports. Several programs were created in order to understand and optimize this phenomenon. One of the solutions investigated is the construction of big radars which try to detect whether the (of course invisible) vortex has vanished. However, engineers and physicists are not able to interpret the data given by the radar, because the exact shape of the vortex behind a plane wing is not known. The air around an infinite airplane wing can be modeled by the Euler equations in dimension two around a singular obstacle: smooth except at one point where we have a cusp (${\alpha}=2\pi$). The consequence of this article and of [@GV_lac] is to show existence and uniqueness of a solution to such a problem, together with an explicit form of the velocity in terms of the vorticity and the shape of the wing. In particular, we show that the velocity blows up like $1/\sqrt{|x|}$. Thanks to this explicit formula (see ), it should be possible to use it in order to obtain the shape of the stream lines. Actually, in order to apply exactly this article, we consider that the wing is fixed in the frame of the plane, but the velocity at infinity is equal to $-l$ where $l$ is the velocity of the plane.
Having a vector field which is constant at infinity instead of vanishing changes the Biot-Savart law a bit, but neither the final result nor the behavior near the obstacle. To be convinced, the reader can consult [@lac_these Section 3.4], where we adapt [@lac_euler] to a constant velocity at infinity. The Kutta condition ------------------- As the velocity stays tangent to the obstacle, we know that the trajectory of a fluid particle which arrives at the solid will be deformed, and the particle will pass behind the solid. Indeed, we have proved that such a particle never meets the boundary. By reversibility in time, it also means that a particle near the boundary stays close to the boundary, even if it goes closer and closer to a corner. Therefore, near the trailing edge of a plane wing, the particle has a velocity which blows up, but it stays close to the boundary. For the Euler equations, this particle should take the turn of the corner with a huge tangential velocity, while moving in a small neighborhood of the boundary. Such a behavior can justify that, for infinite velocity, the Euler equations are no longer relevant for modeling the air near the corners. This observation is well known by engineers, and they use an unproved principle, the so-called Kutta condition: “[*A body with a sharp trailing edge which is moving through a fluid will create about itself a circulation of sufficient strength to hold the rear stagnation point at the trailing edge*]{}”. In other words, even if initially ${\gamma}=0$, they assume that there is a creation of circulation or vorticity such that the fluid particles near the edge go away from the obstacle. Concerning the Euler equations, we see that particles which arrive at the wing can go away from the opposite side, and not necessarily from the cusp. ![image](kutta.eps){height="5cm"} Solution of the Euler equations. Kutta condition.
No extraction in convergence results ------------------------------------ In [@taylor; @lac_small; @GV_lac], the existence of a weak solution is a consequence of a compactness argument. Indeed, we consider therein the unique solutions $u_n$ of the Euler equations on the smooth domains $\Omega_n$, which converge to ${\Omega}$ in a suitable sense. Then, in these articles, we extract a subsequence such that $u_{{\varphi}(n)} \to u$ and we check that $u$ is a solution of the Euler equations in ${\Omega}$. Putting together the present result with [@GV_lac], we can state the following. Let ${\omega}_0$, ${\gamma}_0$, ${\Omega}$ be as in Theorems \[main 1\] or \[main 2\]. For any sequence of smooth open simply connected domains (or exterior of simply connected domains) ${\Omega}_n$ converging to ${\Omega}$ in the Hausdorff sense, the unique solution $u_n$ of the Euler equations on ${\Omega}_n$, with initial datum $u_n^0$ such that $${{\rm div}\,}u^0_n = 0, \ {{\rm curl}\,}u^0_n ={\omega}_0, \ u^0_n \cdot \hat n\vert_{{\partial}\Omega_n} = 0, \ \lim_{|x| \rightarrow +\infty} u^0_n = 0 , \ \oint_{{\partial}{\Omega}_n} u^0_n \cdot \hat \tau\, ds={\gamma}_0 \text{ (only for exterior domains)} ,$$ converges in $L^2_{{\operatorname{{loc}}}}({{\mathbb R}}^+\times \overline{{\Omega}})$ to the unique solution $u$ of the Euler equations on ${\Omega}$ with initial datum $u^0$ such that $${{\rm div}\,}u^0 = 0, \ {{\rm curl}\,}u^0 ={\omega}_0, \ u^0 \cdot \hat n\vert_{{\partial}\Omega} = 0, \ \lim_{|x| \rightarrow +\infty} u^0 = 0 , \ \oint_{{\partial}{\Omega}} u^0 \cdot \hat \tau\, ds={\gamma}_0 \text{ (only for exterior domains)}.$$ Special vortex sheet -------------------- In [@lac_euler], we consider some smooth domains ${\Omega}_{\varepsilon}$ which shrink to a $C^2$ Jordan arc ${\Gamma}$ as ${\varepsilon}$ tends to zero.
For ${\omega}_0 \in L^\infty_c({\Gamma}^c)$ and ${\gamma}\in {{\mathbb R}}$ given, we denote by $(u_{\varepsilon},\omega_{\varepsilon})$ the corresponding regular solutions of the Euler equations on $\Pi_{\varepsilon}:= {{\mathbb R}}^2\setminus {\Omega}_{\varepsilon}$. After truncating smoothly over a size ${\varepsilon}$ around the obstacle, it is proved therein that the resulting truncations $\tilde u_{\varepsilon}$ and $\tilde \omega_{\varepsilon}$, defined over the whole of ${{\mathbb R}}^2$, converge in appropriate topologies to the solutions $\tilde u$, $\tilde \omega$ of the system $$\label{vorticityformulation2} \left\{ \begin{aligned} & {\partial}_t \tilde \omega + \tilde u \cdot {\nabla}\tilde \omega = 0, \quad t > 0, \: x \in {{\mathbb R}}^2, \\ & {{\rm div}\,}\tilde u = 0, \quad t > 0, \: x \in {{\mathbb R}}^2, \\ &{{\rm curl}\,}\tilde u = \tilde \omega + g_{\tilde \omega,{\gamma}} \delta_{{\Gamma}}, \quad t > 0, \: x \in {{\mathbb R}}^2. \end{aligned} \right.$$ This is an Euler-like equation, modified by a Dirac mass along the arc. The density function $g_{\tilde \omega,{\gamma}}$ is given explicitly in terms of $\tilde \omega$ and ${{\Gamma}}$. Moreover, it is shown that it is equal to the jump of the tangential component of the velocity across the arc. We refer to [@lac_euler] for all necessary details. Actually, the presence of this additional measure is mandatory so that the velocity $\tilde u$ is tangent to the curve, with circulation ${\gamma}$ around it. Therefore, the system obtained in the exterior of a Jordan arc appears to be a special vortex sheet, “special” because the support of the Dirac mass does not move (it stays equal to ${\Gamma}$) and because the normal component of the velocity on the curve is equal to zero. For a general vortex sheet, we can prove that the normal component is continuous, but not necessarily zero. In both cases, we have a jump of the tangential component.
A consequence of the present work is the uniqueness of a solution of , under the appropriate sign conditions for ${\omega}_0$ and ${\gamma}$ (see Theorem \[main 2\]). For instance, if we assume that ${\Gamma}$ is the segment $[(-1,0);(1,0)]$, then we have the explicit expression of the harmonic vector field thanks to the Joukowski function, and we can find in [@lac_euler p. 1144] the following: $${{\rm curl}\,}H_{\Gamma}= \frac{1}{\pi} \frac{1}{\sqrt{1-x_1^2}} \chi_{(-1,1)}(x_1) {\delta}_0(x_2).$$ Then, choosing ${\omega}_0\equiv 0$ and $\gamma=1$, we have proven that the stationary function $u(t,x)=H_{\Gamma}(x)$ is the unique solution of the Euler equations with initial vorticity $\frac{1}{\pi} \frac{1}{\sqrt{1-x_1^2}} \chi_{(-1,1)}(x_1) {\delta}_0(x_2)$. Adding a vorticity or considering other shapes for ${\Gamma}$ greatly complicates the expression of $g_{\tilde \omega,{\gamma}}$ (see [@lac_euler]). In particular, we do not prove the uniqueness for the so-called Prandtl-Munk vortex sheet: $\frac{1}{\pi} \frac{x_1}{\sqrt{1-x_1^2}} \chi_{(-1,1)}(x_1) {\delta}_0(x_2)$. Extension for constant vorticity near the boundary -------------------------------------------------- As remarked several times, the crucial point is to prove that the vorticity never meets the boundary if we consider an initial vorticity compactly supported in ${\Omega}$. However, we can easily extend this result to the case of an initial vorticity which is constant near the boundary. Indeed, for ${\alpha}\in {{\mathbb R}}$ given, choosing $\beta(t)=(t-{\alpha})^2$ in the proof of Proposition \[constant\_vorticity\_2\] gives in the same way the following. Let ${\omega}$ be a global weak solution of such that ${\omega}_0$ is compactly supported in $\overline{{\Omega}}$ and such that ${\omega}_0\equiv {\alpha}$ in a neighborhood of the boundary.
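For the segment example above one can verify numerically that the density $\frac1\pi\frac{1}{\sqrt{1-x_1^2}}$ indeed carries total mass $1$, consistent with the circulation ${\gamma}=1$ around the arc. The quadrature below is purely a sanity check, not part of the argument; the exact value follows from the antiderivative $\frac1\pi\arcsin x_1$.

```python
import math

def total_mass(n):
    # midpoint rule for (1/pi) * int_{-1}^{1} dx / sqrt(1 - x^2);
    # the midpoints avoid the integrable endpoint singularities at x = +-1
    h = 2.0 / n
    s = 0.0
    for k in range(n):
        x = -1.0 + (k + 0.5) * h
        s += h / math.sqrt(1.0 - x * x)
    return s / math.pi

mass = total_mass(400000)   # should approach (arcsin(1) - arcsin(-1)) / pi = 1
```

Convergence is slow near the endpoints (the integrand behaves like $(1-|x_1|)^{-1/2}$), which is why a fairly fine grid is used here.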
If ${\omega}_0$ is non-positive and ${\gamma}_0\geq -\int {\omega}_0$ (only for exterior domains), then, for any ${\mathbf{T}^*}>0$, there exists a neighborhood $U_{{\mathbf{T}^*}}$ of ${\partial}{{\Omega}}$ such that $$\begin{aligned} {\omega}(t)\equiv {\alpha}\qquad \textrm{on \; \;} U_{{\mathbf{T}^*}},\qquad \forall t\in [0,{\mathbf{T}^*}].\end{aligned}$$ Therefore, in the proof of the uniqueness, we still have on $U$ $${{\rm curl}\,}\tilde v = {{\rm curl}\,}v_1-{{\rm curl}\,}v_2 = {\alpha}-{\alpha}=0,$$ which implies that the velocity $\tilde v$ is harmonic near the boundary, allowing us to follow exactly the proof made in Section \[sect : 5\]. Liapounov and sign conditions ----------------------------- Let us present in this subsection the different Liapounov functions, the advantage of each, and why each is specific to the case studied. [*Vortex wave system in ${{\mathbb R}}^2$.*]{} Let us consider that the initial vorticity is composed of a regular part plus a Dirac mass centered at the point $z(t)$. Then Marchioro and Pulvirenti proved in [@mar_pul] that there exists a solution to the following system: $$\begin{cases} v(t,\cdot)=(K_{{{\mathbb R}}^2} \ast \omega)(\cdot,t),\\ \dot{z}(t)=v(t,z(t)),\\ \dot{\,\phi}_x(t)=v(t,\phi_x(t))+ \frac{(\phi_x(t)-z(t))^\perp}{2\pi |\phi_x(t)-z(t)|^2},\\ \phi_x(0)=x, \; x\neq z_0 ,\\ \omega(t,\phi_x(t))=\omega_0(x), \end{cases}$$ which means that the point vortex $z(t)$ moves under the velocity field $v$ produced by the regular part ${\omega}$ of the vorticity, whereas the regular part and the vortex point give rise to a smooth flow $\phi$ along which ${\omega}$ is constant. In this case, we can prove that the trajectories never meet the point vortex by considering the following Liapounov function: $$L(t) := - \ln | \phi_x(t) - z(t) |,$$ for $x\neq z_0$ fixed. We note that $L$ goes to $+\infty$ iff $\phi_x(t) \to z(t)$, so we want to prove that $L$ stays bounded.
Next we compute: $$L'(t) =- \frac{ ( \phi_x(t) - z(t)) \cdot (\dot{\phi_x}(t) - \dot{z}(t) ) }{ | \phi_x(t) - z(t) |^2} =- \frac{ ( \phi_x(t) - z(t)) \cdot (v(t,\phi_x(t)) - v(t,z(t)) ) }{ | \phi_x(t) - z(t) |^2}.$$ Next, we use that the regular part $v$ is log-Lipschitz in order to obtain a Gronwall-type inequality. To summarize, we remark that in this case the important points are: $$L(t) \to \infty \text{ iff } \phi_x(t) \to z(t) \quad \text{and} \quad ( \phi_x(t) - z(t)) \cdot \frac{(\phi_x(t)-z(t))^\perp}{2\pi |\phi_x(t)-z(t)|^2}\equiv 0,$$ which removes the singular part. [*Dirac mass fixed in ${{\mathbb R}}^2$.*]{} Marchioro in [@mar] studied exactly the same problem as above, assuming that the point vortex cannot move. Therefore, the previous Liapounov function does not work, because we do not have a difference of two velocities and we cannot use the log-Lipschitz regularity. In this article, the author introduced a new Liapounov function: $$L(t) := -\int_{{{\mathbb R}}^2} \Bigl(\ln|\phi_x(t) -y|\Bigl) {\omega}(t,y)\, dy - \ln | \phi_x(t) - z_0 |,$$ where the first integral is the stream function associated with $v$. Then, the first step was to prove that this integral is bounded, which implies that $L$ goes to $+\infty$ iff $\phi_x(t) \to z_0$. Next, he computed: $$\begin{aligned} L'(t) &=&-\Bigl( \int_{{{\mathbb R}}^2} \frac{\phi_x(t) -y}{|\phi_x(t) -y|^2} {\omega}(t,y)\, dy + \frac{ \phi_x(t) - z_0 }{ | \phi_x(t) - z_0 |^2} \Bigl)\cdot \dot{\phi_x}(t) -\int_{{{\mathbb R}}^2} \Bigl(\ln|\phi_x(t) -y|\Bigl) {\partial}_t {\omega}(t,y)\, dy \\ &=&-\int_{{{\mathbb R}}^2} \Bigl(\ln|\phi_x(t) -y|\Bigl) {\partial}_t {\omega}(t,y)\, dy = -\int_{{{\mathbb R}}^2} \nabla \Bigl(\ln|\phi_x(t) -y|\Bigl) \cdot \Bigl(v(t,y)+ \frac{(y-z_0)^\perp}{2\pi |y-z_0|^2}\Bigl){\omega}(t,y)\, dy.\end{aligned}$$ Then, the second step was to prove a suitable estimate for the right-hand side integral in order to conclude by the Gronwall lemma.
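To make this cancellation explicit (with our normalization of the stream function), write $$\psi(t,x):=\frac{1}{2\pi}\int_{{{\mathbb R}}^2} \ln|x-y|\,{\omega}(t,y)\, dy + \frac{1}{2\pi}\ln|x-z_0|,$$ so that $\dot{\phi_x}(t)=\nabla^\perp \psi(t,\phi_x(t))$, while the bracket in the first line of the computation of $L'(t)$ equals $2\pi\,\nabla\psi(t,\phi_x(t))$. Hence the first term is proportional to $\nabla\psi\cdot\nabla^\perp\psi\equiv 0$, and only the ${\partial}_t{\omega}$ term survives.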
Here, we see that the singular term is now placed inside an integral, which is bounded. Similarly, we note that the important points in this case are: $$L(t) \to \infty \text{ iff } \phi_x(t) \to z_0 \quad \text{and} \quad \Bigl( \int_{{{\mathbb R}}^2} \frac{\phi_x(t) -y}{|\phi_x(t) -y|^2} {\omega}(t,y)\, dy + \frac{ \phi_x(t) - z_0 }{ | \phi_x(t) - z_0 |^2} \Bigl)\cdot \dot{\phi_x}(t)\equiv 0.$$ [*Interior or exterior of simply connected domains.*]{} In our case, we again have an explicit formula for the velocity by the Biot-Savart law (see and ). As the velocity blows up near the boundary, we have to exhibit a cancellation, as Marchioro did, so that the singular part goes into an integral. To do that, we introduce the stream function associated with the velocity: $$L_1(t,x):= \frac1{2\pi} \int_{{\Omega}} \ln\Bigl( \frac{|{{\mathcal T}}(x)-{{\mathcal T}}(y)|}{|{{\mathcal T}}(x)-{{\mathcal T}}(y)^*||{{\mathcal T}}(y)|} \Bigl){\omega}(y) \, dy+ \frac{{\alpha}}{2\pi}\ln |{{\mathcal T}}(x)|,$$ with ${\alpha}=0$ in the bounded case. However, as this function tends to zero (instead of to $\infty$) when $x \to {\partial}{\Omega}$ (see Lemma \[L1 maj\]), we add a logarithm: $$L(t):=-\ln | L_1(t,\phi_x(t)) |,$$ and the goal is to prove that $L$ stays bounded. Then, we computed in Section \[sect : 3\] $$L'(t) = -\frac{ {\partial}_t L_1(t,\phi_x(t)) }{| L_1(t,\phi_x(t)) |},$$ and we proved that ${\partial}_t L_1$ tends to zero as $\phi_x(t) \to {\partial}{\Omega}$, comparing its rate with that of $L_1$. We see here that it is important that ${\partial}_t L_1$ goes to zero where $L_1$ tends to zero. We managed to prove that ${\partial}_t L_1$ tends to zero near the boundary, and the sign condition allows us to state that the boundary is the only set where $L_1$ vanishes (see Lemma \[L1 est\]). For instance, in a bounded domain (i.e. ${\alpha}=0$), we see that a vorticity with changing sign can imply that $L_1=0$ somewhere other than on ${\partial}{\Omega}$.
This last remark is the main reason for the sign condition on the vorticity. Next, the sign condition on the circulation follows from the fact that we want the same sign for both terms in $L_1$. One difference with the case studied by Marchioro is that the stream function of the harmonic vector field does not blow up. To conclude, let us mention that the Liapounov method is specific to the case studied and is hard to adapt to other cases. For example, we have presented here the case of a Dirac mass when $\dot{z}(t)=v(t,z(t))$ and when $\dot{z}(t)=0$, but we do not know how to obtain a proof for other dynamics, such as $\dot{z}(t)=(1,0)$.

Acknowledgment {#acknowledgment .unnumbered}
==============

I want to thank Milton C. Lopes Filho and Benoit Pausader for early discussions about how to find the right Liapounov function. I also want to thank David Gérard-Varet and Olivier Glass for several fruitful discussions. Finally, I would like to give many thanks to Isabelle Gallagher and Evelyne Miot for their helpful comments about this report. I am partially supported by the Agence Nationale de la Recherche, Project MathOcéan, grant ANR-08-BLAN-0301-01.

[99]{} DiPerna R. J. and Lions P. L., [*Ordinary differential equations, transport theory and Sobolev spaces*]{}, Invent. Math. 98, 511-547, 1989. Galdi G. P., [*An introduction to the mathematical theory of the Navier-Stokes equations. Vol. II. Nonlinear steady problems*]{}, Springer Tracts in Natural Philosophy, 39. Springer-Verlag, New York, 1994. Gérard-Varet D. and Lacave C., [*The 2D Euler equation on singular domains*]{}, submitted, preprint 2011. [arXiv:1107.1419]{} Gilbarg D. and Trudinger N. S., [*Elliptic Partial Differential Equations of Second Order*]{}, Springer, Berlin, 1998. Grisvard P., [*Elliptic problems in nonsmooth domains*]{}, Monographs and Studies in Mathematics, 24. Pitman (Advanced Publishing Program), Boston, MA, 1985.
Iftimie D., [*Evolution de tourbillon à support compact*]{}, Actes du Colloque de Saint-Jean-de-Monts, 1999. Iftimie D., Lopes Filho M. C. and Nussenzveig Lopes H. J., [*Two Dimensional Incompressible Ideal Flow Around a Small Obstacle*]{}, Comm. Partial Diff. Eqns. 28 (2003), no. 1$\&$2, 349-379. Kikuchi K., [*Exterior problem for the two-dimensional Euler equation*]{}, J. Fac. Sci. Univ. Tokyo Sect. IA Math. 30 (1983), no. 1, 63-92. Kozlov V. A., Mazʹya V. G. and Rossmann J., [*Spectral problems associated with corner singularities of solutions to elliptic equations*]{}, Mathematical Surveys and Monographs, 85. American Mathematical Society, Providence, RI, 2001. Lacave C., [*Two Dimensional Incompressible Ideal Flow Around a Thin Obstacle Tending to a Curve*]{}, Ann. Inst. H. Poincaré Anal. Non Linéaire **26** (2009), 1121-1148. Lacave C., [*Fluides autour d’obstacles minces*]{}, thesis (2008). [http://tel.archives-ouvertes.fr/tel-00345665/en/.]{} Lacave C., [*Two-dimensional incompressible ideal flow around a small curve*]{}, to appear in Commun. in Part. Diff. Eq. (2011). Lacave C. and Miot E., [*Uniqueness for the vortex-wave system when the vorticity is constant near the point vortex*]{}, SIAM Journal on Mathematical Analysis, Vol. 41 (2009), No. 3, pp. 1138-1163. Majda A. J. and Bertozzi A. L., [*Vorticity and Incompressible Flow*]{}, Cambridge Texts in Applied Mathematics, 2002. Marchioro C., [*On the Euler equations with a singular external velocity field*]{}, Rend. Sem. Mat. Univ. Padova, Vol. 84, 61-69, 1990. Marchioro C. and Pulvirenti M., [*On the vortex-wave system*]{}, Mechanics, analysis, and geometry: 200 years after Lagrange, M. Francaviglia (ed), Elsevier Science, Amsterdam, 1991. McGrath F. J., [*Nonstationary plane flow of viscous and ideal fluids*]{}, Arch. Rational Mech. Anal. 27 (1967), 329-348. Pommerenke C., [*Boundary behaviour of conformal maps*]{}, Berlin, New York: Springer-Verlag, 1992.
Taylor M., [*Incompressible fluid flows on rough domains*]{}, Semigroups of operators: theory and applications (Newport Beach, CA, 1998), 320-334, Progr. Nonlinear Differential Equations Appl., 42, Birkhäuser, Basel, 2000. Temam R., [*Navier-Stokes Equations, Theory and Numerical Analysis*]{}, North-Holland, Amsterdam, 1979. Wolibner W., [*Un théorème sur l’existence du mouvement plan d’un fluide parfait homogène, incompressible, pendant un temps infiniment long*]{}, Math. Z. **37** (1933), no. 1, 698-726. Yudovich V. I., [*Non-stationary flows of an ideal incompressible fluid*]{}, Zh. Vych. Mat., 3:1032-1066, 1963. [^1]: see e.g. [@ift_lop_euler]. [^2]: to justify that it works even for a weak solution, the reader can see the first lines of the proof of Proposition \[compact\_vorticity\]. [^3]: see the proof of Proposition \[compact\_vorticity\] to check that this equality holds for all $T$. [^4]: The original proof comes from [@ift_lop_euler] and we copy it for the sake of clarity.